WorldWideScience

Sample records for highest average concentration

  1. Relation of average and highest solvent vapor concentrations in workplaces in small to medium enterprises and large enterprises.

    Science.gov (United States)

    Ukai, Hirohiko; Ohashi, Fumiko; Samoto, Hajime; Fukui, Yoshinari; Okamoto, Satoru; Moriguchi, Jiro; Ezaki, Takafumi; Takada, Shiro; Ikeda, Masayuki

    2006-04-01

The present study was initiated to examine the relationship between workplace concentrations and the estimated highest concentrations in solvent workplaces (SWPs), with special reference to enterprise size and type of solvent work. Results of a survey conducted in 1010 SWPs in 156 enterprises were taken as a database. Workplace air was sampled at ≥5 points in each SWP following a grid sampling strategy. An additional air sample was grab-sampled at the site where the worker's exposure was estimated to be highest (the estimated highest concentration, or EHC). The samples were analyzed for 47 solvents designated by regulation, and the solvent concentrations in each sample were summed by use of an additivity formula. From the workplace concentrations at the ≥5 points, the geometric mean and geometric standard deviation were calculated as the representative workplace concentration (RWC) and the indicator of variation in workplace concentration (VWC), respectively. Comparison between RWC and EHC in the total of 1010 SWPs showed that EHC was 1.2 times (in large enterprises with >300 employees) to 1.7 times [in small to medium (SM) enterprises] the RWC. When SM enterprises and large enterprises were compared, both RWC and EHC were significantly higher in SM enterprises. Further comparison by type of solvent work showed that the difference was more marked in printing, surface coating, and degreasing/cleaning/wiping SWPs, whereas it was less remarkable in painting SWPs and essentially nil in testing/research laboratories. In conclusion, the present observation, discussed in reference to previous publications, suggests that RWC, EHC and the EHC/RWC ratio vary substantially among types of solvent work as well as with enterprise size, and are typically highest in printing SWPs in SM enterprises.
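The summary statistics described above (geometric mean as RWC, geometric standard deviation as VWC, and the EHC/RWC ratio) can be sketched as follows; the function name and sample values are hypothetical, not from the study:

```python
import math

def summarize_workplace(samples, ehc):
    """Summarize grid samples from one solvent workplace (SWP).

    samples: the >= 5 grid-sampled concentrations (additivity-formula sums)
    ehc: the grab-sampled estimated highest concentration
    Returns (RWC, VWC, EHC/RWC): geometric mean, geometric standard
    deviation, and the ratio the study reports as 1.2-1.7.
    """
    logs = [math.log(c) for c in samples]
    mean_log = sum(logs) / len(logs)
    rwc = math.exp(mean_log)  # representative workplace concentration
    var_log = sum((x - mean_log) ** 2 for x in logs) / (len(logs) - 1)
    vwc = math.exp(math.sqrt(var_log))  # geometric standard deviation
    return rwc, vwc, ehc / rwc

# hypothetical grid concentrations and grab-sampled EHC
rwc, vwc, ratio = summarize_workplace([8.0, 12.0, 10.0, 9.0, 11.0], ehc=15.0)
```

The geometric mean/GSD pair is the natural summary for roughly lognormal workplace concentrations, which is why the study uses it rather than the arithmetic mean.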

  2. Measurement of radon concentration in dwellings in the region of highest lung cancer incidence in India

    International Nuclear Information System (INIS)

    Zoliana, B.; Rohmingliana, P.C.; Sahoo, B.K.; Mayya, Y.S.

    2015-01-01

Monitoring of radon exhalation from soil and of indoor radon concentrations is helpful in many investigations, such as health risk assessment, because radiation damage to bronchial cells makes radon the second leading cause of lung cancer after smoking. The fact that Aizawl District, Mizoram, India has the highest age-adjusted lung cancer incidence rates (AAR) among males and females in India, as reported by the Population Based Cancer Registry Report 2008, indicates the need for quantification of radon and the anomalies attached to it. Measurement of radon concentration was carried out inside dwellings in Aizawl District, Mizoram. A time-integrated method of measurement was employed, using solid state nuclear track detectors (SSNTDs, LR-115 films) kept in twin cup dosimeters to measure radon and thoron concentrations. The dosimeters were suspended in bedrooms or living rooms of selected dwellings. They were deployed for periods of about 120 days at a time in 63 houses, selected according to location: fault regions, places where fossil remains were found, and geologically unclassified regions. After the desired period of exposure, the detectors were retrieved and chemically etched, and the tracks were counted using a spark counter. The recorded nuclear track densities were then converted into air concentrations of radon and thoron

  3. Measurement of average radon gas concentration at workplaces

    International Nuclear Information System (INIS)

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

In this paper, results of measurements of average radon gas concentration at workplaces (schools, kindergartens, and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailings pond). Consequently, workplaces where considerable seasonal changes of radon concentration are expected should be measured for 12 months. If that is not possible, the chosen six-month period should contain summer as well as winter months. The average radon concentration during working hours can differ considerably from the average over the whole time when doors and windows are opened frequently or artificial ventilation is used. (authors)

  4. Understanding coastal morphodynamic patterns from depth-averaged sediment concentration

    NARCIS (Netherlands)

    Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.

This review highlights the important role of the depth-averaged sediment concentration (DASC) in understanding the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand ridges.

  5. Concentration fluctuations and averaging time in vapor clouds

    CERN Document Server

    Wilson, David J

    2010-01-01

This book contributes to more reliable and realistic predictions by focusing on sampling times from a few seconds to a few hours. Its objectives include developing clear definitions of statistical terms, such as plume sampling time, concentration averaging time, receptor exposure time, and other terms often confused with each other or incorrectly specified in hazard assessments; and identifying and quantifying situations for which there is no adequate knowledge to predict concentration fluctuations in the near-field, close to sources, and far downwind, where dispersion is dominated by atmospheric turbulence.

  6. The highest global concentrations and increased abundance of oceanic plastic debris in the North Pacific: Evidence from seabirds

    Science.gov (United States)

    Robards, Martin D.; Gould, Patrick J.; Coe, James M.; Rogers, Donald B.

    1997-01-01

Plastic pollution has risen dramatically with an increase in production of plastic resin during the past few decades. Plastic production in the United States increased from 2.9 million tons in 1960 to 47.9 million tons in 1985 (Society of the Plastics Industry 1986). This has been paralleled by a significant increase in the concentration of plastic particles in oceanic surface waters of the North Pacific from the 1970s to the late 1980s (Day and Shaw 1987; Day et al. 1990a). Research during the past few decades has indicated two major interactions between marine life and oceanic plastic: entanglement and ingestion (Laist 1987). Studies in the last decade have documented the prevalence of plastic in the diets of many seabird species in the North Pacific and the need for further monitoring of those species and groups that ingest the most plastic (Day et al. 1985).

  7. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Science.gov (United States)

    2010-07-01

    ... concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported... percent benzene). i = Individual batch of gasoline produced at the refinery or imported during the applicable averaging period. n = Total number of batches of gasoline produced at the refinery or imported...
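The excerpt elides the formula itself, describing only the batch index i and batch count n. A common form for such refinery averaging is the volume-weighted mean over batches; the sketch below assumes that form (function name and sample batches are illustrative, and 40 CFR 80.1238 remains the authoritative source for the formula and rounding rules):

```python
def average_benzene_concentration(batches):
    """Average benzene concentration over an averaging period.

    batches: list of (volume, benzene_vol_percent) tuples, one per
    batch i = 1..n of gasoline produced at the refinery or imported.
    Assumes the volume-weighted form sum(V_i * B_i) / sum(V_i);
    consult 40 CFR 80.1238 for the authoritative definition.
    """
    total_vb = sum(v * b for v, b in batches)
    total_v = sum(v for v, _ in batches)
    return total_vb / total_v

# illustrative batches: (gallons, volume percent benzene)
avg = average_benzene_concentration([(100_000, 0.5), (50_000, 1.1), (150_000, 0.7)])
```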

  8. 38 CFR 4.76a - Computation of average concentric contraction of visual fields.

    Science.gov (United States)

    2010-07-01

    ... concentric contraction of visual fields. 4.76a Section 4.76a Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS SCHEDULE FOR RATING DISABILITIES Disability Ratings The Organs of Special Sense § 4.76a Computation of average concentric contraction of visual fields. Table III—Normal Visual...

  9. Application of trajectory clustering and source attribution methods for investigating regional CO2 and CH4 concentrations at Germany's highest mountain site

    Science.gov (United States)

    Giemsa, Esther; Jacobeit, Jucundus; Ries, Ludwig; Frank, Gabriele; Hachinger, Stephan; Meyer-Arnek, Julian

    2017-04-01

Carbon dioxide (CO2) and methane (CH4) are the most important contributors to increased radiative forcing, together accounting for about 2.65 W/m2 of the present-day global average (IPCC 2013). The unbroken increase of atmospheric greenhouse gases (GHG) has been unequivocally attributed to human emissions, mainly from fossil fuel burning and land-use changes, while the oceans and terrestrial ecosystems slightly attenuate this rise with seasonally varying strength. Short-term fluctuations in GHG concentrations that superimpose on the seasonal cycle and the climate-change-driven trend reflect the presence of regional sources and sinks. A perfect place for investigating the comprehensive influence of these regional emissions is the Environmental Research Station Schneefernerhaus (47.42°N, 10.98°E, 2,650 m a.s.l.), situated in the eastern Alps on the southern side of Zugspitze mountain. Located just 300 m below the highest peak of the German Alps, the exposed site is one of the currently 30 global core sites of the World Meteorological Organisation (WMO) Global Atmosphere Watch (GAW) programme and thus provides ideal conditions to study source-receptor relationships for greenhouse gases. We propose a stepwise statistical methodology for examining the relationship between synoptic-scale atmospheric transport patterns and climate gas mole fractions, to finally arrive at a characterization of the sampling site with regard to the key processes driving CO2 and CH4 concentration levels. The first step entails a reliable radon-based filtering approach to subdivide the detected air masses according to their regional or 'background' origin. Simultaneously, a large number of ten-day back-trajectories from Schneefernerhaus, computed every two hours over the entire study period 2011-2015 with the Lagrangian transport and dispersion model FLEXPART (Stohl et al. 2005), are subjected to cluster analysis. The weather- and emission strength-related (short

  10. Variation in the annual average radon concentration measured in homes in Mesa County, Colorado

    International Nuclear Information System (INIS)

    Rood, A.S.; George, J.L.; Langner, G.H. Jr.

    1990-04-01

The purpose of this study is to examine the variability in the annual average indoor radon concentration. The TMC has been collecting annual average radon data for the past 5 years in 33 residential structures in Mesa County, Colorado. This interim report presents the data collected to date; the study is planned to continue. 62 refs., 3 figs., 12 tabs

  11. Parameterization of Time-Averaged Suspended Sediment Concentration in the Nearshore

    Directory of Open Access Journals (Sweden)

    Hyun-Doug Yoon

    2015-11-01

To quantify the effect of wave-breaking turbulence on sediment transport in the nearshore, the vertical distribution of time-averaged suspended sediment concentration (SSC) in the surf zone was parameterized in terms of the turbulent kinetic energy (TKE) at different cross-shore locations, including the bar crest, bar trough, and inner surf zone. Using data from a large-scale laboratory experiment, a simple relationship was developed between the time-averaged SSC and the time-averaged TKE. The vertical variation of the time-averaged SSC was fitted to an equation analogous to the turbulent dissipation rate term. At the bar crest, the proposed equation was slightly modified to incorporate the effect of near-bed sediment processes and yielded reasonable agreement. The parameterization agreed best at the bar trough, with a coefficient of determination R2 ≥ 0.72 above the bottom boundary layer. The time-averaged SSC in the inner surf zone showed good agreement near the bed but poor agreement near the water surface, suggesting that a different sedimentation mechanism controls the SSC in the inner surf zone.
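The abstract does not give the fitted equation itself. As an illustration of relating time-averaged SSC to time-averaged TKE with a coefficient of determination, the sketch below fits a hypothetical power law SSC = a·TKE^b by least squares in log space; the functional form, names, and data are assumptions, not the paper's parameterization:

```python
import math

def fit_power_law(tke, ssc):
    """Least-squares fit of ssc = a * tke**b in log space.

    Illustrative only: the paper's actual fit is an equation analogous
    to the turbulent dissipation rate term. Returns (a, b, r_squared),
    with r_squared computed on the log-transformed values.
    """
    xs = [math.log(k) for k in tke]
    ys = [math.log(c) for c in ssc]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = math.exp(my - b * mx)
    ss_res = sum((y - (math.log(a) + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# synthetic check: data generated from ssc = 0.3 * tke**1.5
tke = [0.5, 1.0, 2.0, 4.0]
ssc = [0.3 * k ** 1.5 for k in tke]
a, b, r2 = fit_power_law(tke, ssc)
```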

  12. Protocol for the estimation of average indoor radon-daughter concentrations: Second edition

    International Nuclear Information System (INIS)

    Langner, G.H. Jr.; Pacer, J.C.

    1988-05-01

    The Technical Measurements Center has developed a protocol which specifies the procedures to be used for determining indoor radon-daughter concentrations in support of Department of Energy remedial action programs. This document is the central part of the protocol and is to be used in conjunction with the individual procedure manuals. The manuals contain the information and procedures required to implement the proven methods for estimating average indoor radon-daughter concentration. Proven in this case means that these methods have been determined to provide reasonable assurance that the average radon-daughter concentration within a structure is either above, at, or below the standards established for remedial action programs. This document contains descriptions of the generic aspects of methods used for estimating radon-daughter concentration and provides guidance with respect to method selection for a given situation. It is expected that the latter section of this document will be revised whenever another estimation method is proven to be capable of satisfying the criteria of reasonable assurance and cost minimization. 22 refs., 6 figs., 3 tabs

  13. Risk-informed Analytical Approaches to Concentration Averaging for the Purpose of Waste Classification

    International Nuclear Information System (INIS)

    Esh, D.W.; Pinkston, K.E.; Barr, C.S.; Bradford, A.H.; Ridge, A.Ch.

    2009-01-01

    Nuclear Regulatory Commission (NRC) staff has developed a concentration averaging approach and guidance for the review of Department of Energy (DOE) non-HLW determinations. Although the approach was focused on this specific application, concentration averaging is generally applicable to waste classification and thus has implications for waste management decisions as discussed in more detail in this paper. In the United States, radioactive waste has historically been classified into various categories for the purpose of ensuring that the disposal system selected is commensurate with the hazard of the waste such that public health and safety will be protected. However, the risk from the near-surface disposal of radioactive waste is not solely a function of waste concentration but is also a function of the volume (quantity) of waste and its accessibility. A risk-informed approach to waste classification for near-surface disposal of low-level waste would consider the specific characteristics of the waste, the quantity of material, and the disposal system features that limit accessibility to the waste. NRC staff has developed example analytical approaches to estimate waste concentration, and therefore waste classification, for waste disposed in facilities or with configurations that were not anticipated when the regulation for the disposal of commercial low-level waste (i.e. 10 CFR Part 61) was developed. (authors)

  14. Field test analysis of concentrator photovoltaic system focusing on average photon energy and temperature

    Science.gov (United States)

    Husna, Husyira Al; Ota, Yasuyuki; Minemoto, Takashi; Nishioka, Kensuke

    2015-08-01

The concentrator photovoltaic (CPV) system is distinct from the common flat-plate PV system. It uses multi-junction solar cells and Fresnel lenses to concentrate direct solar radiation onto the cells while tracking the sun throughout the day. Cell efficiency can reach over 40% under a high concentration ratio. In this study, we analyzed one year of environmental condition data from the University of Miyazaki, Japan, where the CPV system was installed. The performance ratio (PR) was used to describe the system's performance, while the average photon energy (APE) was used to describe the spectral distribution at the installation site. A circuit simulator network was used to simulate the CPV system's electrical characteristics under various environmental conditions. We found that the PR of the CPV system depends on the APE level rather than on cell temperature.

  15. Average concentrations of FSH and LH in seminal plasma as determined by radioimmunoassay

    International Nuclear Information System (INIS)

    Milbradt, R.; Linzbach, P.; Feller, H.

    1979-01-01

In 322 males, 25 to 50 years of age, LH and FSH levels in seminal plasma were determined by radioimmunoassay. Average values of 0.78 ng/ml and 3.95 ng/ml were found for FSH and LH, respectively. Sperm count and motility were not related to FSH levels, but were related to LH levels. A high sperm count corresponded to a high concentration of LH, and normal motility was associated with higher LH levels than those associated with asthenozoospermia. With respect to the sperm count of a single patient or the average patient, it is suggested that the FSH/LH ratio would be more meaningful than the LH level alone. (orig.)

  16. Influence of the turbulence typing scheme upon the cumulative frequency distribution of the calculated relative concentrations for different averaging times

    Energy Technology Data Exchange (ETDEWEB)

    Kretzschmar, J.G.; Mertens, I.

    1984-01-01

Over the period 1977-1979, hourly meteorological measurements at the Nuclear Energy Research Centre, Mol, Belgium and simultaneous synoptic observations at the nearby military airport of Kleine Brogel were compiled as input data for a bi-Gaussian dispersion model. The available information was first used to determine hourly stability classes in ten widely used turbulent diffusion typing schemes. Systematic correlations between the different systems were rare. Twelve different combinations of diffusion typing scheme and dispersion parameters were then used to calculate cumulative frequency distributions of 1 h, 8 h, 16 h, 3 d, and 26 d average ground-level concentrations at receptors 500 m, 1 km, 2 km, 4 km and 8 km from a continuous ground-level release and an elevated release at 100 m height. Major differences were noted in the extreme values and the higher percentiles as well as in the annual mean concentrations. These differences are almost entirely due to the differences in the numerical values (as a function of distance) of the various sets of dispersion parameters actually in use for impact assessment studies. Dispersion parameter sets giving the lowest normalized ground-level concentrations for ground-level releases give the highest results for elevated releases, and vice versa. While it was illustrated once again that the applicability of a given set of dispersion parameters is restricted by the specific conditions under which the set was derived, it was also concluded that systematic experimental work to validate certain assumptions is urgently needed.
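The cumulative frequency distributions compared in the study are built from block averages of hourly ground-level concentrations over different averaging times; a minimal sketch of that bookkeeping (nearest-rank percentiles and the data are illustrative choices, not the study's method):

```python
import math
import statistics

def block_averages(hourly, window):
    """Non-overlapping `window`-hour averages of hourly ground-level
    concentrations (e.g. window=8 for 8 h means, 24*3 for 3 d means)."""
    return [statistics.mean(hourly[i:i + window])
            for i in range(0, len(hourly) - window + 1, window)]

def percentile(values, q):
    """Nearest-rank empirical percentile, 0 < q <= 100 (one of several
    common percentile conventions)."""
    s = sorted(values)
    k = max(0, math.ceil(q / 100 * len(s)) - 1)
    return s[k]

# illustrative: one day of hourly concentrations
hourly = [float(h) for h in range(24)]
eight_hour = block_averages(hourly, 8)
p98 = percentile(eight_hour, 98)  # a "higher percentile" of the 8 h means
```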

  17. An improved procedure for determining grain boundary diffusion coefficients from averaged concentration profiles

    Science.gov (United States)

    Gryaznov, D.; Fleig, J.; Maier, J.

    2008-03-01

Whipple's solution of the problem of grain boundary diffusion and Le Claire's relation, which is often used to determine grain boundary diffusion coefficients, are examined for a broad range of ratios of grain boundary to bulk diffusivities Δ and diffusion times t. Different sources of error in determining the grain boundary diffusivity (DGB) with Le Claire's relation are discussed. It is shown that nonlinearities of the diffusion profiles in ln Cav versus y^(6/5) plots and deviations from "Le Claire's constant" (-0.78) are the major error sources (Cav = averaged concentration, y = coordinate in the diffusion direction). An improved relation (replacing Le Claire's constant) is suggested for analyzing diffusion profiles, particularly suited to small diffusion lengths (short times) as often required in diffusion experiments on nanocrystalline materials.
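In the Le Claire analysis, the grain boundary diffusivity is extracted from the slope of ln Cav against y^(6/5). The sketch below assumes the widely quoted classical form sδDGB = 0.66·(4D/t)^(1/2)·(−∂ln Cav/∂y^(6/5))^(−5/3); the paper's point is precisely that the constant in this form needs correction in some regimes, so treat this as the baseline relation, not the improved one:

```python
def le_claire_dgb(slope, d_bulk, t, s=1.0):
    """Product s*delta*D_gb from the slope of ln(C_av) versus y**(6/5).

    slope: d(ln C_av)/d(y**(6/5)), negative for a decaying profile
    d_bulk: bulk diffusivity D; t: diffusion time; s: segregation factor
    Assumes the classical Le Claire relation:
        s*delta*D_gb = 0.66 * (4*D/t)**0.5 * (-slope)**(-5/3)
    """
    return s * 0.66 * (4.0 * d_bulk / t) ** 0.5 * (-slope) ** (-5.0 / 3.0)
```

A steeper (more negative) slope yields a smaller DGB, consistent with faster decay of the averaged concentration away from the boundary.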

  18. Analysis of the distributions of hourly NO2 concentrations contributing to annual average NO2 concentrations across the European monitoring network between 2000 and 2014

    Directory of Open Access Journals (Sweden)

    C. S. Malley

    2018-03-01

Exposure to nitrogen dioxide (NO2) is associated with negative human health effects, both from short-term peak concentrations and from long-term exposure to a wider range of NO2 concentrations. For the latter, the European Union has established an air quality limit value of 40 µg m−3 as an annual average. However, factors such as the proximity and strength of local emissions, atmospheric chemistry, and meteorological conditions mean that there is substantial variation in the hourly NO2 concentrations contributing to an annual average concentration. The aim of this analysis was to quantify the nature of this variation at thousands of monitoring sites across Europe through the calculation of a standard set of chemical climatology statistics. Specifically, at each monitoring site that satisfied data capture criteria for inclusion in this analysis, annual NO2 concentrations were calculated, as well as the percentage contributions from each month, from each hour of the day, and from hourly NO2 concentrations divided into 5 µg m−3 bins. Across Europe, 2010–2014 average annual NO2 concentrations (NO2AA) exceeded the annual NO2 limit value at 8 % of > 2500 monitoring sites. The application of this chemical climatology approach showed that sites with distinct monthly, hour-of-day, and hourly NO2 concentration bin contributions to NO2AA were not grouped into specific regions of Europe; furthermore, within relatively small geographic regions there were sites with similar NO2AA but with differences in these contributions. Specifically, at sites with the highest NO2AA there were generally similar contributions from across the year, but there were also differences in the contribution of peak vs. moderate hourly NO2 concentrations to NO2AA, and from different hours across the day. Trends between 2000 and 2014 for 259 sites indicate that, in general, the contribution to NO2AA from winter months has increased, as has the contribution from the rush-hour periods of

  19. Analysis of the distributions of hourly NO2 concentrations contributing to annual average NO2 concentrations across the European monitoring network between 2000 and 2014

    Science.gov (United States)

    Malley, Christopher S.; von Schneidemesser, Erika; Moller, Sarah; Braban, Christine F.; Hicks, W. Kevin; Heal, Mathew R.

    2018-03-01

Exposure to nitrogen dioxide (NO2) is associated with negative human health effects, both from short-term peak concentrations and from long-term exposure to a wider range of NO2 concentrations. For the latter, the European Union has established an air quality limit value of 40 µg m-3 as an annual average. However, factors such as the proximity and strength of local emissions, atmospheric chemistry, and meteorological conditions mean that there is substantial variation in the hourly NO2 concentrations contributing to an annual average concentration. The aim of this analysis was to quantify the nature of this variation at thousands of monitoring sites across Europe through the calculation of a standard set of chemical climatology statistics. Specifically, at each monitoring site that satisfied data capture criteria for inclusion in this analysis, annual NO2 concentrations were calculated, as well as the percentage contributions from each month, from each hour of the day, and from hourly NO2 concentrations divided into 5 µg m-3 bins. Across Europe, 2010-2014 average annual NO2 concentrations (NO2AA) exceeded the annual NO2 limit value at 8 % of > 2500 monitoring sites. The application of this chemical climatology approach showed that sites with distinct monthly, hour-of-day, and hourly NO2 concentration bin contributions to NO2AA were not grouped into specific regions of Europe; furthermore, within relatively small geographic regions there were sites with similar NO2AA but with differences in these contributions. Specifically, at sites with the highest NO2AA there were generally similar contributions from across the year, but there were also differences in the contribution of peak vs. moderate hourly NO2 concentrations to NO2AA, and from different hours across the day. Trends between 2000 and 2014 for 259 sites indicate that, in general, the contribution to NO2AA from winter months has increased, as has the contribution from the rush-hour periods of the day, while the contribution from
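One of the chemical climatology statistics described above, the percentage contribution of each 5 µg m-3 hourly concentration bin to the annual average, can be sketched as follows (function name and data are illustrative, not from the study):

```python
from collections import defaultdict

def bin_contributions(hourly_no2, bin_width=5.0):
    """Percentage contribution of each `bin_width` ug/m3 hourly
    concentration bin to the annual average NO2.

    Returns {bin lower edge: percent of the annual average}, which sums
    to 100 % across bins since the annual average is proportional to
    the sum of the hourly values.
    """
    total = sum(hourly_no2)
    sums = defaultdict(float)
    for c in hourly_no2:
        sums[int(c // bin_width) * bin_width] += c
    return {edge: 100.0 * s / total for edge, s in sorted(sums.items())}

# illustrative hourly values (ug/m3); a real site has ~8760 per year
contrib = bin_contributions([12.0, 14.0, 38.0, 36.0])
```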

  20. Uncertainties of estimating average radon and radon decay product concentrations in occupied houses

    International Nuclear Information System (INIS)

    Ronca-Battista, M.; Magno, P.; Windham, S.

    1986-01-01

Radon and radon decay product measurements made in up to 68 Butte, Montana homes over a period of 18 months were used to estimate the uncertainty in estimating long-term average radon and radon decay product concentrations from a short-term measurement. This analysis was performed in support of the development of radon and radon decay product measurement protocols by the Environmental Protection Agency (EPA). The results of six measurement methods were analyzed: continuous radon and working level monitors, radon progeny integrating sampling units, alpha-track detectors, and grab radon and radon decay product techniques. Uncertainties were found to decrease with increasing sampling time and to be smaller when measurements were conducted during the winter months. In general, radon measurements had a smaller uncertainty than radon decay product measurements. As a result of this analysis, the EPA measurement protocols specify that all measurements be made under closed-house (winter) conditions, and that sampling times of at least 24 hours be used when the measurement will be the basis for a decision about remedial action or long-term health risks. 13 references, 3 tables

  1. Determination of seasonal, diurnal, and height resolved average number concentration in a pollution impacted rural continental location

    Science.gov (United States)

    Bullard, Robert L.; Stanier, Charles O.; Ogren, John A.; Sheridan, Patrick J.

    2013-05-01

    The impact of aerosols on Earth's radiation balance and the associated climate forcing effects of aerosols represent significant uncertainties in assessment reports. The main source of ultrafine aerosols in the atmosphere is the nucleation and subsequent growth of gas phase aerosol precursors into liquid or solid phase particles. Long term records of aerosol number, nucleation event frequency, and vertical profiles of number concentration are rare. The data record from multiagency monitoring assets at Bondville, IL can contribute important information on long term and vertically resolved patterns. Although particle number size distribution data are only occasionally available at Bondville, highly time-resolved particle number concentration data have been measured for nearly twenty years by the NOAA ESRL Global Monitoring Division. Furthermore, vertically-resolved aerosol counts and other aerosol physical parameters are available from more than 300 flights of the NOAA Airborne Aerosol Observatory (AAO). These data sources are used to better understand the seasonal, diurnal, and vertical variation and trends in atmospheric aerosols. The highest peaks in condensation nuclei greater than 14 nm occur during the spring months (May, April) with slightly lower peaks during the fall months (September, October). The diurnal pattern of aerosol number has a midday peak and the timing of the peak has seasonal patterns (earlier during warm months and later during colder months). The seasonal and diurnal patterns of high particle number peaks correspond to seasons and times of day associated with low aerosol mass and surface area. Average vertical profiles show a nearly monotonic decrease with altitude in all months, and with peak magnitudes occurring in the spring and fall. 
Individual flight tracks show evidence of plumes (i.e., enhanced aerosol number is limited to a small altitude range, is not homogeneous horizontally, or both) as well as periods with enhanced particle number

  2. Average daily and annual courses of 222Rn concentration in some natural medium

    International Nuclear Information System (INIS)

    Holy, K.; Bohm, R.; Polaskova, A.; Stelina, J.; Sykora, I.; Hola, O.

    1996-01-01

Simultaneous measurements of the 222Rn concentration in the outdoor atmosphere of Bratislava and in soil air were made over a one-year period. Daily and seasonal variations of the 222Rn concentration were found in both media. Some attributes of these variations, as well as the measurement methods, are presented in this work. (author). 17 refs., 6 figs.

  3. Association between average daily gain, faecal dry matter content and concentration of Lawsonia intracellularis in faeces

    DEFF Research Database (Denmark)

    Pedersen, Ken Steen; Skrubel, Rikke; Stege, Helle

    2012-01-01

Background: The objective of this study was to investigate the association between average daily gain and the number of Lawsonia intracellularis bacteria in faeces of growing pigs with different levels of diarrhoea. Methods: A longitudinal field study (n = 150 pigs) was performed in a Danish herd f

  4. Highest energy cosmic rays

    International Nuclear Information System (INIS)

    Nikolskij, S.

    1984-01-01

Primary particles of cosmic radiation with the highest energies cannot, in view of their low intensity, be recorded directly; instead, use is made of the fact that these particles interact with nuclei in the atmosphere and give rise to what are known as extensive air showers. It was found that 40% of primary particles with an energy of 10^15 to 10^16 eV are protons, 12 to 15% helium nuclei, and 15% iron nuclei, the rest being nuclei of other elements. The intensity of radiation with an energy of 10^18 to 10^19 eV depends on the direction of the incoming particles: maximum intensity is in the direction of the centre of the nearest cluster of galaxies, minimum in the direction of the central area of our galaxy. (Ha)

  5. Sources Contributing to the Average Extracellular Concentration of Dopamine in the Nucleus Accumbens

    OpenAIRE

    Owesson-White, CA; Roitman, MF; Sombers, LA; Belle, AM; Keithley, RB; Peele, JL; Carelli, RM; Wightman, RM

    2012-01-01

Mesolimbic dopamine neurons fire in both tonic and phasic modes, resulting in detectable extracellular levels of dopamine in the nucleus accumbens (NAc). In the past, different techniques have targeted dopamine levels in the NAc to establish a basal concentration. In this study we used in vivo fast-scan cyclic voltammetry (FSCV) in the NAc of awake, freely moving rats. The experiments were primarily designed to capture changes in dopamine due to phasic firing – that is, the measurement of dopa...

  6. Highest priority in Pakistan.

    Science.gov (United States)

    Adil, E

    1968-01-01

Responding to the challenge posed by its population problem, Pakistan's national leadership gave the highest priority to family planning in its socioeconomic development plan. In Pakistan, as elsewhere in the world, the first family planning effort originated in the private sector. The Family Planning Association of Pakistan made a tentative beginning in popularizing family planning in the country. Some clinics were opened, and some publicity and education were undertaken to emphasize the need for family limitation. It was soon recognized that the government needed to assume primary responsibility if family planning efforts were to be successful. For the 1st plan period, 1955-60, about $10 million was allocated by the central government in the social welfare sector for voluntary family planning. The level of support continued on the same basis during the 2nd plan, 1960-65, but was raised 4-fold in the 1965-70 scheme of family planning. Pakistan's Family Planning Association continues to play vital collaborative roles in the design and pretesting of prototype publicity material, the involvement of voluntary social workers, and functional research in the clinical and public relations fields. The real breakthrough in the program came with the 3rd 5-year plan, 1965-70. The high priority assigned to family planning is reflected by the total initial budget of Rs. 284 million (about $60,000,000) for the 5-year period. Current policy is postulated on 6 basic assumptions: family planning efforts need to be public relations-oriented; operations should be conducted through autonomous bodies with decentralized authority at all tiers down to the grassroots level, for expeditious decision making; monetary incentives play an important role; interpersonal motivation in terms of the life experience of the clientele through various contacts, coupled with mass media for publicity, can produce a sociological breakthrough; supplies and services in all related disciplines should be

  7. Highest Resolution Gaspra Mosaic

    Science.gov (United States)

    1992-01-01

    This picture of asteroid 951 Gaspra is a mosaic of two images taken by the Galileo spacecraft from a range of 5,300 kilometers (3,300 miles), some 10 minutes before closest approach on October 29, 1991. The Sun is shining from the right; phase angle is 50 degrees. The resolution, about 54 meters/pixel, is the highest for the Gaspra encounter and is about three times better than that in the view released in November 1991. Additional images of Gaspra remain stored on Galileo's tape recorder, awaiting playback in November. Gaspra is an irregular body with dimensions about 19 x 12 x 11 kilometers (12 x 7.5 x 7 miles). The portion illuminated in this view is about 18 kilometers (11 miles) from lower left to upper right. The north pole is located at upper left; Gaspra rotates counterclockwise every 7 hours. The large concavity on the lower right limb is about 6 kilometers (3.7 miles) across, the prominent crater on the terminator, center left, about 1.5 kilometers (1 mile). A striking feature of Gaspra's surface is the abundance of small craters. More than 600 craters, 100-500 meters (330-1650 feet) in diameter are visible here. The number of such small craters compared to larger ones is much greater for Gaspra than for previously studied bodies of comparable size such as the satellites of Mars. Gaspra's very irregular shape suggests that the asteroid was derived from a larger body by nearly catastrophic collisions. Consistent with such a history is the prominence of groove-like linear features, believed to be related to fractures. These linear depressions, 100-300 meters wide and tens of meters deep, are in two crossing groups with slightly different morphology, one group wider and more pitted than the other. Grooves had previously been seen only on Mars's moon Phobos, but were predicted for asteroids as well. Gaspra also shows a variety of enigmatic curved depressions and ridges in the terminator region at left. The Galileo project, whose primary mission is the


  9. Metallurgical source-contribution analysis of PM10 annual average concentration: A dispersion modeling approach in the Moravian-Silesian region

    Directory of Open Access Journals (Sweden)

    P. Jančík

    2013-10-01

    Full Text Available The goal of this article is to present an analysis of the metallurgical industry's contribution to annual average PM10 concentrations in the Moravian-Silesian region by means of air pollution modelling in accordance with the Czech reference methodology SYMOS'97.

  10. The average concentrations of 226Ra and 210Pb in foodstuff cultivated in the Pocos de Caldas plateau

    International Nuclear Information System (INIS)

    Hollanda Vasconcellos, L.M. de.

    1984-01-01

    The average concentrations of 226Ra and 210Pb in vegetables cultivated in the Pocos de Caldas plateau, mainly potatoes, carrots, beans and corn, were determined, and the average soil-to-foodstuff transfer factors for both radionuclides were estimated. The total 226Ra and 210Pb content in the soil was determined by gamma spectrometry. The exchangeable fraction was obtained by the classical radon emanation procedure, and 210Pb was isolated by a radiochemical procedure and determined by counting the beta emissions of its daughter 210Bi with a Geiger-Muller counter. (M.A.C.) [pt
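    The soil-to-foodstuff transfer factor estimated above is simply the ratio of activity concentrations in the crop and in the soil. A minimal sketch; the 226Ra numbers below are illustrative assumptions, not values from the paper:

    ```python
    def transfer_factor(c_plant, c_soil):
        """Soil-to-plant transfer factor: activity concentration in the crop
        divided by that in the soil (both in Bq/kg dry weight)."""
        return c_plant / c_soil

    # hypothetical 226Ra values for a root vegetable and the soil it grew in
    tf = transfer_factor(c_plant=0.05, c_soil=50.0)   # → 1e-3
    ```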

  11. How precise is the determination of the average radon concentration in buildings from measurements lasting only a few days

    International Nuclear Information System (INIS)

    Janik, M.; Loskiewicz, J.; Olko, P.; Swakon, J.

    1998-01-01

    Radon concentration in outdoor air and in buildings is highly variable, showing diurnal and seasonal variations. Long-term track-etch detector measurements lasting up to one year give the most precise annual averages. Sometimes, however, results are needed much sooner, e.g. for screening measurements. How long should a measurement last to give reliable results? We studied the problem of selecting a proper measurement interval on the basis of five long-term (ca. 30 days) measurements in Cracow using an AlphaGUARD ionization chamber detector. The mean radon concentration ranged from 543 to 1107 Bq/m³. It was found that the relative error of the k-day average decreased exponentially with a time constant of 4 days. We therefore recommend a minimal measuring time of four (k = 4), and preferably six, days. (author)
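    The reported exponential decay of the averaging error can be turned into a rule of thumb for choosing the measurement length. A sketch assuming a 30% one-day relative error and a 15% target; both figures are assumptions of this sketch, only the 4-day time constant comes from the abstract:

    ```python
    import math

    TAU = 4.0   # days; decay time constant reported in the abstract

    def relative_error(k, err_1day=0.30):
        """Modelled relative error of a k-day radon average, decaying
        exponentially from an assumed 30% one-day error."""
        return err_1day * math.exp(-(k - 1) / TAU)

    # shortest measurement whose modelled error stays below an assumed 15% target
    k = 1
    while relative_error(k) > 0.15:
        k += 1
    ```

    With these assumed numbers the loop stops at k = 4, consistent with the abstract's recommended minimum of four days.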

  12. Analysis of compound parabolic concentrators and aperture averaging to mitigate fading on free-space optical links

    Science.gov (United States)

    Wasiczko, Linda M.; Smolyaninov, Igor I.; Davis, Christopher C.

    2004-01-01

    Free space optics (FSO) is one solution to the bandwidth bottleneck resulting from increased demand for broadband access. It is well known that atmospheric turbulence distorts the wavefront of a laser beam propagating through the atmosphere. This research investigates methods of reducing the effects of intensity scintillation and beam wander on the performance of free space optical communication systems, by characterizing system enhancement using either aperture averaging techniques or nonimaging optics. Compound Parabolic Concentrators, nonimaging optics made famous by Winston and Welford, are inexpensive elements that may be easily integrated into intensity modulation-direct detection receivers to reduce fading caused by beam wander and spot breakup in the focal plane. Aperture averaging provides a methodology to show the improvement of a given receiver aperture diameter in averaging out the optical scintillations over the received wavefront.
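    Aperture averaging is commonly quantified by the factor A = σ_I²(D)/σ_I²(0), the ratio of the intensity variance seen by a receiver of diameter D to that of a point receiver. A sketch using the widely used plane-wave approximation of Andrews and Phillips; the formula is an assumption brought in for illustration, not taken from the paper, which characterizes the improvement experimentally:

    ```python
    import math

    def aperture_averaging_factor(D, wavelength, L):
        """Plane-wave aperture-averaging factor, Andrews-Phillips approximation
        (assumed here): A = [1 + 1.062 k D^2 / (4 L)]^(-7/6), with k the
        optical wavenumber and L the path length."""
        k = 2 * math.pi / wavelength
        return (1 + 1.062 * k * D**2 / (4 * L)) ** (-7 / 6)

    # a 10 cm receiver aperture on a 1 km, 1550 nm link
    A = aperture_averaging_factor(D=0.10, wavelength=1550e-9, L=1000.0)
    ```

    Larger apertures average over more intensity speckles, so A falls monotonically toward zero as D grows.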

  13. Procedure for the characterization of radon potential in existing dwellings and to assess the annual average indoor radon concentration

    International Nuclear Information System (INIS)

    Collignan, Bernard; Powaga, Emilie

    2014-01-01

    Risk assessment of indoor radon exposure is based on the annual average indoor radon activity concentration. To assess the radon exposure in a building, measurement is generally performed over at least two months during the heating period in order to be representative of the annual average value, because the presence of radon indoors can be highly variable over time. This measurement protocol is fairly reliable but may be limiting in radon risk management, particularly during a real estate transaction, because of the duration of the measurement and the restriction on the measurement period. A previous field study defined a rapid methodology to characterize radon entry into dwellings. The objective of this study was, first, to test this methodology in various dwellings to assess the relevance of a daily test. Second, a ventilation model was used to assess numerically the air renewal of a building, the indoor air quality throughout the year, and the annual average indoor radon activity concentration, based on local meteorological conditions, some building characteristics, and in-situ characterization of indoor pollutant emission laws. Experimental results obtained on thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of the annual average indoor radon activity concentration showed generally good agreement with measured values. These results support a procedure with a short measurement time for characterizing the long-term radon potential of dwellings. - Highlights: • Test of a daily procedure to characterize radon potential in dwellings. • Numerical assessment of the annual radon concentration. • Procedure applied on thirteen dwellings, characterization generally satisfactory. • Procedure useful to manage radon risk in dwellings, for real

  14. Comparison of depth-averaged concentration and bed load flux sediment transport models of dam-break flow

    Directory of Open Access Journals (Sweden)

    Jia-heng Zhao

    2017-10-01

    Full Text Available This paper presents numerical simulations of dam-break flow over a movable bed. Two different mathematical models were compared: a fully coupled formulation of shallow water equations with erosion and deposition terms (a depth-averaged concentration flux model, and shallow water equations with a fully coupled Exner equation (a bed load flux model. Both models were discretized using the cell-centered finite volume method, and a second-order Godunov-type scheme was used to solve the equations. The numerical flux was calculated using a Harten, Lax, and van Leer approximate Riemann solver with the contact wave restored (HLLC. A novel slope source term treatment that considers the density change was introduced to the depth-averaged concentration flux model to obtain higher-order accuracy. A source term that accounts for the sediment flux was added to the bed load flux model to reflect the influence of sediment movement on the momentum of the water. In a one-dimensional test case, a sensitivity study on different model parameters was carried out. For the depth-averaged concentration flux model, Manning's coefficient and sediment porosity values showed an almost linear relationship with the bottom change, and for the bed load flux model, the sediment porosity was identified as the most sensitive parameter. The capabilities and limitations of both model concepts are demonstrated in a benchmark experimental test case dealing with dam-break flow over variable bed topography.
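    As a minimal illustration of the Godunov-type building block that both models share, the sketch below advances the plain 1D shallow water equations (flat fixed bed, no sediment terms) with a first-order HLL approximate Riemann solver; the paper's models add HLLC contact-wave restoration, second-order reconstruction, and the erosion/deposition or Exner terms on top of this skeleton. All numerical parameters here are illustrative:

    ```python
    import numpy as np

    g = 9.81  # gravitational acceleration, m/s^2

    def hll_flux(hL, huL, hR, huR):
        """HLL approximate Riemann solver for the 1D shallow water equations."""
        uL, uR = huL / hL, huR / hR
        cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
        sL = min(uL - cL, uR - cR)          # leftmost wave speed estimate
        sR = max(uL + cL, uR + cR)          # rightmost wave speed estimate
        FL = np.array([huL, huL * uL + 0.5 * g * hL**2])
        FR = np.array([huR, huR * uR + 0.5 * g * hR**2])
        if sL >= 0.0:
            return FL
        if sR <= 0.0:
            return FR
        UL, UR = np.array([hL, huL]), np.array([hR, huR])
        return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

    # idealised dam break on a flat, fixed bed: 2 m of water upstream, 1 m downstream
    nx, dx, dt = 100, 1.0, 0.05
    h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)
    hu = np.zeros(nx)
    for _ in range(20):                     # 1 s of simulated time (CFL ~ 0.25)
        F = np.array([hll_flux(h[i], hu[i], h[i + 1], hu[i + 1])
                      for i in range(nx - 1)]).T
        h[1:-1] -= dt / dx * (F[0, 1:] - F[0, :-1])
        hu[1:-1] -= dt / dx * (F[1, 1:] - F[1, :-1])
    ```

    Because the update is in conservation form, the total water volume is preserved exactly while the dam-break wave spreads; the sediment models in the paper couple extra source terms to exactly this kind of flux-difference update.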

  15. Performance of a geostationary mission, geoCARB, to measure CO2, CH4 and CO column-averaged concentrations

    Directory of Open Access Journals (Sweden)

    I. N. Polonsky

    2014-04-01

    Full Text Available GeoCARB is a proposed instrument to measure column averaged concentrations of CO2, CH4 and CO from geostationary orbit using reflected sunlight in near-infrared absorption bands of the gases. The scanning options, spectral channels and noise characteristics of geoCARB and two descope options are described. The accuracy of concentrations from geoCARB data is investigated using end-to-end retrievals; spectra at the top of the atmosphere in the geoCARB bands are simulated with realistic trace gas profiles, meteorology, aerosol, cloud and surface properties, and then the concentrations of CO2, CH4 and CO are estimated from the spectra after addition of noise characteristic of geoCARB. The sensitivity of the algorithm to aerosol, the prior distributions assumed for the gases and the meteorology are investigated. The contiguous spatial sampling and fine temporal resolution of geoCARB open the possibility of monitoring localised sources such as power plants. Simulations of emissions from a power plant with a Gaussian plume are conducted to assess the accuracy with which the emission strength may be recovered from geoCARB spectra. Scenarios for "clean" and "dirty" power plants are examined. It is found that a reliable estimate of the emission rate is possible, especially for power plants that have particulate filters, by averaging emission rates estimated from multiple snapshots of the CO2 field surrounding the plant. The result holds even in the presence of partial cloud cover.
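    The plume-inversion idea in the last paragraph can be sketched with a toy Gaussian plume: because the concentration field is linear in the emission rate Q, each noisy snapshot yields a least-squares estimate of Q, and averaging over snapshots beats any single one, as the abstract reports. All numbers below (wind speed, stack height, noise level, linear dispersion coefficients) are illustrative assumptions, not geoCARB values:

    ```python
    import numpy as np

    def plume(Q, u, H, x, y, a=0.08, b=0.06):
        """Ground-level concentration of a Gaussian plume from a stack of height H,
        emission rate Q and wind speed u; sigma_y = a*x, sigma_z = b*x is a crude
        linear dispersion model (an assumption of this sketch)."""
        sy, sz = a * x, b * x
        return (Q / (2 * np.pi * u * sy * sz)
                * np.exp(-y**2 / (2 * sy**2))
                * 2.0 * np.exp(-H**2 / (2 * sz**2)))      # receptor at z = 0

    rng = np.random.default_rng(1)
    x, y = np.meshgrid(np.linspace(200.0, 2000.0, 40), np.linspace(-400.0, 400.0, 40))
    Q_true, u, H = 500.0, 5.0, 100.0      # g/s, m/s, m (illustrative)

    # the field is linear in Q, so least squares per snapshot is a projection onto
    # the unit-emission template; averaging the estimates suppresses the noise
    template = plume(1.0, u, H, x, y)
    noise_sd = 0.2 * Q_true * template.max()
    estimates = [((plume(Q_true, u, H, x, y) + rng.normal(0.0, noise_sd, x.shape))
                  * template).sum() / (template**2).sum()
                 for _ in range(25)]
    Q_hat = float(np.mean(estimates))
    ```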

  16. Procedure manual for the estimation of average indoor radon-daughter concentrations using the filtered alpha-track method

    International Nuclear Information System (INIS)

    George, J.L.

    1988-04-01

    One of the measurement needs of US Department of Energy (DOE) remedial action programs is the estimation of the annual-average indoor radon-daughter concentration (RDC) in structures. The filtered alpha-track method, using a 1-year exposure period, can be used to make RDC estimations for the DOE remedial action programs. This manual describes the procedure used to obtain filtered alpha-track measurements and to derive average RDC estimates from the measurements. Appropriate quality-assurance and quality-control programs are also presented. The "prompt" alpha-track method of exposing monitors for 2 to 6 months during specific periods of the year is also briefly discussed in this manual. However, the prompt alpha-track method has been validated only for use in the Mesa County, Colorado, area. 3 refs., 3 figs

  17. Estimation of Radionuclide Concentrations and Average Annual Committed Effective Dose due to Ingestion for the Population in the Red River Delta, Vietnam.

    Science.gov (United States)

    Van, Tran Thi; Bat, Luu Tam; Nhan, Dang Duc; Quang, Nguyen Hao; Cam, Bui Duy; Hung, Luu Viet

    2018-02-16

    Radioactivity concentrations of nuclides of the 232Th and 238U decay chains and of 40K, 90Sr, 137Cs, and 239+240Pu were surveyed in raw and cooked food of the population of the Red River delta region, Vietnam, using α- and γ-spectrometry and liquid scintillation counting techniques. The concentration of 40K in the cooked food was the highest of the radionuclides measured, ranging from (23 ± 5) Bq kg⁻¹ dw (rice) to (347 ± 50) Bq kg⁻¹ dw (tofu). The 210Po concentration in the cooked food ranged from its limit of detection (LOD) of 5 mBq kg⁻¹ dw (rice) to (4.0 ± 1.6) Bq kg⁻¹ dw (marine bivalves). The concentrations of other nuclides of the 232Th and 238U chains in the food were low, ranging from the LOD of 0.02 Bq kg⁻¹ dw to (1.1 ± 0.3) Bq kg⁻¹ dw. The activity concentrations of 90Sr, 137Cs, and 239+240Pu in the food were minor compared with those of the natural radionuclides. The average annual committed effective dose to adults in the study region was estimated; it ranged from 0.24 to 0.42 mSv a⁻¹ with an average of 0.32 mSv a⁻¹, of which rice, leafy vegetables, and tofu contributed up to 16.2%, 24.4%, and 21.3%, respectively. The committed effective doses to adults due to ingestion of a regular diet in the Red River delta region, Vietnam are within the range determined in other countries worldwide. This finding suggests that Vietnamese food is safe for human consumption with respect to radiation exposure.
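    The annual committed effective dose in this kind of survey is the sum, over nuclides and food items, of activity concentration × annual intake × ingestion dose coefficient. A sketch using ICRP Publication 72 adult ingestion coefficients and the abstract's rice concentrations; the 120 kg/a intake figure is an assumption of this sketch, not the paper's value:

    ```python
    # adult ingestion dose coefficients, Sv/Bq (ICRP Publication 72)
    DOSE_COEFF = {"K-40": 6.2e-9, "Po-210": 1.2e-6}

    def annual_dose_mSv(conc_bq_per_kg, intake_kg_per_a):
        """Committed effective dose (mSv/a) from one food item:
        sum over nuclides of concentration x annual intake x dose coefficient."""
        return 1e3 * sum(conc_bq_per_kg[n] * intake_kg_per_a * DOSE_COEFF[n]
                         for n in conc_bq_per_kg)

    # rice at 23 Bq/kg of 40K and 5 mBq/kg of 210Po, assumed intake 120 kg/a
    rice = {"K-40": 23.0, "Po-210": 0.005}
    dose = annual_dose_mSv(rice, 120.0)
    ```

    The full survey repeats this over every nuclide and food group and sums the contributions.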

  18. Analysis for average heat transfer empirical correlation of natural convection on the concentric vertical cylinder modelling of APWR

    International Nuclear Information System (INIS)

    Daddy Setyawan

    2011-01-01

    There are several passive safety systems in the APWR reactor design. One of them is the cooling system based on natural air circulation over the surface of the concentric vertical cylinder containment wall. Since the performance of natural air circulation in the Passive Containment Cooling System (PCCS) is safety related, the cooling characteristics of natural air circulation on a concentric vertical cylinder containment wall should be studied experimentally. This paper focuses on an experimental study of the heat transfer coefficient of natural air circulation, at varied heat flux levels, on a model of the APWR concentric vertical cylinder containment wall. The experimental study comprised four stages: design of an APWR containment model at 1:40 scale, assembly of the containment model with its instrumentation, calibration, and experimentation. Experiments were conducted in transient and steady state with the heat flux varied from 119 W/m² to 575 W/m². From the experimental results, the average natural-convection heat transfer empirical correlation Nu_L = 0.008(Ra*_L)^0.68 was obtained for the concentric vertical cylinder geometry model of the APWR. (author)
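    The reported correlation can be applied directly to recover an average heat transfer coefficient via h = Nu·k/L. A sketch with an illustrative air conductivity, characteristic length, and modified Rayleigh number (all assumed; only the correlation itself comes from the abstract):

    ```python
    def nusselt(ra_star):
        """Average-Nu correlation from the experiments: Nu_L = 0.008 (Ra*_L)^0.68."""
        return 0.008 * ra_star ** 0.68

    def heat_transfer_coeff(ra_star, k_air=0.026, L=1.0):
        """Average natural-convection coefficient h = Nu k / L in W/(m^2 K),
        with assumed air conductivity k_air and characteristic height L."""
        return nusselt(ra_star) * k_air / L

    # hypothetical modified Rayleigh number for the scaled containment wall
    h = heat_transfer_coeff(ra_star=1.0e9)
    ```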

  19. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    International Nuclear Information System (INIS)

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell

  20. Development of a stacked ensemble model for forecasting and analyzing daily average PM2.5 concentrations in Beijing, China.

    Science.gov (United States)

    Zhai, Binxu; Chen, Jianguo

    2018-04-18

    A stacked ensemble model is developed for forecasting and analyzing the daily average concentrations of fine particulate matter (PM2.5) in Beijing, China. Special feature extraction procedures, including simplification, polynomial expansion, transformation, and combination, are conducted before modeling to identify potentially significant features based on an exploratory data analysis. Stability feature selection and tree-based feature selection methods are applied to select important variables and evaluate the degrees of feature importance. Single models including LASSO, AdaBoost, XGBoost and a multi-layer perceptron optimized by the genetic algorithm (GA-MLP) are established in the level-0 space and are then integrated by support vector regression (SVR) in the level-1 space via stacked generalization. A feature importance analysis reveals that nitrogen dioxide (NO2) and carbon monoxide (CO) concentrations measured in the city of Zhangjiakou are the most important pollution factors for forecasting PM2.5 concentrations. Local extreme wind speeds and maximal wind speeds are found to exert the strongest meteorological effects on the cross-regional transport of contaminants. Pollutants found in the cities of Zhangjiakou and Chengde have a stronger impact on air quality in Beijing than other surrounding factors. Our model evaluation shows that the ensemble model generally performs better than a single nonlinear forecasting model when applied to new data, with a coefficient of determination (R²) of 0.90 and a root mean squared error (RMSE) of 23.69 μg/m³. For single-pollutant grade recognition, the proposed model performs better when applied to days characterized by good air quality than when applied to days registering high levels of pollution. The overall classification accuracy level is 73.93%, with most misclassifications made among adjacent categories. The results demonstrate the interpretability and generalizability of
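    The two-level architecture (level-0 learners combined by an SVR in level 1 via out-of-fold predictions) can be sketched with scikit-learn on synthetic data. Gradient boosting stands in for XGBoost to keep dependencies minimal, the MLP is untuned rather than GA-optimized, and the target is standardized so the default SVR settings are reasonable; all of these substitutions are assumptions of this sketch, not the paper's setup:

    ```python
    from sklearn.datasets import make_regression
    from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                                  StackingRegressor)
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=600, n_features=12, noise=10.0, random_state=0)
    y = (y - y.mean()) / y.std()            # standardize the target
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    stack = StackingRegressor(
        estimators=[                        # level-0 learners
            ("lasso", Lasso(alpha=0.01)),
            ("ada", AdaBoostRegressor(random_state=0)),
            ("gbt", GradientBoostingRegressor(random_state=0)),
            ("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                 random_state=0)),
        ],
        final_estimator=SVR(),              # level-1 generalizer, as in the paper
        cv=5,                               # out-of-fold level-0 predictions
    )
    stack.fit(X_tr, y_tr)
    r2 = stack.score(X_te, y_te)
    ```

    The `cv=5` argument is what makes this stacked generalization rather than simple blending: the level-1 SVR is trained on cross-validated level-0 predictions, which guards against the base learners leaking training-set fit into the meta-model.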

  1. A meta-analysis of cortisol concentration, vocalization, and average daily gain associated with castration in beef cattle.

    Science.gov (United States)

    Canozzi, Maria Eugênia Andrighetto; Mederos, America; Manteca, Xavier; Turner, Simon; McManus, Concepta; Zago, Daniele; Barcellos, Júlio Otávio Jardim

    2017-10-01

    A systematic review and meta-analysis (MA) were performed to summarize all scientific evidence for the effects of castration of male beef cattle on welfare indicators based on cortisol concentration, average daily gain (ADG), and vocalization. We searched five electronic databases and conference proceedings, and experts were contacted electronically. The main inclusion criteria were completed studies using beef cattle up to one year of age undergoing surgical or non-surgical castration that reported cortisol concentration, ADG, or vocalization as an outcome. A random-effects MA was conducted for each indicator separately with the means of the control and treated groups. A total of 20 publications reporting 26 studies and 162 trials, involving 1814 cattle, were included in the MA. Between-study heterogeneity was observed when analysing cortisol (I² = 56.7%) and ADG (I² = 79.6%). Surgical and non-surgical castration without drug administration showed no change (P ≥ 0.05) in cortisol level compared with uncastrated animals. Multimodal therapy for pain did not decrease (P ≥ 0.05) cortisol concentration after 30 min when non-surgical castration was performed. Comparison between surgical castration with and without anaesthesia showed a tendency (P = 0.077) toward decreased cortisol levels 120 min after the intervention. Non-surgical and surgical castration performed with no pain mitigation increased, or tended to increase, ADG by 0.814 g/d (P = 0.001) and by 0.140 g/d (P = 0.091), respectively, compared with a non-castrated group. Our MA is inconclusive and does not support recommendations on preferred castration practices to minimize pain in beef cattle. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Predicting long-term average concentrations of traffic-related air pollutants using GIS-based information

    Science.gov (United States)

    Hochadel, Matthias; Heinrich, Joachim; Gehring, Ulrike; Morgenstern, Verena; Kuhlbusch, Thomas; Link, Elke; Wichmann, H.-Erich; Krämer, Ursula

    Global regression models were developed to estimate individual levels of long-term exposure to traffic-related air pollutants. The models are based on data from a one-year measurement programme together with geographic data on traffic and population densities. This investigation is part of a cohort study on the impact of traffic-related air pollution on respiratory health, conducted at the westerly end of the Ruhr area in North Rhine-Westphalia, Germany. Concentrations of NO2, fine particle mass (PM2.5), and filter absorbance of PM2.5 as a marker for soot were measured at 40 sites spread throughout the study region. Fourteen-day samples were taken between March 2002 and March 2003 for each season and site. Annual average concentrations for the sites were determined after adjustment for temporal variation. Information on traffic counts on major roads, building densities, and community population figures was collected in a geographical information system (GIS). This information was used to calculate different potential traffic-based predictors: (a) daily traffic flow and maximum traffic intensity within buffers of radii from 50 to 10 000 m, and (b) distances to main roads and highways. NO2 concentration and PM2.5 absorbance were strongly correlated with the traffic-based variables. Linear regression prediction models, which involved predictors with radii of 50 to 1000 m, were developed for the Wesel region, where most of the cohort members lived. They reached a model fit (R²) of 0.81 and 0.65 for NO2 and PM2.5 absorbance, respectively. Regression models for the whole area required larger spatial scales and reached R² = 0.90 and 0.82. Comparison of predicted values with NO2 measurements at independent public monitoring stations showed a satisfactory association (r = 0.66). PM2.5 concentration, however, was only slightly correlated with, and thus poorly predicted by, the traffic-based variables. GIS-based regression models offer a promising approach to assess individual levels of
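    The land-use-regression approach in the abstract can be sketched end to end with synthetic data: GIS-derived buffer predictors at each monitoring site, a linear model for the annual mean, and R² as the fit measure. The predictor names, scales, and coefficients below are invented for illustration, not the study's values:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_sites = 40                                    # the study measured 40 sites
    traffic_300m = rng.lognormal(9, 0.5, n_sites)   # daily traffic flow, 300 m buffer
    pop_density = rng.lognormal(7, 0.4, n_sites)    # community population density
    dist_highway = rng.uniform(50, 5000, n_sites)   # distance to nearest highway, m

    # synthetic "annual mean NO2" with an assumed linear structure plus noise
    no2 = (18 + 1.2e-3 * traffic_300m + 2e-3 * pop_density
           - 1.5e-3 * dist_highway + rng.normal(0, 2, n_sites))

    X = np.column_stack([traffic_300m, pop_density, dist_highway])
    lur = LinearRegression().fit(X, no2)
    r2 = lur.score(X, no2)
    ```

    The fitted model can then be evaluated at any cohort member's address from the same GIS layers, which is exactly how such studies assign individual exposures.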

  3. Global Estimates of Average Ground-Level Fine Particulate Matter Concentrations from Satellite-Based Aerosol Optical Depth

    Science.gov (United States)

    Van Donkelaar, A.; Martin, R. V.; Brauer, M.; Kahn, R.; Levy, R.; Verduzco, C.; Villeneuve, P.

    2010-01-01

    Exposure to airborne particles can cause acute or chronic respiratory disease and can exacerbate heart disease, some cancers, and other conditions in susceptible populations. Ground stations that monitor fine particulate matter in the air (smaller than 2.5 microns, called PM2.5) are positioned primarily to observe severe pollution events in areas of high population density; coverage is very limited, even in developed countries, and is not well designed to capture the long-term, lower-level exposure that is increasingly linked to chronic health effects. In many parts of the developing world, air quality observation is absent entirely. Instruments aboard NASA Earth Observing System satellites, such as the MODerate resolution Imaging Spectroradiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR), monitor aerosols from space, providing once-daily and about once-weekly coverage, respectively. However, these data are only rarely used for health applications, in part because they can retrieve only the aerosol amount summed over the entire atmospheric column, rather than the near-surface component in the airspace humans actually breathe. In addition, air quality monitoring often includes detailed analysis of particle chemical composition, which is impossible from space. In this paper, near-surface aerosol concentrations are derived globally from the total-column aerosol amounts retrieved by MODIS and MISR. A computer aerosol simulation is used to determine how much of the satellite-retrieved total-column aerosol amount is near the surface. The five-year average (2001-2006) global near-surface aerosol concentration shows that World Health Organization air quality standards are exceeded over parts of central and eastern Asia for nearly half the year.
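    The core of the derivation is a scaling step: an aerosol simulation supplies the ratio η of near-surface PM2.5 to total-column aerosol optical depth (AOD) for each location, and multiplying the satellite-retrieved AOD by η yields the surface estimate. A minimal sketch with made-up numbers (the function and its inputs are illustrative, not the paper's actual values):

    ```python
    def surface_pm25(aod_satellite, pm25_model, aod_model):
        """Estimate ground-level PM2.5 (µg/m³) from satellite total-column AOD,
        using eta = model surface PM2.5 / model AOD as the column-to-surface ratio."""
        eta = pm25_model / aod_model
        return eta * aod_satellite

    # hypothetical values: satellite AOD 0.30; model says 24 µg/m³ goes with AOD 0.40
    pm = surface_pm25(aod_satellite=0.30, pm25_model=24.0, aod_model=0.40)  # → 18.0
    ```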

  4. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    Science.gov (United States)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a general formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement-error variances and correlation functions, for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
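    The comparison between the composite average and the optimal (minimum mean-squared-error) linear estimate can be reproduced in miniature: simulate a correlated signal, sample it irregularly with noise, and compare the two estimators of the time average over many realizations. The exponential covariance model and every parameter below are assumptions of this sketch, not the paper's values:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    sig2, T, noise2 = 1.0, 10.0, 0.25       # signal variance, correlation time, noise
    cov = lambda dt: sig2 * np.exp(-np.abs(dt) / T)

    t_grid = np.linspace(0.0, 30.0, 301)    # the averaging period is [0, 30]
    t_obs = np.sort(rng.uniform(0.0, 10.0, 8))   # irregular, clustered sampling

    # weights of the optimal linear estimate a_hat = w^T y (zero-mean Gauss-Markov)
    C = cov(t_obs[:, None] - t_obs[None, :]) + noise2 * np.eye(len(t_obs))
    c = cov(t_grid[:, None] - t_obs[None, :]).mean(axis=0)  # cov(time average, y_i)
    w_opt = np.linalg.solve(C, c)

    K = cov(t_grid[:, None] - t_grid[None, :])
    Lchol = np.linalg.cholesky(K + 1e-9 * np.eye(len(t_grid)))
    mse_comp = mse_opt = 0.0
    for _ in range(600):
        s = Lchol @ rng.standard_normal(len(t_grid))      # one signal realisation
        yobs = (np.interp(t_obs, t_grid, s)
                + rng.normal(0.0, np.sqrt(noise2), len(t_obs)))
        truth = s.mean()
        mse_comp += (yobs.mean() - truth) ** 2            # composite average
        mse_opt += (w_opt @ yobs - truth) ** 2            # optimal estimate
    mse_comp /= 600
    mse_opt /= 600
    ```

    Because the observations are clustered in part of the window, the composite average overweights that part, while the optimal weights account for both the redundancy among nearby samples and the gap they leave, yielding a smaller mean squared error.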

  5. Catching the Highest Energy Neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Stanev, Todor [Bartol Research Institute and Department of Physics and Astronomy, University of Delaware, Newark, DE 19716 (United States)

    2011-08-15

    We briefly discuss the possible sources of ultrahigh energy neutrinos and the methods for their detection. Then we present the results obtained by different experiments for detection of the highest energy neutrinos.

  6. The influence of poly(acrylic) acid number average molecular weight and concentration in solution on the compressive fracture strength and modulus of a glass-ionomer restorative.

    LENUS (Irish Health Repository)

    Dowling, Adam H

    2011-06-01

    The aim was to investigate the influence of number average molecular weight and concentration of the poly(acrylic) acid (PAA) liquid constituent of a GI restorative on the compressive fracture strength (σ) and modulus (E).

  7. The average concentrations of As, Cd, Cr, Hg, Ni and Pb in residential soil and drinking water obtained from springs and wells in Rosia Montana area.

    Data.gov (United States)

    U.S. Environmental Protection Agency — The average concentrations of As, Cd, Cr, Hg, Ni and Pb in n=84 residential soil samples, in Rosia Montana area, analyzed by X-ray fluorescence spectrometry are...

  8. Lowest cost due to highest productivity and highest quality

    Science.gov (United States)

    Wenk, Daniel

    2003-03-01

    Since global purchasing in the automotive industry has spread around the world, one key factor makes a Tailored Blank (TB) supplier successful today: producing the highest quality at the lowest cost. Because Tailored Blanks, which may now account for up to 1/3 of a car body's weight, are purchased on the free market from different steel suppliers, especially in Europe and NAFTA, the philosophy on the OEM side has gradually shifted toward tough evaluation criteria. "No risk at the stamping side" calls for top-quality Tailored or Tubular Blank products. Outsourcing of Tailored Blanks started in Japan, but so far without any quality requirement from the OEM side such as ISO 13919-1B (the welding quality standard in Europe and the USA). Increased competition will automatically push the quality level, and the ongoing approach of combining high-strength steel with Tailored and Tubular Blanks will demand even more reliable system concepts capable of welding narrow seams at the highest speed. Besides producing quality, which is the key to reducing one of the most important cost drivers, material scrap, in-line quality systems with true and reliable evaluation are becoming a must on all weld systems. Traceability of all process-related data, submitted to interfaces according to customer requests, in combination with ghost-shift operation of TB systems, is tomorrow's state-of-the-art solution for Tailored Blank facilities.

  9. Greater-than-Class C low-level waste characterization. Appendix I: Impact of concentration averaging low-level radioactive waste volume projections

    International Nuclear Information System (INIS)

    Tuite, P.; Tuite, K.; O'Kelley, M.; Ely, P.

    1991-08-01

    This study provides a quantitative framework for bounding unpackaged greater-than-Class C low-level radioactive waste types as a function of concentration averaging. The study defines the three concentration averaging scenarios that lead to base, high, and low volumetric projections; identifies those waste types that could be greater-than-Class C under the high volume, or worst case, concentration averaging scenario; and quantifies the impact of these scenarios on identified waste types relative to the base case scenario. The base volume scenario was assumed to reflect current requirements at the disposal sites as well as the regulatory views. The high volume scenario was assumed to reflect the most conservative criteria as incorporated in some compact host state requirements. The low volume scenario was assumed to reflect the 10 CFR Part 61 criteria as applicable to both shallow land burial facilities and to practices that could be employed to reduce the generation of Class C waste types

  10. On the applicability of short time measurements to the determination of annual average of radon concentration in dwelling

    International Nuclear Information System (INIS)

    Loskiewicz, J.; Olko, P.; Swakon, J.; Bogacz, J.; Janik, M.; Mazur, D.; Mazur, J.

    1998-01-01

    The variation of radon concentration in some houses in the Krakow region was investigated in order to compare results obtained using various measuring techniques. It is concluded that short-term measurements should last at least 4 days to avoid errors exceeding 30%; that weather parameters and human activity during the measurement should be recorded; that measurements should be repeated several times under various weather conditions; that seasonal variation in the region should be taken into account. (A.K.)

  11. Up to the highest peak!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    In the early hours of this morning, the beam energy was ramped up to 3.5 TeV, a new world record and the highest energy for this year’s run. Now operators will prepare the machine to make high-energy collisions later this month. CERN Operations Group leader Mike Lamont (foreground) and LHC engineer in charge Alick Macpherson in the CERN Control Centre early this morning. At 5:23 this morning, Friday 19 March, the energy of both beams in the LHC was ramped up to 3.5 TeV, a new world record. During the night, operators had tested the performance of the whole machine with two so-called ‘dry runs’, that is, without beams. Given the good overall response, beams were injected at around 3:00 a.m. and stabilized soon after. The ramp started at around 4:10 and lasted about one hour. Over the last couple of weeks, operation of the LHC at 450 GeV has become routinely reproducible. The operators were able to test and optimize the beam orbit, the beam collimation, the injection and ext...

  12. Satellite-derived ice data sets no. 2: Arctic monthly average microwave brightness temperatures and sea ice concentrations, 1973-1976

    Science.gov (United States)

    Parkinson, C. L.; Comiso, J. C.; Zwally, H. J.

    1987-01-01

A summary data set for four years (1973-1976) of Arctic sea ice conditions is available on magnetic tape. The data include monthly and yearly averaged Nimbus 5 electrically scanning microwave radiometer (ESMR) brightness temperatures, an ice concentration parameter derived from the brightness temperatures, monthly climatological surface air temperatures, and monthly climatological sea level pressures. All data matrices are applied to 293 by 293 grids that cover a polar stereographic map enclosing the 50 deg N latitude circle. The grid size varies from about 32 × 32 km at the poles to about 28 × 28 km at 50 deg N. The ice concentration parameter is calculated assuming that the field of view contains only open water and first-year ice with an ice emissivity of 0.92. To account for the presence of multiyear ice, a nomogram is provided relating the ice concentration parameter, the total ice concentration, and the fraction of the ice cover which is multiyear ice.
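The one-channel retrieval described above (a field of view assumed to contain only open water and first-year ice at emissivity 0.92) can be sketched as a linear mixing model. The tie-point values below are hypothetical placeholders, not the actual ESMR calibration constants:

```python
def ice_concentration(tb, tb_open_water=135.0, t_ice_phys=250.0, emissivity=0.92):
    """Fraction of the field of view covered by first-year ice, assuming
    only open water and first-year ice contribute to the observed
    brightness temperature tb (in kelvin)."""
    tb_ice = emissivity * t_ice_phys          # brightness temperature of 100% ice cover
    frac = (tb - tb_open_water) / (tb_ice - tb_open_water)
    return min(max(frac, 0.0), 1.0)           # clamp to the physical range [0, 1]
```

An observed brightness temperature at the open-water tie point gives concentration 0; values at or above the ice tie point clamp to 1. Accounting for multiyear ice, as the record notes, requires the additional nomogram rather than this two-surface model.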

  13. Verification of average daily maximum permissible concentration of styrene in the atmospheric air of settlements under the results of epidemiological studies of the children’s population

    Directory of Open Access Journals (Sweden)

    М.А. Zemlyanova

    2015-03-01

We present materials on the verification of the average daily maximum permissible concentration of styrene in the atmospheric air of settlements, based on the results of our own in-depth epidemiological studies of the children's population, following the principles of international risk-assessment practice. It was established that children aged 4-7 years exposed to styrene at levels above 1.2 times the threshold level value for continuous exposure develop negative effects in the form of disorders of hormonal regulation, pigment exchange, antioxidative activity, cytolysis, immune reactivity and cytogenetic imbalance, which contribute to increased morbidity from diseases of the central nervous system, endocrine system, respiratory organs, digestion and skin. Based on the proved cause-and-effect relationships between the biomarkers of negative effects and the styrene concentration in blood, it was demonstrated that the benchmark styrene concentration in blood is 0.002 mg/dm3. The justified value complies with and confirms the average daily styrene concentration in the air of settlements at the level of 0.002 mg/m3 accepted in Russia, which provides safety for the health of the population (1 threshold level value for continuous exposure).

  14. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  15. Greater-than-Class C low-level radioactive waste characterization. Appendix E-5: Impact of the 1993 NRC draft Branch Technical Position on concentration averaging of greater-than-Class C low-level radioactive waste

    International Nuclear Information System (INIS)

    Tuite, P.; Tuite, K.; Harris, G.

    1994-09-01

    This report evaluates the effects of concentration averaging practices on the disposal of greater-than-Class C low-level radioactive waste (GTCC LLW) generated by the nuclear utility industry and sealed sources. Using estimates of the number of waste components that individually exceed Class C limits, this report calculates the proportion that would be classified as GTCC LLW after applying concentration averaging; this proportion is called the concentration averaging factor. The report uses the guidance outlined in the 1993 Nuclear Regulatory Commission (NRC) draft Branch Technical Position on concentration averaging, as well as waste disposal experience at nuclear utilities, to calculate the concentration averaging factors for nuclear utility wastes. The report uses the 1993 NRC draft Branch Technical Position and the criteria from the Barnwell, South Carolina, LLW disposal site to calculate concentration averaging factors for sealed sources. The report addresses three waste groups: activated metals from light water reactors, process wastes from light-water reactors, and sealed sources. For each waste group, three concentration averaging cases are considered: high, base, and low. The base case, which is the most likely case to occur, assumes using the specific guidance given in the 1993 NRC draft Branch Technical Position on concentration averaging. To project future GTCC LLW generation, each waste category is assigned a concentration averaging factor for the high, base, and low cases
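The concentration averaging factor defined above (the proportion of components, each individually above Class C limits, that remain GTCC after averaging) can be illustrated with a toy calculation. The fixed-container averaging rule below is a simplification introduced purely for illustration; the actual 1993 NRC draft Branch Technical Position guidance is considerably more detailed:

```python
def concentration_averaging_factor(components, class_c_limit, container_volume):
    """Fraction of waste components that remain GTCC after averaging.

    components: list of (concentration, volume) pairs, e.g. Ci/m3 and m3,
    each individually above class_c_limit.  Simplified rule: a component's
    total activity is spread uniformly over container_volume."""
    still_gtcc = sum(
        1 for conc, vol in components
        if conc * vol / container_volume > class_c_limit
    )
    return still_gtcc / len(components)
```

For example, two 1 m3 components at 10 and 100 Ci/m3, a 5 Ci/m3 limit and a 10 m3 container give averaged concentrations of 1 and 10 Ci/m3, so the factor is 0.5.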

  16. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  17. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  18. Origin of the highest energy cosmic rays

    Energy Technology Data Exchange (ETDEWEB)

    Biermann, Peter L.; Ahn, Eun-Joo; Medina-Tanco, Gustavo; Stanev, Todor

    2000-06-01

    Introducing a simple Galactic wind model patterned after the solar wind we show that back-tracing the orbits of the highest energy cosmic events suggests that they may all come from the Virgo cluster, and so probably from the active radio galaxy M87. This confirms a long standing expectation. Those powerful radio galaxies that have their relativistic jets stuck in the interstellar medium of the host galaxy, such as 3C147, will then enable us to derive limits on the production of any new kind of particle, expected in some extensions of the standard model in particle physics. New data from HIRES will be crucial in testing the model proposed here.

  19. Directional clustering in highest energy cosmic rays

    International Nuclear Information System (INIS)

    Goldberg, Haim; Weiler, Thomas J.

    2001-01-01

    An unexpected degree of small-scale clustering is observed in highest-energy cosmic ray events. Some directional clustering can be expected due to purely statistical fluctuations for sources distributed randomly in the sky. This creates a background for events originating in clustered sources. We derive analytic formulas to estimate the probability of random cluster configurations, and use these formulas to study the strong potential of the HiRes, Auger, Telescope Array and EUSO-OWL-AirWatch facilities for deciding whether any observed clustering is most likely due to nonrandom sources. For a detailed comparison to data, our analytical approach cannot compete with Monte Carlo simulations, including experimental systematics. However, our derived formulas do offer two advantages: (i) easy assessment of the significance of any observed clustering, and most importantly, (ii) an explicit dependence of cluster probabilities on the chosen angular bin size

  20. The highest energies in the Universe

    International Nuclear Information System (INIS)

    Rebel, H.

    2006-01-01

There are not many issues of fundamental importance which have induced so many problems for astrophysicists as the question of the origin of cosmic rays. This radiation from outer space has an energy density comparable with that of visible starlight or of the microwave background radiation. It is an important feature of our environment with many interesting aspects. A most conspicuous feature is that the energy spectrum of cosmic rays seems to have no natural end, though resonant photopion production with the cosmic microwave background predicts a suppression of extragalactic protons above the so-called Greisen-Zatsepin-Kuz'min cutoff at about E_GZK = 5 × 10^19 eV. In fact, the highest particle energies ever observed on Earth stem from observations of Ultrahigh Energy Cosmic Rays (E > 3 × 10^19 eV). But the present observations by the AGASA and HiRes Collaborations, partly a matter of debate, raise a number of puzzling questions: where these particles are coming from, by which gigantic acceleration mechanism they could gain such tremendous energies, and how they have been able to propagate to our Earth. These questions imply serious problems for our understanding of the Universe. There are several approaches to clarify the mysteries of the highest energies and to base the observations on larger statistical accuracy. The Pierre Auger Observatory, now being installed in the Pampa Amarilla in the Province of Mendoza in Argentina, is a hybrid detector, combining a large array of water Cerenkov detectors (registering charged particles generated in giant extended air showers) with measurements of the fluorescence light produced during the air shower development. This contribution illustrates the astrophysical motivation and the current status of the experimental efforts, and sketches the ideas about the origin of these particles.

  1. Concentrations and uncertainties of stratospheric trace species inferred from limb infrared monitor of the stratosphere data. I - Methodology and application to OH and HO2. II - Monthly averaged OH, HO2, H2O2, and HO2NO2

    Science.gov (United States)

    Kaye, J. A.; Jackman, C. H.

    1986-01-01

Difficulties arise in connection with the verification of multidimensional chemical models of the stratosphere. The present study shows that LIMS data, together with a photochemical equilibrium model, may be used to infer concentrations of a variety of zonally averaged trace Ox, OHx, and NOx species over much of the stratosphere. In the lower stratosphere, where the photochemical equilibrium assumption for HOx species breaks down, inferred concentrations should still be accurate to about a factor of 2 for OH and 2.5 for HO2. The algebraic nature of the considered model makes it easy to see, to first order, the effect of variation of any model input parameter or its uncertainty on the inferred concentration of the HOx species and their uncertainties.

  2. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  3. Cortex Matures Faster in Youths With Highest IQ

    Science.gov (United States)

... NIH, Past Issues / Summer 2006. Youths with superior IQ are distinguished by how fast the thinking part ...

  4. Which Kids Are at Highest Risk for Suicide?

    Science.gov (United States)

... No child is immune, ... who have lost a friend or relative to suicide. Studies show that a considerable number of youth ...

  5. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
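The two approaches the paper compares can be sketched in the simplest setting, planar rotations (SO(2)): the extrinsic barycenter of the embedded representations versus the intrinsic (Riemannian) mean. This is an illustrative sketch under that simplification, not the paper's own implementation:

```python
import math

def barycenter_mean(angles):
    """Extrinsic estimate: average the embedded unit vectors (cos a, sin a)
    that represent the rotations, then project back onto the circle."""
    x = sum(math.cos(a) for a in angles) / len(angles)
    y = sum(math.sin(a) for a in angles) / len(angles)
    return math.atan2(y, x)

def riemannian_mean(angles, iters=50):
    """Intrinsic mean: iteratively re-average the geodesic (angular)
    residuals until the mean residual vanishes."""
    mu = angles[0]
    for _ in range(iters):
        # signed geodesic distance from mu to each rotation, wrapped to (-pi, pi]
        resid = [math.atan2(math.sin(a - mu), math.cos(a - mu)) for a in angles]
        mu += sum(resid) / len(resid)
    return mu
```

For tightly clustered rotations the two estimates agree to first order, consistent with the paper's claim that barycenter methods approximate the Riemannian mean; for spread-out samples (e.g. angles 0, 0.5 and 2.0 rad) they visibly differ.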

  6. Lung Cancer Screening May Benefit Those at Highest Risk

    Science.gov (United States)

    People at the highest risk for lung cancer, based on a risk model, may be more likely to benefit from screening with low-dose CT, a new analysis suggests. The study authors believe the findings may better define who should undergo lung cancer screening, as this Cancer Currents blog post explains.

  7. Highest weight representations of the quantum algebra Uh(gl∞)

    International Nuclear Information System (INIS)

    Palev, T.D.; Stoilova, N.I.

    1997-04-01

A class of highest weight irreducible representations of the quantum algebra Uh(gl∞) is constructed. Within each module a basis is introduced and the transformation relations of the basis under the action of the Chevalley generators are explicitly written. (author). 16 refs

  8. Exploring the cultural dimensions of the right to the highest ...

    African Journals Online (AJOL)

    The right to enjoying the highest attainable standard of health is incorporated in many international and regional human rights instruments. This right contains both freedoms and entitlements, including the freedom to control one's own health and body and the right to an accessible system of health care, goods and services.

  9. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  10. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  11. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  12. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...

  13. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
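For an ultimately periodic play, the long-run average of the accumulated energy described above can be computed directly. A minimal sketch, assuming (for convergence) that the repeated cycle is energy-neutral:

```python
def average_energy(prefix, cycle):
    """Long-run average of the accumulated energy along the play
    prefix . cycle^omega, where prefix and cycle are lists of
    integer energy deltas (edge weights)."""
    assert sum(cycle) == 0, "cycle must be energy-neutral, else the average diverges"
    level = sum(prefix)                  # accumulated energy after the prefix
    levels = []
    for delta in cycle:
        level += delta
        levels.append(level)
    # The Cesaro average over all positions converges to the mean level on the cycle.
    return sum(levels) / len(levels)
```

For instance, the play that repeatedly gains 2 then spends 2 units of energy alternates between levels 2 and 0, giving an average energy of 1.0; prepending a prefix that accumulates 3 units shifts the average to 3.5.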

  14. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  15. The highest energy cosmic rays, photons and neutrinos

    International Nuclear Information System (INIS)

    Zas, Enrique

    1998-01-01

In these lectures I introduce and discuss aspects of currently active fields of interest related to the production, transport and detection of high energy particles from extraterrestrial sources. I have paid most attention to the highest energies and I have divided the material according to the types of particles which will be searched for with different experimental facilities in planning: hadrons, gamma rays and neutrinos. Particular attention is given to shower development, stochastic acceleration and detection techniques

  16. Do optimally ripe blackberries contain the highest levels of metabolites?

    Science.gov (United States)

    Mikulic-Petkovsek, Maja; Koron, Darinka; Zorenc, Zala; Veberic, Robert

    2017-01-15

Five blackberry cultivars were selected for the study ('Chester Thornless', 'Cacanska Bestrna', 'Loch Ness', 'Smoothstem' and 'Thornfree') and harvested at three different maturity stages (under-, optimally and over-ripe). Optimally ripe and over-ripe blackberries contained significantly higher levels of total sugars compared to under-ripe fruit. The 'Loch Ness' cultivar was characterized by 2.2-2.6-fold higher levels of total sugars than other cultivars and, consequently, the highest sugar/acid ratio. 'Chester Thornless' stands out as the cultivar with the highest level of vitamin C in under-ripe (125.87 mg kg(-1)) and optimally mature fruit (127.66 mg kg(-1)). Maturity stage significantly affected the accumulation of phenolic compounds. The content of total anthocyanins increased by 43% at the optimal maturity stage and that of cinnamic acid derivatives by 57% compared to under-ripe fruit. Over-ripe blackberries were distinguished by the highest content of total phenolics (1251-2115 mg GAE kg(-1) FW) and the greatest FRAP values (25.9-43.2 mM TE kg(-1) FW). Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. The fifty highest cited papers in anterior cruciate ligament injury.

    Science.gov (United States)

    Vielgut, Ines; Dauwe, Jan; Leithner, Andreas; Holzer, Lukas A

    2017-07-01

The anterior cruciate ligament (ACL) is one of the most commonly injured knee ligaments and, at the same time, one of the most frequent injuries seen in sports orthopaedic practice. Due to the clinical relevance of ACL injuries, numerous papers focusing on this topic, including biomechanical, basic science, clinical and animal studies, have been published. The purpose of this study was to determine the most frequently cited scientific articles addressing this subject, establish a ranking of the 50 highest cited papers and analyse them according to their characteristics. The 50 highest cited articles related to anterior cruciate ligament injury were searched in Thomson ISI Web of Science® using defined search terms. All types of scientific papers with reference to our topic were ranked according to the absolute number of citations and analysed for the following characteristics: journal title, year of publication, number of citations, citation density, geographic origin, article type and level of evidence. The 50 highest cited articles had up to 1624 citations. The top ten papers on this topic were each cited at least 600 times. Most papers were published in the American Journal of Sports Medicine. The publication years spanned from 1941 to 2007, with the 1990s and 2000s accounting for half of the articles (n = 25). Seven countries contributed to the top 50 list, with the USA contributing by far the most (n = 40). The majority of articles could be attributed to the category "Clinical Science & Outcome". Most of them represent a high level of evidence. Scientific articles in the field of ACL injury are highly cited. The majority of these articles are clinical studies that have a high level of evidence. Although most of the articles were published between 1990 and 2007, the highest cited articles in absolute and relative numbers were published in the early 1980s. These articles contain well-established scoring or classification systems. The

  18. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    OpenAIRE

    Edmundo Guerra; Rodrigo Munguia; Yolanda Bolea; Antoni Grau

    2013-01-01

Simultaneous Localization and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hyp...

  19. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  20. Prevention of coronary and stroke events with atorvastatin in hypertensive patients who have average or lower-than-average cholesterol concentrations, in the Anglo-Scandinavian Cardiac Outcomes Trial--Lipid Lowering Arm (ASCOT-LLA): a multicentre randomised controlled trial

    DEFF Research Database (Denmark)

    Sever, Peter S; Dahlöf, Björn; Poulter, Neil R

    2003-01-01

    The lowering of cholesterol concentrations in individuals at high risk of cardiovascular disease improves outcome. No study, however, has assessed benefits of cholesterol lowering in the primary prevention of coronary heart disease (CHD) in hypertensive patients who are not conventionally deemed ...

  1. Prevention of coronary and stroke events with atorvastatin in hypertensive patients who have average or lower-than-average cholesterol concentrations, in the Anglo-Scandinavian Cardiac Outcomes Trial--Lipid Lowering Arm (ASCOT-LLA): a multicentre randomised controlled trial

    DEFF Research Database (Denmark)

    Sever, Peter S; Dahlöf, Björn; Poulter, Neil R

    2004-01-01

    The lowering of cholesterol concentrations in individuals at high risk of cardiovascular disease improves outcome. No study, however, has assessed benefits of cholesterol lowering in the primary prevention of coronary heart disease (CHD) in hypertensive patients who are not conventionally deemed ...

  2. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended to the case in which there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, past the neutron-drip line, to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  3. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  4. Robert Aymar receives one of the highest Finnish distinctions

    CERN Multimedia

    2008-01-01

    On 9 December 2008 Robert Aymar, CERN Director-General, was awarded the decoration of Commander, First Class, of the Order of the Lion of Finland by the President of the Republic of Finland. This decoration, one of Finland's highest, was presented in a ceremony by Ambassador Hannu Himanen, Permanent Representative of Finland to the UN and other international organisations in Geneva. Robert Aymar was honoured for his service to CERN and the LHC, his role in the cooperation between Finland and CERN, and his contribution to science in general. In his speech the ambassador underlined CERN’s efforts in the field of education, mentioning the High School Teachers programme.

  5. Analysis of a linear solar concentrator with stationary reflector and movable focus for medium-temperature applications; Analisis de un concetrador solar lineal con reflector estacionario y foco movil para aplicaciones de media temperatura

    Energy Technology Data Exchange (ETDEWEB)

    Pujol, R.; Moia, A.; Martinez, V.

    2008-07-01

    Three different geometries of a fixed solar mirror concentrator with a tracking absorber have been analyzed for medium temperature: FSMC with flat mirrors, FSMC with parabolic mirrors, and a single parabolic mirror (OPMSC). These designs track the sun by moving the receiver around a static reflector in a circular path. A forward ray-tracing procedure was implemented by the authors to analyze the influence of the collector parameters on optical efficiency. Various combinations of D/W ratios and geometric concentration ratios C were studied. The analysis showed that the efficiency increases as D/W increases. Annual efficiencies of 40% can be reached, compared with the 35% estimated for commercial evacuated tubes at 120 degrees centigrade. (Author)

  6. Z-burst scenario for the highest energy cosmic rays

    International Nuclear Information System (INIS)

    Fodor, Z.

    2002-10-01

    The origin of the highest energy cosmic rays is yet unknown. An appealing possibility is the so-called Z-burst scenario, in which a large fraction of these cosmic rays are decay products of Z bosons produced in the scattering of ultrahigh energy neutrinos on cosmological relic neutrinos. The comparison between the observed and predicted spectra constrains the mass of the heaviest neutrino. The required neutrino mass is fairly robust against variations of the presently unknown quantities, such as the amount of relic neutrino clustering, the universal radio background and the extragalactic magnetic field. Considering different possibilities for the ordinary cosmic rays, the required neutrino masses are determined. In the most plausible case, that the ordinary cosmic rays are of extragalactic origin and the universal radio background is strong enough to suppress high energy photons, the required neutrino mass is 0.08 eV ≤ m_ν ≤ 0.40 eV. The required ultrahigh energy neutrino flux should be detected in the near future by experiments such as AMANDA, RICE or the Pierre Auger Observatory. (orig.)

  7. Compatibility of Firm Positioning Strategy and Website Content: Highest

    Directory of Open Access Journals (Sweden)

    Evla MUTLU KESİCİ

    2017-07-01

    Full Text Available Corporate websites are essential platforms through which firms introduce their goods and services at the B2B and B2C levels, present financial information to stakeholders, and share corporate values, purposes and activities. Because of these capabilities, websites play a part in a firm's positioning strategy. Accordingly, this study aims to understand innovation-oriented positioning through corporate websites. The method applied in this study has been adapted from the 2QCV2Q Model developed by Mich and Franch (2000) to evaluate websites, and the top 30 firms with the highest research and development expenditures listed in Turkishtime (2015) have been analyzed. Within this context, this study presents a revised and updated method for assessing websites within a positioning-strategy framework. Findings indicate no direct relationship between website evaluation and R&D expenditure, though some common weaknesses have been identified, such as sparse information about the firms' management. In addition, publicly traded firms are found to use their websites more effectively than non-publicly-traded firms. The study contributes to both academia and practitioners by putting forward a new approach to the 2QCV2Q Model and indicating the similarities and differences among corporate websites from a positioning perspective.

  8. Estimation of the center frequency of the highest modulation filter.

    Science.gov (United States)

    Moore, Brian C J; Füllgrabe, Christian; Sek, Aleksander

    2009-02-01

    For high-frequency sinusoidal carriers, the threshold for detecting sinusoidal amplitude modulation increases when the signal modulation frequency increases above about 120 Hz. Using the concept of a modulation filter bank, this effect might be explained by (1) a decreasing sensitivity or greater internal noise for modulation filters with center frequencies above 120 Hz; and (2) a limited span of center frequencies of the modulation filters, the top filter being tuned to about 120 Hz. The second possibility was tested by measuring modulation masking in forward masking using an 8 kHz sinusoidal carrier. The signal modulation frequency was 80, 120, or 180 Hz and the masker modulation frequencies covered a range above and below each signal frequency. Four highly trained listeners were tested. For the 80-Hz signal, the signal threshold was usually maximal when the masker frequency equaled the signal frequency. For the 180-Hz signal, the signal threshold was maximal when the masker frequency was below the signal frequency. For the 120-Hz signal, two listeners showed the former pattern, and two showed the latter pattern. The results support the idea that the highest modulation filter has a center frequency in the range 100-120 Hz.

  9. Kyle Cranmer receives the highest recognition from the US government

    CERN Multimedia

    Allen Mincer

    Kyle Cranmer with Clay Sell, Deputy Secretary of Energy. Kyle Cranmer, who has worked on ATLAS as a graduate student at the University of Wisconsin-Madison, a Goldhaber Fellow at Brookhaven National Laboratory, and, most recently, an Assistant Professor at New York University, has been awarded a Presidential Early Career Award for Scientists and Engineers (PECASE). As described at the United States Department of Energy web page: "The PECASE Awards are intended to recognize some of the finest scientists and engineers who, while early in their research careers, show exceptional potential for leadership at the frontiers of scientific knowledge during the twenty-first century...The PECASE Award is the highest honor bestowed by the U.S. government on outstanding scientists and engineers beginning their independent careers." Kyle's work on ATLAS focuses on tools and strategies for data analysis, triggering, and searches for the Higgs. At the awards ceremony, which took place on Thursday Nov. 1st in Washington, D.C.,...

  10. Recreational fishing selectively captures individuals with the highest fitness potential.

    Science.gov (United States)

    Sutter, David A H; Suski, Cory D; Philipp, David P; Klefoth, Thomas; Wahl, David H; Kersten, Petra; Cooke, Steven J; Arlinghaus, Robert

    2012-12-18

    Fisheries-induced evolution and its impact on the productivity of exploited fish stocks remains a highly contested research topic in applied fish evolution and fisheries science. Although many quantitative models assume that larger, more fecund fish are preferentially removed by fishing, there is no empirical evidence describing the relationship between vulnerability to capture and individual reproductive fitness in the wild. Using males from two lines of largemouth bass (Micropterus salmoides) selectively bred over three generations for either high (HV) or low (LV) vulnerability to angling as a model system, we show that the trait "vulnerability to angling" positively correlates with aggression, intensity of parental care, and reproductive fitness. The difference in reproductive fitness between HV and LV fish was particularly evident among larger males, which are also the preferred mating partners of females. Our study constitutes experimental evidence that recreational angling selectively captures individuals with the highest potential for reproductive fitness. Our study further suggests that selective removal of the fittest individuals likely occurs in many fisheries that target species engaged in parental care. As a result, depending on the ecological context, angling-induced selection may have negative consequences for recruitment within wild populations of largemouth bass and possibly other exploited species in which behavioral patterns that determine fitness, such as aggression or parental care, also affect their vulnerability to fishing gear.

  11. Academic Training - Tevatron: studying pp collisions at the highest energy

    CERN Multimedia

    2006-01-01

    ACADEMIC TRAINING LECTURE SERIES 15, 16, 17, 18 May Main Auditorium, bldg. 500 on 15, 16, 17 May - Council Chamber on 18 May Physics at the Tevatron B. HEINEMANN, Univ. of Liverpool, FERMILAB Physics Results from the Tevatron The Tevatron proton-antiproton collider at Fermilab in the US is currently the world's highest energy collider. At the experiments CDF and D0 a broad physics programme is being pursued, ranging from flavour physics via electroweak precision measurements to searches for the Higgs boson and new particles beyond the Standard Model. In my lecture I will describe some of the highlight measurements in the flavour, electroweak and searches sectors, and the experimental techniques that are used. ENSEIGNEMENT ACADEMIQUE ACADEMIC TRAINING Françoise Benz 73127 academic.training@cern.ch If you wish to participate in one of the following courses, please inform your supervisor and apply electronically from the course description pages that can be found on the Web at: http://www.cern.ch/...

  12. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Localization and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data-association batch validation technique, the Highest Order Hypothesis Compatibility Test (HOHCT). The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features through a stochastic triangulation technique. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results in both indoor and outdoor environments, while its costs are compared with other popular approaches.

  13. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
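The identity stated under RESULTS can be checked numerically. A minimal sketch (the variable x and the weighting functions f and g are invented data, not from the paper): the g-weighted average of x minus the f-weighted average equals the f-weighted covariance of x with the ratio g/f, divided by the f-weighted average of that ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)          # the variable being averaged
f = rng.uniform(0.5, 1.5, 1000)    # first weighting function
g = rng.uniform(0.5, 1.5, 1000)    # alternative weighting function

avg_f = np.average(x, weights=f)
avg_g = np.average(x, weights=g)

r = g / f                          # ratio of the weighting functions
mean_r = np.average(r, weights=f)  # f-weighted average of the ratio
# f-weighted covariance between the variable and the ratio
cov_xr = np.average((x - avg_f) * (r - mean_r), weights=f)

# The difference between the two averages equals Cov_f(x, g/f) / E_f(g/f).
assert np.isclose(avg_g - avg_f, cov_xr / mean_r)
```

The identity is exact (up to floating point), not an approximation: expanding the f-weighted covariance term by term reduces it to the g-weighted average minus the f-weighted average.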

  14. The part of the solar spectrum with the highest influence on the formation of SOA in the continental boundary layer

    Directory of Open Access Journals (Sweden)

    M. Boy

    2002-01-01

    Full Text Available The relationship between nucleation events and spectral solar irradiance was analysed using two years of data collected at the Station for Measuring Forest Ecosystem-Atmosphere Relations (SMEAR II) in Hyytiälä, Finland. We analysed the data in two different ways. In the first step we calculated ten-nanometre average values from the irradiance measurements between 280 and 580 nm and explored whether any particular wavelength groups showed higher values on event days compared with a spectral reference curve for all days of the two years, or with reference curves for every month. The results indicated that short-wavelength irradiance between 300 and 340 nm is higher on event days in winter (February and March) compared with the monthly reference graph, but quantitatively much smaller than in spring or summer. By taking the ratio between the average values of different event classes and the yearly reference graph we obtained peaks between 1.17 and 1.6 in the short-wavelength range (300-340 nm). In the next step we included number concentrations of particles between 3 and 10 nm and calculated correlation coefficients between the different wavelength groups and the particles. The results were quite similar to those obtained previously; the highest correlation coefficients were reached for the spectral irradiance groups 3-5 (300-330 nm), with average values for the single event classes around 0.6 and a nearly linear decrease, by 30%, towards higher wavelength groups. Both analyses indicate quite clearly that short-wavelength irradiance between 300 and 330 or 340 nm is the most important solar spectral radiation for the formation of newly formed aerosols. Finally, we introduce a photochemical mechanism as one possible pathway by which short-wavelength irradiance can influence the formation of SOA, by calculating the production rate of excited oxygen. This mechanism shows in which way short-wavelength irradiance can influence the formation of new particles even though the

  15. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because, in logarithmic retrievals, the weight of the prior information depends on the abundance of the gas itself. No simple rule was found as to which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
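The basic bias mechanism can be reproduced with a toy simulation (lognormal samples standing in for retrieved abundances; the distribution parameters are invented, not the paper's): averaging logarithms and exponentiating yields the geometric mean, which falls below the true linear mean by an amount that grows with the natural variability.

```python
import numpy as np

rng = np.random.default_rng(1)
for sigma in (0.1, 0.5, 1.0):                 # increasing "natural variability"
    # Made-up lognormal "retrieved" volume mixing ratios
    vmr = rng.lognormal(mean=-13.0, sigma=sigma, size=100_000)
    linear_mean = vmr.mean()                  # linear averaging
    log_mean = np.exp(np.log(vmr).mean())     # logarithmic averaging (geometric mean)
    bias_pct = 100 * (log_mean - linear_mean) / linear_mean
    print(f"sigma={sigma}: bias of log-averaging = {bias_pct:.1f}%")
```

For a lognormal distribution the bias is exp(-sigma^2 / 2) - 1, so it is negligible for small variability but easily exceeds ten percent once sigma approaches 0.5, consistent with the magnitude the abstract reports.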

  16. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  17. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must take the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.

  18. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  19. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables
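As a sketch of the kind of missing-level correction such fits provide (illustrative only; the threshold, mean width, and spacing are invented, and this is not the specific method reviewed): if reduced neutron widths follow the Porter-Thomas distribution, a chi-squared distribution with one degree of freedom, the fraction of levels whose width exceeds a detection threshold w0 is erfc(sqrt(w0 / (2·⟨w⟩))), which lets an observed average spacing be corrected for the unseen weak levels.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n_true, mean_width, w0 = 100_000, 1.0, 0.2         # made-up parameters
# Porter-Thomas: widths are mean_width times a chi-squared(1) variate
widths = mean_width * rng.chisquare(df=1, size=n_true)

frac_observed = (widths > w0).mean()               # simulated detection
frac_predicted = math.erfc(math.sqrt(w0 / (2 * mean_width)))
assert abs(frac_observed - frac_predicted) < 0.01

# If the apparent spacing over some energy range is D_obs, the corrected
# (true) average spacing is smaller by the observed fraction:
D_obs = 10.0                                       # eV, made-up
D_true = D_obs * frac_predicted
```

The simulated detected fraction matches the closed-form Porter-Thomas prediction, and the correction simply rescales the observed spacing by that fraction; real evaluations additionally fit the truncation threshold and test the spacing distribution, as the abstract notes.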

  20. The highest velocity and the shortest duration permitting attainment of VO2max during running

    Directory of Open Access Journals (Sweden)

    Tiago Turnes

    2015-02-01

    Full Text Available DOI: http://dx.doi.org/10.5007/1980-0037.2015v17n2p226   The severe-intensity domain has important applications for the prescription of running training and the elaboration of experimental designs. The objectives of this study were: (1) to investigate the validity of a previously proposed model to estimate the shortest exercise duration (TLOW) and the highest velocity (VHIGH) at which VO2max is reached during running, and (2) to evaluate the effects of aerobic training status on these variables. Eight runners and eight physically active subjects performed several treadmill running tests to fatigue in order to mathematically estimate and to experimentally determine TLOW and VHIGH. The relationship between the time to achieve VO2max and the time to exhaustion (Tlim) was used to estimate TLOW. VHIGH was estimated using the critical velocity model. VHIGH was assumed to be the highest velocity at which VO2 was equal to or higher than the average VO2max minus one standard deviation. TLOW was defined as the Tlim associated with VHIGH. Runners presented better aerobic fitness and higher VHIGH (22.2 ± 1.9 km.h-1) than active subjects (20.0 ± 2.1 km.h-1). However, TLOW did not differ between groups (runners: 101 ± 39 s; active subjects: 100 ± 35 s). TLOW and VHIGH were not well estimated by the proposed model, with high coefficients of variation (>6%) and a low correlation coefficient (r<0.70), which reduces the validity of the model. It was concluded that aerobic training status positively affected only VHIGH. Furthermore, the proposed model presented low validity for estimating the upper boundary of the severe-intensity domain (i.e., VHIGH), irrespective of the subjects' training status.
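The critical-velocity model invoked for estimating VHIGH relates running velocity to time to exhaustion hyperbolically. A hedged sketch of its standard two-parameter form, Tlim = D' / (v - CV), with invented parameter values (the CV and D' below are not taken from the study):

```python
def tlim_seconds(v_kmh: float, cv_kmh: float = 18.0, d_prime_m: float = 200.0) -> float:
    """Predicted time to exhaustion (s) at running velocity v (km/h).

    cv_kmh is the critical velocity, d_prime_m the finite work capacity
    above it (both made-up illustrative values).
    """
    if v_kmh <= cv_kmh:
        raise ValueError("model applies only above the critical velocity")
    v_ms = v_kmh / 3.6        # convert km/h to m/s
    cv_ms = cv_kmh / 3.6
    return d_prime_m / (v_ms - cv_ms)

# Faster running shortens the predicted time to exhaustion:
assert tlim_seconds(22.0) > tlim_seconds(24.0)
```

Fitting CV and D' from several exhaustive trials, then inverting the hyperbola, is what allows a velocity like VHIGH to be paired with a predicted Tlim, which is the style of estimation the abstract evaluates.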

  1. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  2. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  3. Extreme Markup: The Fifty US Hospitals With The Highest Charge-To-Cost Ratios.

    Science.gov (United States)

    Bai, Ge; Anderson, Gerard F

    2015-06-01

    Using Medicare cost reports, we examined the fifty US hospitals with the highest charge-to-cost ratios in 2012. These hospitals have markups (ratios of charges over Medicare-allowable costs) approximately ten times their Medicare-allowable costs compared to a national average of 3.4 and a mode of 2.4. Analysis of the fifty hospitals showed that forty-nine are for profit (98 percent), forty-six are owned by for-profit hospital systems (92 percent), and twenty (40 percent) operate in Florida. One for-profit hospital system owns half of these fifty hospitals. While most public and private health insurers do not use hospital charges to set their payment rates, uninsured patients are commonly asked to pay the full charges, and out-of-network patients and casualty and workers' compensation insurers are often expected to pay a large portion of the full charges. Because it is difficult for patients to compare prices, market forces fail to constrain hospital charges. Federal and state governments may want to consider limitations on the charge-to-cost ratio, some form of all-payer rate setting, or mandated price disclosure to regulate hospital markups. Project HOPE—The People-to-People Health Foundation, Inc.

  4. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  5. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
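The ARMA recursions described here can be illustrated with a minimal first-order sketch (the path-graph Laplacian, the coefficients psi and phi, and the input signal are all invented for illustration; this is not the paper's exact design): iterating y ← psi·(L y) + phi·x converges, when |psi|·‖L‖ < 1, to y = phi·(I - psi·L)⁻¹ x, i.e. a rational "ARMA" graph frequency response phi / (1 - psi·λ).

```python
import numpy as np

n = 8
# Laplacian of a path graph on n nodes (a made-up example graph)
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0

psi, phi = 0.2, 1.0                         # |psi| * ||L|| < 1, so it converges
x = np.random.default_rng(2).normal(size=n) # graph signal to be filtered

y = np.zeros(n)
for _ in range(200):                        # each step uses only neighbour values,
    y = psi * (L @ y) + phi * x             # hence "distributed" filtering

# Steady state matches the rational (ARMA) filter applied directly:
y_exact = phi * np.linalg.solve(np.eye(n) - psi * L, x)
assert np.allclose(y, y_exact)
```

Each iteration only combines a node's value with its neighbours' (one multiplication by L), which is what makes the recursion implementable in a distributed fashion; higher-order rational responses are obtained by running several such branches in parallel.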

  6. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.

  7. Metals in the Scheldt estuary: From environmental concentrations to bioaccumulation.

    Science.gov (United States)

    Van Ael, Evy; Blust, Ronny; Bervoets, Lieven

    2017-09-01

    To investigate the relationship between metal concentrations in abiotic compartments and in aquatic species, sediment, suspended matter and several aquatic species (Polychaeta, Oligochaeta, four crustacean species, three mollusc species and eight fish species) were collected during three seasons at six locations along the Scheldt estuary (the Netherlands-Belgium) and analysed for their metal content (Ag, Cd, Co, Cr, Cu, Ni, Pb, Zn and the metalloid As). Sediment and biota tissue concentrations were significantly influenced by sampling location, but not by season. Measurements of Acid Volatile Sulphide (AVS) concentrations in relation to Simultaneously Extracted Metals (SEM) in the sediment suggested that not all metals in the sediment will be bound to sulphides and some metals might be bioavailable. For all metals but zinc, the highest concentrations were measured in invertebrate species: Ag and Ni in periwinkle, Cr, Co and Pb in oligochaete worms, and As, Cd and Cu in crabs and shrimp. In fish, for most of the metals, concentrations were highest in liver or kidney and lowest in muscle; for Zn, the highest concentrations were measured in the kidney of European smelt. For less than half of the metals were significant correlations found between sediment metal concentrations and bioaccumulated concentrations (liver/hepatopancreas or whole organism). To assess the possible human health risk from consumption, average and maximum metal concentrations in the muscle tissues were compared to the minimum risk levels (MRLs). Concentrations of As led to the highest risk potential for all consumable species. Cadmium and Cu posed a risk only when consuming the most contaminated shrimp and shore crabs. Consuming blue mussel could result in a risk for As, Cd and Cr. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  9. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics, such as those caused by faults like gear eccentricity. Moreover, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that FTDA is capable of recovering the periodic components from the background noise effectively; moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
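
    The conventional TDA that FTDA improves upon can be sketched in a few lines. This is a generic illustration (not the authors' FTDA), assuming the period is an integer number of samples, which is exactly the assumption whose violation causes PCE:

```python
import numpy as np

def time_domain_average(signal, period):
    """Classical TDA: stack an integer number of periods and average them.
    Components synchronous with `period` are preserved; noise and
    non-synchronous components are attenuated."""
    n_periods = len(signal) // period
    segments = signal[:n_periods * period].reshape(n_periods, period)
    return segments.mean(axis=0)

# Synthetic periodic component buried in noise
rng = np.random.default_rng(0)
period = 200
t = np.arange(period * 50)
clean = np.sin(2 * np.pi * t / period)
noisy = clean + rng.normal(scale=1.0, size=t.size)

averaged = time_domain_average(noisy, period)
residual = averaged - clean[:period]   # leftover noise after averaging 50 periods
```

    Averaging N periods reduces the noise standard deviation by roughly sqrt(N). The hidden assumption is that the period spans an integer number of samples; when it does not, the segments are cut at slightly wrong boundaries, which is the period cutting error the abstract refers to.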

  10. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    This paper formulates a new datamining problem: which subset of the input space has the relatively highest output, where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem and compares it with clustering of above-average individuals.
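
    The paper's own algorithm is not given in this record, but the stated problem can be illustrated by brute force in one dimension: among all contiguous windows of the input containing at least k points, pick the one with the highest mean output. A hypothetical sketch:

```python
import numpy as np

def best_subset_1d(x, y, k):
    """Brute-force 1-D illustration of the problem in the abstract: find the
    contiguous window (in sorted input order) of at least k points whose
    mean output is highest. Returns (best mean, (start, end) indices)."""
    order = np.argsort(x)
    ys = y[order]
    best_mean, best_span = -np.inf, None
    n = len(ys)
    for i in range(n):
        for j in range(i + k, n + 1):      # window [i, j) holds >= k points
            m = ys[i:j].mean()
            if m > best_mean:
                best_mean, best_span = m, (i, j)
    return best_mean, best_span

x = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
y = np.array([1.0, 1.0, 5.0, 6.0, 1.0, 1.0])
mean, span = best_subset_1d(x, y, k=2)
# the window covering the two high-output points wins: mean 5.5, span (2, 4)
```

    Requiring a minimal subset size is what distinguishes this from simply picking the single best point, and it is why asymmetric error distributions do not mislead the selection as easily.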

  11. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  12. Radon and radon-daughter concentrations in air in the vicinity of the Anaconda Uranium Mill

    Energy Technology Data Exchange (ETDEWEB)

    Momeni, M H; Lindstrom, J B; Dungey, C E; Kisieleski, W E

    1979-11-01

    Radon concentration, working level, and meteorological variables were measured continuously from June 1977 through June 1978 at three stations in the vicinity of the Anaconda Uranium Mill, with measurements integrated to hourly intervals. Both radon and daughters show strong variations associated with low wind velocities and stable atmospheric conditions, and diurnal variations associated with thermal inversions. Average radon concentration shows seasonal dependence, with the highest concentrations observed during fall and winter. Comparison of radon concentrations and working levels between the three stations shows strong dependence on wind direction and velocity. Radon concentration and working-level distributions for each month and each station were analyzed. The average, maximum, minimum, and modal concentrations and working levels were estimated with observed frequencies. The highest concentration is 11,000 pCi/m^3 on the tailings. Working-level variations parallel radon variations but lag by less than one hour. The highest working levels were observed at night, when conditions of higher secular radioactive equilibrium for radon daughters exist. Background radon concentration was measured at two stations, each located about 25 km from the mill; the average is 408 pCi/m^3. Average working-level background is 3.6 x 10^-3.

  13. Radon and radon-daughter concentrations in air in the vicinity of the Anaconda Uranium Mill

    International Nuclear Information System (INIS)

    Momeni, M.H.; Lindstrom, J.B.; Dungey, C.E.; Kisieleski, W.E.

    1979-11-01

    Radon concentration, working level, and meteorological variables were measured continuously from June 1977 through June 1978 at three stations in the vicinity of the Anaconda Uranium Mill, with measurements integrated to hourly intervals. Both radon and daughters show strong variations associated with low wind velocities and stable atmospheric conditions, and diurnal variations associated with thermal inversions. Average radon concentration shows seasonal dependence, with the highest concentrations observed during fall and winter. Comparison of radon concentrations and working levels between the three stations shows strong dependence on wind direction and velocity. Radon concentration and working-level distributions for each month and each station were analyzed. The average, maximum, minimum, and modal concentrations and working levels were estimated with observed frequencies. The highest concentration is 11,000 pCi/m^3 on the tailings. Working-level variations parallel radon variations but lag by less than one hour. The highest working levels were observed at night, when conditions of higher secular radioactive equilibrium for radon daughters exist. Background radon concentration was measured at two stations, each located about 25 km from the mill; the average is 408 pCi/m^3. Average working-level background is 3.6 x 10^-3.
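
    The record does not state the radon-daughter equilibrium factor, but it can be estimated from the reported background values using the standard health-physics definition of the working level (1 WL corresponds to the short-lived daughters in secular equilibrium with 100 pCi/L of Rn-222). A back-of-the-envelope sketch, not a figure from the report itself:

```python
# Reported background values from the abstract
radon_pci_per_m3 = 408.0          # background radon concentration, pCi/m^3
working_level = 3.6e-3            # background working level, WL

# By definition, 1 WL equals the daughter potential alpha energy at secular
# equilibrium with 100 pCi/L of Rn-222.
radon_pci_per_l = radon_pci_per_m3 / 1000.0
equilibrium_factor = (working_level * 100.0) / radon_pci_per_l
# ~0.88, i.e. the background daughters are close to secular equilibrium,
# consistent with the abstract's remark about night-time equilibrium
```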

  14. ASSESSMENT OF PAHS AND SELECTED PESTICIDES IN SHALLOW GROUNDWATER IN THE HIGHEST PROTECTED AREAS IN THE OPOLE REGION, POLAND

    Directory of Open Access Journals (Sweden)

    Mariusz Głowacki

    2014-04-01

    The groundwater quality was determined by analysing water samples from 18 wells. The wells were located in the Groundwater Area with the Highest Protection (Triassic water), Opole region, Poland, in a rural built-up area. The water table level was low: 0.5-18.0 m below the ground surface level (except for one artesian well). The following parameters were determined: pH, EC, colour, ammonium, nitrite, nitrate, dissolved orthophosphate, total phosphorus, dissolved oxygen, BOD, COD-Mn, COD-Cr, humic substances, chloride, sulphate, total hardness, alkalinity, dry residue, PAHs (16 compounds) and pesticides (6 compounds); however, only selected data are presented in this paper. Organochlorine pesticides were observed in all the analysed water samples. The analysed water contained heptachlor in the highest concentration, 15.97 mg/dm3. Good quality water must not contain more than 0.5 mg/dm3 of heptachlor; the measured concentration was circa 32 times higher than this value. The second pesticide determining poor water quality is dieldrin. Its concentration in the investigated groundwater was 1.94 mg/dm3, 4 times higher than the limit for ground water of acceptable quality. The concentration of pesticides also changed over the course of the research; the concentration in the same well changed quite dramatically over a period of 1 year. Although PAHs and pesticides are potentially toxic for biological organisms, they do exist in the environment as a product of the natural biological transformation of organic matter. The noted concentrations and compositions of PAH compounds were different from natural PAHs, which confirms that agricultural activity influences groundwater quality.
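
    The exceedance factors quoted above follow from simple arithmetic against the 0.5 mg/dm3 limit stated in the abstract; a minimal check:

```python
# Quick consistency check of the exceedance factors quoted in the abstract.
limit = 0.5                      # acceptable concentration stated in the text, mg/dm3

heptachlor = 15.97               # measured concentration, mg/dm3
dieldrin = 1.94                  # measured concentration, mg/dm3

heptachlor_factor = heptachlor / limit   # ~32x, matching "circa 32 times higher"
dieldrin_factor = dieldrin / limit       # ~4x, matching "4 times higher"
```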

  15. Composition and Concentration of Phenolic Compounds of ‘Auksis’ Apple Grown on Various Rootstocks

    Directory of Open Access Journals (Sweden)

    Kviklys Darius

    2017-06-01

    The trial was carried out at the Institute of Horticulture, Lithuanian Research Centre for Agriculture and Forestry, in 2013-2015. Cv. ‘Auksis’ was tested on 12 rootstocks: B.396, B.9, M.9, M.26, P 22, P 59, P 61, P 62, P 66, P 67, PB.4, and Pure 1. Accumulation of phenolic compounds depended on fruit yield and average fruit weight. On average, significantly lower concentrations occurred among rootstocks when apple trees had an abundant yield and fruits were smaller. On average, chlorogenic acid constituted 50% and total procyanidins 28% of total phenols in ‘Auksis’ fruits. Flavonoid concentration depended most on rootstock, and here the highest variation was recorded: a difference of more than 50% occurred between the highest total flavonoid concentration, in apples on PB.4, and the lowest, on M.9 rootstocks. Low variability of total procyanidin concentration among rootstocks was observed; the difference between the highest and lowest concentrations was 15%. The total concentration of phenolic compounds differed among rootstocks by 29-35%, depending on the year. Differences in the accumulation of phenolic compounds depended on rootstock genotype but not on yield or fruit weight. PB.4 and P 67 rootstocks had the highest, and M.9, P 62 and M.26 the lowest, concentrations of total phenols in ‘Auksis’ fruits.

  16. A new derivation of the highest-weight polynomial of a unitary lie algebra

    International Nuclear Information System (INIS)

    P Chau, Huu-Tai; P Van, Isacker

    2000-01-01

    A new method is presented to derive the expression of the highest-weight polynomial used to build the basis of an irreducible representation (IR) of the unitary algebra U(2J+1). After a brief reminder of Moshinsky's method to arrive at the set of equations defining the highest-weight polynomial of U(2J+1), an alternative derivation of the polynomial from these equations is presented. The method is less general than the one proposed by Moshinsky but has the advantage that the determinantal expression of the highest-weight polynomial is arrived at in a direct way using matrix inversions. (authors)

  17. The Highest Good and the Practical Regulative Knowledge in Kant’s Critique of Practical Reason

    OpenAIRE

    Joel Thiago Klein

    2016-01-01

    In this paper I defend three different points: first, that the concept of highest good is derived from an a priori but subjective argument, namely a maxim of pure practical reason; secondly, that the theory regarding the highest good has the validity of a practical regulative knowledge; and thirdly, that the practical regulative knowledge can be understood as the same “holding something to be true” as Kant attributes to hope and believe.

  18. [The gender gap in highest quality medical research - A scientometric analysis of the representation of female authors in highest impact medical journals].

    Science.gov (United States)

    Bendels, Michael H K; Wanke, Eileen M; Benik, Steffen; Schehadat, Marc S; Schöffel, Norman; Bauer, Jan; Gerber, Alexander; Brüggmann, Dörthe; Oremek, Gerhard M; Groneberg, David A

    2018-05-01

    The study aims to elucidate the state of gender equality in high-impact medical research by analyzing the representation of female authorships from January 2008 to September 2017. 133,893 male and female authorships from seven high-impact medical journals were analyzed. The key methodology was the combined analysis of the relative frequency, odds ratio and citations of female authorships. The Prestige Index measures the distribution of prestigious authorships between the two genders. 35.0% of all authorships, and 34.3% of the first, 36.1% of the co- and 24.2% of the last authorships, were held by women. Female authors have an odds ratio of 0.97 (CI: 0.93-1.01) for first, 1.36 (CI: 1.32-1.40) for co- and 0.57 (CI: 0.54-0.60) for last authorships compared to male authors. The proportion of female authorships exhibits an annual growth of 1.3% overall, with 0.5% for first, 1.2% for co- and 0.8% for last authorships. Women are underrepresented in prestigious authorships compared to men (Prestige Index = -0.38). The underrepresentation is accentuated in highly competitive articles attracting the highest citation rates, namely articles with many authors and articles published in the highest-impact journals. Multi-author articles with male key authors are more frequently cited than articles with female key authors. These gender-specific differences in citation rates increase the more authors contribute to an article. Women publish fewer articles than men (39.6% female authors are responsible for 35.0% of the authorships) and are underrepresented at productivity levels of more than 1 article per author. Distinct differences at the country level were revealed. High-impact medical research is characterized by few female group leaders as last authors and many female researchers being first or co-authors early in their career. It is very likely that this gender-specific career dichotomy will persist.
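
    The authorship odds ratios above follow the standard 2x2 construction; the study's raw count tables are not given here, so the numbers below are made up purely for illustration:

```python
def odds_ratio(f_pos, f_other, m_pos, m_other):
    """Standard 2x2 odds ratio: the odds that a female authorship occupies a
    given position (e.g. last author) relative to the same odds for males."""
    return (f_pos / f_other) / (m_pos / m_other)

# Hypothetical counts for illustration only (not the study's data):
# female authorships: 1000 last, 9000 other; male: 3000 last, 16000 other.
or_last = odds_ratio(1000, 9000, 3000, 16000)
# A value < 1 means women are underrepresented in last authorships,
# the same direction as the study's reported OR of 0.57.
```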

  19. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  20. Dipole model analysis of highest precision HERA data, including very low Q^2's

    International Nuclear Information System (INIS)

    Luszczak, A.; Kowalski, H.

    2016-12-01

    We analyse, within a dipole model, the final, inclusive HERA DIS cross section data in the low x region, using fully correlated errors. We show that these highest precision data are very well described within the dipole model framework, starting from Q^2 values of 3.5 GeV^2 up to the highest values of Q^2 = 250 GeV^2. To analyze the saturation effects we evaluated the data including also the very low Q^2 region, down to Q^2 = 0.35 GeV^2. The fits including this region show a preference for the saturation ansatz.

  1. Mercury contamination from artisanal gold mining in Antioquia, Colombia: The world's highest per capita mercury pollution.

    Science.gov (United States)

    Cordy, Paul; Veiga, Marcello M; Salih, Ibrahim; Al-Saadi, Sari; Console, Stephanie; Garcia, Oseas; Mesa, Luis Alberto; Velásquez-López, Patricio C; Roeser, Monika

    2011-12-01

    The artisanal gold mining sector in Colombia has 200,000 miners officially producing 30 tonnes Au/a. In the northeast of the Department of Antioquia there are 17 mining towns and between 15,000 and 30,000 artisanal gold miners. Guerrilla and paramilitary activities in the rural areas of Antioquia pushed miners to bring their gold ores to the towns to be processed in processing centers, or entables. These centers operate in the urban areas, amalgamating the whole ore, i.e. without previous concentration, and later burn the gold amalgam without any filtering/condensing system. Based on a mercury mass balance in 15 entables, 50% of the mercury added to small ball mills (cocos) is lost: 46% with tailings and 4% when the amalgam is burned. In just 5 cities of Antioquia with a total of 150,000 inhabitants (Segovia, Remedios, Zaragoza, El Bagre, and Nechí) there are 323 entables producing 10-20 tonnes Au/a. Considering the average levels of mercury consumption estimated by mass balance and interviews with entable owners, the mercury consumed (and lost) in these 5 municipalities must be around 93 tonnes/a. Urban air mercury levels range from 300 ng Hg/m^3 (background) to 1,000,000 ng Hg/m^3 (inside gold shops), with 10,000 ng Hg/m^3 being common in residential areas. The WHO limit for public exposure is 1000 ng/m^3. The total mercury releases/emissions to the Colombian environment can be as high as 150 tonnes/a, giving this country the shameful first position as the world's largest per capita mercury polluter from artisanal gold mining. One necessary government intervention is to cut the supply of mercury to the entables. In 2009, eleven companies in Colombia legally imported 130 tonnes of metallic mercury, much of it flowing to artisanal gold mines. Entables must be removed from urban centers, and technical assistance is badly needed to improve their technology and reduce emissions. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Correlation of the highest-energy cosmic rays with the positions of nearby active galactic nuclei

    NARCIS (Netherlands)

    Abraham, J.; Abreu, P.; Aglietta, M.; Aguirre, C.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez-Muniz, J.; Ambrosio, M.; Anchordoqui, L.; Andringa, S.; Anzalone, A.; Aramo, C.; Argiro, S.; Arisaka, K.; Armengaud, E.; Arneodo, F.; Arqueros, F.; Asch, T.; Asorey, H.; Assis, P.; Atulugama, B. S.; Aublin, J.; Ave, M.; Avila, G.; Baecker, T.; Badagnani, D.; Barbosa, A. F.; Barnhill, D.; Barroso, S. L. C.; Bauleo, P.; Beatty, J. J.; Beau, T.; Becker, B. R.; Becker, K. H.; Bellido, J. A.; BenZvi, S.; Berat, C.; Bergmann, T.; Bernardini, P.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanch-Bigas, O.; Blanco, F.; Blasi, P.; Bleve, C.; Bluemer, H.; Bohacova, M.; Bonifazi, C.; Bonino, R.; Brack, J.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Burton, R. E.; Busca, N. G.; Caballero-Mora, K. S.; Cai, B.; Camin, D. V.; Caramete, L.; Caruso, R.; Carvalho, W.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Chiavassa, A.; Chinellato, J. A.; Chou, A.; Chye, J.; Clay, R. W.; Colombo, E.; Conceicao, R.; Connolly, B.; Contreras, F.; Coppens, J.; Cordier, A.; Cotti, U.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Donato, C.; Bg, S. J. de Jong; De La Vega, G.; de Mello, W. J. M.; de Mello Neto, J. R. T.; De Mitri, I.; de Souza, V.; del Peral, L.; Deligny, O.; Della Selva, A.; Delle Fratte, C.; Dembinski, H.; Di Giulio, C.; Diaz, J. C.; Diep, P. N.; Dobrigkeit, C.; D'Olivo, J. C.; Dong, P. N.; Dornic, D.; Dorofeev, A.; dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; DuVernois, M. A.; Engel, R.; Epele, L.; Escobar, C. O.; Etchegoyen, A.; Luis, P. Facal San; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferrer, F.; Ferry, S.; Fick, B.; Filevich, A.; Filipcic, A.; Fleck, I.; Fracchiolla, C. E.; Fulgione, W.; Garcia, B.; Gaimez, D. Garcia; Garcia-Pinto, D.; Garrido, X.; Geenen, H.; Gelmini, G.; Gemmeke, H.; Ghia, P. 
L.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Albarracin, F. Gomez; Berisso, M. Gomez; Herrero, R. Gomez; Goncalves, P.; do Amaral, M. Goncalves; Gonzalez, D.; Gonzalezc, J. G.; Gonzalez, M.; Gora, D.; Gorgi, A.; Gouffon, P.; Grassi, V.; Grillo, A. F.; Grunfeld, C.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Gutierrez, J.; Hague, J. D.; Hamilton, J. C.; Hansen, P.; Harari, D.; Harmsma, S.; Harton, J. L.; Haungs, A.; Hauschildt, T.; Healy, M. D.; Hebbeker, T.; Hebrero, G.; Heck, D.; Hojvat, C.; Holmes, V. C.; Homola, P.; Hoerandel, J.; Horneffer, A.; Horvat, M.; Hrabovsky, M.; Huege, T.; Hussain, M.; Larlori, M.; Insolia, A.; Ionita, F.; Italiano, A.; Kaducak, M.; Kampert, K. H.; Karova, T.; Kegl, B.; Keilhauer, B.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapik, R.; Knapp, J.; Koanga, V. -H.; Krieger, A.; Kroemer, O.; Kuempel, D.; Kunka, N.; Kusenko, A.; La Rosa, G.; Lachaud, C.; Lago, B. L.; Lebrun, D.; LeBrun, P.; Lee, J.; de Oliveira, M. A. Leigui; Lopez, R.; Letessier-Selvon, A.; Leuthold, M.; Lhenry-Yvon, I.; Aguera, A. Lopez; Bahilo, J. Lozano; Garcia, R. Luna; Maccarone, M. C.; Macolino, C.; Maldera, S.; Mancarella, G.; Mancenido, M. E.; Mandatat, D.; Mantsch, P.; Mariazzi, A. G.; Maris, I. C.; Falcon, H. R. Marquez; Martello, D.; Martinez, J.; Bravo, O. Martinez; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurizio, D.; Mazur, P. O.; McCauley, T.; McEwen, M.; McNeil, R. R.; Medina, M. C.; Medina-Tanco, G.; Meli, A.; Melo, D.; Menichetti, E.; Menschikov, A.; Meurer, Chr.; Meyhandan, R.; Micheletti, M. I.; Miele, G.; Miller, W.; Mollerach, S.; Monasor, M.; Ragaigne, D. Monnier; Montanet, F.; Morales, B.; Morello, C.; Moreno, J. C.; Morris, C.; Mostafa, M.; Muller, M. A.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Newman-Holmes, C.; Newton, D.; Nhung, P. 
T.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nozka, L.; Oehlschlaeger, J.; Ohnuki, T.; Olinto, A.; Olmos-Gilbaja, V. M.; Ortiz, M.; Ortolani, F.; Ostapchenko, S.; Otero, L.; Pacheco, N.; Selmi-Dei, D. Pakk; Palatka, M.; Pallotta, J.; Parente, G.; Parizot, E.; Parlati, S.; Pastor, S.; Patel, M.; Paul, T.; Pavlidou, V.; Payet, K.; Pech, M.; Pekala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Petrera, S.; Petrinca, P.; Petrov, Y.; Pichel, A.; Piegaia, R.; Pierog, T.; Pimenta, M.; Pinto, T.; Pirronello, V.; Pisanti, O.; Platino, M.; Pochon, J.; Privitera, P.; Prouza, M.; Quel, E. J.; Rautenberg, J.; Redondo, A.; Reucroft, S.; Revenu, B.; Rezende, F. A. S.; Ridky, J.; Riggi, S.; Risse, M.; Riviere, C.; Rizi, V.; Roberts, M.; Robledo, C.; Rodriguez, G.; Martino, J. Rodriguez; Rojo, J. Rodriguez; Rodriguez-Cabo, I.; Rodriguez-Frias, M. D.; Ros, G.; Rosado, J.; Roth, M.; Rouille-d'Orfeuil, B.; Roulet, E.; Roverok, A. C.; Salamida, F.; Salazar, H.; Salina, G.; Sanchez, F.; Santander, M.; Santo, C. E.; Santos, E. M.; Sarazin, F.; Sarkar, S.; Sato, R.; Scherini, V.; Schieler, H.; Schmidt, A.; Schmidt, F.; Schmidt, T.; Scholten, O.; Schovanek, P.; Schuessler, F.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Semikoz, D.; Settimo, M.; Shellard, R. C.; Sidelnik, I.; Siffert, B. B.; Sigl, G.; De Grande, N. Smetniansky; Smialkowski, A.; Smida, R.; Smith, A. G. K.; Smith, B. E.; Snow, G. R.; Sokolsky, P.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Strazzeri, E.; Stutz, A.; Suarez, F.; Suomijarvi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Takahashi, J.; Tamashiro, A.; Tamburro, A.; Tascau, O.; Tcaciuc, R.; Thao, N. T.; Thomas, D.; Ticona, R.; Tiffenberg, J.; Timmermans, C.; Tkaczyk, W.; Peixoto, C. J. Todero; Tome, B.; Tonachini, A.; Torres, I.; Travnicek, P.; Tripathi, A.; Tristram, G.; Tscherniakhovski, D.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Galicia, J. F. Valdes; Valino, I.; Valore, L.; van den Berg, A. M.; van Elewyck, V.; Vazquez, R. 
A.; Veberic, D.; Veiga, A.; Velarde, A.; Venters, T.; Verzi, V.; Videla, M.; Villasenor, L.; Vorobiov, S.; Voyvodic, L.; Wahlberg, H.; Wainberg, O.; Warner, D.; Watson, A. A.; Westerhoff, S.; Wieczorek, G.; Wiencke, L.; Wilczynska, B.; Wilczynski, H.; Wileman, C.; Winnick, M. G.; Wu, H.; Wundheiler, B.; Yamamoto, T.; Younk, P.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zech, A.; Zepeda, A.; Ziolkowski, M.

    Data collected by the Pierre Auger Observatory provide evidence for anisotropy in the arrival directions of the cosmic rays with the highest-energies, which are correlated with the positions of relatively nearby active galactic nuclei (AGN) [Pierre Auger Collaboration, Science 318 (2007) 938]. The

  3. Alpha-1-antitrypsin deficiency in Madeira (Portugal): the highest prevalence in the world.

    Science.gov (United States)

    Spínola, Carla; Bruges-Armas, Jácome; Pereira, Conceição; Brehm, António; Spínola, Hélder

    2009-10-01

    Alpha-1-antitrypsin (AAT) deficiency is a common genetic disease which affects both lung and liver. Early diagnosis can help asymptomatic patients to adjust their lifestyle choices in order to reduce the risk of Chronic Obstructive Pulmonary Disease (COPD). Determining the prevalence of this genetic deficiency in the population of Madeira Island (Portugal) is important to clarify susceptibility and to define the relevance of performing genetic tests for AAT in individuals at risk for COPD. Two hundred samples of unrelated individuals from Madeira Island were genotyped for the two most common AAT deficiency alleles, PI*S and PI*Z, using Polymerase Chain Reaction-Mediated Site-Directed Mutagenesis. Our results show one of the highest frequencies for both mutations compared to any population studied so far. In fact, the PI*S mutation has the highest prevalence worldwide (18%) and the PI*Z mutation (2.5%) the third highest. The frequency of AAT deficiency genotypes in Madeira (PI*ZZ, PI*SS, and PI*SZ) is estimated to be the highest in the world: 41 per 1000. This high prevalence of AAT deficiency on Madeira Island reveals an increased genetic susceptibility to COPD and suggests routine genetic testing for individuals at risk.
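
    The estimated genotype frequency follows from Hardy-Weinberg proportions applied to the reported allele frequencies; a minimal sketch (the small difference from the quoted 41 per 1000 is presumably rounding of the allele frequencies):

```python
# Hardy-Weinberg proportions for the deficiency genotypes PI*SS, PI*ZZ, PI*SZ
p_s = 0.18      # PI*S allele frequency reported for Madeira
p_z = 0.025     # PI*Z allele frequency reported for Madeira

freq_ss = p_s ** 2            # homozygous SS
freq_zz = p_z ** 2            # homozygous ZZ
freq_sz = 2 * p_s * p_z       # compound heterozygous SZ
deficiency_per_1000 = 1000 * (freq_ss + freq_zz + freq_sz)
# ~42 per 1000, close to the 41 per 1000 quoted in the abstract
```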

  4. Metals in the Scheldt estuary: From environmental concentrations to bioaccumulation

    International Nuclear Information System (INIS)

    Van Ael, Evy; Blust, Ronny; Bervoets, Lieven

    2017-01-01

    To investigate the relationship between metal concentrations in abiotic compartments and in aquatic species, sediment, suspended matter and several aquatic species (Polychaeta, Oligochaeta, four crustacean species, three mollusc species and eight fish species) were collected during three seasons at six locations along the Scheldt estuary (the Netherlands-Belgium) and analysed for their metal content (Ag, Cd, Co, Cr, Cu, Ni, Pb, Zn and the metalloid As). Sediment and biota tissue concentrations were significantly influenced by sampling location, but not by season. Measurements of Acid Volatile Sulphides (AVS) concentrations in relation to Simultaneously Extracted Metals (SEM) in the sediment suggested that not all metals in the sediment will be bound to sulphides and some metals might be bioavailable. For all metals but zinc, the highest concentrations were measured in invertebrate species: Ag and Ni in periwinkle, Cr, Co and Pb in Oligochaete worms, and As, Cd and Cu in crabs and shrimp. In fish, the concentrations of most metals were highest in liver or kidney and lowest in muscle; for Zn, however, the highest concentrations were measured in the kidney of European smelt. For less than half of the metals, significant correlations between sediment metal concentrations and bioaccumulated concentrations (liver/hepatopancreas or whole organism) were found. To assess the possible human health risk from consumption, average and maximum metal concentrations in the muscle tissues were compared to the minimum risk levels (MRLs). Concentrations of As led to the highest risk potential for all consumable species. Cadmium and Cu posed a risk only when the most contaminated shrimp and shore crabs were consumed. Consuming blue mussel could result in a risk for the metals As, Cd and Cr. - Highlights: • This is the first study investigating metal distribution along the aquatic ecosystem of the Scheldt

  5. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  6. Optimization in the nuclear fuel cycle II: Concentration of alpha emitters in the air

    International Nuclear Information System (INIS)

    Pereira, W.S.; Silva, A.X.; Lopes, J.M.; Carmo, A.S.; Mello, C.R.; Fernandes, T.S.; Kelecom, A.

    2017-01-01

    Optimization is one of the bases of radioprotection and aims to move doses away from the dose limit, which is the borderline of acceptable radiological risk. This work aims to use the monitoring of the concentration of alpha emitters in the air as a tool in the optimization process. We analyzed 27 sampling points of airborne alpha concentration in a nuclear fuel cycle facility. The monthly averages were considered statistically different, the highest in the month of February and the lowest in the month of August; all other months were found to have identical mean activity concentration values. Regarding the sampling points, those with the highest averages were points 12, 15 and 9. These points were indicated for the beginning of the optimization process. An analysis of the facility's production should be performed to verify possible correlations between production and the concentration of alpha emitters in the air
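
    The screening step described above, ranking sampling points by their mean concentration and flagging the highest ones, can be sketched as follows. The facility's actual data are not given in the record, so the numbers are made up for illustration:

```python
# Sampling point -> monthly mean alpha-activity concentrations (arbitrary units).
# Illustrative values only; chosen so the ranking matches the abstract's 12, 15, 9.
monthly_means = {
    9:  [1.8, 2.1, 1.9],
    12: [2.5, 2.7, 2.6],
    15: [2.2, 2.4, 2.3],
    3:  [0.9, 1.0, 0.8],
}

def mean(values):
    return sum(values) / len(values)

# Rank points by overall mean concentration, highest first
ranked = sorted(monthly_means, key=lambda p: mean(monthly_means[p]), reverse=True)
flagged = ranked[:3]   # candidate points to start the optimization process with
# flagged -> [12, 15, 9]
```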

  7. Highest manageable level of radioactivity in the waste storage facilities of power plants

    International Nuclear Information System (INIS)

    Elkert, J.; Lennartsson, R.

    1991-01-01

    This project presents and discusses an investigation of the highest level of radioactivity that can be handled in the waste storage facilities. The amount of radioactivity, about 0.1% of the fuel inventory, is the same in both cases, but the amount of water is very different. The hypothetical accident was assumed to be damage to the reactor fuel caused by loss of coolant. (K.A.E.)

  8. Highest cited papers published in Neurology India: An analysis for the years 1993-2014.

    Science.gov (United States)

    Pandey, Paritosh; Subeikshanan, V; Madhugiri, Venkatesh S

    2016-01-01

    The highest cited papers published in a journal provide a snapshot of the clinical practice and research in that specialty and/or region. The aim of this study was to determine the highest cited papers published in Neurology India and analyze their attributes. This study was a citation analysis of all papers published in Neurology India since online archiving commenced in 1993. All papers published in Neurology India between the years 1993-2014 were listed. The number of times each paper had been cited up till the time of performing this study was determined by performing a Google Scholar search. Published papers were then ranked on the basis of total times cited since publication and the annual citation rate. Statistical Techniques: Simple counts and percentages were used to report most results. The mean citations received by papers in various categories were compared using the Student's t-test or a one-way analysis of variance, as appropriate. All analyses were carried out on SAS University Edition (SAS/STAT®, SAS Institute Inc, NC, USA) and graphs were generated on MS Excel 2016. The top papers on the total citations and annual citation rate rank lists pertained to basic neuroscience research. The highest cited paper overall had received 139 citations. About a quarter of the papers published had never been cited at all. The major themes represented were vascular diseases and infections. The highest cited papers reflect the diseases that are of major concern in India. Certain domains such as trauma, allied neurosciences, and basic neuroscience research were underrepresented.

  9. ATLAS event at 13 TeV - Highest mass dijets resonance event in 2015 data

    CERN Multimedia

    ATLAS Collaboration

    2015-01-01

The highest-mass, central dijet event passing the dijet resonance selection collected in 2015 (Event 1273922482, Run 280673): the two central high-pT jets have an invariant mass of 6.9 TeV, the two leading jets have a pT of 3.2 TeV. The missing transverse momentum in this event is 46 GeV.

  10. ATLAS event at 13 TeV - Highest mass dijets angular event in 2015 data

    CERN Multimedia

    ATLAS Collaboration

    2015-01-01

The highest-mass dijet event passing the angular selection collected in 2015 (Event 478442529, Run 280464): the two central high-pT jets have an invariant mass of 7.9 TeV, the three leading jets have a pT of 1.99, 1.86 and 0.74 TeV respectively. The missing transverse momentum in this event is 46 GeV.

  11. Eyebrow hairs from actinic keratosis patients harbor the highest number of cutaneous human papillomaviruses.

    Science.gov (United States)

    Schneider, Ines; Lehmann, Mandy D; Kogosov, Vlada; Stockfleth, Eggert; Nindl, Ingo

    2013-04-24

Cutaneous human papillomavirus (HPV) infections seem to be associated with the onset of actinic keratosis (AK). This study compares the presence of cutaneous HPV types in eyebrow hairs to those in tissues of normal skin and skin lesions of 75 immunocompetent AK patients. Biopsies from AK lesions, normal skin and plucked eyebrow hairs were collected from each patient. DNA from these specimens was tested for the presence of 28 cutaneous HPV (betaPV and gammaPV) by a PCR based method. The highest number of HPV prevalence was detected in 84% of the eyebrow hairs (63/75, median 6 types) compared to 47% of AK lesions (35/75, median 3 types) (p < 0.001) and 37% of normal skin (28/75, median 4 types) (p < 0.001), respectively. A total of 228 HPV infections were found in eyebrow hairs compared to only 92 HPV infections in AK and 69 in normal skin. In all three specimens HPV20, HPV23 and/or HPV37 were the most prevalent types. The highest number of multiple types of HPV positive specimens was found in 76% of the eyebrow hairs compared to 60% in AK and 57% in normal skin. The concordance of at least one HPV type in virus positive specimens was 81% (three specimens) and 88-93% of all three combinations with two specimens. Thus, eyebrow hairs revealed the highest number of cutaneous HPV infections, are easy to collect and are an appropriate screening tool in order to identify a possible association of HPV and AK.

  12. Determination of concentration factors for Cs-137 and Ra-226 in the mullet species Chelon labrosus (Mugilidae) from the South Adriatic Sea

    Energy Technology Data Exchange (ETDEWEB)

    Antovic, Ivanka [Department for Biochemical and Medical Sciences, State University of Novi Pazar, Vuka Karadzica bb, 36 300 Novi Pazar (Serbia); Antovic, Nevenka M., E-mail: nenaa@rc.pmf.ac.me [Faculty of Natural Sciences and Mathematics, University of Montenegro, Dzordza Vasingtona bb, 20 000 Podgorica (Montenegro)

    2011-07-15

Concentration factors for Cs-137 and Ra-226 transfer from seawater, and dried sediment or mud with detritus, have been determined for whole, fresh weight, Chelon labrosus individuals and selected organs. Cesium was detected in 5 of 22 fish individuals, and its activity ranged from 1.0 to 1.6 Bq kg⁻¹. Radium was detected in all fish, and ranged from 0.4 to 2.1 Bq kg⁻¹, with an arithmetic mean of 1.0 Bq kg⁻¹. With regard to fish organs, cesium activity concentration was highest in muscles (maximum - 3.7 Bq kg⁻¹), while radium was highest in skeletons (maximum - 25 Bq kg⁻¹). Among cesium concentration factors, those for muscles were the highest (from seawater - an average of 47, from sediment - an average of 3.3, from mud with detritus - an average of 0.8). Radium concentration factors were the highest for skeleton (from seawater - an average of 130, from sediment - an average of 1.8, from mud with detritus - an average of 1.5). Additionally, annual intake of cesium and radium by human adults consuming muscles of this fish species has been estimated to provide, in aggregate, an effective dose of about 4.1 μSv y⁻¹. - Highlights: → Radionuclide transfer from seawater, sediment and mud with detritus. → Concentration factors for Cs-137 and Ra-226 in C. labrosus whole fish and organs. → Cs-137 concentration factors are highest for C. labrosus muscles. → Ra-226 concentration factors are highest for C. labrosus skeleton.
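The arithmetic behind a concentration factor and an annual ingestion dose can be sketched as below. This is a minimal illustration: the concentration factor is the standard ratio of organism to medium activity concentration, the dose coefficients are assumed ICRP ingestion values for adults, and all input numbers are illustrative, not data from the paper.

```python
# Sketch of concentration-factor and ingestion-dose arithmetic.
# All numeric inputs are illustrative assumptions, not the paper's data.

def concentration_factor(c_organism_bq_kg, c_medium_bq_kg):
    """Dimensionless ratio of organism to medium activity concentration."""
    return c_organism_bq_kg / c_medium_bq_kg

def annual_effective_dose_sv(intake_kg, activity_bq_kg, dose_coeff_sv_bq):
    """Committed effective dose (Sv) from one year's consumption."""
    return intake_kg * activity_bq_kg * dose_coeff_sv_bq

cf = concentration_factor(1.0, 0.02)  # e.g. fish muscle vs seawater -> 50

# assumed: 10 kg/y of fish muscle at 1.0 Bq/kg of each radionuclide
dose_cs = annual_effective_dose_sv(10, 1.0, 1.3e-8)  # Cs-137 (assumed ICRP coefficient)
dose_ra = annual_effective_dose_sv(10, 1.0, 2.8e-7)  # Ra-226 (assumed ICRP coefficient)
total_uSv = (dose_cs + dose_ra) * 1e6                # microsieverts per year
```

With these assumed inputs the Ra-226 term dominates the total, which is consistent with Ra-226 carrying a much larger ingestion dose coefficient than Cs-137.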

  13. Determination of concentration factors for Cs-137 and Ra-226 in the mullet species Chelon labrosus (Mugilidae) from the South Adriatic Sea

    International Nuclear Information System (INIS)

    Antovic, Ivanka; Antovic, Nevenka M.

    2011-01-01

Concentration factors for Cs-137 and Ra-226 transfer from seawater, and dried sediment or mud with detritus, have been determined for whole, fresh weight, Chelon labrosus individuals and selected organs. Cesium was detected in 5 of 22 fish individuals, and its activity ranged from 1.0 to 1.6 Bq kg⁻¹. Radium was detected in all fish, and ranged from 0.4 to 2.1 Bq kg⁻¹, with an arithmetic mean of 1.0 Bq kg⁻¹. With regard to fish organs, cesium activity concentration was highest in muscles (maximum - 3.7 Bq kg⁻¹), while radium was highest in skeletons (maximum - 25 Bq kg⁻¹). Among cesium concentration factors, those for muscles were the highest (from seawater - an average of 47, from sediment - an average of 3.3, from mud with detritus - an average of 0.8). Radium concentration factors were the highest for skeleton (from seawater - an average of 130, from sediment - an average of 1.8, from mud with detritus - an average of 1.5). Additionally, annual intake of cesium and radium by human adults consuming muscles of this fish species has been estimated to provide, in aggregate, an effective dose of about 4.1 μSv y⁻¹. - Highlights: → Radionuclide transfer from seawater, sediment and mud with detritus. → Concentration factors for Cs-137 and Ra-226 in C. labrosus whole fish and organs. → Cs-137 concentration factors are highest for C. labrosus muscles. → Ra-226 concentration factors are highest for C. labrosus skeleton.

  14. Highest recorded electrical conductivity and microstructure in polypropylene-carbon nanotubes composites and the effect of carbon nanofibers addition

    Science.gov (United States)

    Ramírez-Herrera, C. A.; Pérez-González, J.; Solorza-Feria, O.; Romero-Partida, N.; Flores-Vela, A.; Cabañas-Moreno, J. G.

    2018-04-01

In the last decade, numerous investigations have been devoted to the preparation of polypropylene-multiwalled carbon nanotube (PP/MWCNT) nanocomposites having enhanced properties and, in particular, high electrical conductivities (>1 S cm⁻¹). The present work establishes that the highest electrical conductivity in PP/MWCNT nanocomposites is limited by the amount of nanofiller that can be incorporated in the polymer matrix, namely, about 20 wt%. This concentration of MWCNT in PP leads to a maximum electrical conductivity slightly lower than 8 S cm⁻¹, but only by assuring an adequate combination of dispersion and spatial distribution of the carbon nanotubes. The realization of such an optimal microstructure depends on the characteristics of the production process of the PP/MWCNT nanocomposites; in our experiments, involving composite fabrication by melt mixing and hot pressing, a second re-processing cycle is shown to increase the electrical conductivity values by up to two orders of magnitude, depending on the MWCNT content of the nanocomposite. A modest increase of the highest electrical conductivity obtained in nanocomposites with 21.5 wt% MWCNT content has been produced by the combined use of carbon nanofibers (CNF) and MWCNT, so that the total nanofiller content was increased to 30 wt% in the nanocomposite with PP-15 wt% MWCNT-15 wt% CNF.

  15. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    Doczi, R.; Csikai, J.; Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.

    2004-01-01

A method previously used for determining the average neutron flux within bulk samples has been applied to measure the hydrogen content of different samples. An analytical function is given describing the correlation between the activity of Dy foils and the hydrogen concentration. Results obtained by the activation and thermal neutron reflection methods are compared.

  16. Case study of stratospheric ozone affecting ground-level oxidant concentrations

    International Nuclear Information System (INIS)

    Lamb, R.G.

    1977-01-01

    During the predawn hours of 19 November 1972, the air pollution monitoring station at Santa Rosa, Calif., recorded five consecutive hours of oxidant concentrations in excess of the present National Ambient Air Quality Standard. The highest of the hourly averages was 0.23 ppm. From a detailed analysis of the meteorological conditions surrounding this incident, it is shown that the ozone responsible for the anomalous concentrations originated in the stratosphere and not from anthropogenic sources
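The exceedance analysis described above (consecutive hourly averages above a standard, plus the peak value) can be sketched as follows. The hourly readings and the 0.08 ppm threshold are illustrative assumptions, not the station's actual data.

```python
# Sketch of an hourly-average exceedance check: find the longest run of
# consecutive hourly oxidant averages above a standard, and the peak.
# The threshold and readings below are illustrative assumptions.
STANDARD_PPM = 0.08

hourly_ppm = [0.05, 0.09, 0.12, 0.23, 0.18, 0.11, 0.06, 0.05]  # hypothetical

def longest_exceedance_run(values, threshold):
    """Length of the longest run of consecutive values above threshold."""
    best = run = 0
    for v in values:
        run = run + 1 if v > threshold else 0
        best = max(best, run)
    return best

run_length = longest_exceedance_run(hourly_ppm, STANDARD_PPM)
peak = max(hourly_ppm)
```

With these hypothetical readings the run length is five consecutive hours, mirroring the kind of episode the abstract describes.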

  17. Eyebrow hairs from actinic keratosis patients harbor the highest number of cutaneous human papillomaviruses

    Science.gov (United States)

    2013-01-01

    Background Cutaneous human papillomavirus (HPV) infections seem to be associated with the onset of actinic keratosis (AK). This study compares the presence of cutaneous HPV types in eyebrow hairs to those in tissues of normal skin and skin lesions of 75 immunocompetent AK patients. Methods Biopsies from AK lesions, normal skin and plucked eyebrow hairs were collected from each patient. DNA from these specimens was tested for the presence of 28 cutaneous HPV (betaPV and gammaPV) by a PCR based method. Results The highest number of HPV prevalence was detected in 84% of the eyebrow hairs (63/75, median 6 types) compared to 47% of AK lesions (35/75, median 3 types) (p< 0.001) and 37% of normal skin (28/75, median 4 types) (p< 0.001), respectively. A total of 228 HPV infections were found in eyebrow hairs compared to only 92 HPV infections in AK and 69 in normal skin. In all three specimens HPV20, HPV23 and/or HPV37 were the most prevalent types. The highest number of multiple types of HPV positive specimens was found in 76% of the eyebrow hairs compared to 60% in AK and 57% in normal skin. The concordance of at least one HPV type in virus positive specimens was 81% (three specimens) and 88-93% of all three combinations with two specimens. Conclusions Thus, eyebrow hairs revealed the highest number of cutaneous HPV infections, are easy to collect and are an appropriate screening tool in order to identify a possible association of HPV and AK. PMID:23618013

  18. 17-Year-Old Boy with Renal Failure and the Highest Reported Creatinine in Pediatric Literature

    Directory of Open Access Journals (Sweden)

    Vimal Master Sankar Raj

    2015-01-01

The prevalence of chronic kidney disease (CKD) is on the rise and constitutes a major health burden across the world. Clinical presentations in early CKD are usually subtle. Awareness of the risk factors for CKD is important for early diagnosis and treatment to slow the progression of disease. We present a case report of a 17-year-old African American male who presented in a life threatening hypertensive emergency with renal failure and the highest reported serum creatinine in a pediatric patient. A brief discussion on CKD criteria, complications, and potential red flags for screening strategies is provided.

  19. Observations of the highest energy gamma-rays from gamma-ray bursts

    International Nuclear Information System (INIS)

    Dingus, Brenda L.

    2001-01-01

EGRET has extended the highest energy observations of gamma-ray bursts to GeV gamma rays. Such high energies imply that the fireball radiating the gamma rays has a bulk Lorentz factor of several hundred. However, EGRET only detected a few gamma-ray bursts. GLAST will likely detect several hundred bursts and may extend the maximum energy to a few 100 GeV. Meanwhile, new ground-based detectors with sensitivity to gamma-ray bursts are beginning operation, and one recently reported evidence for TeV emission from a burst.

  20. Addressing the Highest Risk: Environmental Programs at Los Alamos National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Forbes, Elaine E [Los Alamos National Laboratory

    2012-06-08

Report topics: Current status of cleanup; Shift in priorities to address highest risk; Removal of above-ground waste; and Continued focus on protecting water resources. Partnership between the National Nuclear Security Administration's Los Alamos Site Office, the DOE Carlsbad Field Office, the New Mexico Environment Department, and contractor staff has enabled unprecedented cleanup progress. Progress on the TRU campaign is well ahead of plan. To date, 130 shipments have been completed (vs. 104 planned); 483 cubic meters of above-ground waste have been shipped (vs. 277 planned); and 11,249 PE Ci of material at risk have been removed (vs. 9,411 planned).

  1. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  2. Soil dioxin concentrations in Baden-Wuerttemberg

    International Nuclear Information System (INIS)

    Wolf, D.

    1993-01-01

Soil dioxin levels in Baden-Wuerttemberg are generally low. Where high dioxin concentrations have been reported, as in Rastatt, Rheinfelden, Crailsheim-Maulach and Eppingen, these phenomena are local. Even at less than 100 metres' distance, drastically lower concentrations are measured; at 1500 to 2000 metres the values are back to the ordinary background level. A programme for detecting sources of emission across the entire state revealed no further sites of heavy contamination. This assessment of soil dioxin concentrations in Baden-Wuerttemberg is based on 1275 soil samples, a vast amount also in comparison with nation-wide surveys. The average dioxin content in farmland is about 1 ng I-TEq/kg dry matter. Soil dioxin concentrations increase with the density of settlements and industry; in cities they are about three to five times higher than the ubiquitous background concentration. The highest concentrations measured were 5-20 ng I-TEq/kg in garden soils in cities. Sewage sludge may be a significant source of dioxin contamination for farmland, far beyond the ubiquitous background concentration. Automobile exhaust gas caused higher soil contamination within 10 m along both sides of the roads as a function of traffic. Because scavengers in gasoline are now prohibited and catalysts are becoming more and more common, the rate of additional dioxin and furan contamination due to traffic will decrease. Currently, traffic-related emissions in Baden-Wuerttemberg are well below 2 g I-TEq. (orig./EF)

  3. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  4. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  5. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  6. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  7. The Mass Elevation Effect of the Central Andes and Its Implications for the Southern Hemisphere's Highest Treeline

    Directory of Open Access Journals (Sweden)

    Wenhui He

    2016-05-01

One of the highest treelines in the world is at 4810 m above sea level on the Sajama Volcano in the central Andes. The climatological cause of that exceptionally high treeline position is still unclear. Although it has been suggested that the mass elevation effect (MEE) explains the upward shift of treelines in the Altiplano region, the magnitude of MEE has not yet been quantified for that region. This paper defines MEE as the air temperature difference in summer at the same elevation between the inner mountains/plateaus (Altiplano) and the free atmosphere above the adjacent lowlands of the Andean Cordillera. The Altiplano air temperature was obtained from the Global Historical Climatology Network-Monthly temperature database, and the air temperature above the adjacent lowlands was interpolated based on the National Center for Environmental Prediction/National Center for Atmospheric Research Reanalysis 1 data set. We analyzed the mean air temperature differences for January, July, and the warm months from October to April. The air temperature was mostly higher on the Altiplano than over the neighboring lowlands at the same altitude. The air temperature difference increased from the outer Andean east-facing slope to the interior of the Altiplano in summer, and it increased from high latitudes to low latitudes in winter. The mean air temperature in the Altiplano in summer is approximately 5 K higher than it is above the adjacent lowlands at the same mean elevation, averaging about 3700 m above sea level. This upward shift of isotherms in the inner part of the Cordillera enables the treeline to climb to 4810 m, with shrub-size trees reaching even higher. Therefore, the MEE explains the occurrence of one of the world's highest treelines in the central Andes.

  8. Automatic orbital GTAW welding: Highest quality welds for tomorrow's high-performance systems

    Science.gov (United States)

    Henon, B. K.

    1985-01-01

Automatic orbital gas tungsten arc welding (GTAW), or TIG welding, is certain to play an increasingly prominent role in tomorrow's technology. The welds are of the highest quality, and the repeatability of automatic welding is vastly superior to that of manual welding. Since less heat is applied to the weld during automatic welding than during manual welding, there is less change in the metallurgical properties of the parent material. The possibility of accurate control and the cleanliness of the automatic GTAW process make it highly suitable for welding the more exotic and expensive materials now widely used in the aerospace and hydrospace industries. Titanium, stainless steel, Inconel, and Incoloy, as well as aluminum, can all be welded to the highest quality specifications automatically. Automatic orbital GTAW equipment is available for the fusion butt welding of tube-to-tube joints as well as tube-to-autobuttweld fittings. The same equipment can also be used for the fusion butt welding of pipe up to 6 inches in diameter with a wall thickness of up to 0.154 inches.

  9. Optimasi Penggunaan Lahan Kosong di Kecamatan Baturiti Untuk Properti Komersial Dengan Prinsip Highest and Best Use

    Directory of Open Access Journals (Sweden)

    Made Darmawan Saputra Mahardika

    2013-09-01

Baturiti District is the only district in Tabanan Regency developing an agritourism economy, owing to its strategic location close to several well-known tourist attractions. Given this strategic location, commercial property development offers potentially high returns for investors who own vacant land in Baturiti District. These conditions create high demand for land even as its supply keeps shrinking. Commercial property development in Baturiti District therefore needs to be optimized so that investors achieve maximum returns. Accordingly, investors who wish to build in Baturiti District need an analysis of alternative uses for vacant land. The parcel analyzed is an undeveloped vacant plot of 22,175 m2 in Baturiti District, Tabanan Regency. The method used to identify the commercial building alternative with the highest market value is Highest and Best Use (HBU) analysis. With this method, the landowner can identify the best alternative that is legally permissible, physically possible, financially feasible, and maximally productive. The result of this Highest and Best Use analysis is a mixed-use alternative, a hotel with souvenir shops, with the highest land value among the alternatives: Rp 7,950,714.60 per m2.

  10. A novel method to predict the highest hardness of plasma sprayed coating without micro-defects

    Science.gov (United States)

    Zhuo, Yukun; Ye, Fuxing; Wang, Feng

    2018-04-01

Plasma sprayed coatings are built up from splats, which are generally regarded as the elementary units of the coating. Many researchers have focused on the morphology and formation mechanism of splats. In this paper, however, a novel method is proposed to predict the highest hardness of a plasma sprayed coating without micro-defects, based on the nanohardness of splats. The effectiveness of this method was examined by experiments. First, the microstructure of splats and coating, together with the 3D topography of the splats, was observed by SEM (SU1510) and video microscope (VHX-2000). Second, the nanohardness of splats was evaluated by nanoindentation (NHT) and compared with the microhardness of the coating measured by a microhardness tester (HV-1000A). The results show that the nanohardness of splats with diameters of 70 μm, 100 μm and 140 μm was in the range of 11-12 GPa, while the microhardness of the coating was in the range of 8-9 GPa. Because the splats contain no micro-defects such as pores and cracks in the nano-zones probed by nanoindentation, the nanohardness of the splats can be utilized to predict the highest hardness of a coating without micro-defects. This method indicates the maximum attainable hardness of a sprayed coating and will reduce the number of tests needed to obtain high-hardness coatings with better wear resistance.

  11. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are intended not to derive a set of specific criteria but to demonstrate the need to discriminate among various processes in studies of plume dispersion.
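The Turner-style power-law adjustment mentioned above can be sketched as follows: a concentration for averaging time t is related to the 15-minute value by C(t) = C_15 · (15 / t)^p. The exponent p = 0.17 is an assumed textbook value, not a parameter taken from this paper.

```python
# Sketch of a power-law averaging-time adjustment for pollutant
# concentrations: C(t) = C_15min * (15 / t)**p. The exponent p = 0.17
# is an assumed textbook value, not derived from this paper.

def adjust_concentration(c_15min, t_minutes, p=0.17):
    """Scale a 15-min average concentration to averaging time t_minutes."""
    return c_15min * (15.0 / t_minutes) ** p

c_hourly = adjust_concentration(100.0, 60.0)  # longer averaging -> lower peak
```

Longer averaging times smooth out short-lived concentration peaks, so the adjusted value for a 1-hour average comes out below the 15-minute value.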

  12. Determination of concentration factors for Cs-137 and Ra-226 in the mullet species Chelon labrosus (Mugilidae) from the South Adriatic Sea.

    Science.gov (United States)

    Antovic, Ivanka; Antovic, Nevenka M

    2011-07-01

Concentration factors for Cs-137 and Ra-226 transfer from seawater, and dried sediment or mud with detritus, have been determined for whole, fresh weight, Chelon labrosus individuals and selected organs. Cesium was detected in 5 of 22 fish individuals, and its activity ranged from 1.0 to 1.6 Bq kg⁻¹. Radium was detected in all fish, and ranged from 0.4 to 2.1 Bq kg⁻¹, with an arithmetic mean of 1.0 Bq kg⁻¹. With regard to fish organs, cesium activity concentration was highest in muscles (maximum - 3.7 Bq kg⁻¹), while radium was highest in skeletons (maximum - 25 Bq kg⁻¹). Among cesium concentration factors, those for muscles were the highest (from seawater - an average of 47, from sediment - an average of 3.3, from mud with detritus - an average of 0.8). Radium concentration factors were the highest for skeleton (from seawater - an average of 130, from sediment - an average of 1.8, from mud with detritus - an average of 1.5). Additionally, annual intake of cesium and radium by human adults consuming muscles of this fish species has been estimated to provide, in aggregate, an effective dose of about 4.1 μSv y⁻¹. 2011 Elsevier Ltd. All rights reserved.

  13. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
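The refinement idea described above can be sketched in miniature: a Monte Carlo walk drives a starting structure toward the averaged coordinates under a harmonic pseudo-energy, while rejecting moves that distort consecutive C-alpha distances (the geometric constraint that plain averaging violates). This toy uses 2-D coordinates and greedy downhill acceptance; it is an illustrative assumption, not the authors' implementation.

```python
# Toy Monte Carlo refinement: pull a structure toward averaged coordinates
# with a harmonic pseudo-energy, rejecting moves that break realistic
# consecutive C-alpha distances. 2-D coordinates; illustrative only.
import math
import random

random.seed(0)

avg = [(0.0, 0.0), (3.8, 0.0), (7.6, 0.0)]   # target: averaged C-alpha trace
x = [(0.4, 0.3), (4.1, 0.2), (7.7, -0.3)]    # starting structure (bonds valid)
K = 1.0                                       # harmonic force constant

def pseudo_energy(coords):
    """Harmonic restraint pulling each atom toward its averaged position."""
    return K * sum((px - ax) ** 2 + (py - ay) ** 2
                   for (px, py), (ax, ay) in zip(coords, avg))

def bonds_ok(coords, lo=3.6, hi=4.0):
    """Keep consecutive C-alpha distances within a realistic window."""
    return all(lo <= math.dist(p, q) <= hi for p, q in zip(coords, coords[1:]))

for _ in range(20000):
    i = random.randrange(len(x))
    trial = list(x)
    px, py = trial[i]
    trial[i] = (px + random.uniform(-0.05, 0.05),
                py + random.uniform(-0.05, 0.05))
    # accept only downhill moves that keep realistic geometry
    if bonds_ok(trial) and pseudo_energy(trial) < pseudo_energy(x):
        x = trial
```

Because the target trace itself satisfies the bond constraints, the walk converges close to the averaged coordinates while never passing through an unphysical geometry; a plain coordinate average offers no such guarantee.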

  14. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

40 CFR Protection of Environment (2010-07-01 edition), ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM, § 76.11 Emissions averaging. (a) General...

  15. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  16. Analisa Highest and Best Use Pada Lahan Kosong Di Jemur Gayungan II Surabaya

    Directory of Open Access Journals (Sweden)

    Finda Virgitta Faradiany

    2014-09-01

The increasingly rapid growth of the property business in Surabaya has pushed demand for land ever higher. Conditions on the ground, however, show the opposite: some parcels are still left vacant and unused by their owners. Such conditions call for efficient, optimized land use by developing a commercial property that benefits both the owner and the surrounding area. Plot "X", an area of 1786 m2 located on Jl. Jemur Gayungan II, is a vacant parcel near an office district with the potential to be developed into commercial property. The value of plot "X" depends on its use. The valuation method applied is Highest and Best Use (HBU) analysis, which selects the use that is legally permissible, physically possible, financially feasible, and maximally productive. The study found that the alternative yielding the highest land value and maximum productivity is a hotel. The resulting land value is Rp 9,722,718/m2, with productivity increasing by 486%.
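The final step of an HBU analysis, choosing the alternative with the highest residual land value after the legal, physical and financial screens, can be sketched as below. The cap rate, incomes and building costs are illustrative assumptions; only the land area is taken from the study.

```python
# Sketch of the maximum-productivity step in an HBU analysis: pick the
# alternative with the highest residual land value. Cap rate, incomes
# and costs below are illustrative assumptions, not the study's figures.

def residual_land_value(noi, cap_rate, building_cost, land_area_m2):
    """Capitalized income minus building cost, per m2 of land (Rp/m2)."""
    property_value = noi / cap_rate  # income capitalization
    return (property_value - building_cost) / land_area_m2

LAND_AREA = 1786  # m2, from the study

alternatives = {
    "hotel": residual_land_value(9.0e9, 0.10, 72.0e9, LAND_AREA),
    "retail": residual_land_value(4.0e9, 0.10, 33.0e9, LAND_AREA),
}
best_use = max(alternatives, key=alternatives.get)  # the highest and best use
```

The highest and best use is simply the financially feasible alternative whose residual land value per square metre comes out on top.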

  17. Soil pollution from motor car emissions in the highest region of the Tauern mountains autobahn

    International Nuclear Information System (INIS)

    Kasperowski, E.; Frank, E.

    1990-01-01

In a pilot study, pollutant loads from motor traffic were investigated and quantified in soils and grassland near the autobahn. Near the motorway, increased concentrations of inorganic and organic pollutants were found, decreasing with distance, both in soil and in grassland. Reduced soil biological activity is also attributed to these pollutants. (orig.)

  18. Failure of ETV in patients with the highest ETV success scores.

    Science.gov (United States)

    Gianaris, Thomas J; Nazar, Ryan; Middlebrook, Emily; Gonda, David D; Jea, Andrew; Fulkerson, Daniel H

    2017-09-01

    OBJECTIVE Endoscopic third ventriculostomy (ETV) is a surgical alternative to placing a CSF shunt in certain patients with hydrocephalus. The ETV Success Score (ETVSS) is a reliable, simple method to estimate the success of the procedure by 6 months of postoperative follow-up. The highest score is 90, estimating a 90% chance of the ETV effectively treating hydrocephalus without requiring a shunt. Treatment with ETV fails in certain patients, despite their being the theoretically best candidates for the procedure. In this study the authors attempted to identify factors that further predicted success in patients with the highest ETVSSs. METHODS A retrospective review was performed of all patients treated with ETV at 3 institutions. Demographic, radiological, and clinical data were recorded. All patients by definition were older than 1 year, had obstructive hydrocephalus, and did not have a prior shunt. Failure of ETV was defined as the need for a shunt by 1 year. The ETV was considered a success if the patient did not require another surgery (either shunt placement or a repeat endoscopic procedure) by 1 year. A statistical analysis was performed to identify factors associated with success or failure. RESULTS Fifty-nine patients met the entry criteria for the study. Eleven patients (18.6%) required further surgery by 1 year. All of these patients received a shunt. The presenting symptom of lethargy statistically correlated with success (p = 0.0126, odds ratio [OR] = 0.072). The preoperative radiological finding of transependymal flow (p = 0.0375, OR 0.158) correlated with success. A postoperative larger maximum width of the third ventricle correlated with failure (p = 0.0265). CONCLUSIONS The preoperative findings of lethargy and transependymal flow statistically correlated with success. This suggests that the best candidates for ETV are those with a relatively acute elevation of intracranial pressure. Cases without these findings may represent the failures in this

  19. Role of spatial averaging in multicellular gradient sensing.

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
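
    The covariance point can be made explicit with a standard identity (not taken from the paper itself): if the gradient is estimated by subtracting two measured concentrations c1 and c2, the noise in the estimate is governed by

```latex
\operatorname{Var}(c_1 - c_2) = \operatorname{Var}(c_1) + \operatorname{Var}(c_2) - 2\,\operatorname{Cov}(c_1, c_2)
```

    Transverse averaging shrinks the two variance terms, which by itself would raise precision, but in the local excitation-global inhibition model it also shrinks the covariance term, and the net effect is that the variance of the difference grows and gradient-sensing precision falls.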

  20. Tritium concentrations in bees and honey at Los Alamos National Laboratory: 1979-1996

    Energy Technology Data Exchange (ETDEWEB)

    Fresquez, P.R.; Armstrong, D.R.; Pratt, L.H.

    1997-01-01

    Honeybees are effective monitors of environmental pollution. The objective of this study was to summarize tritium ({sup 3}H) concentrations in bees and honey collected from within and around Los Alamos National Laboratory (LANL) over an 18-year period. Based on the long-term average, bees from nine out of eleven hives and honey from six out of eleven hives on LANL lands contained {sup 3}H that was significantly higher (p < 0.05) than background. The highest average concentration of {sup 3}H in bees (435 pCi mL{sup -1}) collected over the years was from LANL's Technical Area (TA) 54, a low-level radioactive waste disposal site (Area G). Similarly, the highest average concentration of {sup 3}H in honey (709 pCi mL{sup -1}) was collected from a hive located near three {sup 3}H storage ponds at LANL TA-53. The average concentrations of {sup 3}H in bees and honey from background hives were 1.0 pCi mL{sup -1} and 1.5 pCi mL{sup -1}, respectively. Although the concentrations of {sup 3}H in bees and honey from most LANL and perimeter (White Rock/Pajarito Acres) areas were significantly higher than background, most areas, with the exception of TA-53 and TA-54, generally exhibited decreasing {sup 3}H concentrations over time.

  1. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended.

  2. Potassium concentration in bone of Beijing residents

    International Nuclear Information System (INIS)

    Lin Lianqing

    1994-01-01

    The author presents measurements of K concentration in bone ash samples from 65 residents of the Beijing area, obtained by atomic absorption spectrometry. The results indicate that the K concentration in bone ash follows a log-normal distribution, ranging from 0.75 to 8.44 g/kg ash; the average and standard deviation are 2.40 and 1.52 g/kg ash, respectively. The highest concentration (4.88 g/kg ash) appears in infancy (< 1 y), the lowest (1.59 g/kg ash) in early childhood (2-5 y), and a relatively low concentration (1.88 g/kg ash) in adolescence (11-20 y). The K concentration in wet bone as a percentage of weight is 0.067%. The specific activity of K-40 in bone is about 22 mBq/g wet bone, and the total K-40 activity in the whole skeleton is 220 Bq. The annual absorbed dose received by adult bone from K-40 is estimated to be 95 μGy

  3. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  4. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  5. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic
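
    The abstract in this record is truncated. For context, the core of trajectory averaging (also known as Polyak-Ruppert averaging) is to report the running mean of the stochastic-approximation iterates rather than the last iterate alone; the following is a minimal illustrative sketch on a toy root-finding problem (the function names and the toy problem are assumptions for illustration, not from the paper):

```python
import random

def sa_trajectory_average(noisy_field, theta0, steps, a=1.0, alpha=0.7, seed=0):
    """Stochastic approximation with trajectory (Polyak-Ruppert) averaging.

    theta is updated with a slowly decaying step size a / k**alpha, and the
    estimator returned is the running average of the whole trajectory, which
    typically converges faster than the last iterate on its own.
    """
    rng = random.Random(seed)
    theta, avg = theta0, 0.0
    for k in range(1, steps + 1):
        drift = noisy_field(theta) + rng.gauss(0.0, 1.0)  # noisy observation
        theta -= (a / k**alpha) * drift                   # SA update
        avg += (theta - avg) / k                          # running mean of iterates
    return avg

# Toy problem: find the root of h(theta) = theta - 1 from noisy evaluations.
estimate = sa_trajectory_average(lambda t: t - 1.0, theta0=0.0, steps=5000)
```

    With 5000 iterations the averaged estimate settles close to the true root at 1.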

  6. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  7. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
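
    One of the families the book covers, ordered weighted averaging (OWA), attaches weights to the ranks of the inputs rather than to the inputs themselves; the following is a minimal illustrative sketch, not code from the book:

```python
def owa(values, weights):
    """Ordered weighted average: sort the inputs in descending order and
    take the dot product with the weight vector (weights must sum to 1).
    weights = [1, 0, ...] yields the max; uniform weights yield the mean."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("OWA weights must sum to 1")
    ordered = sorted(values, reverse=True)  # weights apply to ranks, not positions
    return sum(w * v for w, v in zip(weights, ordered))

# Example: values [3, 1, 2] with rank weights [0.5, 0.3, 0.2]
result = owa([3, 1, 2], [0.5, 0.3, 0.2])  # 0.5*3 + 0.3*2 + 0.2*1
```

    Choosing the weight vector interpolates between the minimum, the arithmetic mean, and the maximum, which is what makes OWA useful as a tunable aggregation function.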

  8. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  9. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  10. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained with the NS2 simulator.
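
    The iterative idea can be illustrated with a weighted max-min fair allocation, which captures how a WFQ scheduler divides link capacity: flows whose input rate is below their weighted share are capped at that rate, and the leftover capacity is redistributed among the remaining flows. This is a generic sketch of the principle, not the authors' exact model:

```python
def wfq_average_bandwidth(link_speed, weights, input_rates):
    """Iterative weighted max-min allocation of link capacity.

    Each round computes the weighted fair share of the still-unsatisfied
    flows; flows whose input rate is below their share are capped at that
    rate, and their unused capacity is redistributed in the next round.
    """
    remaining = set(range(len(weights)))
    alloc = [0.0] * len(weights)
    capacity = float(link_speed)
    while remaining:
        total_w = sum(weights[i] for i in remaining)
        share = {i: capacity * weights[i] / total_w for i in remaining}
        capped = {i for i in remaining if input_rates[i] <= share[i]}
        if not capped:
            # every remaining flow is backlogged: give each its fair share
            for i in remaining:
                alloc[i] = share[i]
            break
        for i in capped:
            alloc[i] = input_rates[i]
            capacity -= input_rates[i]
        remaining -= capped
    return alloc

# 10 Mbit/s link, weights 1:1:2; flow 0 only offers 1 Mbit/s of traffic
bw = wfq_average_bandwidth(10.0, [1, 1, 2], [1.0, 10.0, 10.0])
```

    Flow 0 keeps its 1 Mbit/s, and the 9 Mbit/s left over is split 1:2 between the two backlogged flows.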

  11. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  12. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
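
    For context, the "resampling data averages" in question are the quantities one would otherwise estimate by brute force, recomputing a statistic on many resampled datasets; the following is a minimal Monte Carlo baseline sketch (illustrative only, not the paper's analytic method):

```python
import random

def resampling_average(data, statistic, n_resamples=1000, seed=0):
    """Brute-force bootstrap average: recompute a statistic on many datasets
    resampled with replacement and average the results. The analytic
    replica/TAP approach approximates this average without the repeated
    recomputation (or retraining, when the statistic is a fitted model)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_resamples):
        resample = [rng.choice(data) for _ in data]  # sample with replacement
        total += statistic(resample)
    return total / n_resamples

# Bootstrap average of the sample mean of a small dataset
avg = resampling_average([1.0, 2.0, 3.0, 4.0], lambda s: sum(s) / len(s))
```

    For the sample mean the bootstrap average concentrates near the plain sample mean (2.5 here); the interesting cases are statistics, such as trained model predictions, that are expensive to recompute per resample.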

  13. African American Women: Surviving Breast Cancer Mortality against the Highest Odds

    Directory of Open Access Journals (Sweden)

    Shelley White-Means

    2015-12-01

    Full Text Available Among the country’s 25 largest cities, the breast cancer mortality disparity is highest in Memphis, Tennessee, where African American women are twice as likely to die from breast cancer as White women. This qualitative study of African-American breast cancer survivors explores experiences during and post treatment that contributed to their beating the high odds of mortality. Using a semi-structured interview guide, a focus group session was held in 2012 with 10 breast cancer survivors. Thematic analysis and a deductive a priori template of codes were used to analyze the data. Five main themes were identified: family history, breast/body awareness and preparedness to manage a breast cancer event, diagnosis experience and reaction to the diagnosis, family reactions, and impact on life. Prayer and family support were central to coping, and survivors voiced a cultural acceptance of racial disparities in health outcomes. They reported lack of provider sensitivity regarding pain, financial difficulties, negative responses from family/friends, and resiliency strategies for coping with physical and mental limitations. Our research suggested that a patient-centered approach of demystifying breast cancer (both in patient-provider communication and in community settings) would impact how women cope with breast cancer and respond to information about its diagnosis.

  14. THE IMPACT OF FREQUENCY STANDARDS ON COHERENCE IN VLBI AT THE HIGHEST FREQUENCIES

    Energy Technology Data Exchange (ETDEWEB)

    Rioja, M.; Dodson, R. [ICRAR, University of Western Australia, Perth (Australia); Asaki, Y. [Institute of Space and Astronautical Science, 3-1-1 Yoshinodai, Chuou, Sagamihara, Kanagawa 252-5210 (Japan); Hartnett, J. [School of Physics, University of Western Australia, Perth (Australia); Tingay, S., E-mail: maria.rioja@icrar.org [ICRAR, Curtin University, Perth (Australia)

    2012-10-01

    We have carried out full imaging simulation studies to explore the impact of frequency standards in millimeter and submillimeter very long baseline interferometry (VLBI), focusing on the coherence time and sensitivity. In particular, we compare the performance of the H-maser, traditionally used in VLBI, to that of ultra-stable cryocooled sapphire oscillators over a range of observing frequencies, weather conditions, and analysis strategies. Our simulations show that at the highest frequencies, the losses induced by H-maser instabilities are comparable to those from high-quality tropospheric conditions. We find significant benefits in replacing H-masers with cryocooled sapphire oscillator based frequency references in VLBI observations at frequencies above 175 GHz in sites which have the best weather conditions; at 350 GHz we estimate a 20%-40% increase in sensitivity over that obtained when the sites have H-masers, for coherence losses of 20%-10%, respectively. Maximum benefits are to be expected by using co-located Water Vapor Radiometers for atmospheric correction. In this case, we estimate a 60%-120% increase in sensitivity over the H-maser at 350 GHz.

  15. Exchange Interactions on the Highest-Spin Reported Molecule: the Mixed-Valence Fe42 Complex

    Science.gov (United States)

    Aravena, Daniel; Venegas-Yazigi, Diego; Ruiz, Eliseo

    2016-04-01

    The finding of high-spin molecules that could behave as conventional magnets has been one of the main challenges in Molecular Magnetism. Here, the exchange interactions present in the highest-spin molecule published in the literature, Fe42, have been analysed using theoretical methods based on Density Functional Theory. The system, with a total spin value S = 45, is formed by 42 iron centres: 18 high-spin Fe(III) ions that are ferromagnetically coupled and 24 diamagnetic low-spin Fe(II) ions. The bridging ligands between the paramagnetic centres are two cyanide ligands coordinated to the diamagnetic Fe(II) cations. Calculations were performed using either small Fe4 or Fe3 models or the whole Fe42 complex, showing the presence of two different ferromagnetic couplings between the paramagnetic Fe(III) centres. Finally, Quantum Monte Carlo simulations for the whole system were carried out in order to compare the magnetic susceptibility curve simulated from the calculated exchange coupling constants with the experimental one. This comparison allows for the evaluation of the accuracy of different exchange-correlation functionals in reproducing such magnetic properties.

  16. Highest and Best Use (HBU) Analysis of the Site at Jl. Gubeng Raya No. 54, Surabaya

    Directory of Open Access Journals (Sweden)

    Akmaluddin Akmaluddin

    2013-03-01

    Full Text Available The rate of population growth and the rising level of economic activity in large cities such as Surabaya stand in contrast to the limited availability of land. A property built on a site should deliver the maximum benefit efficiently, so that its returns contribute to the development of the area. It is therefore necessary to determine the most probable and permitted use of vacant or already-built land: the use that is physically possible, supported or justified by regulation, financially feasible, and yields the highest value. This study carries out a Highest and Best Use (HBU) analysis of a 1,150 m2 site at Jl. Gubeng Raya No. 54, Surabaya, on which a hotel is planned. The site has the potential to be developed into commercial property such as a hotel, apartments, offices or shops. The analysis considers the physical, legal and financial aspects and the maximum productivity of each alternative. The results show that the hotel alternative represents the highest and best use of the site, with a land value of Rp 67,069,980.31/m2.

  17. Analysis of Alternatives for the Revitalisation of Pasar Gubeng Masjid Surabaya Using the Highest and Best Use Method

    Directory of Open Access Journals (Sweden)

    Marsha Swalia Mustika

    2017-01-01

    Full Text Available In this era of globalisation, many modern markets have appeared, built with every advantage and facility. Their emergence has pushed traditional markets aside, and Pasar Gubeng Masjid Surabaya is no exception. However, the market's strategic location, close to offices, hotels, shopping centres and a railway station, gives it the potential to be developed into a property that delivers the highest and best land value. A Highest and Best Use (HBU) analysis is therefore needed to inform the best investment. The HBU analysis applies four criteria: physically possible, legally permitted, financially feasible, and maximally productive. The alternative with maximum productivity represents the highest and best use of the Pasar Gubeng Masjid Surabaya site. The results show that the alternative yielding the highest land value and maximum productivity is a mixed-use development combining the market with a shopping centre. The resulting land value is Rp 46,946,524/m2, with productivity increasing by 312%.

  18. Transitional care for the highest risk patients: findings of a randomised control study

    Directory of Open Access Journals (Sweden)

    Kheng Hock Lee

    2015-10-01

    Full Text Available Background: Interventions to prevent readmissions of patients at highest risk have not been rigorously evaluated. We conducted a randomised controlled trial to determine if a post-discharge transitional care programme can reduce readmissions of such patients in Singapore. Methods: We randomised 840 patients with two or more unscheduled readmissions in the prior 90 days and a Length of stay, Acuity of admission, Comorbidity of patient, Emergency department utilisation score ≥ 10 to the intervention programme (n = 419) or control (n = 421). Patients allocated to the intervention group received post-discharge surveillance by a multidisciplinary integrated care team and early review in the clinic. The primary outcome was the proportion of patients with at least one unscheduled readmission within 30 days after discharge. Results: We found no statistically significant reduction in readmissions or emergency department visits in patients in the intervention group compared to usual care. However, patients in the intervention group reported greater patient satisfaction (p < 0.001). Conclusion: Any beneficial effect of interventions initiated after discharge is small for high-risk patients with multiple comorbidities and complex care needs. Future transitional care interventions should focus on providing the entire cycle of care for such patients, starting from the time of admission to the final transition to the primary care setting. Trial Registration: Clinicaltrials.gov, no. NCT02325752

  20. A System with a Choice of Highest-Bidder-First and FIFO Services

    Directory of Open Access Journals (Sweden)

    Tejas Bodas

    2015-02-01

    Full Text Available Service systems using a highest-bidder-first (HBF) policy have been studied in the queueing literature for various applications and in the economics literature to model corruption. Such systems have applications in modern problems such as scheduling jobs in cloud computing or placing ads on web pages. However, using an HBF service is like using a spot market and may not be preferred by many users. For such users, it may be better to provide a simple scheduler, e.g., a FIFO service. Further, in some situations it may even be necessary that a free-service queue operate alongside an HBF queue. Motivated by such a scenario, we propose and analyze a service system with a FIFO server and an HBF server in parallel. Arriving customers are from a heterogeneous population with different valuations of their delay costs. They strategically choose between the FIFO and HBF services; if HBF is chosen, they also choose the bid value to optimize an individual cost. We characterize the Wardrop equilibrium in such a system and analyze the revenue to the server. We see that when the total capacity is fixed and shared between the FIFO and HBF servers, revenue is maximised when the FIFO capacity is non-zero. However, if the FIFO server is added to an HBF server, then the revenue decreases with increasing FIFO capacity. We also discuss the case when customers are allowed to balk.

  1. African American Women: Surviving Breast Cancer Mortality against the Highest Odds

    Science.gov (United States)

    White-Means, Shelley; Rice, Muriel; Dapremont, Jill; Davis, Barbara; Martin, Judy

    2015-01-01

    Among the country’s 25 largest cities, the breast cancer mortality disparity is highest in Memphis, Tennessee, where African American women are twice as likely to die from breast cancer as White women. This qualitative study of African-American breast cancer survivors explores experiences during and post treatment that contributed to their beating the high odds of mortality. Using a semi-structured interview guide, a focus group session was held in 2012 with 10 breast cancer survivors. Thematic analysis and a deductive a priori template of codes were used to analyze the data. Five main themes were identified: family history, breast/body awareness and preparedness to manage a breast cancer event, diagnosis experience and reaction to the diagnosis, family reactions, and impact on life. Prayer and family support were central to coping, and survivors voiced a cultural acceptance of racial disparities in health outcomes. They reported lack of provider sensitivity regarding pain, financial difficulties, negative responses from family/friends, and resiliency strategies for coping with physical and mental limitations. Our research suggested that a patient-centered approach of demystifying breast cancer (both in patient-provider communication and in community settings) would impact how women cope with breast cancer and respond to information about its diagnosis. PMID:26703655

  2. A cosmopolitan design of teacher education and a progressive orientation towards the highest good

    Directory of Open Access Journals (Sweden)

    Klas Roth

    2013-01-01

    Full Text Available In this paper I discuss a Kantian conception of cosmopolitan education. It suggests that we pursue the highest good – an object of morality – in the world together, and requires that we acknowledge the value of freedom, render ourselves both efficacious and autonomous in practice, cultivate our judgment, and unselfishly co-operate in the co-ordination and fulfilment of our morally permissible ends. Now, such an accomplishment is one of the most difficult challenges, and may not be achieved in our time, if ever. In the first part of the paper I show that we, according to Kant, have to interact with each other, and comply with the moral law in the quest of general happiness, not merely personal happiness. In the second part, I argue that a cosmopolitan design of teacher education in Kantian terms can establish moral character, even though good moral character is ultimately the outcome of free choice. Such a design can do so by optimizing the freedom of those concerned to set and pursue their morally permissible ends, and to cultivate their judgment through the use of examples. This requires, inter alia, that they be enabled, and take responsibility, to think for themselves, in the position of everyone else, and consistently; and to strengthen their virtue or self-mastery to comply, in practice, with the moral law.

  3. Oxygen pathway modeling estimates high reactive oxygen species production above the highest permanent human habitation.

    Directory of Open Access Journals (Sweden)

    Isaac Cano

    Full Text Available The production of reactive oxygen species (ROS) from the inner mitochondrial membrane is one of many fundamental processes governing the balance between health and disease. It is well known that ROS are necessary signaling molecules in gene expression, yet when expressed at high levels, ROS may cause oxidative stress and cell damage. Both hypoxia and hyperoxia may alter ROS production by changing mitochondrial PO2 (PmO2). Because PmO2 depends on the balance between O2 transport and utilization, we formulated an integrative mathematical model of O2 transport and utilization in skeletal muscle to predict conditions that cause abnormally high ROS generation. Simulations using data from healthy subjects during maximal exercise at sea level reveal little mitochondrial ROS production. However, altitude triggers high mitochondrial ROS production in muscle regions with high metabolic capacity but limited O2 delivery. This altitude roughly coincides with the highest location of permanent human habitation. Above 25,000 ft., more than 90% of exercising muscle is predicted to produce abnormally high levels of ROS, corresponding to the "death zone" in mountaineering.

  4. Towards highest peak intensities for ultra-short MeV-range ion bunches

    Science.gov (United States)

    Busold, Simon; Schumacher, Dennis; Brabetz, Christian; Jahn, Diana; Kroll, Florian; Deppert, Oliver; Schramm, Ulrich; Cowan, Thomas E.; Blažević, Abel; Bagnoud, Vincent; Roth, Markus

    2015-01-01

    A laser-driven, multi-MeV-range ion beamline has been installed at the GSI Helmholtz center for heavy ion research. The high-power laser PHELIX drives the very short (picosecond) ion acceleration on μm scale, with energies ranging up to 28.4 MeV for protons in a continuous spectrum. The necessary beam shaping behind the source is accomplished by applying magnetic ion lenses like solenoids and quadrupoles and a radiofrequency cavity. Based on the unique beam properties from the laser-driven source, high-current single bunches could be produced and characterized in a recent experiment: At a central energy of 7.8 MeV, up to 5 × 10^8 protons could be re-focused in time to a FWHM bunch length of τ = (462 ± 40) ps via phase focusing. The bunches show a moderate energy spread between 10% and 15% (ΔE/E0 at FWHM) and are available at 6 m distance from the source and thus separated from the harsh laser-matter interaction environment. These successful experiments represent the basis for developing novel laser-driven ion beamlines and accessing highest peak intensities for ultra-short MeV-range ion bunches. PMID:26212024

  5. Medical school dropout--testing at admission versus selection by highest grades as predictors.

    Science.gov (United States)

    O'Neill, Lotte; Hartvigsen, Jan; Wallstedt, Birgitta; Korsholm, Lars; Eika, Berit

    2011-11-01

    Very few studies have reported on the effect of admission tests on medical school dropout. The main aim of this study was to evaluate the predictive validity of non-grade-based admission testing versus grade-based admission relative to subsequent dropout. This prospective cohort study followed six cohorts of medical students admitted to the medical school at the University of Southern Denmark during 2002-2007 (n=1544). Half of the students were admitted based on their prior achievement of highest grades (Strategy 1) and the other half took a composite non-grade-based admission test (Strategy 2). Educational as well as social predictor variables (doctor-parent, origin, parenthood, parents living together, parent on benefit, university-educated parents) were also examined. The outcome of interest was students' dropout status at 2 years after admission. Multivariate logistic regression analysis was used to model dropout. Strategy 2 (admission test) students had a lower relative risk for dropping out of medical school within 2 years of admission (odds ratio 0.56, 95% confidence interval 0.39-0.80). Only the admission strategy, the type of qualifying examination and the priority given to the programme on the national application forms contributed significantly to the dropout model. Social variables did not predict dropout and neither did Strategy 2 admission test scores. Selection by admission testing appeared to have an independent, protective effect on dropout in this setting. © Blackwell Publishing Ltd 2011.

  7. Highest-order optical phonon-mediated relaxation in CdTe/ZnTe quantum dots

    International Nuclear Information System (INIS)

    Masumoto, Yasuaki; Nomura, Mitsuhiro; Okuno, Tsuyoshi; Terai, Yoshikazu; Kuroda, Shinji; Takita, K.

    2003-01-01

    The highest, 19th-order longitudinal optical (LO) phonon-mediated relaxation was observed in photoluminescence excitation spectra of CdTe self-assembled quantum dots grown in ZnTe. Hot excitons photoexcited high in the ZnTe barrier layer relax into the wetting-layer state by successively emitting multiple LO phonons of the barrier layer. Below the wetting-layer state, the LO phonons involved in the relaxation transform into those of the interfacial Zn_xCd_{1-x}Te surrounding the CdTe quantum dots. The ZnTe-like and CdTe-like LO phonons of Zn_xCd_{1-x}Te and, lastly, acoustic phonons are emitted in the relaxation into the CdTe dots. The observed main relaxation is a fast relaxation directly into the CdTe quantum dots, not relaxation through either the wetting-layer quantum well or the band bottom of the ZnTe barrier layer. This observation demonstrates very efficient optical phonon-mediated relaxation of hot excitons excited high in the ZnTe conduction band, through not only the ZnTe extended state but also localized states in the CdTe quantum dots, reflecting the strong exciton-LO phonon interaction of telluride compounds.

  8. Which Environmental Factors Have the Highest Impact on the Performance of People Experiencing Difficulties in Capacity?

    Directory of Open Access Journals (Sweden)

    Verena Loidl

    2016-04-01

    Disability is understood by the World Health Organization (WHO) as the outcome of the interaction between a health condition and personal and environmental factors. Comprehensive data about environmental factors is therefore essential to understand and influence disability. We aimed to identify which environmental factors have the highest impact on the performance of people with mild, moderate and severe difficulties in capacity, who are at risk of experiencing disability to different extents, using data from a pilot study of the WHO Model Disability Survey in Cambodia and random forest regression. Hindering or facilitating aspects of places to socialize in community activities, transportation and the natural environment, as well as the use and need of personal assistance and the use of medication on a regular basis, were the most important environmental factors across groups. Hindering or facilitating aspects of the general environment were the most relevant in persons experiencing mild levels of difficulties in capacity, while social support, attitudes of others and use of medication on a regular basis were highly relevant for the performance of persons experiencing moderate to higher levels of difficulties in capacity. Additionally, we corroborate the high importance of the use and need of assistive devices for people with severe difficulties in capacity.

  9. How to reliably deliver narrow individual-patient error bars for optimization of pacemaker AV or VV delay using a "pick-the-highest" strategy with haemodynamic measurements.

    Science.gov (United States)

    Francis, Darrel P

    2013-03-10

    Intuitive and easily described, "pick-the-highest" is often recommended for quantitative optimization of AV and especially VV delay settings of biventricular pacemakers (BVP; cardiac resynchronization therapy, CRT). But reliable selection of the optimum setting is challenged by beat-to-beat physiological variation, which "pick-the-highest" combats by averaging multiple heartbeats. Optimization is not optimization unless the optimum is identified confidently. This document shows how to calculate how many heartbeats must be averaged to optimize reliably by pick-the-highest. Any reader, by conducting a few measurements, can calculate for locally available methods (i) the biological scatter between replicate measurements and (ii) the curvature of the biological response. With these, for any clinically desired precision of optimization, the necessary number of heartbeats can be calculated. To achieve 95% confidence of getting within ±Δx of the true optimum, the number of heartbeats needed is 2(scatter/curvature)^2/Δx^4 per setting. Applying published scatter/curvature values (which readers should re-evaluate locally) indicates that optimizing AV delay, even coarsely with a 40 ms-wide band of precision, requires many thousand beats. For VV delay, the number approaches a million. Moreover, identifying the optimum twice as precisely requires 30-fold more beats. "Pick the highest" is quick to say but slow to do. We must not expect staff to do the impossible, nor criticise them for not doing so. Nor should we assume recommendations and published protocols are well-designed. Reliable AV or VV optimization, using "pick-the-highest" on commonly recommended manual measurements, is unrealistic. Improving the time-efficiency of the optimization process to become clinically realistic may need a curve-fitting strategy instead, with all acquired data marshalled conjointly. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
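
The sample-size rule quoted in the abstract, N = 2(scatter/curvature)^2/Δx^4 per setting, can be sketched numerically. The function name and the example values below are illustrative placeholders, not figures from the paper; units must be kept consistent (curvature in response units per ms^2, Δx in ms, scatter in response units).

```python
def beats_needed(scatter, curvature, delta_x):
    """Heartbeats to average per setting for 95% confidence of landing
    within +/- delta_x of the true optimum, following the abstract's
    rule N = 2 * (scatter / curvature)**2 / delta_x**4."""
    return 2.0 * (scatter / curvature) ** 2 / delta_x ** 4

# Illustrative only: scatter = 2 response units, curvature = 1 unit/ms^2,
# desired precision +/- 1 ms -> 8 beats per setting.
n = beats_needed(2.0, 1.0, 1.0)
```

Because N scales as 1/Δx^4, tightening the precision band is punishingly expensive, which is the abstract's central point.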

  10. Radon concentration in The Netherlands

    International Nuclear Information System (INIS)

    Meijer, R.J. de; Put, L.W.; Veldhuizen, A.

    1986-02-01

    In 1000 dwellings, which can be assumed to be a reasonable representation of the average Dutch dwelling, time-averaged radon concentrations, radon daughter concentrations and gamma-exposure rates were determined over a year with passive dosemeters. Outdoor values were also determined at approximately 200 measurement points. (Auth.)

  11. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
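
The reject-then-average idea described above can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the names, the whole-map rejection rule (valid-pixel fraction), and the thresholds are assumptions, and invalid pixels are modeled simply as `None`.

```python
import math

INVALID = None  # placeholder for a void / unwrapping-artifact pixel

def robust_average(maps, min_valid_fraction=0.8):
    """Average a list of phase maps (equal-length lists whose pixels are
    floats or INVALID). Maps with too few valid pixels are rejected
    outright; surviving maps contribute per pixel to mean and stdev."""
    kept = [m for m in maps
            if sum(p is not INVALID for p in m) / len(m) >= min_valid_fraction]
    n_pix = len(maps[0])
    mean, std = [], []
    for i in range(n_pix):
        vals = [m[i] for m in kept if m[i] is not INVALID]
        if not vals:
            mean.append(INVALID)
            std.append(INVALID)
            continue
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        mean.append(mu)
        std.append(math.sqrt(var))
    return mean, std, len(kept)
```

A production version would also detect unwrapping artifacts and remove alignment drift before estimating the variance, as the abstract describes.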

  12. Information ranks highest: Expectations of female adolescents with a rare genital malformation towards health care services.

    Directory of Open Access Journals (Sweden)

    Elisabeth Simoes

    Access to highly specialized health care services and support to meet the patient's specific needs is critical for health outcome, especially during age-related transitions within the health care system, such as when adolescents enter adult medicine. Being affected by an orphan disease complicates the situation in several important respects. Long distances to dedicated institutions and scarcity of knowledge, even among medical doctors, may present major obstacles to proper access to health care services and health chances. This study is part of the BMBF-funded TransCareO project, which examines in a mixed-method design health care provision deficits, preferences, and barriers in health care access as perceived by female adolescents affected by the Mayer-Rokitansky-Küster-Hauser syndrome (MRKHS), a rare (orphan) genital malformation. Prior to a communicative validation workshop, critical elements of MRKHS-related care and support (items) were identified in interviews with MRKHS patients. During the subsequent workshop, 87 persons involved in health care and support for MRKHS were asked to rate the items using a 7-point Likert scale (7, strongly agree; 1, strongly disagree) as to (1) the elements' potential importance (i.e., health care expected to be "best practice", or priority) and (2) the presently experienced care. A gap score between the two was computed, highlighting fields of action. Items were arranged into ten separate questionnaires representing domains of care and support (e.g., online portal, patient participation). Within each domain, several items addressed various aspects of "information" and "access". Here, we present the outcome of the items' evaluation by patients (attended, N_PAT = 35; respondents, N_RESP = 19). The highest priority scores occurred for the domains "Online portal", "Patient participation", and "Tailored informational offers", characterizing them as extremely important for the perception as best practice. The highest gap scores yielded domains

  13. Nonlinear Analysis to Detect if Excellent Nursing Work Environments Have Highest Well-Being.

    Science.gov (United States)

    Casalicchio, Giuseppe; Lesaffre, Emmanuel; Küchenhoff, Helmut; Bruyneel, Luk

    2017-09-01

    To detect potentially nonlinear associations between nurses' work environment and nurse staffing on the one hand and nurse burnout on the other. A cross-sectional multicountry study in which survey data were collected from 33,731 registered nurses in 12 European countries during 2009-2010. A semiparametric latent variable model describes both linear and potentially nonlinear associations between burnout (Maslach Burnout Inventory: emotional exhaustion, depersonalization, personal accomplishment) and work environment (Practice Environment Scale of the Nursing Work Index: managerial support for nursing, doctor-nurse collegial relations, promotion of care quality) and staffing (patient-to-nurse ratio). Similar conclusions are reached from linear and nonlinear models estimating the association between work environment and burnout. For staffing, an increase in the patient-to-nurse ratio is associated with an increase in emotional exhaustion. At about 15 patients per nurse, no further increase in emotional exhaustion is seen. The absence of evidence for diminishing returns of improving work environments suggests that continuous improvement and achieving excellence in nurse work environments pays off strongly in terms of lower nurse-reported burnout rates. Nurse staffing policy would benefit from a larger number of studies that identify specific minimum as well as maximum thresholds at which inputs affect nurse and patient outcomes. Nurse burnout is omnipresent and has previously been shown to be related to worse patient outcomes. Additional increments in characteristics of excellent work environments, up to the highest possible standard, correspond to lower nurse burnout. © 2017 Sigma Theta Tau International.

  14. Lost opportunities in HIV prevention: programmes miss places where exposures are highest

    Science.gov (United States)

    Sandøy, Ingvild F; Siziya, Seter; Fylkesnes, Knut

    2008-01-01

    Background Efforts at HIV prevention that focus on high risk places might be more effective and less stigmatizing than those targeting high risk groups. The objective of the present study was to assess risk behaviour patterns, signs of current preventive interventions and apparent gaps in places where the risk of HIV transmission is high and in communities with high HIV prevalence. Methods The PLACE method was used to collect data. Inhabitants of selected communities in Lusaka and Livingstone were interviewed about where people met new sexual partners. Signs of HIV preventive activities in these places were recorded. At selected venues, people were interviewed about their sexual behaviour. Peer educators and staff of NGOs were also interviewed. Results The places identified were mostly bars, restaurants or shebeens, and fewer than 20% reported any HIV preventive activity such as meetings, pamphlets or posters. In 43% of places in Livingstone and 26% in Lusaka, condoms were never available. There were few active peer educators. Among the 432 persons in Lusaka and 676 in Livingstone who were invited for interview about sexual behaviour, consistent condom use was relatively high in Lusaka (77%) but low in Livingstone (44% of men and 34% of women). Having no condom available was the most common reason for not using one. Condom use in Livingstone was higher among individuals socializing in places where condoms were always available. Conclusion In the places studied we found a high prevalence of behaviours with a high potential for HIV transmission but few signs of HIV preventive interventions. Covering the gaps in prevention in these high exposure places should be given the highest priority. PMID:18218124

  15. How to identify the person holding the highest position in the criminal hierarchy?

    Directory of Open Access Journals (Sweden)

    Grigoryev D.A.

    2014-12-01

    The current version of the resolution of the RF Supreme Court Plenum of June 10, 2010, N 12, clarifying the provisions of the law on liability for crimes committed by a person holding the highest position in the criminal hierarchy (Part 4 of Article 210 of the RF Criminal Code), is criticized. The evaluative character of the aggravating circumstance under consideration does not allow clear criteria for identifying the leaders of the criminal environment to be developed. Based on theoretical provisions and court practice, the authors suggest three criteria. The first criterion is specific actions, including: establishment and leadership of the criminal association (criminal organization); coordinating criminal acts; creating sustainable links between different organized groups acting independently; and dividing spheres of criminal influence, sharing criminal income and other criminal activities indicating the person's authority and leadership in a particular area or in a particular sphere of activity. The second is having money, valuables and other property obtained by criminal means, without the person's direct participation in their acquisition; the systematic transfer of money, valuables and other property to that person without legal grounds (unjust enrichment); and the spending of that money, valuables and other property to carry out criminal activities (the crimes themselves and the conditions of their commission). The third is international criminal ties manifested in committing one of the crimes under Part 1 of Article 210 of the RF Criminal Code, if this crime is transnational in nature; ties with extremist and (or) terrorist organizations; and corruption ties. The court may use one or several of these criteria.

  16. Highest and Best Use Analysis of the Pasar Turi Lama Site, Surabaya

    Directory of Open Access Journals (Sweden)

    Maulida Herradiyanti

    2017-01-01

    Pasar Turi is a market that has long been a trade icon, not only in Surabaya but also throughout eastern Indonesia. A major fire in July 2007 destroyed the Pasar Turi building, and trading activity there came to a halt. To this day, the Pasar Turi Phase III site, commonly called Pasar Turi Lama, remains abandoned, even though the 16,281 m^2 plot lies in a central trade district and is suitable for development as commercial property such as offices, shops, shophouses (ruko), or a traditional market. One way to determine the use of the Pasar Turi Lama site is the Highest and Best Use (HBU) method, which identifies the use of an asset that yields the most optimal allocation and hence the highest land value. The HBU criteria are: legally permissible, physically possible, financially feasible, and maximally productive. The analysis identified shops as the best land-use alternative, with the highest land value of Rp 27,994,695.78/m^2 and a maximum productivity of 124%.

  17. Risk of influenza transmission in a hospital emergency department during the week of highest incidence.

    Science.gov (United States)

    Esteve-Esteve, Miguel; Bautista-Rentero, Daniel; Zanón-Viguer, Vicente

    2018-02-01

    To estimate the risk of influenza transmission in patients coming to a hospital emergency department during the week of highest incidence and to analyze factors associated with transmission. Retrospective observational analysis of a cohort of patients treated in the emergency room during the 2014-2015 flu season. The following variables were collected from records: recorded influenza diagnosis, results of a rapid influenza confirmation test, point of exposure (emergency department, outpatient clinic, or the community), age, sex, flu vaccination or not, number of emergency visits, time spent in the waiting room, and total time in the hospital. We compiled descriptive statistics and performed bivariate and multivariate analyses by means of a Poisson regression to estimate relative risk (RR) and 95% CIs. The emergency department patients had a RR of contracting influenza 3.29 times that of the community-exposed population (95% CI, 1.53-7.08; P=.002); their risk was 2.05 times greater than that of outpatient clinic visitors (95% CI, 1.04-4.02; P=.036). Emergency patients under the age of 15 years had a 5.27 times greater risk than older patients (95% CI, 1.59-17.51; P=.007). The RR of patients visiting more than once was 11.43 times greater (95% CI, 3.58-36.44; P<.001). The risk attributable to visiting the emergency department was 70.5%, whereas the risk attributable to community exposure was 2%. The risk of contracting influenza is greater for emergency department patients than for the general population or for patients coming to the hospital for outpatient clinic visits. Patients under the age of 15 years incur greater risk.

  18. Distribution of Mercury Concentrations in Tree Rings and Surface Soils Adjacent to a Phosphate Fertilizer Plant in Southern Korea.

    Science.gov (United States)

    Jung, Raae; Ahn, Young Sang

    2017-08-01

    This study aimed to determine mercury concentrations in tree rings and surface soils at distances of 4, 26 and 40 km from a fertilizer plant located in Yeosu City, Korea. Mercury concentrations in all tree rings were low prior to the establishment of the plant in 1977 and became elevated thereafter. The highest average mercury concentration in the tree rings was 11.96 ng g^-1 at the Yeosu site located nearest to the plant, with the lowest average mercury concentration of 4.45 ng g^-1 at the Suncheon site furthest away from the plant. In addition, the highest mercury content in the surface soil was 108.51 ng cm^-3 at the Yeosu site, whereas the lowest mercury content in the surface soil was 31.47 ng cm^-3 at the Suncheon site. The mercury levels decreased gradually with increasing distance from the plant.

  19. Assessment of natural background radiation in one of the highest regions of Ecuador

    Science.gov (United States)

    Pérez, Mario; Chávez, Estefanía; Echeverría, Magdy; Córdova, Rafael; Recalde, Celso

    2018-05-01

    Natural background radiation was measured in the province of Chimborazo (Ecuador), with the reference coordinates 1°40'00''S 78°39'00''W, where the point on Earth's surface furthest from the centre of the planet is located. Natural background radiation measurements were performed at 130 randomly selected sites using a Geiger-Müller GCA-07W portable detector; these measurements were run 6 m away from buildings or walls and 1 m above the ground. The global average natural background radiation established by UNSCEAR is 2.4 mSv y^-1. In the study area, measurements ranged from 0.57 mSv y^-1 to 3.09 mSv y^-1 with a mean value of 1.57 mSv y^-1; the maximum value was recorded in the north of the study area at 5073 metres above sea level (m.a.s.l.), and the minimum value was recorded in the southwestern area at 297 m.a.s.l. An isodose map was plotted to represent the equivalent dose rate due to natural background radiation. An analysis of variance (ANOVA) between the data of the high and low regions of the study area showed a significant difference (p < α); in addition, a linear correlation coefficient of 0.92 was obtained, supporting the hypothesis that in high-altitude zones extraterrestrial radiation contributes significantly to natural background radiation.
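
The two-group comparison described above can be sketched with a hand-rolled one-way ANOVA F statistic. This is an illustration of the method only; the function name and the dose values in the usage line are made up, not the survey data.

```python
def one_way_anova_f(groups):
    """F statistic for k groups of observations: between-group mean
    square divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative dose readings (mSv/y) for a "high" and a "low" region:
f_stat = one_way_anova_f([[2.8, 3.0, 2.9], [0.6, 0.7, 0.8]])
```

A large F relative to the F distribution's critical value for (k-1, n-k) degrees of freedom corresponds to the significant difference (p < α) reported in the abstract.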

  20. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
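
The classical gossip scheme the abstract builds on can be sketched as below: each exchange replaces a random pair of node values with their mean, which preserves the global sum and drives every node toward the network average. This is only the baseline pairwise-averaging scheme, not the authors' asynchronous or reinforcement-learning variants.

```python
import random

def gossip_average(values, rounds=2000, seed=0):
    """Repeatedly pick a random pair of nodes and set both to their
    mean. The sum is invariant under each exchange, so all nodes
    converge to the average of the initial values."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)  # two distinct nodes
        m = (x[i] + x[j]) / 2.0
        x[i] = x[j] = m
    return x
```

The difficulty highlighted in the abstract is that naive asynchronous variants of this update need not preserve the sum, and hence may converge to something other than the desired average.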

  1. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  2. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
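
A simplified version of the radial-averaging idea might look like the sketch below: it uses a single ray centre at the centroid and equiangular bins, whereas the paper uses an alignment axis and two ray centres over two overlapping arcs. All names are hypothetical.

```python
import math

def radial_profile(points, n_rays=36):
    """Reduce a closed outline (list of (x, y) points) to n_rays mean
    radial distances from the centroid, binned by equal angles."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    bins = [[] for _ in range(n_rays)]
    for x, y in points:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        idx = int(ang / (2 * math.pi) * n_rays) % n_rays
        bins[idx].append(math.hypot(x - cx, y - cy))
    return [sum(b) / len(b) if b else None for b in bins]

def average_profiles(profiles):
    """Average the radial profiles of several outlines ray by ray,
    skipping rays that are empty for a given outline."""
    out = []
    for rays in zip(*profiles):
        vals = [r for r in rays if r is not None]
        out.append(sum(vals) / len(vals) if vals else None)
    return out
```

As in the study, the averaging across feet would be restricted to outlines of similar length (the paper groups foot lengths within +/- 2.25 mm).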

  3. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
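
The trailing moving-average construction and its correlation with a second series can be sketched as follows. The helper names are hypothetical and the numbers in the usage line are toy values, not the misery-index data.

```python
def trailing_average(series, window):
    """Moving average of the previous `window` values: the entry for
    position t averages series[t-window] .. series[t-1]."""
    return [sum(series[t - window:t]) / window
            for t in range(window, len(series) + 1)]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Toy usage: correlate a series with the trailing average of another.
r = pearson_r(trailing_average([1, 2, 3, 4, 5], 2), [1, 2, 3, 4])
```

The paper's analysis amounts to scanning the window length (peaking at 11 years) and comparing the resulting goodness of fit across candidate emotion indices.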

  4. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  5. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaître–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion.

  6. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  7. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus to improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter in strong oceanic turbulence. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.

  8. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  9. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  10. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  11. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
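The element-wise robust averaging this record describes can be illustrated with a plain trimmed average (a minimal sketch of the robust-averaging ingredient only, not the authors' full Trimmed Grassmann Average algorithm; the trim level and toy data are invented):

```python
import numpy as np

def trimmed_average(X, trim=0.25):
    """Element-wise trimmed average of the rows of X: for each coordinate,
    drop the largest and smallest `trim` fraction of values, then average.
    Gross outliers in a few rows then have no influence on the result."""
    n = X.shape[0]
    k = int(np.floor(trim * n))       # number of values trimmed at each end
    Xs = np.sort(X, axis=0)           # sort each coordinate independently
    return Xs[k:n - k].mean(axis=0)

# Three well-behaved observations plus one gross (pixel-like) outlier:
data = np.array([[1.0, 2.0],
                 [1.1, 2.1],
                 [0.9, 1.9],
                 [100.0, -50.0]])
print(trimmed_average(data))          # stays at [1.05, 1.95]
```

The ordinary mean of these rows would be pulled to roughly [25.75, -11.0]; trimming each coordinate independently is what makes the estimator robust to pixel-level outliers.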

  12. Investigation of Behçet's Disease and Recurrent Aphthous Stomatitis Frequency: The Highest Prevalence in Turkey

    Directory of Open Access Journals (Sweden)

    Yalçın Baş

    2016-08-01

    Full Text Available Background: Recurrent Aphthous Stomatitis (RAS) is the most frequently observed painful pathology of the oral mucosa in society. It appears mostly in idiopathic form; however, it may also be related to systemic diseases like Behçet’s Disease (BD). Aims: Determining the prevalence of RAS and BD in the Northern Anatolian Region, which is one of the important routes on the Antique Silk Road. Study Design: Cross-sectional study. Methods: Overall, 85 separate sampling groups were formed to reflect the population density and the demographic data of the region they represent. In the first stage, individuals selected in random order were invited to a Family Physician Unit at a certain date and time. The dermatological examinations of the volunteering individuals were performed by only 3 dermatology specialists. In the second stage, those individuals who had symptoms of BD were invited to our hospital, and the Pathergy Test and eye examinations were performed. Results: The annual prevalence of RAS was determined as 10.84%. The annual prevalence was determined to be higher in women than in men (p=0.000). It was observed that the prevalence peaked in the 3rd decade and then decreased proportionally in the following decades (p=0.000). It was also observed that aphtha recurrence decreased in the following decades (p=0.048). The Behçet’s prevalence was found to be 0.60%. The prevalence in women was found to be higher than in men (0.86% female, 0.14% male; p=0.022). Conclusion: While the RAS prevalence ratio was at an average value compared with other populations, the BD prevalence was found to be the highest ratio in the world according to the literature.

  13. Concentrated Ownership

    DEFF Research Database (Denmark)

    Rose, Caspar

    2014-01-01

    This entry summarizes the main theoretical contributions and empirical findings in relation to concentrated ownership from a law and economics perspective. The various forms of concentrated ownership are described as well as analyzed from the perspective of the legal protection of investors......, especially minority shareholders. Concentrated ownership is associated with benefits and costs. Concentrated ownership may reduce agency costs by increased monitoring of top management. However, concentrated ownership may also provide dominating owners with private benefits of control....

  14. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
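The evidence-weighted averaging described above can be sketched in a few lines (the log-evidence values and point predictions are invented for illustration):

```python
import numpy as np

# Hypothetical log model evidences for three candidate models of the world
log_evidence = np.array([-10.2, -9.1, -13.5])
# Each model's point prediction for some quantity of interest
predictions = np.array([0.8, 0.5, 0.9])

# Posterior model probabilities under a uniform model prior:
# a numerically stable softmax of the log evidences
w = np.exp(log_evidence - log_evidence.max())
w /= w.sum()

# Bayesian model average: predictions weighted by posterior model probability
bma_prediction = w @ predictions
print(w.round(3), round(float(bma_prediction), 3))
```

Because evidence already penalizes complexity, the weights embody the accuracy-complexity trade-off the abstract mentions: the best-supported model dominates, but the alternatives still contribute.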

  15. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  16. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)

  17. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5-ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  18. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  19. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of functional integration methods in the equilibrium statistical mechanics of quantum Bose systems is considered. We show that Gibbs equilibrium averages of Bose operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure.

  20. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  1. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies
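A toy 1-D version of the problem can be set up with simple Tikhonov regularization (the averaging operator, noise level, and regularization parameter below are illustrative choices, not the paper's scheme):

```python
import numpy as np

n, w = 100, 5
x = np.linspace(0.0, 1.0, n)
f_true = np.sin(2 * np.pi * x)                 # function to reconstruct

# Local-averaging operator: each datum is the mean of f over a short window
A = np.zeros((n - w + 1, n))
for i in range(n - w + 1):
    A[i, i:i + w] = 1.0 / w

rng = np.random.default_rng(0)
b = A @ f_true + rng.normal(0.0, 0.01, A.shape[0])   # noisy local averages

# Regularized normal equations: (A^T A + lam I) f = A^T b
lam = 1e-2
f_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print(float(np.mean(np.abs(f_rec - f_true))))        # small mean error
```

Without the `lam * np.eye(n)` term the normal equations are ill-conditioned and the noise in the local averages is amplified; the regularization parameter trades that amplification against bias, which is what the paper's selection strategies aim to balance.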

  2. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of Physics, July 2007, pp. 31–47. A singularity theorem based on spatial ... In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no. ...

  3. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  4. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
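A minimal example of one such moving-average rule (the window lengths and the long-only signal convention are invented for illustration):

```python
import numpy as np

def ma_crossover_signal(prices, short=5, long=20):
    """Classic MA crossover rule: +1 (long) while the short-window moving
    average sits above the long-window one, else 0 (out of the market)."""
    prices = np.asarray(prices, dtype=float)

    def sma(x, w):
        # simple moving average via convolution; 'valid' drops the warm-up
        return np.convolve(x, np.ones(w) / w, mode="valid")

    s = sma(prices, short)[long - short:]    # align the two MA series
    l = sma(prices, long)
    return np.where(s > l, 1, 0)

# In a steady uptrend the short MA leads the long MA, so the rule stays long:
signal = ma_crossover_signal(np.arange(40.0))
print(signal)
```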

  5. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  6. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  7. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  8. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives
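For reference, the average-cost answer for the EOQ model mentioned above is the standard textbook formula Q* = sqrt(2DK/h) (the demand, ordering, and holding figures in the example are invented):

```python
from math import sqrt

def eoq(demand_rate, order_cost, holding_cost):
    """Economic order quantity minimizing the average cost per unit time:
    Q* = sqrt(2 * D * K / h), with D the demand rate, K the fixed cost
    per order and h the holding cost per unit per unit time."""
    return sqrt(2.0 * demand_rate * order_cost / holding_cost)

# D = 1000 units/year, K = 50 per order, h = 2 per unit per year:
print(round(eoq(1000, 50, 2), 1))   # ≈ 223.6
```

The abstract's point is that this AC-optimal quantity need not coincide exactly with the NPV-optimal one, although under certain conditions the two frameworks agree.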

  9. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  10. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  11. Plasma magnesium concentration in patients undergoing coronary artery bypass grafting.

    Science.gov (United States)

    Kotlinska-Hasiec, Edyta; Makara-Studzinska, Marta; Czajkowski, Marek; Rzecki, Ziemowit; Olszewski, Krzysztof; Stadnik, Adam; Pilat, Jacek; Rybojad, Beata; Dabrowski, Wojciech

    2017-05-11

    Introduction. Magnesium (Mg) plays a crucial role in cell physiology and its deficiency may cause many disorders which often require intensive treatment. The aim of this study was to analyse some factors affecting preoperative plasma Mg concentration in patients undergoing coronary artery bypass grafting (CABG). Materials and method. Adult patients scheduled for elective CABG with cardio-pulmonary bypass (CPB) under general anaesthesia were studied. Plasma Mg concentration was analysed before surgery in accordance with age, domicile, profession, tobacco smoking and preoperative Mg supplementation. Blood samples were obtained from the radial artery just before the administration of anaesthesia. Results. 150 patients were studied. Mean preoperative plasma Mg concentration was 0.93 ± 0.17 mmol/L; the mean concentration in patients with preoperative Mg supplementation (1.02 ± 0.16 mmol/L) was significantly higher than in patients without such supplementation. Moreover, intellectual workers supplemented Mg more frequently and had higher plasma Mg concentration than physical workers. Plasma Mg concentration decreases in elderly patients. Patients living in cities, on average, had the highest plasma Mg concentration. Smokers had significantly lower plasma Mg concentration than non-smokers. Conclusions. 1. Preoperative magnesium supplementation increases its plasma concentration. 2. Intellectual workers frequently supplement magnesium. 3. Smoking cigarettes decreases plasma magnesium concentration.

  12. Highest Plasma Phenylalanine Levels in (Very) Premature Infants on Intravenous Feeding; A Need for Concern.

    Directory of Open Access Journals (Sweden)

    Ernesto Cortés-Castell

    Full Text Available To analyse the association in newborns between blood levels of phenylalanine and feeding method and gestational age. This observational, cross-sectional study included a sample of 11,829 infants between 2008 and 2013 in a Spanish region. Data were recorded on phenylalanine values, feeding method [breast, formula, mixed (breast plus formula), or partial or fully intravenous feeding], gestational age in weeks (<32, 32-37, ≥37), gender and days since birth at the moment of blood collection. Outcomes were [phenylalanine] and [phenylalanine] ≥95th percentile. Associations were analysed using multivariate models [linear (mean differences) and logistic regression (adjusted odds ratios)]. Higher phenylalanine values were associated with lower gestational age (p<0.001) and with intravenous feeding (p<0.001). The degree of prematurity and intravenous feeding influenced the plasma concentration of phenylalanine in the newborn. Caution should be taken in [phenylalanine] for newborns with intravenous feeding, monitoring them carefully. Very preterm infants given the recommended amount of amino acids should also be strictly monitored. These findings should be taken into consideration and call for adapting the amounts to the needs of the infant.

  13. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  14. Simplifying consent for HIV testing is associated with an increase in HIV testing and case detection in highest risk groups, San Francisco January 2003-June 2007.

    Directory of Open Access Journals (Sweden)

    Nicola M Zetola

    2008-07-01

    Full Text Available Populations at highest risk for HIV infection face multiple barriers to HIV testing. To facilitate HIV testing procedures, the San Francisco General Hospital Medical Center eliminated required written patient consent for HIV testing in its medical settings in May 2006. To describe the change in HIV testing rates in different hospital settings and populations after the change in HIV testing policy in the SFDPH medical center, we performed an observational study using interrupted time series analysis. Data from all patients aged 18 years and older seen from January 2003 through June 2007 at the San Francisco Department of Public Health (SFDPH) medical care system were included in the analysis. The monthly HIV testing rate per 1000 patient-visits was calculated for the overall population and stratified by hospital setting, age, sex, race/ethnicity, homelessness status, insurance status and primary language. By June 2007, the average monthly rate of HIV tests per 1000 patient-visits increased 4.38 (CI, 2.17-6.60; p<0.001) over the number predicted if the policy change had not occurred (representing a 44% increase). The monthly average number of new positive HIV tests increased from 8.9 (CI, 6.3-11.5) to 14.9 (CI, 10.6-19.2; p<0.001), representing a 67% increase. Although increases in HIV testing were seen in all populations, populations at highest risk for HIV infection, particularly men, the homeless, and the uninsured experienced the highest increases in monthly HIV testing rates after the policy change. The elimination of the requirement for written consent in May 2006 was associated with a significant and sustained increase in HIV testing rates and HIV case detection in the SFDPH medical center. Populations facing the highest barriers to HIV testing had the highest increases in HIV testing rates and case detection in response to the policy change.
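The interrupted time series design used in this study can be sketched as a segmented regression (all numbers below are invented toy data, not the study's):

```python
import numpy as np

# Monthly testing rates with a level shift after a policy change at month 30
months = np.arange(54)
policy = (months >= 30).astype(float)
rng = np.random.default_rng(42)
rate = 10.0 + 0.05 * months + 4.4 * policy + rng.normal(0.0, 0.5, months.size)

# Design matrix: intercept, pre-existing secular trend, post-policy level step
X = np.column_stack([np.ones_like(months, dtype=float), months, policy])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)

# beta[2] estimates the jump in the rate attributable to the policy change,
# over and above the trend that would have continued without it
print(beta.round(2))
```

Comparing the observed post-change level against the counterfactual trend line is exactly how the abstract's "increase over the number predicted if the policy change had not occurred" is obtained.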

  15. Consumer Airfare Report: Table 5 - Detailed Fare Information For Highest and Lowest Fare Markets Under 750 Miles

    Data.gov (United States)

    Department of Transportation — Provides detailed fare information for highest and lowest fare markets under 750 miles. For a more complete explanation, please read the introductory information at...

  16. Elliptical concentrators.

    Science.gov (United States)

    Garcia-Botella, Angel; Fernandez-Balbuena, Antonio Alvarez; Bernabeu, Eusebio

    2006-10-10

    Nonimaging optics is a field devoted to the design of optical components for applications such as solar concentration or illumination. In this field, many different techniques have been used to produce optical devices, including the use of reflective and refractive components or inverse engineering techniques. However, many of these optical components are based on translational symmetries, rotational symmetries, or free-form surfaces. We study a new family of nonimaging concentrators called elliptical concentrators. This new family of concentrators provides new capabilities and can have different configurations, either homofocal or nonhomofocal. Translational and rotational concentrators can be considered as particular cases of elliptical concentrators.

  17. The antioxidant level of Alaska's wild berries: high, higher and highest

    Directory of Open Access Journals (Sweden)

    Roxie Rodgers Dinstel

    2013-08-01

    ... Alaska wild berries have extraordinarily high antioxidant levels. Though cooking lowered the antioxidant level, and adding ingredients such as sugar diluted the antioxidant concentration, products made from berries are high sources of antioxidants.

  18. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
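The exponential-averaging recursion analysed here is straightforward to state in code (a sketch; the segment length and time constant are arbitrary choices):

```python
import numpy as np

def exp_avg_psd(x, seg_len=256, alpha=0.9):
    """PSD estimate by exponential averaging of subsequent periodograms:
    psd <- alpha * psd + (1 - alpha) * periodogram(segment).
    alpha close to 1 corresponds to a long averaging time constant."""
    psd = None
    for start in range(0, len(x) - seg_len + 1, seg_len):
        seg = x[start:start + seg_len]
        pgram = np.abs(np.fft.rfft(seg)) ** 2 / seg_len   # raw periodogram
        psd = pgram if psd is None else alpha * psd + (1 - alpha) * pgram
    return psd

rng = np.random.default_rng(7)
est = exp_avg_psd(rng.standard_normal(4096))   # white noise: flat true PSD
print(est.shape, float(est.mean()))
```

Each raw periodogram bin is (up to scaling) χ²-distributed with 2 degrees of freedom; the recursion forms the geometric mixture of such variables whose exact PDF and moments the paper derives.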

  19. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  20. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.

  1. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  2. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  3. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
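The TAMSD estimator under study is, concretely (a sketch using a simulated unit-step Brownian trajectory; the lags are arbitrary):

```python
import numpy as np

def tamsd(traj, lag):
    """Time-averaged mean-square displacement of a single trajectory:
    the squared displacement over a fixed lag, averaged along time."""
    traj = np.asarray(traj, dtype=float)
    disp = traj[lag:] - traj[:-lag]
    return float(np.mean(disp ** 2))

rng = np.random.default_rng(1)
bm = np.cumsum(rng.standard_normal(10_000))   # Brownian motion, unit steps

# For Brownian motion the mean TAMSD grows linearly in the lag (≈ lag here),
# but any single realization fluctuates around that line, and it is exactly
# those fluctuations whose distribution the abstract characterizes
print([round(tamsd(bm, lag), 1) for lag in (1, 10, 100)])
```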

  4. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  5. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  6. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of period life expectancy as the leading measure of survivorship. An aggregate measure of period mortality seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  7. Radionuclide concentrations in wild waterfowl using the test reactor area radioactive leaching pond

    International Nuclear Information System (INIS)

    Halford, D.K.; Millard, J.B.; Markham, O.D.

    1978-01-01

    Waterfowl use the Test Reactor Area (TRA) Radioactive Leaching Pond on the Idaho National Engineering Laboratory Site (INEL Site) as a resting area. Daily observations of waterfowl were made to determine species composition and numbers. Eight ducks and one coot were collected from the TRA pond during 1976 and 1977. Seven background samples were also collected. Each bird was dissected and tissue samples were analyzed for gamma-emitting radionuclides. Duck tissues contained 25 radionuclides. Average and maximum radionuclide concentrations were highest in gut, followed by feathers, liver, and muscle. Chromium-51 had the highest concentrations of all radionuclides identified (130,000 pCi/g (4800 Bq/g) in the gut and 37,500 pCi/g (1390 Bq/g) on the feathers). Neodymium-147 had the highest concentration on feathers of any radionuclide (104,000 pCi/g, 3850 Bq/g). Cesium-137 was the predominant radionuclide in muscle with a maximum concentration of 4,070 pCi/g (150 Bq/g). The ducks had lower radionuclide concentrations in the edible tissues than in the non-edible tissues. Potential whole-body and thyroid dose commitments to man consuming contaminated ducks were calculated using muscle concentrations of Cs-134, Cs-137, and I-131. Although assumptions used for dose calculations maximized the dose commitment to man, results indicated that consumption of contaminated duck tissue is not a radiation hazard to humans. Even the highest dose commitments were below the limits recommended for individuals of the general population by the International Commission on Radiological Protection (ICRP). The highest potential dose commitment to man would result from the consumption of an American coot known to have spent 20 days on the TRA pond. The average dose commitment to man would be 20 mrem.

  8. Efavirenz Has the Highest Anti-Proliferative Effect of Non-Nucleoside Reverse Transcriptase Inhibitors against Pancreatic Cancer Cells.

    Directory of Open Access Journals (Sweden)

    Markus Hecht

    Full Text Available Cancer prevention and therapy in HIV-1-infected patients will play an important role in future. The non-nucleoside reverse transcriptase inhibitors (NNRTIs) Efavirenz and Nevirapine are cytotoxic against cancer cells in vitro. As other NNRTIs have not been studied so far, all clinically used NNRTIs were tested and the in vitro toxic concentrations were compared to drug levels in patients to predict possible anti-cancer effects in vivo. Cytotoxicity was studied by Annexin-V-APC/7AAD staining and flow cytometry in the pancreatic cancer cell lines BxPC-3 and Panc-1 and confirmed by colony formation assays. The 50% effective cytotoxic concentrations (EC50) were calculated and compared to the blood levels in our patients and published data. The in vitro EC50 of the different drugs in the BxPC-3 pancreatic cancer cells were: Efavirenz 31.5 μmol/l (= 9944 ng/ml), Nevirapine 239 μmol/l (= 63,786 ng/ml), Etravirine 89.0 μmol/l (= 38,740 ng/ml), Lersivirine 543 μmol/l (= 168,523 ng/ml), Delavirdine 171 μmol/l (= 78,072 ng/ml), Rilpivirine 24.4 μmol/l (= 8941 ng/ml). As Efavirenz and Rilpivirine had the highest cytotoxic potential and Nevirapine is frequently used in HIV-1 positive patients, the results of these three drugs were further studied in Panc-1 pancreatic cancer cells and confirmed with colony formation assays. 205 patient blood levels of Efavirenz, 127 of Rilpivirine and 31 of Nevirapine were analyzed. The mean blood level of Efavirenz was 3587 ng/ml (range 162-15,363 ng/ml), of Rilpivirine 144 ng/ml (range 0-572 ng/ml) and of Nevirapine 4955 ng/ml (range 1856-8697 ng/ml). Blood levels from our patients and from published data reached the in vitro toxic EC50 of Efavirenz in about 1 to 5% of all patients. All studied NNRTIs were toxic against cancer cells. A low percentage of patients taking Efavirenz reached in vitro cytotoxic blood levels. It can be speculated that in HIV-1 positive patients having high Efavirenz blood levels pancreatic

  9. Average contents of uranium and thorium in the most important types of rocks of the Ukrainian shield

    International Nuclear Information System (INIS)

    Belevtsev, Ya.N.; Egorov, Yu.P.; Titov, V.K.; Sukhinin, A.M.; Grechishnikova, Z. M.; Zayats, V.B.; Tikhonenko, V.A.; Zhukova, A.M.

    1975-01-01

    The data given concern uranium and thorium contents in the most important rock types of the Ukrainian shield. The smallest quantities of uranium are characteristic of volcanic rocks of basic and ultrabasic composition. Archean formations, whose source materials were mainly basic and ultrabasic volcanites, are marked by this low uranium content. The highest uranium content is observed in the clastogenic rocks of the lower Proterozoic. An average uranium content is observed in silty-argillite rocks represented by crystalline schists and paragneisses. Rheomorphic and metasomatic granites and granosyenites of the lower and middle Proterozoic are also characterized by an increased uranium content. The platform sedimentary rocks of the upper Proterozoic possess a relatively low uranium content. Thorium concentrations with low thorium-uranium ratios in granites, syenites and granosyenites prove their enrichment in uranium.

  10. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
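The induced measure and the relative entropy of coherence mentioned above are easy to probe numerically. The sketch below (our own illustration, not the authors' derivation; function names are ours) samples random mixed states by partial-tracing Haar-random bipartite pure states and shows that their coherence C(ρ) = S(diag(ρ)) − S(ρ) concentrates around its average.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (base 2) of an eigenvalue/probability vector, ignoring zeros."""
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def random_induced_state(d, rng):
    """Random mixed state on C^d obtained by partial-tracing a Haar-random
    pure state on C^d x C^d (the induced measure)."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    psi = g / np.linalg.norm(g)        # random bipartite pure state, reshaped
    return psi @ psi.conj().T          # partial trace over the second system

def rel_entropy_coherence(rho):
    """Relative entropy of coherence: C(rho) = S(diag(rho)) - S(rho)."""
    return entropy(np.real(np.diag(rho))) - entropy(np.linalg.eigvalsh(rho))

rng = np.random.default_rng(3)
samples = [rel_entropy_coherence(random_induced_state(16, rng)) for _ in range(200)]
print(np.mean(samples), np.std(samples))   # narrow spread around the mean
```

For d = 16 the sample standard deviation is already small compared to the mean, the concentration-of-measure behavior the abstract invokes.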

  11. Concentration risk

    Directory of Open Access Journals (Sweden)

    Matić Vesna

    2016-01-01

    Full Text Available Concentration risk has been gaining a special dimension in the contemporary financial and economic environment. Financial institutions are exposed to this risk mainly in the field of lending, mostly through their credit activities and concentration of credit portfolios. This refers to the concentration of different exposures within a single risk category (credit risk, market risk, operational risk, liquidity risk).

  12. Transuranic concentrations in reef and pelagic fish from the Marshall Islands

    International Nuclear Information System (INIS)

    Noshkin, V.E.; Eagle, R.J.; Wong, K.M.; Jokela, T.A.

    1980-09-01

    Concentrations of /sup 239 + 240/Pu are reported in tissues of several species of reef and pelagic fish caught at 14 different atolls in the northern Marshall Islands. Several regularities that are species dependent are evident in the distribution of /sup 239 + 240/Pu among different body tissues. Concentrations in liver always exceeded those in bone and concentrations were lowest in the muscle of all fish analyzed. A progressive discrimination against /sup 239 + 240/Pu was observed at successive trophic levels at all atolls except Bikini and Enewetak, where it was difficult to conclude if any real difference exists between the average concentration factor for /sup 239 + 240/Pu among all fish, which include bottom feeding and grazing herbivores, bottom feeding carnivores, and pelagic carnivores from different atoll locations. The average concentration of /sup 239 + 240/Pu in the muscle of surgeonfish from Bikini and Enewetak was not significantly different from the average concentrations determined in these fish at the other, lesser contaminated atolls. Concentrations among all 3rd, 4th, and 5th trophic level species are highest at Bikini where higher environmental concentrations are found. The reasons for the anomalously low concentrations in herbivores from Bikini and Enewetak are not known

  13. Transuranic concentrations in reef and pelagic fish from the Marshall Islands

    International Nuclear Information System (INIS)

    Noshkin, V.E.; Eagle, R.J.; Wong, K.M.; Jokela, T.A.

    1981-01-01

    Concentrations of sup(239+240)Pu are reported in tissues of several species of reef and pelagic fish caught at 14 different atolls in the northern Marshall Islands. Several regularities that are species dependent are evident in the distribution of sup(239+240)Pu among different body tissues. Concentrations in liver always exceeded those in bone and concentrations were lowest in the muscle of all fish analysed. A progressive discrimination against sup(239+240)Pu was observed at successive trophic levels at all atolls except Bikini and Enewetak, where it was difficult to conclude if any real difference exists between the average concentration factor for sup(239+240)Pu among all fish, which include bottom-feeding and grazing herbivores, bottom-feeding carnivores and pelagic carnivores from different atoll locations. The average concentration of sup(239+240)Pu in the muscle of surgeonfish from Bikini and Enewetak was not significantly different from the average concentrations determined in these fish at the other lesser contaminated atolls. Concentrations among all 3rd, 4th and 5th trophic level species are highest at Bikini where higher environmental concentrations are found. The reasons for the anomalously low concentrations in herbivores from Bikini and Enewetak are not known. (author)

  14. Average level of satisfaction in 10 European countries: explanation of differences

    OpenAIRE

    Veenhoven, Ruut

    1996-01-01

    textabstractABSTRACT Surveys in 10 European nations assessed satisfaction with life-as-a-whole and satisfaction with three life-domains (finances, housing, social contacts). Average satisfaction differs markedly across countries. Both satisfaction with life-as-a-whole and satisfaction with life-domains are highest in North-Western Europe, medium in Southern Europe and lowest in the East-European nations. Cultural measurement bias is unlikely to be involved. The country differences in average ...

  15. Radon concentrations in well water in Sichuan Province, China

    International Nuclear Information System (INIS)

    Chen Yibin; Wu Qun; Zhang Bo; Chen Daifu

    1998-01-01

    There are 110 million people in Sichuan Province, China. Although most of the people in the cities of Sichuan use river water, which contains low levels of radon, as potable water, people in the countryside and in some communities of big cities still use well water for domestic consumption. This paper reports the radon concentrations in well water investigated in four cities, i.e. Chengdu, Chongqing, Leshan and Leijiang, in Sichuan Province. In the 80 wells investigated, the radon concentrations range from 3.5 to 181.6 kBq m -3 . Of the four cities, Chongqing has the highest well water radon concentration, with an average of 49.6 ± 54.1 kBq m -3 , and the greatest variation. The investigation in the four cities showed that the radon concentrations in well water are much higher than those in tap water. In Chongqing, where there are complex geological structures, mainly granite strata, the average radon concentration in well water is 112 times higher than that in tap water, and even much higher than that in river water of the Yangtse River, Jialing River, Jinsha River and Mingjiang River. The population of the four cities is about one sixth of the total population of Sichuan Province. Because of the common use of well water and the high radon concentrations in well water in Sichuan Province, the health effect on the public of radon in well water should be stressed. (author)

  16. Concentrator Photovoltaics

    CERN Document Server

    Luque, Antonio L

    2007-01-01

    Photovoltaic solar-energy conversion is one of the most promising technologies for generating renewable energy, and conversion of concentrated sunlight can lead to reduced cost for solar electricity. In fact, photovoltaic conversion of concentrated sunlight ensures an efficient and cost-effective sustainable power resource. This book gives an overview of all components of concentrator photovoltaic systems, e.g. cells, concentrators, modules and complete systems. The authors report on significant results related to design, technology, and applications, and also cover the fundamental physics and market considerations. Specific contributions include: theory and practice of sunlight concentrators; an overview of concentrator PV activities; a description of concentrator solar cells; design and technology of modules and systems; manufacturing aspects; and a market study.

  17. Natural radionuclides in food in an area with high concentrations of radionuclides

    International Nuclear Information System (INIS)

    Pereira, W.S.; Moraes, S.R.; Cavalcante, J.J.V.; Kelecom, A.; Silva, A.X. da; Lopez, J.M.; Filgueiras, R.; Carmo, A.S.

    2017-01-01

    Areas of high natural radiation expose the local population to doses greater than the world average. One of the routes of exposure is food intake. The activity concentration (AC) of 5 natural radionuclides in 7 types of foods was analyzed. The highest AC measured was 2.40 Bq.kg -1 , for U nat in potato. The multivariate statistics identified two groups: (U nat and 232 Th) and [( 210 Pb and 228 Ra) and 226 Ra].

  18. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  19. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  20. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  1. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
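The "well-defined decorrelation loss" the abstract refers to has a familiar closed form in the simplest toy model: averaging a unit-amplitude visibility whose phase rotates linearly in time attenuates the amplitude by a sinc factor of half the total phase swing. The sketch below (our own illustration, not code from the paper) checks this numerically.

```python
import numpy as np

def decorrelation(phase_swing_rad, n=1000):
    """Amplitude of a unit visibility after boxcar time-averaging,
    for a linear phase drift totalling `phase_swing_rad` over the window
    (toy model; names and parameters are our own)."""
    t = np.linspace(-0.5, 0.5, n)
    vis = np.exp(1j * phase_swing_rad * t)   # unit-amplitude rotating visibility
    return float(abs(vis.mean()))            # averaged amplitude <= 1

# Analytic prediction: |sin(x/2)/(x/2)| for a total swing x.
x = 0.5
print(decorrelation(x), abs(np.sinc(x / (2 * np.pi))))
```

Because short baselines see slower fringe rotation, they tolerate longer averaging windows for the same loss, which is the intuition behind making the averaging time baseline-dependent.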

  2. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter' is presented, which can reduce the time jitter, introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  3. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
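The TAMSD the abstract analyzes is straightforward to compute from a single trajectory; a minimal sketch (function and variable names are our own) for a discrete Brownian path:

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean-square displacement of a single 1D trajectory
    at a given lag: the sliding-window average of squared displacements."""
    d = x[lag:] - x[:-lag]
    return float(np.mean(d ** 2))

rng = np.random.default_rng(0)
# A discrete Brownian trajectory: cumulative sum of unit-variance steps.
# For such a path the TAMSD grows linearly with the lag on average.
x = np.cumsum(rng.normal(0.0, 1.0, 10_000))
print(tamsd(x, 1), tamsd(x, 10), tamsd(x, 100))
```

The fluctuations of this estimator around its mean, which grow with the lag relative to the trajectory length, are exactly what the paper's Laplace-transform analysis quantifies.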

  4. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  5. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  6. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust

  7. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated by the statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good adjustment of the beta stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt

  8. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  9. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares

  10. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  11. Analysis of the average daily radon variations in the soil air

    International Nuclear Information System (INIS)

    Holy, K.; Matos, M.; Boehm, R.; Stanys, T.; Polaskova, A.; Hola, O.

    1998-01-01

    In this contribution, the search for a relation between the daily variations of the radon concentration and the regular daily oscillations of the atmospheric pressure is presented. The deviation of the radon activity concentration in the soil air from the average daily value reaches only a few percent. For the dry summer months, the average daily course of the radon activity concentration can be described by the obtained equation. The analysis of the average daily courses could give information concerning the depth of the gas-permeable soil layer, a soil parameter that is determined only with difficulty by other methods.

  12. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
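The trajectory averaging estimator discussed above is simply the running average (1/n)·Σ X_k of the chain's output. A toy sketch (our own, using a plain Metropolis chain targeting N(0,1) rather than the paper's SAMCMC algorithm; all names are ours) shows the averaged trajectory converging to the target mean:

```python
import numpy as np

def metropolis_trajectory_average(n_steps, seed=0):
    """Run a random-walk Metropolis chain targeting N(0,1) and return the
    trajectory average (1/n) * sum_k X_k, which converges to the target
    mean 0 (illustrative toy, not the SAMC algorithm of the paper)."""
    rng = np.random.default_rng(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        prop = x + rng.normal(0.0, 1.0)
        # Accept with probability min(1, pi(prop)/pi(x)) for pi = N(0,1):
        # log ratio is (x^2 - prop^2)/2.
        if np.log(rng.random()) < 0.5 * (x * x - prop * prop):
            x = prop
        total += x
    return total / n_steps

avg = metropolis_trajectory_average(100_000)
print(avg)
```

The point of the paper is the stronger statement that, for stochastic approximation MCMC, this averaged trajectory is not just consistent but asymptotically efficient.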

  13. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  14. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality
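The quantity being approximated is the spectrum-averaged energy Ē = ∫E·N(E)dE / ∫N(E)dE. The sketch below (our own illustration, not the authors' approximation method) evaluates it for the simple allowed statistical spectrum shape N(E) ∝ p·E_tot·(Q−E)², with the Fermi function set to 1; all names are ours.

```python
import numpy as np

ME = 0.511  # electron rest-mass energy, MeV

def mean_beta_energy(q_mev, n=10_000):
    """Spectrum-averaged beta kinetic energy for the allowed statistical
    shape N(E) ~ p * E_tot * (Q - E)^2 with Fermi function F = 1
    (illustrative simplification only)."""
    e = np.linspace(1e-6, q_mev, n)       # kinetic-energy grid, MeV
    e_tot = e + ME                        # total electron energy
    p = np.sqrt(e_tot ** 2 - ME ** 2)     # electron momentum, MeV/c
    shape = p * e_tot * (q_mev - e) ** 2  # statistical spectrum shape
    # Normalization cancels in the ratio on a uniform grid.
    return float(np.sum(e * shape) / np.sum(shape))

e_bar = mean_beta_energy(1.0)
print(e_bar)
```

Even this crude shape reproduces the familiar rule of thumb that the mean beta energy is roughly a third of the endpoint energy Q; the paper's method corrects such estimates to within about 1 percent of exact calculations.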

  15. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t≥0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
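The equivalence the abstract describes, the time average of f along a sample path equals the expectation of f under the path's frequency distribution, is an identity for any finite discrete-time path. A minimal sketch (our own construction; names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.integers(0, 5, size=100_000)   # a sample path on the state space {0,...,4}

def f(s):
    return s ** 2                      # any measurable function of the state

# Long-run time average of f along the path.
time_avg = float(np.mean(f(x)))

# Expectation of f under the empirical frequency distribution of the path.
values, counts = np.unique(x, return_counts=True)
freq_expectation = float(np.sum(f(values) * counts / len(x)))

print(time_avg, freq_expectation)
```

For a finite path the two numbers coincide exactly; the paper's contribution is the conditions under which the equality survives the limit as time grows without bound.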

  16. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies above the Friedmannian one, and below it at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with the corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  17. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
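
    The bias described above can be reproduced with a few lines of simulation. In the hypothetical setup below (a 10% relative error model, with all numbers invented for illustration), each "experiment" assigns its error from its own measured value; weighting by the reported variances then pulls the average below the true value, while re-deriving the errors from the average itself removes the bias.

    ```python
    import numpy as np

    # Hypothetical setup: each experiment measures mu with a 10% *relative* error
    # and reports sigma_i = 0.1 * x_i based on its own measured value x_i.
    rng = np.random.default_rng(1)
    mu = 100.0
    x = rng.normal(mu, 0.1 * mu, size=10_000)

    # Naive average weighted by the reported variances: biased low, because
    # downward-fluctuating measurements report smaller errors and get more weight.
    w = 1.0 / (0.1 * x) ** 2
    naive = np.sum(w * x) / np.sum(w)      # systematically below mu (about 2% here)

    # Improved estimate: derive each experiment's error from the average itself.
    # With a common relative-error model the weights become equal, so the
    # procedure reduces to the unweighted mean and the bias disappears.
    est = x.mean()
    for _ in range(5):                     # fixed-point iteration
        sigma = 0.1 * est * np.ones_like(x)
        w = 1.0 / sigma ** 2
        est = np.sum(w * x) / np.sum(w)
    ```

    With error models that differ between experiments the iterated weights are no longer equal, but the same fixed-point idea applies.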

  18. Concentrations and assessment of exposure to siloxanes and synthetic musks in personal care products from China

    International Nuclear Information System (INIS)

    Lu Yan; Yuan Tao; Wang Wenhua; Kannan, Kurunthachalam

    2011-01-01

    We investigated the concentrations and profiles of 15 siloxanes (four cyclic siloxanes, D4-D7; 11 linear siloxanes, L4-L14), four synthetic musks (two polycyclic musks, HHCB and AHTN; two nitro musks, MX and MK), and HHCB-lactone in 158 personal care products marketed in China. Siloxanes were detected in 88% of the samples analyzed, at concentrations as high as 52.6 mg g⁻¹; linear siloxanes were the predominant compounds. Among synthetic musks, more than 80% of the samples contained at least one of these compounds, and their total concentrations were as high as 1.02 mg g⁻¹. HHCB was the predominant musk in all of the samples analyzed, accounting on average for 52% of the total musk concentrations. Based on the median concentrations of siloxanes and musks and the average daily usage amounts of consumer products, dermal exposure rates in adults were calculated to be 3.69 and 3.38 mg d⁻¹ for siloxanes and musks, respectively. - Highlights: → Siloxanes and synthetic musks are determined in personal care products. → Highest siloxane concentration was 52.6 mg g⁻¹. → Highest musk concentration was 1.02 mg g⁻¹. → Daily dermal exposure rates of siloxanes and musks were at mg levels. → Dermal exposure is a major pathway of human exposure to siloxanes and musks. - Dermal application of several personal care products is a major source of human exposure to cyclic and linear siloxanes.
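
    The exposure calculation described (median concentration times average daily amount applied, summed over products) is simple to reproduce. The product list and all numbers below are invented placeholders, not the paper's data:

    ```python
    # Hypothetical worked example of the dermal exposure calculation:
    #   exposure (mg/day) = sum over products of
    #       median concentration (mg/g) x average daily amount applied (g/day)
    products = {
        # product: (median_concentration_mg_per_g, daily_use_g)  -- invented values
        "body lotion": (0.8, 4.0),
        "face cream":  (0.5, 1.0),
        "deodorant":   (0.2, 0.5),
    }
    exposure_mg_per_day = sum(c * m for c, m in products.values())
    # 0.8*4.0 + 0.5*1.0 + 0.2*0.5 = 3.8 mg/day
    ```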

  19. Investigation on residential radon concentration in Jingchuan county

    International Nuclear Information System (INIS)

    Zhang Wei; Wan Yihong; Chen Hongxiao; Shang Bin

    2009-01-01

    This paper reports the results of an investigation of residential radon concentrations in Jingchuan County, Gansu Province, from May 2004 to November 2006. Alpha track detectors were used to measure radon levels. Construction types of houses and the percentages of residents living in each were also investigated through questionnaires. The results showed that the mean radon concentration in the 62 investigated houses was 96.2 Bq·m⁻³. The radon concentration in cave dwellings was the highest among all dwelling types: the average level in cave dwellings was 110.2 Bq·m⁻³, which was significantly higher than the national mean value published in the literature and exceeds the WHO-recommended value of 100 Bq·m⁻³. A considerable number of rural residents live in cave dwellings in Jingchuan County; attention should be paid to the radon problem and proper protective measures taken. (authors)

  20. Highest weight generating functions for hyperKähler T⋆(G/H) spaces

    Energy Technology Data Exchange (ETDEWEB)

    Hanany, Amihay [Theoretical Physics Group, Imperial College London,Prince Consort Road, London, SW7 2AZ (United Kingdom); Ramgoolam, Sanjaye [Centre for Research in String Theory,School of Physics and Astronomy, Queen Mary University of London,Mile End Road, London E1 4NS (United Kingdom); Rodriguez-Gomez, Diego [Department of Physics, Universidad de Oviedo,Avda. Calvo Sotelo 18, 33007, Oviedo (Spain)

    2016-10-05

    We develop an efficient procedure for counting holomorphic functions on a hyperKähler cone that has a resolution as the cotangent bundle of a homogeneous space, by providing a formula for computing the corresponding highest weight generating function.

  1. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  2. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy, resulting in distinct, independently controlled (by the choice of phase offset) phase-energy correlations, or chirps, on each bunch train. The earlier trains are more strongly chirped, the later trains less so. For an energy-recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M₅₆ selected to compress all three bunch trains at the FEL, with higher-order terms managed.

  3. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  5. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph, using a simple weighted harmonic average of connectivity: a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to examine asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorship. We also show the utility of our approach by devising a ratings scheme that we apply to data from the Netflix prize, finding a significant improvement over a baseline method.
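
    The abstract does not reproduce the GEN's full recursive definition, but its basic ingredient, a weighted harmonic average, is easy to sketch. The function and the example values below are illustrative assumptions, not the paper's exact construction:

    ```python
    def weighted_harmonic_average(values, weights):
        """Weighted harmonic mean: sum(w_i) / sum(w_i / v_i), for positive v_i."""
        return sum(weights) / sum(w / v for v, w in zip(values, weights))

    # One strong connection (small "distance") dominates the harmonic average,
    # unlike an arithmetic mean: the pair stays "close" despite a weak second link.
    d = weighted_harmonic_average([1.0, 100.0], [1.0, 1.0])
    arith = (1.0 + 100.0) / 2.0     # 50.5, by contrast
    ```

    This asymmetric sensitivity to strong links is what makes harmonic averaging attractive for closeness measures.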

  6. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
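
    The paper's six-variable average is not reproduced in the abstract, but its β = 0 limit is the familiar Klein-Nishina cross section for an electron at rest, which makes a convenient numerical reference point. The quadrature sketch below is a standard textbook computation, not the paper's analytic expression:

    ```python
    import numpy as np

    RE = 2.8179403262e-13  # classical electron radius in cm

    def klein_nishina_total(alpha, n=200_000):
        """Total Compton cross section (cm^2) for photon energy alpha = E/(m0 c^2)
        on an electron at rest, by integrating the Klein-Nishina formula over angle."""
        theta = np.linspace(0.0, np.pi, n)
        r = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))          # alpha_s / alpha
        dsigma = 0.5 * RE**2 * r**2 * (r + 1.0 / r - np.sin(theta) ** 2)
        integrand = 2.0 * np.pi * np.sin(theta) * dsigma         # over solid angle
        dtheta = theta[1] - theta[0]                             # trapezoidal rule
        return float(np.sum(0.5 * (integrand[:-1] + integrand[1:])) * dtheta)

    sigma_thomson = 8.0 * np.pi / 3.0 * RE**2   # low-energy (Thomson) limit
    # klein_nishina_total(alpha) approaches sigma_thomson as alpha -> 0,
    # and falls with increasing photon energy.
    ```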

  7. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, termed the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
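
    As described, the AGDI is the average of absolute differences between adjacent silhouette frames. A minimal sketch (the array shapes and the toy sequence are my own choices, not from the paper):

    ```python
    import numpy as np

    def average_gait_differential_image(silhouettes):
        """AGDI sketch (per the abstract): accumulate absolute differences
        between adjacent binary silhouette frames, then average them."""
        frames = np.asarray(silhouettes, dtype=float)   # shape (T, H, W)
        diffs = np.abs(frames[1:] - frames[:-1])        # T-1 adjacent differences
        return diffs.mean(axis=0)                       # (H, W) feature image

    # Toy check: a "moving" pixel that flips every frame vs. a static pixel.
    seq = np.zeros((4, 2, 2))
    seq[1::2, 0, 0] = 1.0        # pixel (0, 0) alternates 0, 1, 0, 1
    agdi = average_gait_differential_image(seq)
    # agdi[0, 0] is 1.0 (changes every frame); agdi[1, 1] is 0.0 (static)
    ```

    The resulting feature image highlights regions that change during the gait cycle while static regions stay near zero.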

  8. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better concurrence with available experimental data than has been achieved with steady computation

  9. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  10. Multisite study of particle number concentrations in urban air.

    Science.gov (United States)

    Harrison, Roy M; Jones, Alan M

    2005-08-15

    Particle number concentration data are reported from a total of eight urban site locations in the United Kingdom. Of these, six are central urban background sites, while one is an urban street canyon (Marylebone Road) and another is influenced by both a motorway and a steelworks (Port Talbot). The concentrations are generally of a similar order to those reported in the literature, although higher than those in some other studies. The highest concentrations are at the Marylebone Road site and the lowest at the Port Talbot site. The central urban background locations lie in between, with concentrations typically around 20,000 cm⁻³. A seasonal pattern affects all sites, with the highest concentrations in the winter months and the lowest in the summer. Data from all sites show a diurnal variation with a morning rush-hour peak typical of an anthropogenic pollutant. When the dilution effects of windspeed are accounted for, the data show little directionality at the central urban background sites, indicating the influence of sources from all directions, as might be expected if the major source were road traffic. At the London Marylebone Road site there is high directionality driven by the air circulation in the street canyon, and at the Port Talbot site different diurnal patterns are seen for particle number count and PM10, influenced by emissions from road traffic (particle number count) and the steelworks (PM10) and by local meteorological factors. Hourly particle number concentrations are generally only weakly correlated with NOx and PM10, with the former showing a slightly closer relationship. Correlations between daily average particle number count and PM10 were also weak. Episodes of high PM10 concentration in summer typically show low particle number concentrations, consistent with transport of accumulation-mode secondary aerosol, while winter episodes are frequently associated with high PM10 and particle number count arising from poor dispersion of

  11. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
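
    A deliberately simplified sketch of the balanced-SACE idea follows: compare mean longitudinal outcomes between the same fraction of longest-surviving patients in each arm. This omits the paper's bias expressions, sensitivity analyses, and bootstrap inference, and the data are toy numbers:

    ```python
    import numpy as np

    def balanced_sace_estimate(surv_t, y_t, surv_c, y_c, frac=0.5):
        """Simplified sketch of the balanced-SACE estimator: compare outcomes
        between equivalent fractions of the longest-surviving patients in the
        treatment and control groups (bias corrections omitted)."""
        def top_mean(surv, y, frac):
            k = max(1, int(round(frac * len(surv))))
            idx = np.argsort(surv)[-k:]          # indices of the longest survivors
            return np.mean(np.asarray(y)[idx])
        return top_mean(surv_t, y_t, frac) - top_mean(surv_c, y_c, frac)

    # Toy data: the treatment shifts the outcome by +2 among long survivors.
    est = balanced_sace_estimate(
        surv_t=[5, 6, 7, 8], y_t=[12, 12, 14, 14],
        surv_c=[4, 5, 6, 9], y_c=[10, 10, 12, 12], frac=0.5)
    ```

    Note that, unlike SACE estimators built on monotonicity, this comparison needs no assumption that treatment never shortens survival.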

  12. Relationship between 18-month mating mass and average lifetime reproduction

    African Journals Online (AJOL)

    1976; Elliot, Rae & Wickham, 1979; Napier et al., 1980). Although in general agreement with results in the literature, it is evident that the present phenotypic correlations between 18-month mating mass and average lifetime lambing and weaning rate tended to equal the highest comparable estimates in the ...

  13. Influence of sports games classes in specialized sections on the formation of a healthy lifestyle among students of higher educational institutions

    OpenAIRE

    Kudryavtsev, M.; Galimova, A.; Alshuvayli, Kh.; Altuvayni, A.

    2018-01-01

    In modern society, the problem of forming a healthy lifestyle among young people, in particular among students of higher educational institutions, is highly relevant. Sport, and in this case sports games, is a good means of motivation. Purpose: to reveal the consequences of participation in sports games and the influence of these activities on the healthy lifestyle of students of higher educational institutions, and to define the role of classes in sections specializing in preparation for sports games in this ...

  14. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: collisions among grains at the macroscopic scale are compared to collisions among molecules [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that atoms are so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process, and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size; the intermediate corresponds to local averaging (needed to describe some instability phenomena or secondary circulation); and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
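
    The distinction between the two averages can be made concrete with two realizations of a control volume (numbers invented for illustration): when the number of grains n varies across realizations, the phasic (plain ensemble) average and the mass-weighted average of, say, the grain velocity differ.

    ```python
    import numpy as np

    # Two realizations of the same control volume (illustrative values):
    n = np.array([2.0, 8.0])   # number of grains present in each realization
    u = np.array([1.0, 3.0])   # mean grain velocity in each realization

    phasic_avg = u.mean()                         # plain ensemble (phasic) average
    mass_weighted_avg = (n * u).sum() / n.sum()   # weight each realization by n

    # With n constant across realizations the two coincide:
    n_const = np.array([5.0, 5.0])
    same = (n_const * u).sum() / n_const.sum()    # equals u.mean()
    ```

    Here the phasic average is 2.0 while the mass-weighted average is 2.6, because the faster realization also contains more grains.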

  15. New Nordic diet versus average Danish diet

    DEFF Research Database (Denmark)

    Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco

    2016-01-01

    and 3-hydroxybutanoic acid were related to a higher weight loss, while higher concentrations of salicylic, lactic and N-aspartic acids, and 1,5-anhydro-D-sorbitol were related to a lower weight loss. Specific gender- and seasonal differences were also observed. The study strongly indicates that healthy...... metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis, formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects higher levels of vaccenic acid...

  16. 77 FR 34411 - Branch Technical Position on Concentration Averaging and Encapsulation

    Science.gov (United States)

    2012-06-11

    ..., ``Licensing Requirements for Land Disposal of Radioactive Waste,'' establishes a waste classification system... Commission paper, SECY-07-0180, ``Strategic Assessment of Low- Level Radioactive Waste Regulatory Program... Requirements Memorandum for SECY-10-0043, ``Blending of Low-Level Radioactive Waste,'' (ADAMS Accession No...

  17. Average pollutant concentration in soil profile simulated with Convective-Dispersive Equation. Model and Manual

    Science.gov (United States)

    Different parts of the soil solution move with different velocities, and chemicals are therefore leached gradually from the soil by infiltrating water. Solute dispersivity is the soil parameter characterizing this phenomenon. To characterize the dispersivity of a soil profile at the field scale, it is desirable...
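
    The abstract does not reproduce the model itself. As background, the classical Ogata-Banks solution of the 1-D convective-dispersive equation for a continuous step input is the usual reference point, with the dispersivity λ entering through the dispersion coefficient D = λv. A sketch (the parameter values are arbitrary, and this is a standard textbook solution, not necessarily the paper's model):

    ```python
    import math

    def cde_step_solution(x, t, v, dispersivity, c0=1.0):
        """Ogata-Banks solution of the 1-D convective-dispersive equation for a
        continuous step input of concentration c0 at x=0, with pore-water
        velocity v and dispersion coefficient D = dispersivity * v."""
        D = dispersivity * v
        s = 2.0 * math.sqrt(D * t)
        term1 = math.erfc((x - v * t) / s)
        term2 = math.exp(v * x / D) * math.erfc((x + v * t) / s)
        return 0.5 * c0 * (term1 + term2)

    # Near the source the front has passed (C ~ c0); far downstream C ~ 0;
    # at the center of the front (x = v*t) C is slightly above c0/2.
    c_mid = cde_step_solution(x=10.0, t=10.0, v=1.0, dispersivity=0.5)
    ```

    Fitting this curve to measured breakthrough data is one common way to estimate the dispersivity.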

  18. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers of tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost per photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (pulse repetition frequencies in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  19. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
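
    The constrained-simulation method mentioned above (average the force at discrete, fixed values of the coordinate, then integrate) can be sketched in a few lines. The potential, the noise model, and the grid below are invented for illustration; in real applications the instantaneous force is sampled from molecular dynamics rather than drawn from a Gaussian.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def U(x):
        """Model double-well potential along the coordinate (illustrative choice)."""
        return x ** 4 - 2.0 * x ** 2

    def sampled_mean_force(x, n=20_000, noise=1.0):
        # Instantaneous force = -dU/dx plus thermal noise; the noise averages out.
        f_inst = -(4.0 * x ** 3 - 4.0 * x) + rng.normal(0.0, noise, size=n)
        return f_inst.mean()

    # Average the force on a grid of constrained positions, then integrate -<F>
    # (trapezoidal rule) to recover the free energy profile A(x) - A(x0).
    xs = np.linspace(-1.0, 1.0, 101)
    mean_f = np.array([sampled_mean_force(xi) for xi in xs])
    dx = xs[1] - xs[0]
    A = np.concatenate([[0.0], -np.cumsum(0.5 * (mean_f[:-1] + mean_f[1:]) * dx)])
    # A[50] (at x = 0) recovers the barrier height U(0) - U(-1) = 1.0
    ```

    In one dimension this reduces to recovering U itself; in many dimensions the mean force also carries entropic contributions, which is what the paper's general formula handles.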

  20. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
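
    For contrast with the geographic scheme, the standard gossip baseline it improves upon is easy to simulate: repeatedly pick a random pair of nodes and replace both values with their pairwise average. The global average is preserved at every step while the spread shrinks. (This is a sketch of the baseline only, not of the paper's algorithm; network size and iteration count are arbitrary.)

    ```python
    import random

    random.seed(0)
    n = 32
    x = [float(i) for i in range(n)]        # initial node values
    true_avg = sum(x) / n                   # 15.5

    for _ in range(20_000):                 # randomized pairwise gossip steps
        i, j = random.sample(range(n), 2)
        x[i] = x[j] = 0.5 * (x[i] + x[j])   # both nodes adopt the pairwise average

    spread = max(x) - min(x)                # shrinks toward 0 at consensus
    ```

    The inefficiency the paper targets is that, on sparse topologies where exchanges are restricted to nearby nodes, many such steps merely recirculate information locally.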

  1. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was launched, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wavefront aberrations in zig-zag slabs; understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs; and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  2. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    The concept of determining average LET (linear energy transfer) values, i.e., ordinary moments of LET in the absorbed dose distribution versus LET, for ionizing radiation of any kind and any spectrum (even unknown ones), is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or of ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values, one obtains coefficients that can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on the LET of radiation it is not necessary to know the full dose distribution, but only a number of parameters of that distribution, i.e., the LET moments. (author)

  3. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  4. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

    Full Text Available Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, 2Eliassen Group, New London, CT, 3Outcomes Research Consultant, New York, NY, 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significant for each pain symptom item. The lowest probability was 56.3% (on numbness) for mild vs moderate pain and the highest probability was 76.4% (on cold/heat) for mild vs severe pain. The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain
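
    The ridit-style probability reported above (the chance that a randomly selected subject from one severity level has a more favorable item score than one from another level) can be illustrated with a short sketch; the scores below are invented, not study data:

```python
from itertools import product

def favorable_probability(group_a, group_b):
    """Probability that a random subject from group_a has a lower (more
    favorable, i.e. less severe) item score than one from group_b,
    counting ties as one half -- the quantity a ridit analysis reports."""
    wins = sum(1.0 if a < b else 0.5 if a == b else 0.0
               for a, b in product(group_a, group_b))
    return wins / (len(group_a) * len(group_b))

# Hypothetical item scores (0-5) for mild vs moderate average pain.
mild     = [0, 1, 1, 2, 2, 3]
moderate = [1, 2, 3, 3, 4, 4]
print(round(favorable_probability(mild, moderate), 3))  # → 0.806
```

A value above 0.5, as here, means the mild-pain group tends to report the less severe item score.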

  5. Exposure to fine particulate, black carbon, and particle number concentration in transportation microenvironments

    Science.gov (United States)

    Morales Betancourt, R.; Galvis, B.; Balachandran, S.; Ramos-Bonilla, J. P.; Sarmiento, O. L.; Gallo-Murcia, S. M.; Contreras, Y.

    2017-05-01

    This research determined intake dose of fine particulate matter (PM2.5), equivalent black carbon (eBC), and number of sub-micron particles (Np) for commuters in Bogotá, Colombia. Doses were estimated through measurements of exposure concentration, a surrogate of physical activity, as well as travel times and speeds. Impacts of travel mode, traffic load, and street configuration on dose and exposure were explored. Three road segments were selected because of their different traffic loads and composition, and dissimilar street configuration. The transport modes considered include active modes (walking and cycling) and motorized modes (bus, car, taxi, and motorcycle). Measurements were performed simultaneously in the available modes at each road segment. High average eBC concentrations were observed throughout the campaign, ranging from 20 to 120 μg m-3. Commuters in motorized modes experienced significantly higher exposure concentrations than pedestrians and bicyclists. The highest average concentrations of PM2.5, eBC, and Np were measured inside the city's Bus Rapid Transit (BRT) system vehicles. Pedestrians and bicycle users in an open street configuration were exposed to the lowest average concentrations of PM2.5 and eBC, six times lower than those experienced by commuters using the BRT in the same street segment. Pedestrians experienced the highest particulate matter intake dose in the road segments studied, despite being exposed to lower concentrations than commuters in motorized modes. Average potential dose of PM2.5 and eBC per unit length traveled was nearly three times higher for pedestrians in a street canyon configuration compared to commuters in public transport. Slower travel speed and elevated inhalation rates dominate PM dose for pedestrians. The presence of dedicated bike lanes on sidewalks has a significant impact on reducing the exposure concentration for bicyclists compared to those riding in mixed traffic lanes. This study proposes a simple
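
    The dose mechanism described above, where slower speed and a higher inhalation rate outweigh a lower concentration, can be sketched as a per-kilometre intake calculation; the concentrations, inhalation rates, and speeds below are hypothetical, not the study's measurements:

```python
def intake_dose_per_km(concentration_ug_m3, inhalation_rate_m3_h, speed_km_h):
    """Potential dose inhaled per kilometre travelled (ug/km):
    concentration x inhalation rate / travel speed."""
    return concentration_ug_m3 * inhalation_rate_m3_h / speed_km_h

# Hypothetical values: a pedestrian breathes harder and moves more slowly
# than a bus passenger, so the per-km dose can be higher even though the
# pedestrian's exposure concentration is lower.
pedestrian = intake_dose_per_km(50.0, 1.5, 4.5)    # lower exposure, slow
bus_rider  = intake_dose_per_km(120.0, 0.6, 15.0)  # higher exposure, fast
print(pedestrian > bus_rider)   # True
```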

  6. Concentrating Radioactivity

    Science.gov (United States)

    Herrmann, Richard A.

    1974-01-01

    By concentrating radioactivity contained on luminous dials, a teacher can make a high reading source for classroom experiments on radiation. The preparation of the source and its uses are described. (DT)

  7. Highest PBDE levels (max 63 ppm) yet found in biota measured in seabird eggs from San Francisco Bay

    Energy Technology Data Exchange (ETDEWEB)

    She, J.; Holden, A.; Tanner, M.; Sharp, M.; Hooper, K. [Department of Toxic Substances Control, Berkeley, CA (United States). Hazardous Materials Lab.; Adelsbach, T. [Environmental Contaminants Division, Sacramento Fish and Wildlife Office, US Fish and Wildlife Service, Sacramento, CA (United States)

    2004-09-15

    High levels of polybrominated diphenylethers (PBDEs) have been found in humans and wildlife from the San Francisco Bay Area, with levels in women among the highest in the world, and levels in piscivorous seabird eggs at the ppm level. Seabirds are useful for monitoring and assessing ecosystem health at various times and places because they occupy a high trophic level in the marine food web, are long-lived, and are generally localized near their breeding and non-breeding sites. In collaboration with the US Fish and Wildlife Service (USFWS), we are carrying out a three-year investigation of dioxin, PCB and PBDE levels in eggs from fish-eating seabirds. Year 1 (2002) PBDE measurements from 73 bird eggs were reported at Dioxin2003. Year 2 (2003) PBDE measurements from 45 samples are presented in this report. The highest PBDE level measured in eggs was 63 ppm (lipid weight), which is the highest PBDE level yet reported in biota.

  8. RX: a nonimaging concentrator.

    Science.gov (United States)

    Miñano, J C; Benítez, P; González, J C

    1995-05-01

    A detailed description of the design procedure for a new concentrator, the RX, and some examples of its use are given. The method of design is basically the same as that used in the design of two other concentrators: the RR and the XR [Appl. Opt. 31, 3051 (1992)]. The RX is ideal in two-dimensional geometry. The performance of the rotational RX is good when the average angular spread of the input bundle is small: up to 95% of the power of the input bundle can be transferred to the output bundle (with the assumption of a constant radiance for the rays of the input bundle).

  9. Inclusion of Highest Glasgow Coma Scale Motor Component Score in Mortality Risk Adjustment for Benchmarking of Trauma Center Performance.

    Science.gov (United States)

    Gomez, David; Byrne, James P; Alali, Aziz S; Xiong, Wei; Hoeft, Chris; Neal, Melanie; Subacius, Harris; Nathens, Avery B

    2017-12-01

    The Glasgow Coma Scale (GCS) is the most widely used measure of traumatic brain injury (TBI) severity. Currently, the arrival GCS motor component (mGCS) score is used in risk-adjustment models for external benchmarking of mortality. However, there is evidence that the highest mGCS score in the first 24 hours after injury might be a better predictor of death. Our objective was to evaluate the impact of including the highest mGCS score on the performance of risk-adjustment models and subsequent external benchmarking results. Data were derived from the Trauma Quality Improvement Program analytic dataset (January 2014 through March 2015) and were limited to the severe TBI cohort (16 years or older, isolated head injury, GCS ≤8). Risk-adjustment models were created that varied in the mGCS covariates only (initial score, highest score, or both initial and highest mGCS scores). Model performance and fit, as well as external benchmarking results, were compared. In all, 6,553 patients with severe TBI across 231 trauma centers were included. Initial and highest mGCS scores were different in 47% of patients (n = 3,097). Model performance and fit improved when both initial and highest mGCS scores were included, as evidenced by improved C-statistic, Akaike Information Criterion, and adjusted R-squared values. Three-quarters of centers changed their adjusted odds ratio decile, 2.6% of centers changed outlier status, and 45% of centers exhibited a ≥0.5-SD change in the odds ratio of death after including the highest mGCS score in the model. This study supports the concept that additional clinical information has the potential not only to improve the performance of current risk-adjustment models, but also to have a meaningful impact on external benchmarking strategies. The highest mGCS score is a good candidate for inclusion in additional models. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  10. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In this paper, the impact of adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The Coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of the FSO communication system.

  11. Xanthium strumarium L. pollen concentration in aeroplankton of Lublin in the years 2003-2005

    Directory of Open Access Journals (Sweden)

    Elżbieta Weryszko-Chmielewska

    2012-12-01

    Full Text Available Xanthium strumarium (common cocklebur) pollen grains are included among allergenic types. During a three-year study (2003-2005) conducted using the gravimetric method at two trap sites in Lublin, daily concentrations, maximum concentrations and annual sums of pollen grains, as well as the lengths of the pollen seasons of this species, were compared. The pollen season of common cocklebur starts in the first or second decade of July and lasts until the third decade of September. The length of the pollen season is 70-80 days. The highest cocklebur pollen concentrations, amounting to 40-59 grains·cm-2, occurred between 8 and 18 August. The maximum cocklebur pollen concentrations differed slightly between the trap sites over the three years of study. A statistically significant correlation between the Xanthium strumarium pollen concentration and average temperature was demonstrated only in one year of the study (2004).

  12. Temporal variation of carbonyl compound concentrations at a semi-rural site in Denmark

    DEFF Research Database (Denmark)

    Christensen, C.S.; Skov, H.; Nielsen, T.

    2000-01-01

    The atmospheric concentrations of formaldehyde, acetaldehyde and acetone were measured by the DNPH-technique at the semi-rural site Lille Valby, Denmark (55 degrees N) between May-July 1995. The average concentrations were observed to be 1.2 ppbv for formaldehyde, 0.8 ppbv for acetaldehyde and 1.9 ppbv for acetone. For the set of carbonyl compounds, concentrations were found to be highly correlated, though only during daytime. The weak correlations observed during nighttime are believed to be caused by the dry deposition of especially formaldehyde. During periods with low photochemical activity... of hydrocarbons during long-range transport. Especially, the concentration levels of acetone showed a pronounced seasonal variation, with the highest levels observed during summertime and the lowest in winter and spring. The seasonal variation in the concentration levels of formaldehyde and acetaldehyde was small...

  13. Development of an Interferometric Phased Array Trigger for Balloon-Borne Detection of the Highest Energy Cosmic Particles

    Science.gov (United States)

    Vieregg, Abigail

    Through high energy neutrino astrophysics, we explore the structure and evolution of the universe in a unique way and learn about the physics inside of astrophysical sources that drives the acceleration of the highest energy particles. Neutrinos travel virtually unimpeded through the universe, making them unique messenger particles for cosmic sources and carrying information about very distant sources that would otherwise be unavailable. The highest energy neutrinos (E>10^{18} eV), created as a by-product of the interaction of the highest energy cosmic rays with the cosmic microwave background, are an important tool for determining the origin of the highest energy cosmic rays and still await discovery. Balloon-borne and ground-based experiments are poised to discover these ultra-high energy (UHE) cosmogenic neutrinos by looking for radio emission from two different types of neutrino interactions: particle cascades induced by neutrinos in glacial ice, and extensive air showers in the atmosphere induced by the charged-particle by-product of tau neutrinos interacting in the earth. These impulsive radio detectors are also sensitive to radio emission from extensive air showers induced directly by UHE cosmic rays. Balloon-borne experiments are especially well-suited for discovering the highest energy neutrinos, and are the only way to probe the high energy cutoff of the sources themselves to reveal the astrophysics that drives the central engines inside the most energetic accelerators in the universe. Balloon platforms offer the chance to monitor extremely large volumes of ice and atmosphere, but with a higher energy threshold compared to ground-based observatories, since the neutrino interaction happens farther from the detector. This tradeoff means that the sensitivity of balloon-borne experiments, such as the Antarctic Impulsive Transient Antenna (ANITA) or the ExaVolt Antenna, is optimized for discovery of the highest energy neutrinos. We are developing an

  14. Transuranic concentrations in reef and pelagic fish from the Marshall Islands. [239Pu, 240Pu]

    Energy Technology Data Exchange (ETDEWEB)

    Noshkin, V.E.; Eagle, R.J.; Wong, K.M.; Jokela, T.A.

    1980-09-01

    Concentrations of 239+240Pu are reported in tissues of several species of reef and pelagic fish caught at 14 different atolls in the northern Marshall Islands. Several regularities that are species dependent are evident in the distribution of 239+240Pu among different body tissues. Concentrations in liver always exceeded those in bone, and concentrations were lowest in the muscle of all fish analyzed. A progressive discrimination against 239+240Pu was observed at successive trophic levels at all atolls except Bikini and Enewetak, where it was difficult to conclude whether any real difference exists between the average concentration factors for 239+240Pu among all fish, which include bottom-feeding and grazing herbivores, bottom-feeding carnivores, and pelagic carnivores from different atoll locations. The average concentration of 239+240Pu in the muscle of surgeonfish from Bikini and Enewetak was not significantly different from the average concentrations determined in these fish at the other, lesser contaminated atolls. Concentrations among all 3rd, 4th, and 5th trophic level species are highest at Bikini, where higher environmental concentrations are found. The reasons for the anomalously low concentrations in herbivores from Bikini and Enewetak are not known.

  15. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  16. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on trapping efficiency.
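
    The average path length mentioned above can be sketched with a plain breadth-first search over all node pairs; the five-node graph below (two triangles sharing a node, loosely cactus-like) is an illustrative toy, not the generalized dual dendrimer itself:

```python
from collections import deque
from itertools import combinations

def average_path_length(adjacency):
    """Average shortest-path distance over all node pairs of an unweighted,
    connected graph, computed by breadth-first search from every node."""
    def bfs(start):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist
    nodes = list(adjacency)
    total = sum(bfs(u)[v] for u, v in combinations(nodes, 2))
    pairs = len(nodes) * (len(nodes) - 1) // 2
    return total / pairs

# Toy example: two triangles sharing node 0.
g = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}
print(average_path_length(g))   # → 1.4
```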

  17. Ground-state energies and highest occupied eigenvalues of atoms in exchange-only density-functional theory

    Science.gov (United States)

    Li, Yan; Harbola, Manoj K.; Krieger, J. B.; Sahni, Viraht

    1989-11-01

    The exchange-correlation potential of the Kohn-Sham density-functional theory has recently been interpreted as the work required to move an electron against the electric field of its Fermi-Coulomb hole charge distribution. In this paper we present self-consistent results for ground-state total energies and highest occupied eigenvalues of closed subshell atoms as obtained by this formalism in the exchange-only approximation. The total energies, which are an upper bound, lie within 50 ppm of Hartree-Fock theory for atoms heavier than Be. The highest occupied eigenvalues, as a consequence of this interpretation, approximate well the experimental ionization potentials. In addition, the self-consistently calculated exchange potentials are very close to those of Talman and co-workers [J. D. Talman and W. F. Shadwick, Phys. Rev. A 14, 36 (1976); K. Aashamar, T. M. Luke, and J. D. Talman, At. Data Nucl. Data Tables 22, 443 (1978)].

  18. Planar waveguide concentrator used with a seasonal tracker.

    Science.gov (United States)

    Bouchard, Sébastien; Thibault, Simon

    2012-10-01

    Solar concentrators offer good promise for reducing the cost of solar power. Planar waveguides equipped with a microlens slab have already been proposed as an excellent approach to produce medium to high concentration levels. Instead, we suggest the use of a cylindrical microlens array to get useful concentration without tracking during the day. To use only a seasonal tracking system and get the highest possible concentration, cylindrical microlenses are placed in the east-west orientation. Our new design has an acceptance angle in the north-south direction of ±9° and ±54° in the east-west axis. Simulation of our optimized system achieves a 4.6× average concentration level from 8:30 to 16:30 with a maximum of 8.1× and 80% optical efficiency. The low-cost advantage of waveguide-based solar concentrators could support their use in roof-mounted solar panels and eliminate the need for an expensive and heavy active tracker.

  19. Toluene diisocyanate concentration investigation among TDI-related factories in Taiwan and their relations to the type of industry.

    Science.gov (United States)

    Yeh, Hui-Jung; Shih, Tung-Sheng; Tsai, Perng-Jy; Chang, Ho-Yuan

    2002-03-01

    To determine nationwide 2,4- and 2,6-toluene diisocyanate (TDI) concentrations among polyurethane (PU) resin, PU foam, and other TDI-related industries in Taiwan. The ratios of 2,4-/2,6-TDI and the noncarcinogenic risk among these three industries were also investigated. Personal and fixed-area monitoring of TDI concentrations, as well as questionnaires, were performed for 26 factories in Taiwan. The modified OSHA 42 method was applied in sampling and analysis. A noncarcinogenic hazard index was estimated for each of the three industries based on the average concentration measurements. Significant differences in TDI concentrations were found among the three industry categories. For personal monitoring, PU foam was found to have the highest TDI levels [18.6 (+/-33.6) and 22.1 (+/-42.3) ppb for 2,4- and 2,6-TDI], other TDI-related industries intermediate levels [8.3 (+/-18.9) and 10.2 (+/-17.2) ppb], and PU resin the lowest [2.0 (+/-3.5) and 0.7 (+/-1.2) ppb]. The estimated average hazard indices were found to be 310-3310. A substantial percentage of airborne TDI concentrations in Taiwan industries exceeded the current TDI occupational exposure limit, and significant differences in TDI levels were found among the three industry categories. Control remedies for the tasks of charging and foaming should be enforced with the highest priority. A separate 2,6-TDI exposure standard is warranted.
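
    A hazard index of the kind estimated above is commonly computed as each agent's average exposure concentration divided by a reference (safe) concentration, summed over the agents; the reference value below is an assumed placeholder, not the one used in the study:

```python
def hazard_index(concentrations_ppb, reference_ppb):
    """Noncarcinogenic hazard index: sum of each agent's average exposure
    concentration divided by a reference concentration.  A value above 1
    flags potential concern."""
    return sum(c / reference_ppb for c in concentrations_ppb)

# Hypothetical: PU-foam personal-monitoring averages for 2,4- and 2,6-TDI
# against an assumed 0.02 ppb reference (not the study's actual reference).
print(round(hazard_index([18.6, 22.1], 0.02)))   # → 2035
```

Note that such an index scales linearly with concentration, which is why the three industry categories above yield hazard indices spanning an order of magnitude.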

  20. Concentrations of Heavy Metals in NPK Fertilizers Imported in Serbia

    Directory of Open Access Journals (Sweden)

    Jelena Milinović

    2008-01-01

    Full Text Available Concentrations of Pb, Cd, Cu and Mn in sixteen NPK fertilizers imported and widely used in Serbia were determined by flame atomic absorption spectrometry (AAS). The results show that the contents of heavy metals varied significantly in different fertilizers depending on the N:P:K ratio and fertilizer origin. Pb and Cd contents in water solutions of the fertilizers occurred at low ranges: 2.0-3.1 and 0.03-1.56 mg/kg, respectively. An NPK (15:15:15) fertilizer from Romania was found to contain the highest concentrations of Pb and Cd as impurities. Cu content, ranging from 7.1 to 974.7 mg/kg, was highest in coloured fertilizers from Hungary, the Netherlands and Greece. The Mn value in a Hungarian NPK product (10:10:20) exceeds the average Mn value in soil. The data indicate variable contents of heavy metals in fertilizers, some of which are significantly higher than natural concentrations in soil, which suggests that they need to be continuously monitored.

  1. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  2. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out, with peaks in the atom density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion pathways is expected due to the large variation in the local motifs.

  3. Optimization in the nuclear fuel cycle II: Concentration of alpha emitters in the air; Otimização no ciclo do combustível nuclear II: concentração de alfa emissores no ar

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, W.S., E-mail: pereiras@gmail.com [Universidade Veiga de Ameida (UVA), Rio de Janeiro, RJ (Brazil); Silva, A.X.; Lopes, J.M.; Carmo, A.S.; Mello, C.R.; Fernandes, T.S., E-mail: lararapls@hotmail.com, E-mail: Ademir@nuclear.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil); Kelecom, A. [Universidade Federal Fluminense (UFF), Niterói, RJ (Brazil)

    2017-07-01

    Optimization is one of the bases of radioprotection; it aims to move doses away from the dose limit, which marks the borderline of acceptable radiological risk. This work uses monitoring of the concentration of alpha emitters in the air as a tool in the optimization process. We analyzed 27 sampling points of airborne alpha concentration in a nuclear fuel cycle facility. The monthly averages were statistically different, with the highest in February and the lowest in August. All other months were found to have identical mean activity concentration values. Regarding the sampling points, those with the highest averages were points 12, 15 and 9. These points were indicated for the beginning of the optimization process. An analysis of the facility's production should be performed to verify possible correlations between production and the concentration of alpha emitters in the air.
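
    Selecting the starting points for optimization, as done above, amounts to ranking sampling points by their mean activity concentration; the concentrations below are invented for illustration (only the resulting order matches the abstract):

```python
def optimization_candidates(point_averages, top_n=3):
    """Rank sampling points by mean alpha-emitter air concentration and
    return the top candidates for starting the optimization process."""
    ranked = sorted(point_averages, key=point_averages.get, reverse=True)
    return ranked[:top_n]

# Hypothetical mean airborne alpha concentrations (Bq/m3) per sampling point.
averages = {9: 0.41, 12: 0.52, 15: 0.47, 3: 0.12, 21: 0.08}
print(optimization_candidates(averages))   # → [12, 15, 9]
```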

  4. The Experience Elicited by Hallucinogens Presents the Highest Similarity to Dreaming within a Large Database of Psychoactive Substance Reports

    Science.gov (United States)

    Sanz, Camila; Zamberlan, Federico; Erowid, Earth; Erowid, Fire; Tagliazucchi, Enzo

    2018-01-01

    Ever since the modern rediscovery of psychedelic substances by Western society, several authors have independently proposed that their effects bear a high resemblance to the dreams and dreamlike experiences occurring naturally during the sleep-wake cycle. Recent studies in humans have provided neurophysiological evidence supporting this hypothesis. However, a rigorous comparative analysis of the phenomenology (“what it feels like” to experience these states) is currently lacking. We investigated the semantic similarity between a large number of subjective reports of psychoactive substances and reports of high/low lucidity dreams, and found that the highest-ranking substance in terms of the similarity to high lucidity dreams was the serotonergic psychedelic lysergic acid diethylamide (LSD), whereas the highest-ranking in terms of the similarity to dreams of low lucidity were plants of the Datura genus, rich in deliriant tropane alkaloids. Conversely, sedatives, stimulants, antipsychotics, and antidepressants comprised most of the lowest-ranking substances. An analysis of the most frequent words in the subjective reports of dreams and hallucinogens revealed that terms associated with perception (“see,” “visual,” “face,” “reality,” “color”), emotion (“fear”), setting (“outside,” “inside,” “street,” “front,” “behind”) and relatives (“mom,” “dad,” “brother,” “parent,” “family”) were the most prevalent across both experiences. In summary, we applied novel quantitative analyses to a large volume of empirical data to confirm the hypothesis that, among all psychoactive substances, hallucinogen drugs elicit experiences with the highest semantic similarity to those of dreams. Our results and the associated methodological developments open the way to study the comparative phenomenology of different altered states of consciousness and its relationship with non-invasive measurements of brain physiology. PMID
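
    Semantic-similarity analyses of the kind used above are typically built on vector representations of the text reports; the minimal sketch below uses bag-of-words cosine similarity on invented snippets, a much simpler stand-in for the study's actual method:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words term-frequency vectors --
    a minimal stand-in for a semantic-similarity measure."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy reports (invented, not study data): a dream narrative scores higher
# against a psychedelic report than against a stimulant report.
dream = "i could see colors and faces outside the street felt unreal"
lsd   = "visual colors everywhere faces on the street reality felt unreal"
stim  = "focused awake energetic talkative all night no sleep"
print(cosine_similarity(dream, lsd) > cosine_similarity(dream, stim))  # True
```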

  5. The Experience Elicited by Hallucinogens Presents the Highest Similarity to Dreaming within a Large Database of Psychoactive Substance Reports

    Directory of Open Access Journals (Sweden)

    Camila Sanz

    2018-01-01

    Full Text Available Ever since the modern rediscovery of psychedelic substances by Western society, several authors have independently proposed that their effects bear a high resemblance to the dreams and dreamlike experiences occurring naturally during the sleep-wake cycle. Recent studies in humans have provided neurophysiological evidence supporting this hypothesis. However, a rigorous comparative analysis of the phenomenology (“what it feels like” to experience these states) is currently lacking. We investigated the semantic similarity between a large number of subjective reports of psychoactive substances and reports of high/low lucidity dreams, and found that the highest-ranking substance in terms of the similarity to high lucidity dreams was the serotonergic psychedelic lysergic acid diethylamide (LSD), whereas the highest-ranking in terms of the similarity to dreams of low lucidity were plants of the Datura genus, rich in deliriant tropane alkaloids. Conversely, sedatives, stimulants, antipsychotics, and antidepressants comprised most of the lowest-ranking substances. An analysis of the most frequent words in the subjective reports of dreams and hallucinogens revealed that terms associated with perception (“see,” “visual,” “face,” “reality,” “color”), emotion (“fear”), setting (“outside,” “inside,” “street,” “front,” “behind”) and relatives (“mom,” “dad,” “brother,” “parent,” “family”) were the most prevalent across both experiences. In summary, we applied novel quantitative analyses to a large volume of empirical data to confirm the hypothesis that, among all psychoactive substances, hallucinogen drugs elicit experiences with the highest semantic similarity to those of dreams. Our results and the associated methodological developments open the way to study the comparative phenomenology of different altered states of consciousness and its relationship with non-invasive measurements of brain

  6. Communication: The highest frequency hydrogen bond vibration and an experimental value for the dissociation energy of formic acid dimer

    DEFF Research Database (Denmark)

    Kollipost, F.; Larsen, René Wugt; Domanskaya, A.V.

    2012-01-01

    The highest frequency hydrogen bond fundamental of formic acid dimer, ν24 (Bu), is experimentally located at 264 cm−1. FTIR spectra of this in-plane bending mode of (HCOOH)2 and band centers of its symmetric D isotopologues (isotopomers) recorded in a supersonic slit jet expansion are presented...... thermodynamics treatment of the dimerization process up to room temperature. We obtain D0 = 59.5(5) kJ/mol as the best experimental estimate for the dimer dissociation energy at 0 K. Further improvements have to wait for a more consistent determination of the room temperature equilibrium constant....

  7. The Physikalisch-Technische Bundesanstalt PTB (physical-technical Federal institution) - research institute and highest technical authority

    International Nuclear Information System (INIS)

    Klages, H.

    1976-01-01

    The PTB Braunschweig and Berlin is a Federal institution for the natural sciences and engineering and the highest technical authority for measurements. It is subject to the directions of the Federal Ministry for Economic Affairs. Its main tasks are the representation, maintenance and development of physical units and, in connection with this, research, testing, and the approval of measuring equipment for calibration, as well as type examinations and approvals. The types of measuring equipment are described. Many examinations are carried out on a voluntary basis. The advisory activities and the PTB's publications are also reported on. An organizational chart shows the structure of the PTB. (orig.) [de

  8. The Licancabur Project: Exploring the Limits of Life in the Highest Lake on Earth as an Analog to Martian Paleolakes

    Science.gov (United States)

    Cabrol, N. A.; Grin, E. A.; McKay, C. P.; Friedmann, I.; Diaz, G. Chong; Demergasso, C.; Kisse, K.; Grigorszky, I.; Friedmann, R. Ocampo; Hock, A.

    2003-01-01

    The Licancabur volcano (6017 m) hosts in its summit crater the highest and one of the least explored lakes in the world. It is located at 22°50′ South / 67°53′ West on the boundary of Chile and Bolivia in the High Andes. In a freezing environment, the lake, situated in a volcano-tectonic setting, combines low oxygen, low atmospheric pressure due to altitude, and high UV radiation (see table). However, its bottom water temperature remains above 0 °C year-round. These conditions make Licancabur a unique analog to Martian paleolakes, considered high-priority sites for the search for life on Mars.

  9. Potential need for re-definition of the highest priority recovery action in the Krsko SAG-1

    International Nuclear Information System (INIS)

    Bilic Zabric, T.; Basic, I.

    2005-01-01

    Replacement of the old SGs (Steam Generators) [7] and the characteristics of the new ones raise the question of the proper accident management strategy, which rests on the philosophy that repair and recovery actions have first priority. In the current NPP Krsko SAMGs (Severe Accident Management Guidelines), water supply to the SG has priority over re-injection of water into the core. NPP Krsko reconsidered the highest priority of SAG-1 (inject water into the SG) against the WOG (Westinghouse Owners Group) generic approach (inject water into the core) and a potential revision of the Severe Accident Phenomenology Evaluations using the MAAP (Modular Accident Analysis Program) 4.0.5 code. (author)

  10. Averaged emission factors for the Hungarian car fleet

    Energy Technology Data Exchange (ETDEWEB)

    Haszpra, L. [Inst. for Atmospheric Physics, Budapest (Hungary); Szilagyi, I. [Central Research Inst. for Chemistry, Budapest (Hungary)

    1995-12-31

    The vehicular emission of non-methane hydrocarbons (NMHC) is one of the largest anthropogenic sources of NMHC in Hungary and in most industrialized countries. Non-methane hydrocarbons play a key role in the formation of photochemical air pollution, usually characterized by the ozone concentration, which seriously endangers the environment and human health. The ozone-forming potentials of the different NMHCs differ significantly from each other, while the NMHC composition of car exhaust is influenced by the fuel and engine type, the technical condition of the vehicle, vehicle speed and several other factors. In Hungary the majority of cars are still of Eastern European origin. They represent the technological standard of the 1970s, although there have been changes recently. Due to the long-term economic decline in Hungary, the average age of the cars was about 9 years in 1990 and reached 10 years by 1993. The condition of the majority of the cars is poor. In addition, almost one third (31.2%) of the cars are equipped with two-stroke engines, which emit less NOx but much more hydrocarbon. The number of cars equipped with catalytic converters was negligible in 1990 and has been increasing slowly only recently. As a consequence, the traffic emission in Hungary may differ from that measured in or estimated for Western European countries, and the differences should be taken into account in air pollution models. For the estimation of the average emission of the Hungarian car fleet, a one-day roadway tunnel experiment was performed in downtown Budapest in summer 1991. (orig.)
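The abstract notes that ozone-forming potentials differ widely across NMHC species. A common way to collapse a speciated exhaust profile into a single reactivity number is a MIR-weighted sum; the sketch below uses invented MIR coefficients and an invented exhaust mix, purely for illustration (these are not values from the study):

```python
# Ozone formation potential (OFP) as a reactivity-weighted sum of speciated
# NMHC emissions: OFP = sum_i (mass_i * MIR_i). The MIR coefficients and the
# exhaust mix below are illustrative placeholders, not values from the study.

MIR = {  # g O3 per g VOC (illustrative magnitudes)
    "ethene": 9.0,
    "toluene": 4.0,
    "n-butane": 1.2,
    "benzene": 0.7,
}

def ozone_formation_potential(emissions_g):
    """Sum reactivity-weighted emissions (here: grams of VOC per km driven)."""
    return sum(mass * MIR[species] for species, mass in emissions_g.items())

exhaust = {"ethene": 0.10, "toluene": 0.25, "n-butane": 0.30, "benzene": 0.05}
print(round(ozone_formation_potential(exhaust), 3))
```

Two exhaust mixes with identical total NMHC mass can thus differ severalfold in ozone-forming potential, which is why the speciated composition measured in the tunnel experiment matters.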

  11. Concentrations of prioritized pharmaceuticals in effluents from 50 large wastewater treatment plants in the US and implications for risk estimation.

    Science.gov (United States)

    Kostich, Mitchell S; Batt, Angela L; Lazorchak, James M

    2014-01-01

    We measured concentrations of 56 active pharmaceutical ingredients (APIs) in effluent samples from 50 large wastewater treatment plants across the US. Hydrochlorothiazide was found in every sample. Metoprolol, atenolol, and carbamazepine were found in over 90% of the samples. Valsartan had the highest concentration (5300 ng/L), and also had the highest average concentration (1600 ng/L) across all 50 samples. Estimates of potential risks to healthy human adults were greatest for six anti-hypertensive APIs (lisinopril, hydrochlorothiazide, valsartan, atenolol, enalaprilat, and metoprolol), but nevertheless suggest risks of exposure to individual APIs as well as their mixtures are generally very low. Estimates of potential risks to aquatic life were also low for most APIs, but suggest more detailed study of potential ecological impacts from four analytes (sertraline, propranolol, desmethylsertraline, and valsartan). Published by Elsevier Ltd.

  12. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
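A Poisson-lognormal crash-frequency model of the kind named here can be sketched by simulation: the expected count follows a log-linear predictor plus lognormal heterogeneity, and the observed count is Poisson around that rate. The coefficients below are invented for illustration, not the paper's estimates:

```python
import numpy as np

# Sketch of a Poisson-lognormal crash-frequency model: expected count
# exp(beta0 + beta1 * log_volume + eps), eps ~ N(0, sigma^2), with the
# observed count Poisson around that rate. All coefficients are invented.
rng = np.random.default_rng(42)

beta0, beta1, sigma = -1.0, 0.5, 0.4
log_volume = rng.normal(8.0, 1.0, size=20000)       # hypothetical segment volumes
eps = rng.normal(0.0, sigma, size=log_volume.size)  # lognormal heterogeneity term
rate = np.exp(beta0 + beta1 * log_volume + eps)
crashes = rng.poisson(rate)

# The lognormal mixing inflates the variance well beyond the Poisson mean:
print(crashes.mean(), crashes.var())
```

The heterogeneity term is what lets this family capture the overdispersion (variance exceeding the mean) typical of crash counts, which a plain Poisson model cannot.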

  13. Presentation and verification of a simple mathematical model for identification of the areas behind a noise barrier with the highest performance

    Directory of Open Access Journals (Sweden)

    M. Monazzam

    2009-07-01

    Background and aims   Traffic noise barriers are the most important measure to control environmental noise pollution. Diffraction from the top edge of a noise barrier is the most important path by which indirect sound waves move toward the receiver. Therefore, most studies focus on improvement of this kind.   Methods   T-shape profile barriers are among the most successful of the many different profiles. This investigation uses the theory of the destructive interference between the wave diffracted from the real edge of the barrier and the wave diffracted from the image of the barrier, with a phase difference of π radians. First, a simple mathematical representation of the zones behind rigid and absorbent T-shape barriers with the highest insertion loss, using the destructive effect of the indirect path via the barrier image, is introduced; then two different profile barriers, one reflective and one absorptive, are used for verification of the introduced model.   Results   The results are compared with the results of a verified two-dimensional boundary element method at 1/3 octave band frequencies over a wide field behind those barriers. A very good agreement between the results has been achieved. In this method an effective height is used for any different profile barrier.   Conclusion   The introduced model is very simple, flexible and fast, and could be used for choosing the best location of rigid and absorptive profile barriers to achieve the highest performance.
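For context, the insertion loss of a simple thin barrier is often benchmarked with Maekawa's Fresnel-number approximation. This is a generic textbook estimate, not the model introduced in the paper, and the geometry below is hypothetical:

```python
import math

# Maekawa's classic point-source approximation for thin-barrier insertion
# loss (a generic benchmark, not the paper's model):
#   N  = 2 * delta / wavelength    (Fresnel number)
#   IL ~ 10 * log10(3 + 20 * N)    for N > 0
# delta is the path-length difference (source -> edge -> receiver) minus
# the direct source -> receiver path; the numbers below are hypothetical.

def insertion_loss_db(delta_m, frequency_hz, c=343.0):
    wavelength = c / frequency_hz
    n = 2.0 * delta_m / wavelength
    return 10.0 * math.log10(3.0 + 20.0 * n)

# A 0.5 m detour over the barrier edge at 1 kHz:
print(round(insertion_loss_db(0.5, 1000.0), 1))
```

Because the Fresnel number grows with frequency, the same geometry attenuates high frequencies more than low ones, which is consistent with the 1/3-octave-band comparison used in the paper.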

  14. The Idea of a Highest Divine Principle — Founding Reason and Spirituality. A Necessary Concept of a Comparative Philosophy?

    Directory of Open Access Journals (Sweden)

    Claudia Bickmann

    2012-10-01

    By reference to the Platonic, Aristotelian, and Neo-Platonic philosophical traditions (and then to German Idealism, including Husserl and Heidegger), I will indicate the way in which the concept of reason, on the one side, depends on the horizon of spirituality (the search for the ultimate ground within us and the striving for the highest good); and inversely, how far the idea of the divine or our spiritual self may be deepened, understood and transmitted by reference to reason and rationality. But whereas philosophical analysis aims at the universal dimensions of spirituality or the divine (as in Plato's idea of the 'highest good', the Aristotelian 'Absolute substance', the 'Oneness of the One' of Plotinus and the Neo-Platonists, or the Hegelian 'Absolute spirit'), Comparative Theology may preserve the dimension of spirituality or divinity in its individuality and specificity. Comparative Theology mediates between the universality of the philosophical discourse and the uniqueness of our individual experience (symbolized by a sacred person such as Jesus, Brahman, Buddha or Mohammed) by reflecting and analyzing our religious experiences and practices. Religion may lose its specificity through comparative conceptual analysis within the field of philosophy, but Comparative Theology may enhance the vital dimensions of the very same spiritual experience by placing them in a comparative perspective.

  15. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

    The main objective of this study is to identify the best probability distribution and plotting position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as Particulate Matter with an aerodynamic diameter < 10 μm (PM10). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), from which the number of exceedance days is estimated. The standard limits of the EAQLV for TSP and PM10 concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions combined with seven plotting position formulas (empirical cumulative distribution functions) are compared to fit the averages of daily TSP and PM10 concentrations in the year 2014 for Ain Sokhna city. The Quantile-Quantile plot (Q-Q plot) is used as a method for assessing how closely a data set fits a particular distribution. A proper probability distribution that represents the TSP and PM10 has been chosen based on the statistical performance indicator values. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for TSP and PM10 concentrations; the Burr distribution with the same plotting position follows the Frechet distribution. The exceedance probability and days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and days for TSP concentrations were 0.052 and 19 days, respectively. Furthermore, the PM10 concentration was found to exceed the threshold limit on 174 days.
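A minimal sketch of this workflow, using SciPy's `invweibull` (the Fréchet family) on synthetic stand-in data and the PM10 limit of 70 μg/m³ quoted above; the parameters and data are illustrative, not the study's:

```python
import numpy as np
from scipy import stats

# Fit a Frechet distribution (SciPy's invweibull) to a year of daily PM10
# averages and estimate the probability of exceeding the EAQLV limit of
# 70 ug/m^3. The "measurements" here are synthetic stand-ins.
rng = np.random.default_rng(0)
daily_pm10 = stats.invweibull.rvs(c=4.0, loc=0.0, scale=40.0,
                                  size=365, random_state=rng)

c, loc, scale = stats.invweibull.fit(daily_pm10, floc=0.0)
p_exceed = stats.invweibull.sf(70.0, c, loc=loc, scale=scale)  # P(X > limit)
expected_days = p_exceed * 365

print(round(p_exceed, 3), round(expected_days, 1))
```

The survival function `sf` gives the exceedance probability directly, and multiplying by 365 gives the expected number of exceedance days, mirroring the 0.052 / 19-day figures the study reports for TSP.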

  16. Airborne concentrations of polybrominated diphenyl ethers in residential homes

    Directory of Open Access Journals (Sweden)

    S. Rahimzadeh

    2009-07-01

    Background and aims   Polybrominated diphenyl ethers (PBDEs) have been widely applied in home and office appliances as flame retardant additives to inhibit ignition and enhance fire safety. Their toxicity, health effects, and resistance to environmental degradation are matters of great interest among scientists. Airborne concentrations of PBDEs in residential homes were determined in this study.   Methods   In a cross sectional study, 33 residential homes were selected and airborne concentrations of PBDEs were investigated using PUF disk passive air samplers. In addition, in two buildings the concentration of PBDEs was monitored for 12 months in two rooms of an apartment in each building.   Results   The average airborne concentration of ΣPBDE (sum of congener #s 17, 28, 47, 49, 66, 85, 99, 100, 153, and 154) for all locations monitored was 52 (4-245) pg/m³. While in one of the buildings the contaminant level in the bedroom was significantly higher than in the living room, PBDE concentrations remained relatively constant over the whole monitoring period.   Conclusion   The range of concentrations translates to a wide variation between the inhalation intakes of dwellers of the lowest and the highest contaminated homes (~50-fold).

  17. Prediction of indoor radon concentration based on residence location and construction

    International Nuclear Information System (INIS)

    Maekelaeinen, I.; Voutilainen, A.; Castren, O.

    1992-01-01

    We have constructed a model for assessing indoor radon concentrations in houses where measurements cannot be performed. It has been used in an epidemiological study and to determine the radon potential of new building sites. The model is based on data from about 10,000 buildings. Integrated radon measurements were made during the cold season in all the houses; their geographic coordinates were also known. The two-month measurement results were corrected to annual average concentrations. Construction data were collected from questionnaires completed by residents; geological data were determined from geological maps. Data were classified according to geographical, geological, and construction factors. In order to describe different radon production levels, the country was divided into four zones. We assumed that the factors were multiplicative, and a linear concentration-prediction model was used. The most significant factor in determining radon concentration was the geographical region, followed by soil type, year of construction, and type of foundation. The predicted indoor radon concentrations given by the model varied from 50 to 440 Bq/m³. The lower figure represents a house with a basement, built in the 1950s on clay soil, in the region with the lowest radon concentration levels. The higher value represents a house with a concrete slab in contact with the ground, built in the 1980s, on gravel, in the region with the highest average radon concentration.
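The multiplicative structure of such a model can be sketched as follows. The factor values are hypothetical illustrations chosen only to reproduce the reported 50-440 Bq/m³ span, not the fitted ones:

```python
# Toy version of a multiplicative indoor-radon prediction model:
# predicted = baseline * region * soil * construction_year * foundation.
# All factor values below are hypothetical illustrations, not the fitted ones.

FACTORS = {
    "region":     {"lowest": 1.0, "highest": 4.0},
    "soil":       {"clay": 1.0, "gravel": 1.6},
    "year":       {"1950s": 1.0, "1980s": 1.375},
    "foundation": {"basement": 1.0, "slab_on_ground": 1.0},
}
BASELINE = 50.0  # Bq/m^3 for the reference house

def predict_radon(region, soil, year, foundation):
    return (BASELINE * FACTORS["region"][region] * FACTORS["soil"][soil]
            * FACTORS["year"][year] * FACTORS["foundation"][foundation])

low = predict_radon("lowest", "clay", "1950s", "basement")
high = predict_radon("highest", "gravel", "1980s", "slab_on_ground")
print(low, round(high))
```

Because the factors multiply, the model is linear in the logarithm of the concentration, which matches the log-normal behavior of indoor radon data.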

  18. Nonimaging concentrators for diode-pumped slab lasers

    Science.gov (United States)

    Lacovara, Philip; Gleckman, Philip L.; Holman, Robert L.; Winston, Roland

    1991-10-01

    Diode-pumped slab lasers require concentrators for high-average-power operation. We detail the properties of diode lasers and slab lasers that set the concentration requirements and the concentrator design methodologies that are used, and describe some concentrator designs used in high-average-power slab lasers at Lincoln Laboratory.

  19. Who jumps the highest? Anthropometric and physiological correlations of vertical jump in youth elite female volleyball players.

    Science.gov (United States)

    Nikolaidis, Pantelis T; Gkoudas, Konstantinos; Afonso, José; Clemente-Suarez, Vicente J; Knechtle, Beat; Kasabalis, Stavros; Kasabalis, Athanasios; Douda, Helen; Tokmakidis, Savvas; Torres-Luque, Gema

    2017-06-01

    The aim of the present study was to examine the relationship of vertical jump (Abalakov jump [AJ]) with anthropometric and physiological parameters in youth elite female volleyball players. Seventy-two selected volleyball players from the region of Athens (age 13.3±0.7 years, body mass 62.0±7.2 kg, height 171.5±5.7 cm, body fat 21.2±4.5%), classified into quartiles according to AJ performance (group A, 21.4-26.5 cm; group B, 26.8-29.9 cm; group C, 30.5-33.7 cm; group D, 33.8-45.9 cm), performed a series of physical fitness tests. AJ was correlated with anthropometric and physiological parameters (age at peak height velocity [APHV]: r=0.38, P …). The volleyball players that jumped the highest were those who matured later than others.

  20. Repeated Radionuclide therapy in metastatic paraganglioma leading to the highest reported cumulative activity of 131I-MIBG

    International Nuclear Information System (INIS)

    Ezziddin, Samer; Sabet, Amir; Ko, Yon-Dschun; Xun, Sunny; Matthies, Alexander; Biersack, Hans-Jürgen

    2012-01-01

    131I-MIBG therapy for neuroendocrine tumours may be dose limited. The common range of applied cumulative activities is 10-40 GBq. We report the uneventful cumulative administration of 111 GBq (= 3 Ci) of 131I-MIBG in a patient with metastatic paraganglioma. Ten courses of 131I-MIBG therapy were given within six years, accomplishing symptomatic, hormonal and tumour responses with no serious adverse effects. Chemotherapy with cisplatin/vinblastine/dacarbazine was the final treatment modality, with temporary control of disease, but eventually the patient died of progression. The observed cumulative activity of 131I-MIBG represents the highest value reported to our knowledge, and even though 12.6 GBq of 90Y-DOTATOC were added intermediately, no relevant associated bone marrow, hepatic or other toxicity was observed. In an individual attempt to palliate metastatic disease, high cumulative activity alone should not preclude the patient from repeat treatment.

  1. Impact of thermal frequency drift on highest precision force microscopy using quartz-based force sensors at low temperatures

    Directory of Open Access Journals (Sweden)

    Florian Pielmeier

    2014-04-01

    In frequency modulation atomic force microscopy (FM-AFM) the stability of the eigenfrequency of the force sensor is of key importance for highest precision force measurements. Here, we study the influence of temperature changes on the resonance frequency of force sensors made of quartz, in a temperature range from 4.8–48 K. The sensors are based on the qPlus and length-extensional principles. The frequency variation with temperature T is negative for all sensors up to 30 K and on the order of 1 ppm/K; up to 13 K, where a distinct kink appears, it is linear. Furthermore, we characterize a new type of miniaturized qPlus sensor and confirm the theoretically predicted reduction in detector noise.

  2. Analysis of the highest transverse energy events seen in the UA1 detector at the Spp-barS collider

    International Nuclear Information System (INIS)

    1987-06-01

    The first full solid angle analysis is presented of large transverse energy events in pp-bar collisions at the CERN collider. Events with transverse energies in excess of 200 GeV at √s = 630 GeV are studied for any non-standard physics and quantitatively compared with expectations from perturbative QCD Monte Carlo models. A corrected differential cross section is presented. A detailed examination is made of jet profiles, event jet multiplicities and the fraction of the transverse energy carried by the two jets with the highest transverse jet energies. There is good agreement with standard theory for events with transverse energies up to the largest observed values (approx. √s/2) and the analysis shows no evidence for any non-QCD mechanism to account for the event characteristics. (author)

  3. A common founder mutation in FANCA underlies the world's highest prevalence of Fanconi anemia in Gypsy families from Spain.

    Science.gov (United States)

    Callén, Elsa; Casado, José A; Tischkowitz, Marc D; Bueren, Juan A; Creus, Amadeu; Marcos, Ricard; Dasí, Angeles; Estella, Jesús M; Muñoz, Arturo; Ortega, Juan J; de Winter, Johan; Joenje, Hans; Schindler, Detlev; Hanenberg, Helmut; Hodgson, Shirley V; Mathew, Christopher G; Surrallés, Jordi

    2005-03-01

    Fanconi anemia (FA) is a genetic disease characterized by bone marrow failure and cancer predisposition. Here we have identified Spanish Gypsies as the ethnic group with the world's highest prevalence of FA (carrier frequency of 1/64-1/70). DNA sequencing of the FANCA gene in 8 unrelated Spanish Gypsy FA families after retroviral subtyping revealed a homozygous FANCA mutation (295C>T) leading to FANCA truncation and FA pathway disruption. This mutation appeared specific for Spanish Gypsies as it is not found in other Gypsy patients with FA from Hungary, Germany, Slovakia, and Ireland. Haplotype analysis showed that Spanish Gypsy patients all share the same haplotype. Our data thus suggest that the high incidence of FA among Spanish Gypsies is due to an ancestral founder mutation in FANCA that originated in Spain less than 600 years ago. The high carrier frequency makes the Spanish Gypsies a population model to study FA heterozygote mutations in cancer.

  4. Analysis of the highest transverse energy events seen in the UA1 detector at the Spp-barS collider

    International Nuclear Information System (INIS)

    Albajar, C.; Bezaguet, A.; Cennini, P.

    1987-01-01

    This is the first full solid angle analysis of large transverse energy events in pp-bar collisions at the CERN collider. Events with transverse energies in excess of 200 GeV at √s=630 GeV are studied for any non-standard physics and quantitatively compared with expectations from perturbative QCD Monte Carlo models. A corrected differential cross section is presented. A detailed examination is made of jet profiles, event jet multiplicities and the fraction of the transverse energy carried by the two jets with the highest transverse jet energies. There is good agreement with standard theory for events with transverse energies up to the largest observed values (≅ √s/2) and the analysis shows no evidence for any non-QCD mechanism to account for the event characteristics. (orig.)

  5. Contact with HIV prevention services highest in gay and bisexual men at greatest risk: cross-sectional survey in Scotland

    Directory of Open Access Journals (Sweden)

    Hart Graham J

    2010-12-01

    Abstract Background Men who have sex with men (MSM) remain the group most at risk of acquiring HIV in the UK, and new HIV prevention strategies are needed. In this paper, we examine what contact MSM currently have with HIV prevention activities and assess the extent to which these could be utilised further. Methods Anonymous, self-complete questionnaires and Orasure™ oral fluid collection kits were distributed to men visiting the commercial gay scenes in Glasgow and Edinburgh in April/May 2008. 1508 men completed questionnaires (70.5% response rate) and 1277 provided oral fluid samples (59.7% response rate); 1318 men were eligible for inclusion in the analyses. Results 82.5% reported some contact with HIV prevention activities in the past 12 months, 73.1% obtained free condoms from a gay venue or the Internet, 51.1% reported accessing sexual health information (from either leaflets in gay venues or via the Internet), 13.5% reported talking to an outreach worker and 8.0% reported participating in counselling on sexual health or HIV prevention. Contact with HIV prevention activities was associated with frequency of gay scene use and either HIV or other STI testing in the past 12 months, but not with sexual risk behaviours. Utilising counselling was also more likely among men who reported having had an STI in the past 12 months and HIV-positive men. Conclusions Men at highest risk, and those likely to be in contact with sexual health services, are those who report most contact with a range of current HIV prevention activities. Offering combination prevention, including outreach by peer health workers, increased uptake of sexual health services delivering behavioural and biomedical interventions, and supported by social marketing to ensure continued community engagement and support, could be the way forward. Focused investment in the needs of those at highest risk, including those diagnosed HIV-positive, may generate a prevention dividend in the long term.

  6. Measurement of indoor radon concentrations in Osaka, Nara, Wakayama and Hyogo with passive dosemeters

    International Nuclear Information System (INIS)

    Mori, Toshiaki; Hori, Yasuharu; Takeda, Atsuhiko; Iwasaki, Tamiko; Uchiyama, Masahumi; Fujimoto, Kenzo; Kankura, Takako; Kobayashi, Sadayosi.

    1989-01-01

    Indoor radon concentrations in 792 houses in Osaka, Nara, Wakayama and Hyogo were measured with the passive dosemeter developed at the Karlsruhe Nuclear Research Center in West Germany. Each house was measured at two places for two successive periods of six months to obtain the annual average exposure due to radon daughters. The arithmetic mean concentration of all houses was 45.2 Bq/m³ with a standard deviation of 27.2; the geometric mean was 40.7 Bq/m³ and the median 39 Bq/m³. The distribution of the radon levels was approximately log-normal, with 80% of houses having radon concentrations less than 60 Bq/m³. A seasonal variation of the mean radon concentration was evident between the former period, including winter (45 Bq/m³), and the latter, including summer (32 Bq/m³). The indoor radon concentrations of wooden houses were found to have the widest distribution, with the highest value of 371 Bq/m³. The highest value obtained in a ferro-concrete house was 118 Bq/m³. Twelve houses having indoor radon concentrations higher than 120 Bq/m³ were all Japanese traditional wooden houses with walls made of soil. (author)
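For a log-normal distribution the arithmetic mean (AM), geometric mean (GM) and geometric standard deviation (GSD) are linked by AM = GM · exp(ln²GSD / 2), so the reported AM of 45.2 and GM of 40.7 Bq/m³ imply a GSD near 1.6. A sketch checking that identity on simulated data (the simulation is illustrative, not the survey data):

```python
import numpy as np

# For log-normal data, AM = GM * exp(0.5 * ln(GSD)**2), so the reported
# AM = 45.2 and GM = 40.7 Bq/m^3 imply GSD = exp(sqrt(2 * ln(45.2 / 40.7))).
gsd_implied = np.exp(np.sqrt(2.0 * np.log(45.2 / 40.7)))

# Check the identity on simulated radon concentrations:
rng = np.random.default_rng(1)
x = rng.lognormal(mean=np.log(40.7), sigma=np.log(gsd_implied), size=200_000)
gm = np.exp(np.log(x).mean())
am_from_identity = gm * np.exp(0.5 * np.log(gsd_implied) ** 2)

print(round(gsd_implied, 2))  # ~1.58
print(round(x.mean(), 1), round(am_from_identity, 1))
```

The sample arithmetic mean and the identity-based value both land near the reported 45.2 Bq/m³, confirming that the published AM/GM pair is internally consistent with a log-normal fit.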

  7. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  8. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  9. Parabolic solar concentrator

    Science.gov (United States)

    Tecpoyotl-Torres, M.; Campos-Alvarez, J.; Tellez-Alanis, F.; Sánchez-Mondragón, J.

    2006-08-01

    In this work we present the basis of the design of the solar concentrator located at Temixco, Morelos, Mexico. This place is ideal for the purpose due to its geographic and climatic conditions and, in addition, because it has the greatest constant illumination in Mexico. For the construction of the concentrator we used a recycled parabolic plate from a telecommunications satellite dish (NEC). This plate was totally covered with aluminum. The opening diameter is 332 cm, the focal length is 83 cm and the opening angle is 90°. The geometry of the plate guarantees that the incident beams will be collected at the focus. The mechanical treatment of the plate produces an average reflectance of 75% in the visible region of the solar spectrum, and of 92% for wavelengths up to 3 μm in the infrared region. We obtain concentrated temperatures of up to 2000 °C with this setup. The reflectance can be greatly improved, but we did not consider that typical of practical use. The energy obtained can be applied to processes that require such high calorific energies. In order to optimize the operation of the concentrator we use a control circuit designed to track the apparent position of the sun.
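The quoted dimensions are mutually consistent: for a paraboloid whose rim, seen from the focus, lies at angle φ from the optical axis, f/D = 1/(4 tan(φ/2)), so a 90° opening angle gives f = D/4 = 332/4 = 83 cm. A quick check, assuming the abstract's 90° opening angle is that rim angle:

```python
import math

# For a paraboloid, the focal ratio follows from the rim angle phi
# (angle between the axis and the line from the focus to the rim):
#   f / D = 1 / (4 * tan(phi / 2))
# so a 90 deg rim angle gives f = D / 4.

def focal_length_cm(diameter_cm, rim_angle_deg):
    return diameter_cm / (4.0 * math.tan(math.radians(rim_angle_deg) / 2.0))

print(focal_length_cm(332.0, 90.0))  # matches the quoted 83 cm focal length
```

This f/D = 0.25 geometry is common for satellite dishes, which is why the recycled plate places its focus conveniently close to the aperture plane.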

  10. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  11. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  12. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  13. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  14. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  15. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  16. Bioinformatics programs are 31-fold over-represented among the highest impact scientific papers of the past two decades.

    Science.gov (United States)

    Wren, Jonathan D

    2016-09-01

    To analyze the relative proportion of bioinformatics papers and their non-bioinformatics counterparts in the top 20 most cited papers annually for the past two decades. When defining bioinformatics papers as encompassing both those that provide software for data analysis or methods underlying data analysis software, we find that over the past two decades, more than a third (34%) of the most cited papers in science were bioinformatics papers, which is approximately a 31-fold enrichment relative to the total number of bioinformatics papers published. More than half of the most cited papers during this span were bioinformatics papers. Yet, the average 5-year JIF of top 20 bioinformatics papers was 7.7, whereas the average JIF for top 20 non-bioinformatics papers was 25.8, significantly higher (P papers, bioinformatics journals tended to have higher Gini coefficients, suggesting that development of novel bioinformatics resources may be somewhat 'hit or miss'. That is, relative to other fields, bioinformatics produces some programs that are extremely widely adopted and cited, yet there are fewer of intermediate success. jdwren@gmail.com Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
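    The Gini coefficient mentioned in this record measures how unevenly citations are spread across a journal's papers (0 = perfectly even; values approaching 1 = a few papers collect nearly all citations). A minimal sketch with invented citation counts, using the standard sorted-index formula (the counts below are illustrative, not the study's data):

```python
def gini(values):
    """Gini coefficient via the sorted-index formula:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with x sorted ascending."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# Hypothetical citation counts for papers in two journals
even_field = [50, 55, 60, 45, 40]   # citations spread evenly
hit_or_miss = [2, 3, 1, 4, 990]     # one blockbuster paper dominates

print(round(gini(even_field), 3))   # low inequality
print(round(gini(hit_or_miss), 3))  # high inequality ("hit or miss")
```

    A journal whose citations come mostly from one or two widely adopted tools would show the high-Gini pattern the abstract describes.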

  17. ACHIEVEMENT OF THE HIGHEST LEVEL OF SAFETY AND HEALTH AT WORK AND THE SATISFACTION OF EMPLOYEES IN THE TEXTILE INDUSTRY

    Directory of Open Access Journals (Sweden)

    Snezana Urosevic

    2016-12-01

    Full Text Available Safety and health at work involves establishing working conditions, and taking measures and activities, that protect the life and health of employees. It is in the interest of society, of all stakeholders and of every individual to achieve the highest level of safety and health at work, so that unwanted consequences such as injuries, occupational diseases and work-related diseases are reduced to a minimum, and to create working conditions in which employees feel satisfaction in the performance of their professional duties. The textile industry is a higher-risk sector, because unfavorable microclimatic conditions prevail in textile plants: high air temperature and high humidity, often combined with insufficient room illumination and increased noise. Along the whole production line in the textile industry there is a risk of injury, most commonly from mechanical force or from burns caused by heat or chemicals. All of these factors are present in the production and processing of textiles, and they may contribute to occupational diseases among workers, absenteeism, and reduced working capacity and productivity. As textile production grows, so does the number of hazardous and harmful substances that may pose a potential danger to employees in this branch of the economy, as well as a harmful impact on the environment. It is therefore important to give special attention to these problems.

  18. Density functional theory, comparative vibrational spectroscopic studies, highest occupied molecular orbital and lowest unoccupied molecular orbital analysis of Linezolid

    Science.gov (United States)

    Rajalakshmi, K.; Gunasekaran, S.; Kumaresan, S.

    2015-06-01

    The Fourier transform infrared spectra and Fourier transform Raman spectra of Linezolid have been recorded in the regions 4,000-400 and 4,000-100 cm⁻¹, respectively. Utilizing the observed Fourier transform infrared and Fourier transform Raman spectral data, a complete vibrational assignment and analysis of the fundamental modes of the compound have been carried out. The optimum molecular geometry, harmonic vibrational frequencies, infrared intensities and Raman scattering activities have been calculated by density functional theory at the 6-31G(d,p), 6-311G(d,p) and M06-2X/6-31G(d,p) levels. The difference between the observed and scaled wavenumber values of most of the fundamentals is very small. A detailed interpretation of the infrared and Raman spectra of Linezolid is reported. Mulliken's net charges have also been calculated. The ultraviolet-visible spectrum of the title molecule has also been calculated using the time-dependent density functional method. In addition, molecular electrostatic potential, highest occupied molecular orbital and lowest unoccupied molecular orbital analyses and several thermodynamic properties have been computed by the density functional theoretical method.

  19. New hybrid magnet system for structure research at highest magnetic fields and temperatures in the millikelvin region

    International Nuclear Information System (INIS)

    Smeibidl, Peter; Ehmler, Hartmut; Tennant, Alan; Bird, Mark

    2012-01-01

    The Helmholtz Centre Berlin (HZB) is a user facility for the study of structure and dynamics with neutrons and synchrotron radiation with special emphasis on experiments under extreme conditions. Neutron scattering is uniquely suited to study magnetic properties on a microscopic length scale, because neutrons have comparable wavelengths and, due to their magnetic moment, they interact with the atomic magnetic moments. At HZB a dedicated instrument for neutron scattering at extreme magnetic fields and low temperatures is under construction, the Extreme Environment Diffractometer ExED. It is designed according to the time-of-flight principle for elastic and inelastic neutron scattering and for the special geometric constraints of analysing samples in a high-field magnet. The new hybrid magnet will not only allow for novel experiments, it will be at the forefront of development in magnet technology itself. With a set of superconducting and resistive coils a maximum field above 30 T will be possible. As a compromise between the needs of the magnet design for highest fields and the concept of the neutron instrument, the magnetic field will be generated by means of a coned, resistive inner solenoid and a superconducting outer solenoid with horizontal field orientation. To allow for experiments down to millikelvin temperatures, the installation of a ³He or a dilution cryostat with a closed-cycle precooling stage is foreseen.

  20. Pleistocene climatic oscillations rather than recent human disturbance influence genetic diversity in one of the world's highest treeline species.

    Science.gov (United States)

    Peng, Yanling; Lachmuth, Susanne; Gallegos, Silvia C; Kessler, Michael; Ramsay, Paul M; Renison, Daniel; Suarez, Ricardo; Hensen, Isabell

    2015-10-01

    Biological responses to climatic change usually leave imprints on the genetic diversity and structure of plants. Information on the current genetic diversity and structure of dominant tree species has facilitated our general understanding of phylogeographical patterns. Using amplified fragment length polymorphisms (AFLPs), we compared the genetic diversity and structure of 384 adults and 384 seedlings of Polylepis tarapacana (Rosaceae), one of the world's highest treeline species and endemic to the central Andes, across 32 forest sites spanning a 600 km latitudinal gradient between 4100 m and 5000 m a.s.l. Moderate to high levels of genetic diversity and low genetic differentiation were detected in both adults and seedlings, with levels of genetic diversity and differentiation being almost identical. Four slightly genetically divergent clusters were identified that corresponded to differing geographical regions. Genetic diversity decreased from south to north and with increasing precipitation for adults and seedlings, but there was no relationship to elevation. Our study shows that, unlike the case for other Andean treeline species, recent human activities have not affected the genetic structure of P. tarapacana, possibly because its inhospitable habitat is unsuitable for agriculture. The current genetic pattern of P. tarapacana points to a historically more widespread distribution at lower altitudes, which allowed considerable gene flow possibly during the glacial periods of the Pleistocene epoch, and also suggests that the northern Argentinean Andes may have served as a refugium for historical populations. © 2015 Botanical Society of America.

  1. A Systematic Study to Optimize SiPM Photo-Detectors for Highest Time Resolution in PET

    CERN Document Server

    Gundacker, S.; Frisch, B.; Hillemanns, H.; Jarron, P.; Meyer, T.; Pauwels, K.; Lecoq, P.

    2012-01-01

    We report on a systematic study of time resolution made with three different commercial silicon photomultipliers (SiPMs) (Hamamatsu MPPC S10931-025P, S10931-050P, and S10931-100P) and two LSO scintillating crystals. This study aimed to determine the optimum detector conditions for highest time resolution in a prospective time-of-flight positron emission tomography (TOF-PET) system. Measurements were based on the time over threshold method in a coincidence setup using the ultrafast amplifier-discriminator NINO and a fast oscilloscope. Our tests with the three SiPMs of the same area but of different SPAD sizes and fill factors led to best results with the Hamamatsu type of 50 × 50 μm² single-pixel size. For this type of SiPM and under realistic geometrical PET scanner conditions, i.e., with 2 × 2 × 10 mm³ LSO crystals, a coincidence time resolution of 220 ± 4 ps FWHM could be achieved. The results are interpreted in terms of SiPM photon detection efficiency (PDE), dark noise, and photon yield.
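    For context on the 220 ps figure: timing spectra are usually close to Gaussian, so the FWHM relates to the standard deviation by FWHM = 2√(2 ln 2)·σ ≈ 2.355σ, and a coincidence resolution measured between two identical detectors divides by √2 for the single-detector value. A small sketch (the 220 ps number is from this record; the conversions are textbook relations, not results from the paper):

```python
import math

ctr_fwhm_ps = 220.0  # coincidence time resolution (FWHM), from the record

# Gaussian sigma corresponding to that FWHM
sigma_ps = ctr_fwhm_ps / (2.0 * math.sqrt(2.0 * math.log(2.0)))

# Per-detector FWHM, assuming two identical detectors in coincidence
single_det_fwhm_ps = ctr_fwhm_ps / math.sqrt(2.0)

print(round(sigma_ps, 1), round(single_det_fwhm_ps, 1))
```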

  2. Site characterization of the highest-priority geologic formations for CO2 storage in Wyoming

    Energy Technology Data Exchange (ETDEWEB)

    Surdam, Ronald C. [Univ. of Wyoming, Laramie, WY (United States); Bentley, Ramsey [Univ. of Wyoming, Laramie, WY (United States); Campbell-Stone, Erin [Univ. of Wyoming, Laramie, WY (United States); Dahl, Shanna [Univ. of Wyoming, Laramie, WY (United States); Deiss, Allory [Univ. of Wyoming, Laramie, WY (United States); Ganshin, Yuri [Univ. of Wyoming, Laramie, WY (United States); Jiao, Zunsheng [Univ. of Wyoming, Laramie, WY (United States); Kaszuba, John [Univ. of Wyoming, Laramie, WY (United States); Mallick, Subhashis [Univ. of Wyoming, Laramie, WY (United States); McLaughlin, Fred [Univ. of Wyoming, Laramie, WY (United States); Myers, James [Univ. of Wyoming, Laramie, WY (United States); Quillinan, Scott [Univ. of Wyoming, Laramie, WY (United States)

    2013-12-07

    This study, funded by U.S. Department of Energy National Energy Technology Laboratory award DE-FE0002142 along with the state of Wyoming, uses outcrop and core observations, a diverse electric log suite, a VSP survey, in-bore testing (DST, injection tests, and fluid sampling), a variety of rock/fluid analyses, and a wide range of seismic attributes derived from a 3-D seismic survey to thoroughly characterize the highest-potential storage reservoirs and confining layers at the premier CO2 geological storage site in Wyoming. An accurate site characterization was essential to assessing the following critical aspects of the storage site: (1) more accurately estimating the CO2 reservoir storage capacity (Madison Limestone and Weber Sandstone at the Rock Springs Uplift (RSU)), (2) evaluating the distribution, long-term integrity, and permanence of the confining layers, (3) managing CO2 injection pressures by removing formation fluids (brine production/treatment), and (4) evaluating potential utilization of the stored CO2.

  3. Compressive sensing-based wideband capacitance measurement with a fixed sampling rate lower than the highest exciting frequency

    International Nuclear Information System (INIS)

    Xu, Lijun; Ren, Ying; Sun, Shijie; Cao, Zhang

    2016-01-01

    In this paper, an under-sampling method for wideband capacitance measurement was proposed by using the compressive sensing strategy. As the excitation signal is sparse in the frequency domain, the compressed sampling method that uses a random demodulator was adopted, which could greatly decrease the sampling rate. Besides, four switches were used to replace the multiplier in the random demodulator. As a result, not only the sampling rate can be much smaller than the signal excitation frequency, but also the circuit’s structure is simpler and its power consumption is lower. A hardware prototype was constructed to validate the method. In the prototype, an excitation voltage with a frequency up to 200 kHz was applied to a capacitance-to-voltage converter. The output signal of the converter was randomly modulated by a pseudo-random sequence through four switches. After a low-pass filter, the signal was sampled by an analog-to-digital converter at a sampling rate of 50 kHz, which was three times lower than the highest exciting frequency. The frequency and amplitude of the signal were then reconstructed to obtain the measured capacitance. Both theoretical analysis and experiments were carried out to show the feasibility of the proposed method and to evaluate the performance of the prototype, including its linearity, sensitivity, repeatability, accuracy and stability within a given measurement range. (paper)
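    The measurement chain described in this record (sparse excitation, ±1 pseudo-random modulation through switches, low-pass filtering, slow ADC) can be sketched numerically. Below is a toy random demodulator: all sizes, frequencies and amplitudes are invented for illustration, block integration stands in for the low-pass filter, and a one-step orthogonal matching pursuit recovers a single excitation tone from measurements taken far below its frequency. This is a generic compressive-sensing sketch, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

N, R = 512, 16          # high-rate grid length and decimation factor
M = N // R              # 32 low-rate measurements
K = 64                  # candidate excitation tones (sparse dictionary)
n = np.arange(N)

# Dictionary of candidate tones on the high-rate grid
freqs = np.arange(1, K + 1)
Psi = np.cos(2 * np.pi * np.outer(n, freqs) / N)    # N x K

# Random demodulator: +/-1 chipping, then block integration (decimation)
p = rng.choice([-1.0, 1.0], size=N)
H = np.zeros((M, N))
for m in range(M):
    H[m, m * R:(m + 1) * R] = 1.0 / R
A = H @ (p[:, None] * Psi)                          # M x K measurement matrix

# True signal: a single tone, unknown to the receiver
k_true, amp = 23, 1.7
y = H @ (p * (amp * Psi[:, k_true - 1]))            # only M = 32 samples kept

# One-step OMP: pick the dictionary atom most correlated with y,
# then least-squares fit its amplitude
k_hat = int(np.argmax(np.abs(A.T @ y))) + 1
a_hat = float(np.linalg.lstsq(A[:, [k_hat - 1]], y, rcond=None)[0][0])
print(k_hat, round(a_hat, 2))
```

    Because the synthetic signal is generated through the same operator, the least-squares amplitude on the correct atom is essentially exact here; with noise and model mismatch a full OMP or basis-pursuit solver would be used, as in practical reconstructions.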

  4. [Archivos de Bronconeumología: among the 3 Spanish medical journals with the highest national impact factors].

    Science.gov (United States)

    Aleixandre Benavent, R; Valderrama Zurián, J C; Castellano Gómez, M; Simó Meléndez, R; Navarro Molina, C

    2004-12-01

    Citation analysis elucidates patterns of information consumption within professional communities. The aim of this study was to analyze the citations of 87 Spanish medical journals by calculating their impact factors and immediacy indices for 2001, and to estimate the importance of Archivos de Bronconeumología within the framework of Spanish medicine. Eighty-seven Spanish medical journals were included. All were listed in the Spanish Medical Index (Indice Medico Español) and in at least one of the following databases: MEDLINE, BIOSIS, EMBASE, or Science Citation Index. References to articles from 1999 through 2001 in citable articles from 2001 were analyzed. Using the method of the Institute for Scientific Information, we calculated the national impact factor and immediacy index for each journal. The journals with the highest national impact factors were Revista Española de Quimioterapia (0.894), Medicina Clínica (0.89), and Archivos de Bronconeumología (0.732). The self-citation percentage of Archivos de Bronconeumología was 18.3% and the immediacy index was 0.033. The impact factor obtained by Archivos de Bronconeumología confirms its importance in Spanish medicine and validates its inclusion as a source journal in Science Citation Index and Journal Citation Report.
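    The Institute for Scientific Information calculation referenced here is simple arithmetic: the year-2001 impact factor is citations received in 2001 to items published in the previous two years, divided by the number of citable items in those years; the immediacy index uses 2001 citations to 2001 items. A sketch with hypothetical counts (chosen for illustration; they happen to reproduce the 0.732 and 0.033 figures quoted, but they are not the study's underlying data):

```python
def impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
    # e.g. citations in 2001 to 1999-2000 items / citable items from 1999-2000
    return cites_to_prev_two_years / citable_items_prev_two_years

def immediacy_index(cites_to_current_year, citable_items_current_year):
    # e.g. citations in 2001 to 2001 items / citable items published in 2001
    return cites_to_current_year / citable_items_current_year

# Hypothetical journal: 90 citations to 123 citable items from 1999-2000,
# and 4 same-year citations to 120 items published in 2001
print(round(impact_factor(90, 123), 3))    # 0.732
print(round(immediacy_index(4, 120), 3))   # 0.033
```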

  5. The Risk of Reported Cryptosporidiosis in Children Aged <5 Years in Australia is Highest in Very Remote Regions.

    Science.gov (United States)

    Lal, Aparna; Fearnley, Emily; Kirk, Martyn

    2015-09-18

    The incidence of cryptosporidiosis is highest in children <5 years, yet little is known about disease patterns across urban and rural areas of Australia. In this study, we examine whether the risk of reported cryptosporidiosis in children <5 years varies across an urban-rural gradient, after controlling for season and gender. Using Australian data on reported cryptosporidiosis from 2001 to 2012, we spatially linked disease data to an index of geographic remoteness to examine the geographic variation in cryptosporidiosis risk using negative binomial regression. The Incidence Risk Ratio (IRR) of reported cryptosporidiosis was higher in inner regional (IRR 1.4 95% CI 1.2-1.7, p < 0.001), and outer regional areas (IRR 2.4 95% CI 2.2-2.9, p < 0.001), and in remote (IRR 5.2 95% CI 4.3-6.2, p < 0.001) and very remote (IRR 8.2 95% CI 6.9-9.8, p < 0.001) areas, compared to major cities. A linear test for trend showed a statistically significant trend with increasing remoteness. Remote communities need to be a priority for future targeted health promotion and disease prevention interventions to reduce cryptosporidiosis in children <5 years.
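    In a negative binomial (or Poisson) regression with a log link, the incidence risk ratio for a category is the exponential of its fitted coefficient, and the 95% CI exponentiates coefficient ± 1.96 × SE. A sketch using a hypothetical coefficient and standard error chosen to roughly reproduce the "very remote" estimate above (the paper reports only the IRRs and CIs, not these inputs):

```python
import math

def irr_with_ci(beta, se, z=1.96):
    """Exponentiate a log-link regression coefficient into an IRR with 95% CI."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for 'very remote' vs 'major cities'
irr, lo, hi = irr_with_ci(beta=2.1041, se=0.089)
print(f"IRR {irr:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```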

  6. The Risk of Reported Cryptosporidiosis in Children Aged <5 Years in Australia is Highest in Very Remote Regions

    Directory of Open Access Journals (Sweden)

    Aparna Lal

    2015-09-01

    Full Text Available The incidence of cryptosporidiosis is highest in children <5 years, yet little is known about disease patterns across urban and rural areas of Australia. In this study, we examine whether the risk of reported cryptosporidiosis in children <5 years varies across an urban-rural gradient, after controlling for season and gender. Using Australian data on reported cryptosporidiosis from 2001 to 2012, we spatially linked disease data to an index of geographic remoteness to examine the geographic variation in cryptosporidiosis risk using negative binomial regression. The Incidence Risk Ratio (IRR) of reported cryptosporidiosis was higher in inner regional (IRR 1.4, 95% CI 1.2–1.7, p < 0.001) and outer regional areas (IRR 2.4, 95% CI 2.2–2.9, p < 0.001), and in remote (IRR 5.2, 95% CI 4.3–6.2, p < 0.001) and very remote (IRR 8.2, 95% CI 6.9–9.8, p < 0.001) areas, compared to major cities. A linear test for trend showed a statistically significant trend with increasing remoteness. Remote communities need to be a priority for future targeted health promotion and disease prevention interventions to reduce cryptosporidiosis in children <5 years.

  7. Solving The Longstanding Problem Of Low-Energy Nuclear Reactions At the Highest Microscopic Level - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Quaglioni, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-22

    A 2011 DOE-NP Early Career Award (ECA) under Field Work Proposal (FWP) SCW1158 supported the project “Solving the Long-Standing Problem of Low-Energy Nuclear Reactions at the Highest Microscopic Level” in the five-year period from June 15, 2011 to June 14, 2016. This project, led by PI S. Quaglioni, aimed at developing a comprehensive and computationally efficient framework to arrive at a unified description of structural properties and reactions of light nuclei in terms of constituent protons and neutrons interacting through nucleon-nucleon (NN) and three-nucleon (3N) forces. Specifically, the project had three main goals: 1) arriving at the accurate predictions for fusion reactions that power stars and Earth-based fusion facilities; 2) realizing a comprehensive description of clustering and continuum effects in exotic nuclei, including light Borromean systems; and 3) achieving fundamental understanding of the role of the 3N force in nuclear reactions and nuclei at the drip line.

  8. Two Clock Transitions in Neutral Yb for the Highest Sensitivity to Variations of the Fine-Structure Constant

    Science.gov (United States)

    Safronova, Marianna S.; Porsev, Sergey G.; Sanner, Christian; Ye, Jun

    2018-04-01

    We propose a new frequency standard based on a 4f¹⁴6s6p ³P₀ – 4f¹³6s²5d (J = 2) transition in neutral Yb. This transition has a potential for high stability and accuracy and the advantage of the highest sensitivity among atomic clocks to variation of the fine-structure constant α. We find its dimensionless α-variation enhancement factor to be K = −15, in comparison to the most sensitive current clock (Yb⁺ E3, K = −6), and it is 18 times larger than in any neutral-atom clock (Hg, K = 0.8). Combined with the unprecedented stability of an optical lattice clock for neutral atoms, this high sensitivity opens new perspectives for searches for ultralight dark matter and for tests of theories beyond the standard model of elementary particles. Moreover, together with the well-established ¹S₀–³P₀ transition, one will have two clock transitions operating in neutral Yb, whose interleaved interrogations may further reduce systematic uncertainties of such clock-comparison experiments.
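    The enhancement factors quoted in this record compare directly: a fractional drift δα/α shifts a clock transition frequency by δν/ν = K · δα/α, so the ratio of two clocks' |K| values gives their relative sensitivity. A quick check of the record's numbers:

```python
K_yb_new = -15.0    # proposed neutral-Yb transition
K_yb_ion_e3 = -6.0  # Yb+ E3, most sensitive currently operating clock
K_hg = 0.8          # largest |K| among neutral-atom clocks (Hg)

# Relative sensitivity to alpha variation is the ratio of |K| values
print(abs(K_yb_new / K_yb_ion_e3))  # vs Yb+ E3
print(abs(K_yb_new / K_hg))         # vs Hg ("18 times larger")
```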

  9. Impacts of informal trails on vegetation and soils in the highest protected area in the Southern Hemisphere.

    Science.gov (United States)

    Barros, Agustina; Gonnet, Jorge; Pickering, Catherine

    2013-09-30

    There is limited recreation ecology research in South America, especially studies looking at informal trails. Impacts of informal trails formed by hikers and pack animals on vegetation and soils were assessed for the highest protected area in the Southern Hemisphere, Aconcagua Provincial Park. The number of braided trails, their width and depth were surveyed at 30 sites along the main access route to Mt Aconcagua (6962 m a.s.l.). Species composition, richness and cover were also measured on control and trail transects. A total of 3.3 ha of alpine meadows and 13.4 ha of alpine steppe were disturbed by trails. Trails through meadows resulted in greater soil loss, more exposed soil and rock and less vegetation than trails through steppe vegetation. Trampling also affected the composition of meadow and steppe vegetation with declines in sedges, herbs, grasses and shrubs on trails. These results highlight how visitor use can result in substantial cumulative damage to areas of high conservation value in the Andes. With unregulated use of trails and increasing visitation, park agencies need to limit the further spread of informal trails and improve the conservation of plant communities in Aconcagua Provincial Park and other popular parks in the region. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Concentrations and assessment of exposure to siloxanes and synthetic musks in personal care products from China

    Energy Technology Data Exchange (ETDEWEB)

    Lu Yan [School of Environmental Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Wadsworth Center, New York State Department of Health and Department of Environmental Health Sciences, School of Public Health, State University of New York at Albany, Empire State Plaza, PO Box 509, Albany, NY 12201-0509 (United States); Yuan Tao; Wang Wenhua [School of Environmental Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Kannan, Kurunthachalam, E-mail: kkannan@wadsworth.org [Wadsworth Center, New York State Department of Health and Department of Environmental Health Sciences, School of Public Health, State University of New York at Albany, Empire State Plaza, PO Box 509, Albany, NY 12201-0509 (United States); International Joint Research Center for Persistent Toxic Substances, State Key Laboratory of Urban Water Resource and Environment, Harbin Institute of Technology, Harbin 150090 (China)

    2011-12-15

    We investigated the concentrations and profiles of 15 siloxanes (four cyclic siloxanes, D₄–D₇; 11 linear siloxanes, L₄–L₁₄), four synthetic musks (two polycyclic musks, HHCB and AHTN; two nitro musks, MX and MK), and HHCB-lactone, in 158 personal care products marketed in China. Siloxanes were detected in 88% of the samples analyzed, at concentrations as high as 52.6 mg g⁻¹; linear siloxanes were the predominant compounds. Among synthetic musks, more than 80% of the samples contained at least one of these compounds, and their total concentrations were as high as 1.02 mg g⁻¹. HHCB was the predominant musk in all of the samples analyzed, accounting on average for 52% of the total musk concentrations. Based on the median concentrations of siloxanes and musks and the average daily usage amounts of consumer products, dermal exposure rates in adults were calculated to be 3.69 and 3.38 mg d⁻¹ for siloxanes and musks, respectively. - Highlights: Siloxanes and synthetic musks are determined in personal care products. Highest siloxane concentration was 52.6 mg g⁻¹. Highest musk concentration was 1.02 mg g⁻¹. Daily dermal exposure rates of siloxanes and musks were at mg levels. Dermal exposure is a major pathway of human exposure to siloxanes and musks. - Dermal application of several personal care products is a major source of human exposure to cyclic and linear siloxanes.
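    The exposure arithmetic behind figures like 3.38 and 3.69 mg/d is a product-by-product sum: daily dermal intake = Σ (median concentration in product × average daily amount applied). A sketch with invented product data (the record reports only the resulting totals, so the concentrations and usage amounts below are hypothetical):

```python
# {product: (median concentration in mg/g, average daily usage in g/day)}
# hypothetical illustrative values, not the study's measurements
products = {
    "body lotion": (0.35, 7.9),
    "shampoo":     (0.05, 12.0),
}

# Daily dermal exposure: sum of concentration x usage over products, in mg/day
daily_dermal_mg = sum(conc * usage for conc, usage in products.values())
print(round(daily_dermal_mg, 2))
```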

  11. Trends of atmospheric black carbon concentration over the United Kingdom

    Science.gov (United States)

    Singh, Vikas; Ravindra, Khaiwal; Sahu, Lokesh; Sokhi, Ranjeet

    2018-04-01

    The continuous observations over a period of 7 years (2009-2016) available at 7 locations show a declining trend of atmospheric BC in the UK. Among all the locations, the highest decrease of 8 ± 3 percent per year was observed at Marylebone Road in London. The detailed analysis performed at 21 locations during 2009-2011 shows that average annual mean atmospheric BC concentrations were 0.45 ± 0.10, 1.47 ± 0.58, 1.34 ± 0.31, 1.83 ± 0.46 and 9.72 ± 0.78 μg m⁻³ at rural, suburban, urban background, urban centre and kerbside sites respectively. Around 1 μg m⁻³ of atmospheric BC could be attributed to urban emission, whereas traffic contributed up to 8 μg m⁻³ of atmospheric BC near busy roads. A seasonal pattern was also observed at all locations except the rural and kerbside locations, with maximum concentrations (1.2-4 μg m⁻³) in winter. Minimum concentrations (0.3-1.2 μg m⁻³) were observed in summer, with similar concentrations in spring and fall. At suburban and urban background locations, similar diurnal patterns were observed, with atmospheric BC concentration peaks (≈1.8 μg m⁻³) in the morning (around 9 a.m.) and evening (7-9 p.m.) rush hours, whereas minimum concentrations occurred during the late night hours (trough at 5 a.m.) and the afternoon hours (trough at 2 p.m.). The urban centre values show a similar morning pattern (peak at 9 a.m.; concentration ≈2.5 μg m⁻³) relative to background locations but only a slight decrease in concentration in the afternoon, which remained above 2 μg m⁻³ till midnight. It is concluded that the higher flow of traffic at urban centre locations results in higher atmospheric BC concentrations throughout the day. Comparison of weekday and weekend daily averaged atmospheric BC showed maximum concentrations on Friday and minimum levels on Sunday. This study will help to refine the atmospheric BC emission inventories and provide data for air pollution and climate change models evaluation, which are used to formulate air pollution
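    A "percent per year" decline like the 8 ± 3 figure above is typically obtained from a log-linear fit: regress ln(concentration) on year, and convert the slope b into a percentage change of (exp(b) − 1) × 100 per year. A minimal sketch on synthetic data constructed to decline exactly 8% per year (the real series would be noisy annual means):

```python
import math

years = list(range(2009, 2017))
# Synthetic annual means declining exactly 8% per year from 2.0 ug/m3
conc = [2.0 * 0.92 ** (y - 2009) for y in years]

# Ordinary least-squares slope of ln(conc) vs year
n = len(years)
xbar = sum(years) / n
ybar = sum(math.log(c) for c in conc) / n
slope = (sum((y - xbar) * (math.log(c) - ybar) for y, c in zip(years, conc))
         / sum((y - xbar) ** 2 for y in years))

pct_per_year = (math.exp(slope) - 1.0) * 100.0
print(round(pct_per_year, 1))  # -8.0 for this exactly log-linear series
```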

  12. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
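    The two standard average-sample types compared in this record are easy to state concretely: a "spot" hourly value is the single 1-min sample at the top of the hour, while a "boxcar" value is the plain mean of the 60 one-minute samples in the hour. A minimal sketch on synthetic minute data (the Gaussian and brick-wall variants discussed above would replace the uniform boxcar weights with other low-pass kernels):

```python
def spot_hourly(minute_values):
    """Instantaneous 'spot' sample taken at the start of each hour."""
    return [minute_values[i] for i in range(0, len(minute_values), 60)]

def boxcar_hourly(minute_values):
    """Simple 1-h 'boxcar' average of each block of 60 one-minute values."""
    return [sum(minute_values[i:i + 60]) / 60.0
            for i in range(0, len(minute_values), 60)]

# Two hours of synthetic 1-min data: a linear ramp 0, 1, ..., 119 nT
ramp = list(range(120))
print(spot_hourly(ramp))    # [0, 60]
print(boxcar_hourly(ramp))  # [29.5, 89.5]
```

    On this ramp the spot values land at the block edges while the boxcar values sit at the block means, illustrating why the two types represent continuous variation differently.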

  13. A story about the discovery of the largest glacier and the highest peak in heart of the Pamirs

    Directory of Open Access Journals (Sweden)

    V. M. Kotlyakov

    2014-01-01

    Full Text Available The paper tells the story of how the “blank spot” at the center of the Pamirs was puzzled out. In 1878, a small party of explorers headed by V.D. Oshanin found here a big glacier, about 30–40 km long, and named it for Fedchenko. In 1884–85, the noted investigator G.E. Grumm-Grzhimailo made his important proposal about the orographic structure of the central Pamirs. In 1890, an expedition headed by topographer N.I. Kosinenko investigated the lower part of the Fedchenko Glacier and, for the first time, saw a separate high peak. In 1916, astronomer Ya.I. Belyaev put on a map a great pyramidal summit, but he mistook it for the Garmo Peak well known to local Tadzhiks (Fig. 2). In 1927, N.L. Korzhenevsky published a chart of the arrangement of ridges near the sources of the river Muksu (Fig. 3) that became a basis for the work of the Tadzhik-Pamir expedition of 1928–1932. In 1928, Ya.I. Belyaev determined the true length of the Fedchenko Glacier, 70 km, and geodesist I.G. Dorofeev mapped the whole basin of this glacier (Fig. 4), including a high irregular truncated pyramid of 7495 m in height (as he believed). Earlier this summit had been identified as the known Garmo Peak. Only in 1932 was it established that the determinations made by Dorofeev in 1928 related to this highest peak of the Pamirs, and indeed of the whole Soviet Union. The chart of the real Central Pamir orography constructed by I.G. Dorofeev is presented in the paper together with his letter addressed to the author (Fig. 5). Thus, the “Garmo peaks” which were observed by the above-mentioned explorers were actually three different summits. One of them towers on the north of the “knot being puzzled out” and reaches 7495 m; this “one-tooth” peak was repeatedly seen by N.V. Krylenko from the valleys Gando and Garmo. It was named the Stalin Peak, and later the Peak of Communism. Another one is located 18 km southward; this peak is actually the true Garmo Peak 6595 m

  14. Binding of higher alcohols onto Mn(12) single-molecule magnets (SMMs): access to the highest barrier Mn(12) SMM.

    Science.gov (United States)

    Lampropoulos, Christos; Redler, Gage; Data, Saiti; Abboud, Khalil A; Hill, Stephen; Christou, George

    2010-02-15

    Two new members of the Mn(12) family of single-molecule magnets (SMMs), [Mn(12)O(12)(O(2)CCH(2)Bu(t))(16)(Bu(t)OH)(H(2)O)(3)].2Bu(t)OH (3.2Bu(t)OH) and [Mn(12)O(12)(O(2)CCH(2)Bu(t))(16)(C(5)H(11)OH)(4)] (4) (C(5)H(11)OH is 1-pentanol), are reported. They were synthesized from [Mn(12)O(12)(O(2)CMe)(16)(H(2)O)(4)].2MeCO(2)H.4H(2)O (1) by carboxylate substitution and crystallization from the appropriate alcohol-containing solvent. Complexes 3 and 4 are new members of the recently established [Mn(12)O(12)(O(2)CCH(2)Bu(t))(16)(solv)(4)] (solv = H(2)O, alcohols) family of SMMs. Only one bulky Bu(t)OH can be accommodated into 3, and even this causes significant distortion of the [Mn(12)O(12)] core. Variable-temperature, solid-state alternating current (AC) magnetization studies were carried out on complexes 3 and 4, and they established that both possess an S = 10 ground state spin and are SMMs. However, the magnetic behavior of the two compounds was found to be significantly different, with 4 showing out-of-phase AC peaks at higher temperatures than 3. High-frequency electron paramagnetic resonance (HFEPR) studies were carried out on single crystals of 3.2Bu(t)OH and 4, and these revealed that the axial zero-field splitting constant, D, is very different for the two compounds. Furthermore, it was established that 4 is the Mn(12) SMM with the highest kinetic barrier (U(eff)) to date. The results reveal alcohol substitution as an additional and convenient means to affect the magnetization relaxation barrier of the Mn(12) SMMs without major change to the ligation or oxidation state.
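    The "kinetic barrier" U_eff mentioned here is defined through Arrhenius behavior of the magnetization relaxation time, τ = τ₀ · exp(U_eff / k_B T), which is how U_eff is extracted from the temperature dependence of the out-of-phase AC peaks. A sketch with typical Mn12-class values (τ₀ and U_eff below are illustrative orders of magnitude, not the paper's fitted numbers):

```python
import math

tau0_s = 1e-8    # attempt time, typical SMM order of magnitude (assumed)
u_eff_K = 74.0   # effective barrier expressed in kelvin (illustrative)

def relaxation_time(temp_K, tau0=tau0_s, u_eff=u_eff_K):
    """Arrhenius law: tau = tau0 * exp(U_eff / T), with U_eff in kelvin."""
    return tau0 * math.exp(u_eff / temp_K)

for T in (2.0, 4.0, 8.0):
    # relaxation slows dramatically on cooling below the barrier
    print(T, relaxation_time(T))
```

    A higher U_eff pushes the out-of-phase AC susceptibility peaks to higher temperature at a given frequency, which is why complex 4 showing peaks at higher temperatures than 3 signals a larger barrier.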

  15. Where do the highest energy CR's come from? and How does the Milky Way affect their arrival directions?

    Directory of Open Access Journals (Sweden)

    Kronberg Philipp

    2013-06-01

    Full Text Available A “grand magnetic design” for the Milky Way disk clearly emerges within ~1.5 kpc of the Galactic mid-plane near the Sun [1], and reveals a pitch angle of −5.5°, directed inward from the Solar tangential. This pitch angle can be expected to differ at Galactic disc locations other than ours. Above ~1.5 kpc, the field geometry is completely different, and its 3-D structure is not yet completely specified. However, it appears that UHECR (>5 × 10^19 eV) propagation to us is not much affected by the halo field, at least for protons. I discuss new multi-parameter analyses of UHECR deflections which provide a conceptual “template” for future interpretations of energy-species-direction data from AUGER, HiRes etc., and their successors [2]. I show how the strength and structure of the cosmologically nearby intergalactic magnetic field, BIGM, is now well estimated out to D ~5 Mpc from the Milky Way: ~20 nG. These are the first VHECR data-based estimates of BIGM on nearby supragalactic scales, and are also important for understanding and modeling CR propagation in the more distant Universe. CR acceleration to the highest energies is probably a natural accompanying phenomenon of supermassive black hole (SMBH)-associated jets and lobes [3]. I briefly describe what we know about magnetic configurations in these candidate sites for UHECR acceleration, and the first direct estimate of an extragalactic Poynting-flux current, ~3 × 10^18 amperes [4, 5]. This number connects directly to SMBH accretion disk physics, and leads directly to ideas of how VHECR acceleration in jets and lobes, possibly involving magnetic reconnection, is likely to be common in the Universe. It remains to be fully understood.

  16. Identification of multiple sclerosis patients at highest risk of cognitive impairment using an integrated brain magnetic resonance imaging assessment approach.

    Science.gov (United States)

    Uher, T; Vaneckova, M; Sormani, M P; Krasensky, J; Sobisek, L; Dusankova, J Blahova; Seidl, Z; Havrdova, E; Kalincik, T; Benedict, R H B; Horakova, D

    2017-02-01

    While impaired cognitive performance is common in multiple sclerosis (MS), it has been largely underdiagnosed. Here a magnetic resonance imaging (MRI) screening algorithm is proposed to identify patients at highest risk of cognitive impairment. The objective was to examine whether assessment of lesion burden together with whole-brain atrophy on MRI improves our ability to identify cognitively impaired MS patients. Of the 1253 patients enrolled in the study, 1052 patients with all cognitive, volumetric MRI and clinical data available were included in the analysis. Brain MRI and neuropsychological assessment with the Brief International Cognitive Assessment for Multiple Sclerosis were performed. Multivariable logistic regression and individual prediction analysis were used to investigate the associations between MRI markers and cognitive impairment. The results of the primary analysis were validated at two subsequent time points (months 12 and 24). The prevalence of cognitive impairment was greater in patients with low brain parenchymal fraction (BPF) (<0.85) and high T2 lesion volume (T2-LV) (>3.5 ml) than in patients with high BPF (>0.85) and low T2-LV (<3.5 ml). The combination of high T2-LV and low BPF predicted cognitive impairment with 83% specificity, 82% negative predictive value, 51% sensitivity and 75% overall accuracy. The risk of confirmed cognitive decline over the follow-up was greater in patients with high T2-LV (OR 2.1; 95% CI 1.1-3.8) and low BPF (OR 2.6; 95% CI 1.4-4.7). The integrated MRI assessment of lesion burden and brain atrophy may improve the stratification of MS patients who may benefit from cognitive assessment. © 2016 EAN.

  17. Lyman alpha emission in nearby star-forming galaxies with the lowest metallicities and the highest [OIII]/[OII] ratios

    Science.gov (United States)

    Izotov, Yuri

    2017-08-01

    The Lyman alpha line of hydrogen is the strongest emission line in galaxies and the tool of predilection for identifying and studying star-forming galaxies over a wide range of redshifts, especially in the early universe. However, it has become clear over the years that not all of the Lyman alpha radiation escapes, due to its resonant scattering on the interstellar and intergalactic medium, and absorption by dust. Although our knowledge of the high-z universe depends crucially on that line, we still do not have a complete understanding of the mechanisms behind the production, radiative transfer and escape of Lyman alpha in galaxies. We wish here to investigate these mechanisms by studying the properties of the ISM in a unique sample of 8 extreme star-forming galaxies (SFGs) that have the highest excitation in the SDSS spectral database. These dwarf SFGs have considerably lower stellar masses and metallicities, and higher equivalent widths and [OIII]5007/[OII]3727 ratios, compared to all nearby SFGs with Lyman alpha emission studied so far with COS. They are, however, very similar to the dwarf Lyman alpha emitters at redshifts 3-6, which are thought to be the main sources of reionization in the early Universe. By combining the HST/COS UV data with data in the optical range, and using photoionization and radiative transfer codes, we will be able to study the properties of the Lyman alpha in these unique objects, derive column densities of the neutral hydrogen N(HI) and compare them with N(HI) obtained from the HeI emission-line ratios in the optical spectra. We will derive Lyman alpha escape fractions and, indirectly, Lyman continuum escape fractions.

  18. Can All Doctors Be Like This? Seven Stories of Communication Transformation Told by Physicians Rated Highest by Patients.

    Science.gov (United States)

    Janisse, Tom; Tallman, Karen

    2017-01-01

    The top predictors of patient satisfaction with clinical visits are the quality of the physician-patient relationship and the communications contributing to that relationship. How do physicians improve their communication, and what effect does it have on them? This article presents the verbatim stories of seven high-performing physicians describing their transformative change in the areas of communication, connection, and well-being. Data for this study are based on interviews from a previous study in which a 6-question set was posed, in semistructured 60-minute interviews, to 77 of the highest-performing Permanente Medical Group physicians in 4 Regions on the "Art of Medicine" patient survey. Transformation stories emerged spontaneously during the interviews, and so it was an incidental finding when some physicians identified that they had not always been high performing in their communication with patients. Seven different modes of transformation in communication were described by these physicians: a listening tool, an awareness course, finding new meaning in clinical practice, a technologic tool, a sudden insight, a mentor observation, and a physician-as-patient experience. These stories illustrate how communication skills can be learned through various activities and experiences that transform physicians into highly successful communicators. All modes result in a change of state, a new way of seeing and of being; they are not just a new tool or a new practice, but a change in state of mind. This state resulted in a marked change of behavior and a substantial improvement in communication and relationship.

  19. Seasonal Variability of Airborne Particulate Matter and Bacterial Concentrations in Colorado Homes

    Directory of Open Access Journals (Sweden)

    Nicholas Clements

    2018-04-01

    Full Text Available Aerosol measurements were collected at fifteen homes over the course of one year in Colorado (USA) to understand the temporal variability of indoor-air particulate matter and bacterial concentrations and their relationship with home characteristics, inhabitant activities, and outdoor-air particulate matter (PM). Indoor and outdoor PM2.5 concentrations averaged (± st. dev.) 8.1 ± 8.1 μg/m3 and 6.8 ± 4.5 μg/m3, respectively. Indoor PM2.5 was statistically significantly higher during summer compared to spring and winter; outdoor PM2.5 was significantly higher for summer compared to spring and fall. The PM2.5 I/O ratio was 1.6 ± 2.4 averaged across all homes and seasons and was not statistically significantly different across the seasons. Average indoor PM10 was 15.4 ± 18.3 μg/m3 and was significantly higher during summer compared to all other seasons. Total suspended particulate bacterial biomass, as determined by qPCR, revealed very little seasonal difference across and within the homes. The qPCR I/O ratio was statistically different across seasons, with the highest I/O ratio in the spring and the lowest in the summer. Using one-minute indoor PM10 data and activity logs, it was observed that elevated particulate concentrations commonly occurred when inhabitants were cooking and during periods with elevated outdoor concentrations.
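The indoor/outdoor (I/O) ratio summarized above is simply each home's indoor concentration divided by its paired outdoor concentration, reported as a mean ± standard deviation. A minimal sketch in Python, using invented PM2.5 values rather than the study's measurements:

```python
# Sketch: per-home indoor/outdoor (I/O) PM2.5 ratios and their mean +/- st. dev.
# The concentrations below are hypothetical, not the Colorado study data.
import numpy as np

indoor_pm25 = np.array([8.1, 12.4, 5.0, 9.7, 6.3])   # ug/m3, one value per home
outdoor_pm25 = np.array([6.8, 7.9, 5.5, 6.1, 7.2])   # paired outdoor values

io_ratio = indoor_pm25 / outdoor_pm25                # element-wise ratio per home
print(io_ratio.mean(), io_ratio.std(ddof=1))         # mean and sample st. dev.
```

The `ddof=1` gives the sample (not population) standard deviation, the usual convention when summarizing a set of homes.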

  20. Radiocesium concentrations in wholebody homogenates and several body compartments of naturally contaminated white-tailed deer

    International Nuclear Information System (INIS)

    Brisbin, I.L. Jr.; Smith, M.H.

    1975-01-01

    Radiocesium concentrations were determined for various tissues, organs, and other body compartments of 17 white-tailed deer collected from contaminated habitats on the AEC Savannah River Plant. The highest levels of radiocesium concentration were found in skeletal muscle, feces, kidney, and adrenal tissue, which averaged between 50 and 70 pCi radiocesium/g (dry weight). Liver and bone showed the lowest average values. With the exception of feces and rumen contents, nearly all tissues and organ compartments showed significant positive linear correlations between their respective radiocesium levels. Analyses of whole-body homogenates indicated that the deer examined averaged 9.91 pCi radiocesium/g (whole-body wet weight). These values were best predicted from the radiocesium contents of skeletal muscle, using the relationship: pCi radiocesium/g dry whole-body weight = 3.33 + 0.60 (pCi/g dry skeletal muscle). Calculation of a weighted “predictive index” indicated that concentrations in skeletal muscle best predicted the overall pattern and levels of radiocesium distribution within all compartments of the deer body. Radiocesium concentrations in the brain, heart, and liver, respectively, followed muscle in order of predictive ability.
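The muscle-to-whole-body relationship quoted above is a simple linear regression, so a predicted whole-body concentration follows directly from a measured muscle value. A minimal sketch, with an illustrative (not measured) muscle concentration:

```python
# Sketch: applying the reported regression
#   pCi/g dry whole-body weight = 3.33 + 0.60 * (pCi/g dry skeletal muscle)
# The muscle value passed in below is illustrative, not from the study.
def whole_body_from_muscle(muscle_pci_per_g_dry):
    """Predict whole-body radiocesium (pCi/g dry) from skeletal muscle."""
    return 3.33 + 0.60 * muscle_pci_per_g_dry

print(whole_body_from_muscle(60.0))  # a muscle value of 60 pCi/g -> 39.33 pCi/g
```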

  1. Comparison on the Analysis on PM10 Data based on Average and Extreme Series

    Directory of Open Access Journals (Sweden)

    Mohd Amin Nor Azrita

    2018-01-01

    Full Text Available The main concern in environmental issues is extreme (catastrophic) phenomena rather than common events. However, most statistical approaches are concerned primarily with the centre of a distribution, or with the average value, rather than the tail of the distribution, which contains the extreme observations. Extreme value theory directs attention to the tails of a distribution, where standard models prove unreliable for analysing extreme series. A high level of particulate matter (PM10) is a common environmental problem which causes various impacts on human health and material damage. If the main concern is extreme events, then extreme value analysis provides the best result with significant evidence. The monthly average and monthly maxima of the PM10 data for Perlis from 2003 to 2014 were analysed. Forecasting for the average data was made by the Holt-Winters method, while the return level determines the predicted value of an extreme event that occurs on average once in a certain period. Forecasting from January 2015 to December 2016 for the average data found that the highest forecasted value is 58.18 (standard deviation 18.45) in February 2016, while the return level reached 253.76 units for a 24-month (2015–2016) return period.
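A return level of this kind is usually obtained by fitting an extreme-value distribution to the series of block maxima and reading off the appropriate quantile. The abstract does not state the fitting method, so the sketch below assumes a generalized extreme value (GEV) fit via scipy, applied to synthetic monthly maxima rather than the Perlis series:

```python
# Sketch: T-month return level from a GEV fit to monthly maxima (assumed method;
# the data are synthetic stand-ins for the Perlis PM10 monthly maxima).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
monthly_maxima = 80 + 25 * rng.gumbel(size=144)  # 12 years of synthetic maxima

# Maximum-likelihood fit of the GEV shape, location and scale parameters.
shape, loc, scale = genextreme.fit(monthly_maxima)

def return_level(T):
    """Level exceeded on average once every T months (the T-month return level)."""
    return genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)

print(return_level(24))  # e.g. the 24-month return level
```

Longer return periods correspond to higher quantiles of the fitted distribution, so the return level grows with T.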

  2. SPATIAL DISTRIBUTION OF THE AVERAGE RUNOFF IN THE IZA AND VIȘEU WATERSHEDS

    Directory of Open Access Journals (Sweden)

    HORVÁTH CS.

    2015-03-01

    Full Text Available The average runoff is the main parameter with which one can best evaluate an area’s water resources, and it is also an important characteristic in all river runoff research. In this paper we chose a GIS methodology for assessing the spatial evolution of the average runoff; using validity curves we identified three validity areas in which the runoff changes differently with altitude. The three curves were charted using the average runoff values of 16 hydrometric stations in the area, eight in the Vișeu and eight in the Iza river catchment. Identifying the appropriate areas of the obtained correlation curves (between specific average runoff and catchment mean altitude) allowed the assessment of potential runoff at catchment level and over altitudinal intervals. By integrating the curve functions into GIS we created an average runoff map for the area, from which one can easily extract runoff data using GIS spatial-analyst functions. The study shows that, of the three areas, the highest runoff corresponds to the third zone, but because of its small area the water volume is also minor. It is also shown that with the use of the created runoff map we can compute relatively quickly correct runoff values for areas without hydrologic control.

  3. In-vehicle nitrogen dioxide concentrations in road tunnels

    Science.gov (United States)

    Martin, Ashley N.; Boulter, Paul G.; Roddis, Damon; McDonough, Liza; Patterson, Michael; Rodriguez del Barco, Marina; Mattes, Andrew; Knibbs, Luke D.

    2016-11-01

    There is a lack of knowledge regarding in-vehicle concentrations of nitrogen dioxide (NO2) during transit through road tunnels in urban environments. Furthermore, previous studies have tended to involve a single vehicle and the range of in-vehicle NO2 concentrations that vehicle occupants may be exposed to is not well defined. This study describes simultaneous measurements of in-vehicle and outside-vehicle NO2 concentrations on a route through Sydney, Australia that included several major tunnels, minor tunnels and busy surface roads. Tests were conducted on nine passenger vehicles to assess how vehicle characteristics and ventilation settings affected in-vehicle NO2 concentrations and the in-vehicle-to-outside vehicle (I/O) concentration ratio. NO2 was measured directly using a cavity attenuated phase shift (CAPS) technique that gave a high temporal and spatial resolution. In the major tunnels, transit-average in-vehicle NO2 concentrations were lower than outside-vehicle concentrations for all vehicles with cabin air recirculation either on or off. However, markedly lower I/O ratios were obtained with recirculation on (0.08-0.36), suggesting that vehicle occupants can significantly lower their exposure to NO2 in tunnels by switching recirculation on. The highest mean I/O ratios for NO2 were measured in older vehicles (0.35-0.36), which is attributed to older vehicles having higher air exchange rates. The results from this study can be used to inform the design and operation of future road tunnels and modelling of personal exposure to NO2.

  4. National and Subnational Population-Based Incidence of Cancer in Thailand: Assessing Cancers with the Highest Burdens

    Directory of Open Access Journals (Sweden)

    Shama Virani

    2017-08-01

    Full Text Available In Thailand, five cancer types—breast, cervical, colorectal, liver and lung cancer—contribute to over half of the cancer burden. The magnitude of these cancers must be quantified over time to assess previous health policies and highlight future trajectories for targeted prevention efforts. We provide a comprehensive assessment of these five cancers nationally and subnationally, with trend analysis, projections, and the number of cases expected for the year 2025, using cancer registry data. We found that breast cancer (average annual percent change (AAPC): 3.1%) and colorectal cancer (female AAPC: 3.3%; male AAPC: 4.1%) are increasing, while cervical cancer (AAPC: −4.4%) is decreasing nationwide. However, liver and lung cancers exhibit disproportionately higher burdens in the northeast and north regions, respectively. Lung cancer increased significantly in northeastern and southern women, despite low smoking rates. Liver cancers are expected to increase in northern males and females. Liver cancer increased in the south, despite the absence of the liver fluke, a known risk factor, in this region. Our findings are presented in the context of health policy and population dynamics, and serve to provide evidence for future prevention strategies. Our subnational estimates provide a basis for understanding variations in region-specific risk-factor profiles that contribute to incidence trends over time.
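An AAPC figure like those above is commonly derived from a log-linear trend: fit log(rate) against year and convert the slope b to 100·(e^b − 1). The abstract does not specify its estimation method, so this sketch assumes that common definition and uses a made-up incidence series:

```python
# Sketch: average annual percent change (AAPC) from a log-linear trend
# (one common definition; the incidence rates below are invented).
import numpy as np

years = np.arange(2001, 2013)
rates = 50.0 * 1.03 ** (years - years[0])  # synthetic series growing 3% per year

# Slope of log(rate) vs. year, then back-transform to a percent change.
slope, _intercept = np.polyfit(years, np.log(rates), 1)
aapc = 100.0 * (np.exp(slope) - 1.0)
print(round(aapc, 1))  # recovers the 3% annual growth built into the series
```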

  5. Te and ne profiles on JFT-2M plasma with the highest spatial resolution TV Thomson scattering system

    International Nuclear Information System (INIS)

    Yamauchi, T.

    1993-01-01

    A high spatial resolution TV Thomson scattering (TVTS) system was constructed on the JFT-2M tokamak. This system is similar to those used at PBX-M and TFTR; such systems provide complete profiles of Te and ne at a single time during a plasma discharge. The characteristics of the JFT-2M TVTS are as follows: 1. The measured points comprise not only 81 points for the scattered light and plasma light, whose time difference is 2 ms, but also 10 points for plasma light measured at the same time as the scattered light. 2. The spatial resolution is 0.86 cm, higher than that of any other Thomson scattering system. 3. The sensitivity of the detector, composed of image intensifier tubes and a CCD, is as high as that of a photomultiplier tube. Te and ne profiles have been measured for over one year on JFT-2M. The line-averaged electron density measured was in the region of 5×10^12 cm^-3 to 7×10^13 cm^-3, and the measured electron temperature was in the region of 50 eV to 1.2 keV. (author) 7 refs., 7 figs., 1 tab

  6. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....

  7. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.

  8. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
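The stated minimum average depth, 620160/8!, can be checked directly and compared with the information-theoretic lower bound log2(8!) on the average number of comparisons needed to distinguish all orderings:

```python
# Sketch: evaluating the paper's value 620160/8! and comparing it with the
# entropy lower bound log2(8!) for comparison-based sorting of 8 elements.
import math

n_orderings = math.factorial(8)           # 40320 permutations to distinguish
avg_depth = 620160 / n_orderings          # minimum average depth, about 15.38
entropy_bound = math.log2(n_orderings)    # information-theoretic bound, about 15.30

print(avg_depth, entropy_bound)
```

The minimum average depth exceeds the entropy bound only slightly, which is why sorting 8 elements is a natural benchmark for optimal comparison trees.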

  9. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  10. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted “geodesic light-cone” coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called “redshift drift” in a generic inhomogeneous Universe.

  11. Desktop analysis of potential impacts of visitor use: a case study for the highest park in the Southern Hemisphere.

    Science.gov (United States)

    Barros, Agustina; Pickering, Catherine; Gudes, Ori

    2015-03-01

    Nature-based tourism and recreation activities have a range of environmental impacts, but most protected area agencies have limited capacity to assess them. To prioritise where and what impacts to monitor and manage, we conducted a desktop assessment using Geographical Information Systems (GIS) by combining recreation ecology research with data on visitor usage and key environmental features for a popular protected area used for mountaineering and trekking, Aconcagua Provincial Park (2400-6962 m a.s.l.) in the Andes of Argentina. First, we integrated visitor data from permits with environmental data using GIS. We then identified key impact indicators for different activities based on the recreation ecology literature. Finally, we integrated these data to identify likely ecological impacts based on the types of activities, amount of use and altitudinal zones. Visitors only used 2% of the Park, but use was concentrated in areas of high conservation value, including alpine meadows and glacier lakes. Impacts on water resources were likely to be concentrated in campsites from the intermediate to the nival/glacial zones of the Park, while impacts on terrestrial biodiversity were likely to be more severe in the low and intermediate alpine zones (2400-3800 m a.s.l.). These results highlight how visitor data can be used to identify priority areas for on-ground assessment of impacts in key locations. Improvements to the management of visitors in this Park involve more effective ways of dealing with water extraction and human waste in high altitude campsites and the impacts of hikers and pack animals in the low and intermediate alpine zones. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Temporal Variation of Ambient PM10 Concentration within an Urban-Industrial Environment

    Science.gov (United States)

    Wong, Yoon-Keaw; Noor, Norazian Mohamed; Izzah Mohamad Hashim, Nur

    2018-03-01

    PM10 concentration in ambient air has been reported to be the main pollutant affecting human health, particularly in urban areas. This research was conducted to study the variation of PM10 concentration at three urban-industrial areas in Malaysia, namely Shah Alam, Kuala Terengganu and Melaka, and to distinguish the association and correlation between PM10 concentration and other air pollutants. A five-year dataset (2008-2012) consisting of PM10, SOx, NOx and O3 concentrations and weather parameters such as wind speed, humidity and temperature was obtained from the Department of Environment, Malaysia. Shah Alam shows the highest average PM10 concentration, with a value of 62.76 μg/m3 in June, whereas Kuala Terengganu reached 59.29 μg/m3 in February and Melaka 46.61 μg/m3 in August. Two peaks were observed in the time series plot of averaged monthly PM10 concentration. For Shah Alam and Kuala Terengganu, the first peak occurs when PM10 concentration rises from January to February, and the second peak is reached in June and remains high for the next two consecutive months. Meanwhile, the second peak for Melaka is only reached in August, as a result of transboundary smoke from forest fires in the Sumatra region during the dry season from May to September. Both pollutants can be sourced from rapid industrial activities at Shah Alam. PM10 concentration is correlated with carbon monoxide concentration in Kuala Terengganu and Melaka, with r2 = 0.1725 and 0.2744 respectively. High carbon monoxide and PM10 concentrations are associated with the burning of fossil fuel by the increased number of vehicles in these areas.
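An r² value of the kind quoted above is the squared Pearson correlation coefficient between two pollutant series. A minimal sketch with made-up values standing in for the monitoring data:

```python
# Sketch: coefficient of determination (r^2) between two pollutant series.
# The PM10 and CO values below are invented, not the Malaysian monitoring data.
import numpy as np

pm10 = np.array([45.0, 52.0, 60.0, 48.0, 70.0, 55.0, 63.0])  # ug/m3
co = np.array([0.9, 1.1, 1.3, 1.0, 1.6, 1.2, 1.4])           # ppm

r = np.corrcoef(pm10, co)[0, 1]  # Pearson correlation coefficient
r_squared = r ** 2
print(round(r_squared, 4))
```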

  13. A New Orally Active, Aminothiol Radioprotector-Free of Nausea and Hypotension Side Effects at Its Highest Radioprotective Doses

    Energy Technology Data Exchange (ETDEWEB)

    Soref, Cheryl M. [ProCertus BioPharm, Inc., Madison, WI (United States); Hacker, Timothy A. [Department of Medicine, Cardiovascular Physiology Core, University of Wisconsin-Madison, Madison, WI (United States); Fahl, William E., E-mail: fahl@oncology.wisc.edu [ProCertus BioPharm, Inc., Madison, WI (United States); McArdle Laboratory for Cancer Research, University of Wisconsin Carbone Cancer Center, Madison, WI (United States)

    2012-04-01

    Purpose: A new aminothiol, PrC-210, was tested for orally conferred radioprotection (rats, mice; 9.0 Gy whole-body, which was otherwise lethal to 100% of the animals) and presence of the debilitating side effects (nausea/vomiting, hypotension/fainting) that restrict use of the current aminothiol, amifostine (Ethyol, WR-2721). Methods and Materials: PrC-210 in water was administered to rats and mice at times before irradiation, and percent-survival was recorded for 60 days. Subcutaneous (SC) amifostine (positive control) or SC PrC-210 was administered to ferrets (Mustela putorius furo) and retching/emesis responses were recorded. Intraperitoneal amifostine (positive control) or PrC-210 was administered to arterial cannulated rats to score drug-induced hypotension. Results: Oral PrC-210 conferred 100% survival in rat and mouse models against an otherwise 100% lethal whole-body radiation dose (9.0 Gy). Oral PrC-210, administered by gavage 30-90 min before irradiation, conferred a broad window of radioprotection. The comparison of PrC-210 and amifostine side effects was striking because there was no retching or emesis in 10 ferrets treated with PrC-210 and no induced hypotension in arterial cannulated rats treated with PrC-210. The tested PrC-210 doses were the ferret and rat equivalent doses of the 0.5 maximum tolerated dose (MTD) PrC-210 dose in mice. The human equivalent of this mouse 0.5 MTD PrC-210 dose would likely be the highest PrC-210 dose used in humans. By comparison, the mouse 0.5 MTD amifostine dose, 400 μg/g body weight (equivalent to the human amifostine dose of 910 mg/m²), when tested at equivalent ferret and rat doses in the above models produced 100% retching/vomiting in ferrets and 100% incidence of significant, progressive hypotension in rats. Conclusions: The PrC-210 aminothiol, with no detectable nausea/vomiting or hypotension side effects in these preclinical models, is a logical candidate for human drug development to use in healthy

  14. Time of highest tuberculosis death risk and associated factors: an observation of 12 years in Northern Thailand

    Directory of Open Access Journals (Sweden)

    Saiyud Moolphate

    2011-02-01

    Full Text Available Saiyud Moolphate,1,2 Myo Nyein Aung,1,3 Oranuch Nampaisan,1 Supalert Nedsuwan,4 Pacharee Kantipong,5 Narin Suriyon,6 Chamnarn Hansudewechakul,6 Hideki Yanai,7 Norio Yamada,2 Nobukatsu Ishikawa2 (1TB/HIV Research Foundation, Chiang Rai, Thailand; 2Research Institute of Tuberculosis, Japan Anti-Tuberculosis Association (RIT-JATA), Tokyo, Japan; 3Department of Pharmacology, University of Medicine, Mandalay, Myanmar; 4Department of Preventive and Social Medicine, Chiang Rai Regional Hospital, Chiang Rai, Thailand; 5Department of Health Service System Development, Chiang Rai Regional Hospital, Chiang Rai, Thailand; 6Provincial Health Office, Chiang Rai, Thailand; 7Department of Clinical Laboratory, Fukujuji Hospital, Tokyo, Japan). Purpose: Northern Thailand is a tuberculosis (TB) endemic area with a high TB death rate. We aimed to establish the time of highest death risk during TB treatment, and to identify the risk factors at play during that period of high risk. Patients and methods: We retrospectively explored 12 years of TB surveillance data from Chiang Rai province, Northern Thailand. A total of 19,174 TB patients (including 5,009 deaths) were investigated from 1997 to 2008, and the proportion of deaths in each month of TB treatment was compared. Furthermore, multiple logistic regression analysis was performed to identify the characteristics of patients who died in the first month of TB treatment; a total of 5,626 TB patients from 2005 to 2008 were included in this regression analysis. Results: The proportions of deaths occurring in the first month of TB treatment were 38%, 39%, and 46% in the years 1997–2000, 2001–2004, and 2005–2008, respectively. The first month of TB treatment is thus the time of the maximum number of deaths. Moreover, advancing age, HIV infection, and being a Thai citizen were significant factors contributing to these earlier deaths in the course of TB treatment. Conclusion: Our findings point to the specific time period and

  15. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    Full Text Available BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors’ experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved worse results on indicators of visual attention pace such as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis of lower abilities of children with intelligence below average in terms of concentration, work pace, efficiency and perception.

  16. Anaerobic bio-digestion of concentrate obtained in the process of ultra filtration of effluents from tilapia processing unit

    Directory of Open Access Journals (Sweden)

    Milena Alves de Souza

    2012-02-01

    Full Text Available The objective of the present study was to evaluate the efficiency of the process of biodigestion of the protein concentrate resulting from the ultrafiltration of the effluent from a Nile tilapia slaughterhouse freezer. Bench digesters were used with excrement and water (control) in comparison with a mixture of cattle manure and effluent from the filleting and bleeding stages of tilapia processing. The effluent obtained in the continuous process (bleeding + filleting) was the one with the highest accumulated production from the 37th day, as well as the greatest daily production. Gas composition did not differ between the protein concentrates, but the gas obtained with the use of the effluent from the filleting stage presented the highest average methane content (78.05%) in comparison with those obtained in the bleeding stage (69.95%), in the continuous process (70.02%), or by the control method (68.59%).

  17. Estimation of total as well as bioaccessible levels and average daily dietary intake of iodine from Japanese edible seaweeds by epithermal neutron activation analysis

    International Nuclear Information System (INIS)

    Fukushima, M.; Chatt, A.

    2012-01-01

    An epithermal instrumental neutron activation analysis (EINAA) method in conjunction with Compton suppression spectrometry (EINAA-CSS) was used for the determination of total iodine in eight different species of edible seaweeds from Japan. This method gave an absolute detection limit of about 2 μg. The accuracy of the method was evaluated using various reference materials and found to be generally in agreement within ±6% of the certified values. The longitudinal distributions of iodine at different growing stages in Japanese sea mustard and tangle seaweeds were investigated. For a 150-cm-high tangle, the highest concentration of iodine (5,360 mg/kg) was found at the root; concentrations then decreased slowly to 780 mg/kg in the middle portion (60-75 cm) and increased to 2,300 mg/kg at the apex. On the other hand, for a 190-cm-high sea mustard the highest levels of iodine were found both at the roots (164 mg/kg) and apex (152 mg/kg), with lower values (98 mg/kg) in the middle section. In order to estimate the bioaccessible fraction of iodine, seaweeds were digested by an in vitro enzymolysis method, the dietary fibre was separated from the residue, and both fractions were analyzed by EINAA-CSS. The average daily dietary intakes of total iodine (0.14 mg) as well as of its bioaccessible fraction (0.12 mg) from the consumption of sea mustard were estimated. (author)

  18. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face averaging techniques there was not any warping or filling in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
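
    The z-coordinate averaging step described above can be sketched in a few lines. This is an illustrative reconstruction only: the grid size, number of scans, and noise level are assumptions, not the study's parameters, and real scans would first need registration to a common grid.

    ```python
    import numpy as np

    # Hypothetical stack of registered facial depth maps: each (x, y) grid
    # position holds the depth (z) coordinate of the facial surface.
    rng = np.random.default_rng(0)
    base_face = rng.uniform(40.0, 80.0, size=(64, 64))          # "true" face shape
    scans = base_face + rng.normal(0.0, 2.0, size=(14, 64, 64))  # 14 noisy scans

    # The archetype is the pointwise mean of the depth coordinates:
    # no warping and no interpolation, as the abstract describes.
    archetype = scans.mean(axis=0)

    print(archetype.shape)  # (64, 64)
    ```

    Averaging 14 scans shrinks the per-point noise by a factor of about sqrt(14), which is consistent with the finding that a small number of faces already yields a stable archetype.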

  19. Nitrogen concentrations in mosses indicate the spatial distribution of atmospheric nitrogen deposition in Europe

    Energy Technology Data Exchange (ETDEWEB)

    Harmens, H., E-mail: hh@ceh.ac.uk [Centre for Ecology and Hydrology, Environment Centre Wales, Deiniol Road, Bangor, Gwynedd LL57 2UW (United Kingdom); Norris, D.A., E-mail: danor@ceh.ac.uk [Centre for Ecology and Hydrology, Environment Centre Wales, Deiniol Road, Bangor, Gwynedd LL57 2UW (United Kingdom); Cooper, D.M., E-mail: cooper@ceh.ac.uk [Centre for Ecology and Hydrology, Environment Centre Wales, Deiniol Road, Bangor, Gwynedd LL57 2UW (United Kingdom); Mills, G., E-mail: gmi@ceh.ac.uk [Centre for Ecology and Hydrology, Environment Centre Wales, Deiniol Road, Bangor, Gwynedd LL57 2UW (United Kingdom); Steinnes, E., E-mail: Eiliv.Steinnes@chem.ntnu.no [Department of Chemistry, Norwegian University of Science and Technology, 7491 Trondheim (Norway); Kubin, E., E-mail: Eero.Kubin@metla.fi [Finnish Forest Research Institute, Kirkkosaarentie 7, 91500 Muhos (Finland); Thoeni, L., E-mail: lotti.thoeni@fub-ag.ch [FUB-Research Group for Environmental Monitoring, Alte Jonastrasse 83, 8640 Rapperswil (Switzerland); Aboal, J.R., E-mail: jesusramon.aboal@usc.es [University of Santiago de Compostela, Faculty of Biology, Department of Ecology, 15782 Santiago de Compostela (Spain); Alber, R., E-mail: Renate.Alber@provinz.bz.it [Environmental Agency of Bolzano, 39055 Laives (Italy); Carballeira, A., E-mail: alejo.carballeira@usc.es [University of Santiago de Compostela, Faculty of Biology, Department of Ecology, 15782 Santiago de Compostela (Spain); Coskun, M., E-mail: coskunafm@yahoo.com [Canakkale Onsekiz Mart University, Faculty of Medicine, Department of Medical Biology, 17100 Canakkale (Turkey); De Temmerman, L., E-mail: ludet@var.fgov.be [Veterinary and Agrochemical Research Centre, Tervuren (Belgium); Frolova, M., E-mail: marina.frolova@lvgma.gov.lv [Latvian Environment, Geology and Meteorology Agency, Riga (Latvia); Gonzalez-Miqueo, L., E-mail: lgonzale2@alumni.unav.es [Univ. of Navarra, Irunlarrea No 1, 31008 Pamplona (Spain)

    2011-10-15

    In 2005/6, nearly 3000 moss samples from (semi-)natural locations across 16 European countries were collected for nitrogen analysis. The lowest total nitrogen concentrations in mosses (<0.8%) were observed in northern Finland and northern UK. The highest concentrations (≥1.6%) were found in parts of Belgium, France, Germany, Slovakia, Slovenia and Bulgaria. The asymptotic relationship between the nitrogen concentrations in mosses and EMEP modelled nitrogen deposition (averaged per 50 km x 50 km grid) across Europe showed less scatter when there were at least five moss sampling sites per grid. Factors potentially contributing to the scatter are discussed. In Switzerland, a strong (r² = 0.91) linear relationship was found between the total nitrogen concentration in mosses and measured site-specific bulk nitrogen deposition rates. The total nitrogen concentrations in mosses complement deposition measurements, helping to identify areas in Europe at risk from high nitrogen deposition at a high spatial resolution. - Highlights: > Nitrogen concentrations in mosses were determined at ca. 3000 sites across Europe. > Moss concentrations were compared with EMEP modelled nitrogen deposition. > The asymptotic relationship for Europe showed saturation at ca. 15 kg N ha⁻¹ y⁻¹. > Linear relationships were found with measured nitrogen deposition in some countries. > Moss concentrations complement deposition measurements at high spatial resolution. - Mosses as biomonitors of atmospheric nitrogen deposition in Europe.

  20. Rumen fermentation dynamics of concentrate containing the new feed supplement

    International Nuclear Information System (INIS)

    Suharyono; Shintia NW Hardani; Teguh Wahyono

    2015-01-01

    The utilization of ³²P for measuring microbial protein synthesis in rumen liquid has a potential role in obtaining a new formula of feed supplement (SPB). The New Feed Supplement (SPB) is a new generation of ruminant feed supplement produced by the National Nuclear Energy Agency (BATAN). This supplement was applied to complement the function of commercial concentrate as feed for ruminants. In vitro testing used a semi-continuous in vitro system, the Rumen Simulation Technique (RUSITEC). The purpose of this study was to evaluate SPB as a feed supplement together with palm oil industry by-products, and to determine the dynamics of rumen fermentation of concentrate containing SPB. The two in vitro analyses studied were ³²P incubation and the RUSITEC method. The ³²P in vitro study used five treatments: palm oil leaf (P), palm oil bunches (TKS), palm oil kernel shell (KC), P+TKS+KC and SPB. The measured parameter was microbial protein synthesis (mg/h/l). The RUSITEC treatments were: control (K) (commercial concentrate); KS 30 (70% commercial concentrate + 30% SPB) and KS 40 (60% commercial concentrate + 40% SPB). Observed variables were rumen fermentation products (24 hours incubation) such as pH, ammonia concentration (NH₃) (mg/100 ml), total volatile fatty acids (TVFA) (mM), total gas production (ml/d) and methane production (CH₄) (ml/d). Rumen fermentation dynamics were represented descriptively over six days of incubation. The average of each variable was analyzed using a completely randomized design with 12 replicates (six days incubation x two replications) followed by Duncan's test. The highest microbial protein synthesis was obtained with SPB compared with P, TKS, KC and P+TKS+KC (67.6 vs 11.9, 0.67, 1.87 and 42.55 mg/h/l, respectively). The RUSITEC results showed pH values for the three treatments in the normal range between 6.40 and 7.15. The NH₃ concentration and TVFA production of the commercial concentrate were consistently lower than those of KS 30 and KS 40. The KS 40 treatment resulted in TVFA production 56

  1. Aerosol deposition (trace elements and black carbon) over the highest glacier of the Eastern European Alps during the last centuries

    Science.gov (United States)

    Bertò, Michele; Barbante, Carlo; Gabrieli, Jacopo; Gabrielli, Paolo; Spolaor, Andrea; Dreossi, Giuliano; Laj, Paolo; Zanatta, Marco; Ginot, Patrick; Fain, Xavier

    2016-04-01

    Ice cores are an archive of a wide variety of climatic and environmental information from the past, retaining it for hundreds of thousands of years. Anthropogenic pollutants, trace elements, heavy metals and major ions are preserved as well, providing insights into past atmospheric circulation and allowing evaluation of the human impact on the environment. Several ice cores have been drilled in glaciers at mid and low latitudes, such as in the European Alps. The first ice cores drilled to bedrock in the Eastern Alps were retrieved during autumn 2011 on the Alto dell'Ortles glacier, the uppermost glacier of the Ortles massif (3905 m, South Tyrol, Italy), in the frame of the "Ortles Project". A preliminary dating of the core suggests that it covers at least 300-400 years. Despite the summer temperature increase of the last decades, this glacier still contains cold ice. Indeed, O and H isotope profiles describe well the atmospheric warming as well as the low temperatures recorded during the Little Ice Age (LIA). Moreover, this glacier is located close to densely populated and industrialized areas and can be used for reconstructing, for the first time, past and recent air pollution and the human impact in the Eastern European Alps. The innermost part of the core is under analysis by means of a "Continuous Flow Analysis" system. This kind of analysis offers high resolution in the data profiles. The separation between the internal and external parts of the core avoids any kind of contamination. An aluminum melting head melts the core at about 2.5 cm min⁻¹. Simultaneous analyses of conductivity, dust concentration and size distribution (from 0.8 to 80 μm), trace elements with an Inductively Coupled Plasma Mass Spectrometer (ICP-MS, Agilent 7500) and refractory black carbon (rBC) with a Single Particle Soot Photometer (SP2, Droplet Measurement Technologies) are performed. A fraction of the melt water is collected by an auto-sampler for further analysis. The analyzed elements

  2. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
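
    The note's central observation can be sketched directly: each classical average is the intercept of an ordinary least squares fit on a constant-only design matrix, applied to a suitable transform of the data. A minimal numpy illustration (the data values are arbitrary examples of ours):

    ```python
    import numpy as np

    y = np.array([2.0, 4.0, 8.0])
    X = np.ones((len(y), 1))  # constant-only design matrix

    # Arithmetic mean: OLS intercept of y on a constant
    arith = np.linalg.lstsq(X, y, rcond=None)[0][0]

    # Geometric mean: exponentiate the OLS intercept of log(y)
    geom = np.exp(np.linalg.lstsq(X, np.log(y), rcond=None)[0][0])

    # Harmonic mean: reciprocal of the OLS intercept of 1/y
    harm = 1.0 / np.linalg.lstsq(X, 1.0 / y, rcond=None)[0][0]

    print(arith, geom, harm)  # approx. 4.667, 4.0, 3.429
    ```

    Weighted averages follow the same pattern with weighted least squares in place of OLS, which is presumably the kind of extension the truncated abstract goes on to describe.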

  3. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  4. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    Title 47, Telecommunication, Vol. 1 (2010-10-01). Section 1.959, Computation of average terrain elevation. FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Wireless Radio Services Applications and Proceedings, Application Requirements and Procedures. § 1.959 Computation of average terrain elevation. Except a...

  5. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    Title 47, Telecommunication, Vol. 5 (2010-10-01). Section 80.759, Average terrain elevation. FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), SAFETY AND SPECIAL RADIO SERVICES, STATIONS IN THE MARITIME SERVICES, Standards for Computing Public Coast Station VHF Coverage. § 80.759 Average terrain elevation. (a)(1) Draw radials...

  6. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  7. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  8. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    Title 18, Conservation of Power and Water Resources, Vol. 1 (2010-04-01). Section 301.7, Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY, REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS, AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...

  9. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  10. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and magnetic moment conservation laws in the drift theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, after which the average is performed. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)

  11. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees

  12. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  13. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  14. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
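
    As a rough illustration of the underlying idea, here is a minimal first-order detrending moving average (DMA) sketch, not the authors' higher-order implementation; the window sizes and series length are arbitrary choices. The variance of the series around its moving-average trend scales as a power of the window, and the Hurst exponent is the slope of a log-log fit:

    ```python
    import numpy as np

    def dma_variance(y, window):
        """Variance of a series around its simple (first-order) moving average."""
        kernel = np.ones(window) / window
        trend = np.convolve(y, kernel, mode="valid")  # trailing moving-average trend
        aligned = y[window - 1:]                      # align series with the trend
        return np.mean((aligned - trend) ** 2)

    rng = np.random.default_rng(0)
    y = np.cumsum(rng.standard_normal(10_000))  # Brownian-like series, H ≈ 0.5

    windows = np.array([8, 16, 32, 64, 128])
    sigma2 = np.array([dma_variance(y, n) for n in windows])

    # sigma(n) ~ n^H: estimate the Hurst exponent from a log-log fit
    H = np.polyfit(np.log(windows), 0.5 * np.log(sigma2), 1)[0]
    print(round(H, 2))
    ```

    Replacing the moving average with a higher-order polynomial fit over the same window is what the paper generalizes, yielding trend estimates at shorter effective time scales without changing the window.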

  15. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L¹-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases.

  16. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure

  17. Concentrations and congener profiles of chlorinated paraffins in domestic polymeric products in China.

    Science.gov (United States)

    Wang, Chu; Gao, Wei; Liang, Yong; Wang, Yawei; Jiang, Guibin

    2018-03-21

    Chlorinated paraffins (CPs) are widely used in domestic polymeric products as plasticizers and fire retardants. In this study, concentrations and congener profiles of short-chain and medium-chain chlorinated paraffins (SCCPs and MCCPs) were investigated in domestic polymeric products, including plastics, rubber and food packaging in China. The average concentrations of SCCPs in polyethylene terephthalate (PET), polypropylene (PP), polyethylene (PE) and food packaging were 234, 3968, 150 and 188 ng/g, respectively, and the corresponding average concentrations of MCCPs in these samples were 37.4, 2537, 208 and 644 ng/g, respectively. The concentrations of CPs in rubber and polyvinyl chloride (PVC) were significantly higher than in other matrices. The highest concentrations of SCCPs and MCCPs were found in a PVC cable sheath, at 191 mg/g and 145 mg/g, respectively. Congener group profile analysis indicated that C₁₁ and C₁₃ congener groups were predominant among the carbon homologues of SCCPs, and C₁₄ congener groups were predominant in MCCPs. The high levels of SCCPs and MCCPs in domestic polymeric products imply that they might be a significant source of environmental contamination and human exposure.

  18. Quantifying metabolic heterogeneity in head and neck tumors in real time: 2-DG uptake is highest in hypoxic tumor regions.

    Directory of Open Access Journals (Sweden)

    Erica C Nakajima

    Full Text Available Intratumoral metabolic heterogeneity may increase the likelihood of treatment failure due to the presence of a subset of resistant tumor cells. Using a head and neck squamous cell carcinoma (HNSCC) xenograft model and a real-time fluorescence imaging approach, we tested the hypothesis that tumors are metabolically heterogeneous, and that tumor hypoxia alters patterns of glucose uptake within the tumor. Cal33 cells were grown as xenograft tumors (n = 16) in nude mice after identification of this cell line's metabolic response to hypoxia. Tumor uptake of fluorescent markers identifying hypoxia, glucose import, or vascularity was imaged simultaneously using fluorescent molecular tomography. The variability of intratumoral 2-deoxyglucose (IR800-2-DG) concentration was used to assess tumor metabolic heterogeneity, which was further investigated using immunohistochemistry for expression of key metabolic enzymes. HNSCC tumors in patients were assessed for intratumoral variability of ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG) uptake in clinical PET scans. IR800-2-DG uptake in hypoxic regions of Cal33 tumors was 2.04 times higher compared to the whole tumor (p = 0.0001). IR800-2-DG uptake in tumors containing hypoxic regions was more heterogeneous compared to tumors lacking a hypoxic signal. Immunohistochemistry staining for HIF-1α, carbonic anhydrase 9, and ATP synthase subunit 5β confirmed xenograft metabolic heterogeneity. We detected heterogeneous ¹⁸F-FDG uptake within patient HNSCC tumors, and the degree of heterogeneity varied amongst tumors. Hypoxia is associated with increased intratumoral metabolic heterogeneity. ¹⁸F-FDG PET scans may be used to stratify patients according to the metabolic heterogeneity within their tumors, which could be an indicator of prognosis.

  19. Northern Marshall Islands radiological survey: radionuclide concentrations in fish and clams and estimated doses via the marine pathway

    International Nuclear Information System (INIS)

    Robison, W.L.; Noshkin, V.E.; Phillips, W.A.; Eagle, R.J.

    1981-01-01

    The survey consisted, in part, of an aerial radiological reconnaissance to map the external gamma-ray exposure rates. As a secondary phase, terrestrial and marine samples were collected to assess the radiological dose from pertinent food chains to atoll inhabitants. The marine sample collection, processing, and dose assessment methodology are presented, as well as the concentration data for ⁹⁰Sr, ¹³⁷Cs, ²³⁸Pu, ²³⁹⁺²⁴⁰Pu, ²⁴¹Am, and any of the other gamma emitters in fish and clam muscle tissue from the different species collected. Doses are calculated from the average radionuclide concentrations in fish and clam muscle tissue assuming an average daily intake of 200 and 10 g, respectively. The ⁹⁰Sr concentration in muscle tissue is very low and there is little difference in the average concentrations among the different fish from different atolls or islands. The ²³⁹⁺²⁴⁰Pu concentration in the muscle tissue of all reef species, however, is higher than that in pelagic lagoon fish. In contrast, ¹³⁷Cs concentrations are lowest in the muscle tissue of the bottom-feeding reef species and highest in pelagic lagoon fish. Recent measurements of radionuclide concentrations in fish muscle tissue and other marine dietary items from international sources show that the average concentrations in species from the Marshall Islands are comparable to those in fish typically consumed as food in the United States and are generally lower than those in most international marine dietary items. The whole-body dose rates based on continuous consumption of 200 g/d of fish range from 0.028 to 0.1 mrem/y; the bone-marrow dose rates range from 0.029 to 0.12 mrem/y. The dose commitments, or 30-y integral doses, range from 0.00063 to 0.0022 rem for the whole body and from 0.00065 to 0.0032 rem for the bone marrow

  20. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log₂ k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kⁿ pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description.
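
    The entropy lower bound mentioned in the abstract is easy to compute directly. A small sketch (the uniform-distribution example is ours, not from the chapter):

    ```python
    import math

    def entropy_lower_bound(probs, k=2):
        """Entropy lower bound on the minimum average depth of a decision
        tree over a k-valued information system: H(p) / log2(k)."""
        h = -sum(p * math.log2(p) for p in probs if p > 0)
        return h / math.log2(k)

    # Uniform distribution over 8 outcomes: any binary decision tree needs
    # average depth >= 3, and a balanced tree attains it exactly.
    print(entropy_lower_bound([1/8] * 8))  # 3.0
    ```

    For skewed distributions the bound drops below log₂ of the number of outcomes, which is why unbalanced trees (like Huffman-style prefix codes) can achieve a smaller average depth.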

  1. Measurements and predictors of on-road ultrafine particle concentrations and associated pollutants in Los Angeles

    Energy Technology Data Exchange (ETDEWEB)

    Fruin, S. [California Air Resources Board, Sacramento (United States); University of Southern California, Los Angeles (United States). Keck School of Medicine, Department of Preventive Medicine; Westerdahl, D.; Sax, T. [California Air Resources Board, Sacramento (United States); Sioutas, C. [University of Southern California, Los Angeles (United States). Civil and Environmental Engineering; Fine, P.M. [University of Southern California, Los Angeles (United States). Civil and Environmental Engineering; South Coast Air Quality Management District, Diamond Bar, CA (United States)

    2008-01-15

    Motor vehicles are the dominant source of oxides of nitrogen (NOₓ), particulate matter (PM), and certain air toxics (e.g., benzene, 1,3-butadiene) in urban areas. On roadways, motor vehicle-related pollutant concentrations are typically many times higher than ambient concentrations. Due to the high air exchange rates typical of moving vehicles, this makes time spent in vehicles on roadways a major source of exposure. This paper presents on-road measurements for Los Angeles freeways and arterial roads taken from a zero-emission electric vehicle outfitted with real-time instruments. The objective was to characterize air pollutant concentrations on roadways and identify the factors associated with the highest concentrations. Our analysis demonstrated that on freeways, concentrations of ultrafine particles (UFPs), black carbon, nitric oxide, and PM-bound polycyclic aromatic hydrocarbons (PM-PAH) are generated primarily by diesel-powered vehicles, despite the relatively low fraction (∼6%) of diesel-powered vehicles on Los Angeles freeways. However, UFP concentrations on arterial roads appeared to be driven primarily by proximity to gasoline-powered vehicles undergoing hard accelerations. Concentrations were roughly one-third of those on freeways. By using a multiple regression model for the freeway measurements, we were able to explain 60-70% of the variability in concentrations of UFP, black carbon, nitric oxide, and PM-PAH using measures of diesel truck density and hour of day (as an indicator of wind speed). Freeway concentrations of these pollutants were also well correlated with readily available annual average daily truck counts, potentially allowing improved population exposure estimates for epidemiology studies. Based on these roadway measurements and average driving time, it appears that 33-45% of total UFP exposure for Los Angeles residents occurs due to time spent traveling in vehicles. (author)

  2. Measurements and predictors of on-road ultrafine particle concentrations and associated pollutants in Los Angeles

    International Nuclear Information System (INIS)

    Fruin, S.; Sioutas, C.

    2008-01-01

    Motor vehicles are the dominant source of oxides of nitrogen (NOₓ), particulate matter (PM), and certain air toxics (e.g., benzene, 1,3-butadiene) in urban areas. On roadways, motor vehicle-related pollutant concentrations are typically many times higher than ambient concentrations. Due to the high air exchange rates typical of moving vehicles, this makes time spent in vehicles on roadways a major source of exposure. This paper presents on-road measurements for Los Angeles freeways and arterial roads taken from a zero-emission electric vehicle outfitted with real-time instruments. The objective was to characterize air pollutant concentrations on roadways and identify the factors associated with the highest concentrations. Our analysis demonstrated that on freeways, concentrations of ultrafine particles (UFPs), black carbon, nitric oxide, and PM-bound polycyclic aromatic hydrocarbons (PM-PAH) are generated primarily by diesel-powered vehicles, despite the relatively low fraction (∼6%) of diesel-powered vehicles on Los Angeles freeways. However, UFP concentrations on arterial roads appeared to be driven primarily by proximity to gasoline-powered vehicles undergoing hard accelerations. Concentrations were roughly one-third of those on freeways. By using a multiple regression model for the freeway measurements, we were able to explain 60-70% of the variability in concentrations of UFP, black carbon, nitric oxide, and PM-PAH using measures of diesel truck density and hour of day (as an indicator of wind speed). Freeway concentrations of these pollutants were also well correlated with readily available annual average daily truck counts, potentially allowing improved population exposure estimates for epidemiology studies. Based on these roadway measurements and average driving time, it appears that 33-45% of total UFP exposure for Los Angeles residents occurs due to time spent traveling in vehicles. (author)

  3. Measurements and predictors of on-road ultrafine particle concentrations and associated pollutants in Los Angeles

    Science.gov (United States)

    Fruin, S.; Westerdahl, D.; Sax, T.; Sioutas, C.; Fine, P. M.

Motor vehicles are the dominant source of oxides of nitrogen (NOx), particulate matter (PM), and certain air toxics (e.g., benzene, 1,3-butadiene) in urban areas. On roadways, motor vehicle-related pollutant concentrations are typically many times higher than ambient concentrations. Because of the high air exchange rates typical of moving vehicles, time spent in vehicles on roadways is a major source of exposure. This paper presents on-road measurements for Los Angeles freeways and arterial roads taken from a zero-emission electric vehicle outfitted with real-time instruments. The objective was to characterize air pollutant concentrations on roadways and identify the factors associated with the highest concentrations. Our analysis demonstrated that on freeways, concentrations of ultrafine particles (UFPs), black carbon, nitric oxide, and PM-bound polycyclic aromatic hydrocarbons (PM-PAH) are generated primarily by diesel-powered vehicles, despite the relatively low fraction (~6%) of diesel-powered vehicles on Los Angeles freeways. However, UFP concentrations on arterial roads appeared to be driven primarily by proximity to gasoline-powered vehicles undergoing hard accelerations. Concentrations were roughly one-third of those on freeways. By using a multiple regression model for the freeway measurements, we were able to explain 60-70% of the variability in concentrations of UFP, black carbon, nitric oxide, and PM-PAH using measures of diesel truck density and hour of day (as an indicator of wind speed). Freeway concentrations of these pollutants were also well correlated with readily available annual average daily truck counts, potentially allowing improved population exposure estimates for epidemiology studies. Based on these roadway measurements and average driving time, it appears that 33-45% of total UFP exposure for Los Angeles residents occurs due to time spent traveling in vehicles.

  4. The classification of PM10 concentrations in Johor Based on Seasonal Monsoons

    Science.gov (United States)

    Hamid, Hazrul Abdul; Hanafi Rahmat, Muhamad; Aisyah Sapani, Siti

    2018-04-01

Air is an essential resource for life. Contaminated air can adversely affect human health and the environment, especially during the monsoon seasons. Contamination occurs as a result of human activity and haze. Several pollutants are present in the air, one of which is PM10. Secondary data were obtained from the Department of Environment for 2010 to 2014 and analyzed using hourly averages of PM10 concentrations. This paper examined the relation between PM10 concentrations and the monsoon seasons (Northeast Monsoon and Southwest Monsoon) in Larkin and Pasir Gudang. The concentration of PM10 was expected to be higher during the Southwest Monsoon, as it is a dry season, and the data confirmed that the highest PM10 concentrations recorded from 2010 to 2014 occurred during this monsoon. The characteristics of PM10 concentration were compared using descriptive statistics for each monsoon season and classified using hierarchical cluster analysis (Ward's method). The annual average PM10 concentration during the Southwest Monsoon exceeded the standard set by the Malaysia Ambient Air Quality Guidelines (50 μg/m3), while the PM10 concentration during the Northeast Monsoon was below the acceptable level at both stations. The dendrograms showed two clusters for each monsoon season at both stations, except for the PM10 concentration during the Northeast Monsoon in Larkin, which was classified into three clusters due to the haze in 2010. Overall, based on the clustering for every monsoon season at both stations and on the descriptive statistics, the concentration of PM10 in 2013 was higher than in the other years.

  5. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
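The excerpt defines a volume-weighted annual average: the sum of V_i × S_i over all batches, divided by the total volume. A minimal sketch with invented batch data:

```python
# Volume-weighted annual average sulfur level, per the variables defined in
# the 40 CFR 80.205 excerpt above: average = sum(V_i * S_i) / sum(V_i).
# The batch volumes (gallons) and sulfur contents (ppm) below are invented.
batches = [(100_000, 30.0), (250_000, 25.0), (150_000, 40.0)]  # (V_i, S_i)

total_vs = sum(v * s for v, s in batches)
total_v = sum(v for v, _ in batches)
avg_sulfur = total_vs / total_v
print(f"annual average sulfur: {avg_sulfur:.1f} ppm")  # → 30.5 ppm
```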

  6. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
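Fleet average fuel economy under this subpart is computed as a production-weighted harmonic mean (total volume divided by the sum of each model's volume over its fuel economy), not an arithmetic mean. A sketch with an invented model lineup:

```python
# Sales-weighted harmonic mean fuel economy (CAFE-style averaging):
# average = sum(n_i) / sum(n_i / mpg_i). The lineup below is invented.
fleet = [(10_000, 30.0), (5_000, 40.0), (5_000, 20.0)]  # (sales, mpg)

total_sales = sum(n for n, _ in fleet)
cafe = total_sales / sum(n / mpg for n, mpg in fleet)
print(f"fleet average: {cafe:.2f} mpg")
```

Note the result (about 28.2 mpg) is lower than the arithmetic mean of 30 mpg; the harmonic mean correctly weights fuel *consumed* rather than miles per gallon.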

  7. Distribution of gaseous and particle-bound Hg concentrations at the sites representative for urban and non-urban zones of Silesia Province

    Directory of Open Access Journals (Sweden)

    Pyta Halina

    2018-01-01

Full Text Available The basic features of the distributions of total gaseous mercury (TGM) and particle-bound mercury (PBM) concentrations were determined for five locations representative of the urban (Bielsko-Biała, Lubliniec, Zabrze) and rural (Godów, Złoty Potok) areas of Silesia Province. Gaseous mercury concentrations were measured (1) continuously, with automatic 1-h TGM measurements in Złoty Potok and Zabrze, and (2) non-continuously, with manual 24-h TGM measurements with pre-concentration of the Hg on gold traps (Bielsko-Biała, Lubliniec, Godów). The PBM concentrations were measured non-continuously by taking PM2.5 samples. The Hg content was determined using a CVAAS method. The highest average TGM concentration was recorded in Zabrze (2.8 ng/m3); significantly lower values (2.0 ng/m3) were found in Bielsko-Biała and at the non-urban station in Godów, and the lowest concentrations (<2.0 ng/m3) were observed in Lubliniec and at the regional background station in Złoty Potok. The measured TGM concentrations exceeded the European average level of 1.5 ng/m3 (AirBase, 2014). The highest average PBM concentration, associated with PM2.5, was obtained in Zabrze (70 pg/m3); results more than 20% lower were obtained in Bielsko-Biała and Godów, and the lowest values (lower by about 40% in comparison with Zabrze) were obtained in Lubliniec and Złoty Potok. Moreover, an enrichment of the Hg concentration in PM was observed with increasing PM content during the heating season.

  8. Comparison of two-concentration with multi-concentration linear regressions: Retrospective data analysis of multiple regulated LC-MS bioanalytical projects.

    Science.gov (United States)

    Musuku, Adrien; Tan, Aimin; Awaiye, Kayode; Trabelsi, Fethi

    2013-09-01

Linear calibration is usually performed using eight to ten calibration concentration levels in regulated LC-MS bioanalysis because a minimum of six are specified in regulatory guidelines. However, we have previously reported that two-concentration linear calibration is as reliable as or even better than using multiple concentrations. The purpose of this research is to compare two-concentration with multiple-concentration linear calibration through retrospective data analysis of multiple bioanalytical projects that were conducted in an independent regulated bioanalytical laboratory. A total of 12 bioanalytical projects were randomly selected: two validations and two studies for each of the three most commonly used types of sample extraction methods (protein precipitation, liquid-liquid extraction, solid-phase extraction). When the existing data were retrospectively linearly regressed using only the lowest and the highest concentration levels, no extra batch failure/QC rejection was observed and the differences in accuracy and precision between the original multi-concentration regression and the new two-concentration linear regression are negligible. Specifically, the differences in overall mean apparent bias (square root of mean individual bias squares) are within the ranges of -0.3% to 0.7% and 0.1-0.7% for the validations and studies, respectively. The differences in mean QC concentrations are within the ranges of -0.6% to 1.8% and -0.8% to 2.5% for the validations and studies, respectively. The differences in %CV are within the ranges of -0.7% to 0.9% and -0.3% to 0.6% for the validations and studies, respectively. The average differences in study sample concentrations are within the range of -0.8% to 2.3%. With two-concentration linear regression, an average of 13% of time and cost could have been saved for each batch, together with 53% of saving in the lead-in for each project (the preparation of working standard solutions, spiking, and aliquoting). Furthermore …
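The comparison described above can be sketched numerically. The calibrator levels, slope, and noise below are invented; the point is only that, when the response is truly linear, a line through the lowest and highest calibrators back-calculates a QC sample essentially as well as the full least-squares line:

```python
import numpy as np

# Synthetic eight-level calibration curve with a small amount of noise.
conc = np.array([1, 2, 5, 10, 20, 50, 100, 200], float)  # calibrator concs
rng = np.random.default_rng(1)
resp = 0.05 * conc + rng.normal(0, 0.002, conc.size)     # detector response

# Multi-concentration calibration: least squares over all eight levels.
slope_m, intercept_m = np.polyfit(conc, resp, 1)

# Two-concentration calibration: line through lowest and highest only.
slope_2 = (resp[-1] - resp[0]) / (conc[-1] - conc[0])
intercept_2 = resp[0] - slope_2 * conc[0]

# Back-calculate a hypothetical QC sample (true concentration 75) both ways.
qc_resp = 0.05 * 75.0
qc_multi = (qc_resp - intercept_m) / slope_m
qc_two = (qc_resp - intercept_2) / slope_2
print(qc_multi, qc_two)  # both close to 75
```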

  9. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  10. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization

  11. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  12. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  13. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
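The idea of extracting phase from a single fringe pattern via the Hilbert transform can be sketched as follows. The fringe here is a synthetic, exactly periodic cosine rather than a Bessel-type time-average fringe (which would first need its bias removed), and the FFT-based analytic-signal construction is a standard equivalent of `scipy.signal.hilbert`:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal: zero negative frequencies, double positive ones."""
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

# Synthetic fringe: 12 full cosine cycles, exactly periodic over the record.
n = 1024
x = np.arange(n) / n
true_phase = 2 * np.pi * 12 * x
fringe = np.cos(true_phase)

z = analytic_signal(fringe)           # complex fringe ~ exp(i * phase)
recovered = np.unwrap(np.angle(z))    # continuous phase map
err = np.abs(recovered - true_phase)
print(err.max())
```

Because only one fringe pattern is needed, no phase stepping or bias modulation is required, which is the advantage claimed in the abstract.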

  14. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of a nonperturbative character. With increasing final hadron state energy, the leading contribution to the average multiplicity comes from a parton subprocess due to the production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e- annihilation at high energies tends to unity

  15. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software....

  16. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

A quality assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated by the averaged statistics

  17. Attractiveness of the female body: Preference for the average or the supernormal?

    Directory of Open Access Journals (Sweden)

    Marković Slobodan

    2017-01-01

Full Text Available The main purpose of the present study was to contrast two hypotheses of female body attractiveness. The first is the “preference-for-the-average” hypothesis: the most attractive female body is the one that represents the average body proportions for a given population. The second is the “preference-for-the-supernormal” hypothesis: according to the so-called “peak shift effect”, the most attractive female body is more feminine than the average. We investigated the preference for three female body characteristics: waist-to-hip ratio (WHR), buttocks and breasts. There were 456 participants of both genders. Using a program for computer animation (DAZ 3D), three sets of stimuli were generated (WHR, buttocks and breasts). Each set included six stimuli ranked from the lowest to the highest femininity level. Participants were asked to choose the stimulus within each set which they found most attractive (task 1) and average (task 2). One group of participants judged the body parts presented in the global context (whole body), while the other group judged the stimuli in the local context (isolated body parts only). Analyses showed that the most attractive WHR, buttocks and breasts are more feminine (meaning smaller for WHR and larger for breasts and buttocks) than the average ones, for both genders and in both presentation contexts. The effect of gender was obtained only for the most attractive breasts: males prefer larger breasts than females do. Finally, the most attractive and average WHR and breasts were less feminine in the local than in the global context. These results support the preference-for-the-supernormal hypothesis: all analyses showed that both male and female participants preferred female body parts which are more feminine than those judged average. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. 179033]

  18. Radiocesium concentrations in epigeic earthworms at various distances from the Fukushima Nuclear Power Plant 6 months after the 2011 accident.

    Science.gov (United States)

    Hasegawa, Motohiro; Ito, Masamichi T; Kaneko, Shinji; Kiyono, Yoshiyuki; Ikeda, Shigeto; Makino, Shun'ichi

    2013-12-01

    We investigated the concentrations of radiocesium in epigeic earthworms, litter, and soil samples collected from forests in Fukushima Prefecture 6 months after the Fukushima Dai-ichi Nuclear Power Plant accident in 2011. Radiocesium concentrations in litter accumulated on the forest floor were higher than those in the soil (0-5 cm depth). The highest average (134+137)Cs concentrations in earthworms (approximately 19 Bq g(-1) of wet weight with gut contents and 108 Bq g(-1) of dry weight without gut contents) were recorded from a plot that experienced an air dose rate of 3.1 μSv h(-1), and earthworm concentrations were found to increase with litter and/or soil concentrations. Average (134)Cs and (137)Cs concentrations (with or without gut contents) were intermediate between accumulated litter and soil. Different species in the same ecological groups on the same plots had similar concentrations because of their use of the same habitats or their similar physiological characteristics. The contribution of global fallout (137)Cs to earthworms with gut contents was calculated to be very low, and most (137)Cs in earthworms was derived from the Fukushima accident. Transfer factors from accumulated litter to earthworms, based on their dry weights, ranged from 0.21 to 0.35, in agreement with previous field studies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Seasonal variation in radiocesium concentrations in three tree species

    International Nuclear Information System (INIS)

    Garten, C.R. Jr.; Briese, L.A.; Sharitz, R.R.; Gentry, J.B.

    1975-01-01

    Radiocesium concentrations in leaves and stems of black willow (Salix nigra), wax myrtle (Myrica cerifera), and tag alder (Alnus serrulata) trees inhabiting a floodplain contaminated by production-reactor effluents were measured over 1 year. In willow and myrtle trees, leaf radiocesium levels were highest in the spring and declined during the growing season; stem levels remained relatively unchanged or exhibited a slight increase. Seasonal changes in alder tree parts depended on the site examined. The relationship among component parts was essentially consistent across species and collecting sites in the summer. The radiocesium concentrations in order of rank were: roots greater than or equal to leaves greater than stems. Species differences in component-part radiocesium levels were dependent on the part sampled and the collecting site examined. Mean soil to plant-part concentration factors in summer ranged from 0.9 to 7.6, and species means across leaves, stems, and roots averaged 2.1, 3.8, and 6.2 for alder, willow, and myrtle trees, respectively

  20. Airborne Precursors Predict Maternal Serum Perfluoroalkyl Acid Concentrations.

    Science.gov (United States)

    Makey, Colleen M; Webster, Thomas F; Martin, Jonathan W; Shoeib, Mahiba; Harner, Tom; Dix-Cooper, Linda; Webster, Glenys M

    2017-07-05

    Human exposure to persistent perfluoroalkyl acids (PFAAs), including perfluorooctanoic acid (PFOA), perfluorononanoic acid (PFNA), and perfluorooctanesulfonate (PFOS), can occur directly from contaminated food, water, air, and dust. However, precursors to PFAAs (PreFAAs), such as dipolyfluoroalkyl phosphates (diPAPs), fluorotelomer alcohols (FTOHs), perfluorooctyl sulfonamides (FOSAs), and sulfonamidoethanols (FOSEs), which can be biotransformed to PFAAs, may also be a source of exposure. PFAAs were analyzed in 50 maternal sera samples collected in 2007-2008 from participants in Vancouver, Canada, while PFAAs and PreFAAs were measured in matching samples of residential bedroom air collected by passive sampler and in sieved vacuum dust (<150 μm). Concentrations of PreFAAs were higher than for PFAAs in air and dust. Positive associations were discovered between airborne 10:2 FTOH and serum PFOA and PFNA and between airborne MeFOSE and serum PFOS. On average, serum PFOS concentrations were 2.3 ng/mL (95%CI: 0.40, 4.3) higher in participants with airborne MeFOSE concentrations in the highest tertile relative to the lowest tertile. Among all PFAAs, only PFNA in air and vacuum dust predicted serum PFNA. Results suggest that airborne PFAA precursors were a source of PFOA, PFNA, and PFOS exposure in this population.

  1. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
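The Hargreaves estimate of atmospheric evaporative demand referred to above has a simple closed form. The sketch below assumes the common formulation ET0 = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax − Tmin), with Ra the extraterrestrial radiation expressed as equivalent evaporation in mm/day; the input values are invented for illustration:

```python
import math

def hargreaves_et0(ra_mm_day, t_max, t_min):
    """Hargreaves reference evapotranspiration (mm/day) from temperature
    extremes and extraterrestrial radiation expressed in mm/day."""
    t_mean = (t_max + t_min) / 2.0
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Invented example inputs; subtracting precipitation from ET0 on the same
# monthly grid gives the water balance described in the abstract.
et0 = hargreaves_et0(ra_mm_day=14.5, t_max=24.0, t_min=10.0)
print(f"ET0 ≈ {et0:.2f} mm/day")
```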

  2. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  3. High Average Power Fiber Laser for Satellite Communications, Phase I

    Data.gov (United States)

National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  4. A time averaged background compensator for Geiger-Mueller counters

    International Nuclear Information System (INIS)

    Bhattacharya, R.C.; Ghosh, P.K.

    1983-01-01

    The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel providing time averaged compensation. The method suits portable instruments. (orig.)

  5. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
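The central observable above, the time averaged MSD of a single trajectory, is easy to state in code. A minimal sketch on a synthetic random walk, for which the time averaged MSD grows roughly linearly with the lag:

```python
import numpy as np

def time_averaged_msd(x, lag):
    """Time averaged MSD of one trajectory at a given lag:
    mean over t of (x[t + lag] - x[t])**2."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

# Synthetic stand-in for a price series: a unit-variance random walk,
# for which the expected time averaged MSD at lag L is simply L.
rng = np.random.default_rng(42)
x = np.cumsum(rng.normal(0.0, 1.0, 100_000))
msd_10 = time_averaged_msd(x, 10)
msd_100 = time_averaged_msd(x, 100)
print(msd_10, msd_100)  # roughly 10 and 100
```

For geometric Brownian motion, as in the Black-Scholes-Merton model cited above, the same estimator would be applied to log-prices.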

  6. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  7. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  8. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μsec and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
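The quoted 36 dB maximum S/N improvement follows directly from averaging N = 2^12 sweeps: coherent averaging reduces the noise RMS by sqrt(N), i.e. 10·log10(4096) ≈ 36 dB. A sketch with synthetic sweeps:

```python
import numpy as np

# Average 2**12 noisy repetitions of a 256-channel signal and measure how
# much the unit-RMS noise is suppressed. Signal shape and noise are invented.
rng = np.random.default_rng(7)
n_sweeps, n_channels = 2 ** 12, 256
signal = np.sin(np.linspace(0, 2 * np.pi, n_channels))        # true waveform
sweeps = signal + rng.normal(0, 1.0, (n_sweeps, n_channels))  # noisy sweeps

averaged = sweeps.mean(axis=0)
noise_rms = np.std(averaged - signal)       # residual noise after averaging
gain_db = 20 * np.log10(1.0 / noise_rms)    # per-sweep noise was unit RMS
print(f"SNR improvement ≈ {gain_db:.1f} dB")
```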

  9. The average-shadowing property and topological ergodicity for flows

    International Nuclear Information System (INIS)

    Gu Rongbao; Guo Wenjing

    2005-01-01

    In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic

  10. Measures of ozone concentrations using passive sampling in forests of South Western Europe

    Energy Technology Data Exchange (ETDEWEB)

    Sanz, M.J. [Fundacion CEAM, Charles R. Darwin 14, Parc Tecnologic, E-46980 Paterna, Valencia (Spain)]. E-mail: mjose@ceam.es; Calatayud, V. [Fundacion CEAM, Charles R. Darwin 14, Parc Tecnologic, E-46980 Paterna, Valencia (Spain); Sanchez-Pena, G. [Servicio de Proteccion de los Montes contra Agentes Nocivos, Direccion General para la Biodiversidad, Ministerio de Medio Ambiente, Gran Via de San Francisco, 4, E-28005, Madrid (Spain)

    2007-02-15

    Ambient ozone concentrations were measured with passive samplers in the framework of the EU and UN/ECE Level II forest monitoring programme. Data from France, Italy, Luxembourg, Spain and Switzerland are reported for 2000-2002, covering the period from April to September. The number of plots increased from 67 in 2000 to 83 in 2002. The year 2001 experienced the highest ozone concentrations, reflecting more stable summer meteorological conditions. Average 6-month ozone concentrations above 45 ppb were measured this year in 40.3% of the plots, in contrast with the less than 21% measured in the other 2 years. Gradients of increasing ozone levels were observed from North to South and with altitude. Comments are made on the regional trends and on the time frame of the higher ozone episodes. Also, some recommendations enabling a better comparison between plots are provided. - Ozone concentrations in forested areas of SW Europe during the period 2000-2002 showed highest values in 2001, as well as a tendency to increase towards the South and with altitude.

  11. Measures of ozone concentrations using passive sampling in forests of South Western Europe

    International Nuclear Information System (INIS)

    Sanz, M.J.; Calatayud, V.; Sanchez-Pena, G.

    2007-01-01

    Ambient ozone concentrations were measured with passive samplers in the framework of the EU and UN/ECE Level II forest monitoring programme. Data from France, Italy, Luxembourg, Spain and Switzerland are reported for 2000-2002, covering the period from April to September. The number of plots increased from 67 in 2000 to 83 in 2002. The year 2001 experienced the highest ozone concentrations, reflecting more stable summer meteorological conditions. Average 6-month ozone concentrations above 45 ppb were measured this year in 40.3% of the plots, in contrast with the less than 21% measured in the other 2 years. Gradients of increasing ozone levels were observed from North to South and with altitude. Comments are made on the regional trends and on the time frame of the higher ozone episodes. Also, some recommendations enabling a better comparison between plots are provided. - Ozone concentrations in forested areas of SW Europe during the period 2000-2002 showed highest values in 2001, as well as a tendency to increase towards the South and with altitude

  12. Evaluation of Chlorinated Hydrocarbon Concentrations in Tehran’s Districts Drinking Water

    Directory of Open Access Journals (Sweden)

    Alireza Pardakhti

    2012-01-01

Full Text Available In this study, Tehran’s drinking water was evaluated for the presence of chlorinated hydrocarbons during the spring and summer of 2009. Chlorinated hydrocarbons are an important class of environmental pollutants that cause adverse health effects on the human kidney, liver and central nervous system. Six water districts in the city of Tehran were selected for drinking water sampling, as well as one location outside the city limits. The samples were analyzed by GC/MS using EPA method 8260. The average concentrations of 1,1-dichloroethylene, 1,2-dichloromethane, tetrachloromethane, trichloroethylene and tetrachloroethylene were determined during a 7-month period; the results were 0.04 ppb, 0.52 ppb, 0.01 ppb, 0.24 ppb and 0.03 ppb, respectively. The highest concentration of a chlorinated hydrocarbon observed in Tehran’s drinking water was that of trichloroethylene and the lowest was that of tetrachloromethane. Districts 5 and 6 showed the highest concentrations of chlorinated hydrocarbons in the city of Tehran.

  13. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. Calculations for s-wave resonances have been carried out and compared with other work

  14. Annual average equivalent dose of workers from the health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    Personnel monitoring data from 1985 to 1991 for workers in the health area were studied, providing a general overview of changes in the annual average equivalent dose. Two aspects were examined: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses across the same sectors in different hospitals. (C.G.C.)

  15. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; 
Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; 
Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  16. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used, with dependence measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
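The EWMA statistic and its ARL evaluation described above can be sketched as follows. This is a minimal single-chart simulation for independent exponential observations (the copula-coupled bivariate setting of the paper is not reproduced); the smoothing constant, control band and mean shift are arbitrary illustrative choices:

```python
import random

def ewma_run_length(lam=0.1, shifted_mean=1.5, limit=0.5, seed=0, max_n=10_000):
    """Run one EWMA chart on exponential data whose mean has shifted from 1.0;
    return the first time the statistic leaves the in-control band."""
    rng = random.Random(seed)
    z = 1.0                                      # start at the in-control mean
    for i in range(1, max_n + 1):
        x = rng.expovariate(1.0 / shifted_mean)  # exponential with mean 1.5
        z = lam * x + (1 - lam) * z              # EWMA update
        if abs(z - 1.0) > limit:                 # signal: z left the band
            return i
    return max_n

# Average Run Length estimated over independently seeded charts;
# a smaller ARL under a shift means faster detection.
arl = sum(ewma_run_length(seed=s) for s in range(100)) / 100
```

The paper compares such ARLs across dependence structures induced by different copulas; this sketch only shows the ARL mechanics for the independent case.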

  17. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    Full Text Available The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The biases induced by the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.

  18. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The biases induced by the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
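The averaging bias addressed here is a Jensen-inequality effect: the IPDA retrieval applies a logarithm to the measured signals, so averaging noisy signals before the log and averaging per-shot retrievals give systematically different answers. A minimal numeric sketch (the noise level and signal ratio are hypothetical, not MERLIN's actual processing chain):

```python
import math
import random

rng = random.Random(42)
true_ratio = 1.25                 # hypothetical noiseless off/on signal ratio
shots = [true_ratio * (1 + rng.gauss(0.0, 0.1)) for _ in range(20_000)]

# Retrieve (take the log) shot by shot, then average the retrievals:
mean_of_logs = sum(math.log(s) for s in shots) / len(shots)
# Average the raw signals first, then retrieve once:
log_of_mean = math.log(sum(shots) / len(shots))

# Jensen's inequality for the concave log makes the per-shot estimate
# low by roughly sigma^2 / 2 = 0.005 in log units.
bias_per_shot = mean_of_logs - math.log(true_ratio)
bias_avg_first = log_of_mean - math.log(true_ratio)
```

Averaging the signals first largely removes this particular bias, which is the motivation for averaging over 50 km before applying the lidar equation and then correcting the residual non-linear terms.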

  19. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    Wetterich, C.

    1991-08-01

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  20. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  1. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is a central quantity in the study of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique of finite patterns for the integral of geodesic distance with respect to the self-similar measure on the Sierpinski tetrahedron.
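For any finite graph, the average geodesic distance referred to above can be computed directly by breadth-first search from every node; a minimal sketch on a 4-cycle (an illustrative toy graph, not the Sierpinski skeleton networks themselves):

```python
from collections import deque

def average_distance(adj):
    """Mean shortest-path length over all node pairs, by BFS from each node.
    Each unordered pair is counted twice, which leaves the mean unchanged."""
    nodes = list(adj)
    total, pairs = 0, 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:                       # standard BFS on an unweighted graph
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs

# 4-cycle: from each node the distances are 1, 1, 2 -> average 4/3
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
avg = average_distance(cycle4)
```

The paper's contribution is an asymptotic formula for this quantity on a growing self-similar family, where direct BFS becomes infeasible.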

  2. [PM₂.₅ Background Concentration at Different Directions in Beijing in 2013].

    Science.gov (United States)

    Li, Yun-ting; Cheng, Niam-liang; Zhang, Da-wei; Sun, Rui-wen; Dong, Xin; Sun, Nai-di; Chen, Chen

    2015-12-01

    PM₂.₅ background concentrations at different directions in Beijing in 2013 were analyzed by combining mathematical statistics, physical identification and numerical simulation (CMAQ 4.7.1), using monitoring data from six PM₂.₅ auto-monitoring sites and five meteorological sites in 2013. Results showed that background concentrations of PM₂.₅ at the northwest, northeast, eastern, southeast, southern and southwest boundary sites were between 40.3 and 85.3 µg · m⁻³ in Beijing. From the lowest to the highest, the PM₂.₅ background concentrations at the different sites were: Miyun reservoir, Badaling, Donggaocun, Yufa, Yongledian and Liulihe. The background concentration of PM₂.₅ was lowest under north wind, then under west wind, and significantly higher under south and east wind; the calculated PM₂.₅ background average concentrations were 6.5-27.9, 22.4-73.4, 67.2-91.7 and 40.7-116.1 µg · m⁻³, respectively, in these wind directions. The simulated PM₂.₅ background concentration showed a clear north-south gradient, and the surrounding area had a notable effect on the spatial distribution of the PM₂.₅ background concentration in 2013 in Beijing.
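Classifying hourly concentrations by wind direction and averaging within each sector, as done above, can be sketched as follows (the values are illustrative, not the Beijing data):

```python
def sector_means(records):
    """Average concentration per wind sector from (sector, value) pairs."""
    sums, counts = {}, {}
    for sector, value in records:
        sums[sector] = sums.get(sector, 0.0) + value
        counts[sector] = counts.get(sector, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

# hypothetical hourly PM2.5 values (ug/m^3) tagged by wind direction
obs = [("N", 20.0), ("N", 30.0), ("S", 90.0), ("S", 110.0), ("E", 70.0)]
means = sector_means(obs)
```

Applied to a full year of boundary-site data, this kind of grouping yields the per-direction background ranges the study reports.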

  3. Determination of viscosity-average molecular weight of chitosan using intrinsic viscosity measurement

    International Nuclear Information System (INIS)

    Norzita Yacob; Norhashidah Talip; Maznah Mahmud; Nurul Aizam Idayu Mat Sani; Nor Akma Samsuddin; Norafifah Ahmad Fabillah

    2013-01-01

    Determination of molecular weight by intrinsic viscosity measurement is a simple method for characterization of chitosan. To study the effect of radiation on molecular weight, chitosan was first irradiated using electron beam at different doses prior to measurement. Different concentrations of chitosan were prepared and measurement was done at room temperature. The flow time data was used to calculate the intrinsic viscosity by extrapolating the reduced viscosity to zero concentration. The value of intrinsic viscosity was then recalculated into the viscosity-average molecular weight using Mark-Houwink equation. (Author)

  4. Determination of Viscosity-Average Molecular Weight of Chitosan using Intrinsic Viscosity Measurement

    International Nuclear Information System (INIS)

    Norzita Yacob; Norhashidah Talip; Maznah Mahmud

    2011-01-01

    Molecular weight of chitosan can be determined by different techniques such as Gel Permeation Chromatography (GPC), Static Light Scattering (SLS) and intrinsic viscosity measurement. Determination of molecular weight by intrinsic viscosity measurement is a simple method for characterization of chitosan. Different concentrations of chitosan were prepared and measurement was done at room temperature. The flow time data was used to calculate the intrinsic viscosity by extrapolating the reduced viscosity to zero concentration. The value of intrinsic viscosity was then recalculated into the viscosity-average molecular weight using Mark-Houwink equation. (author)
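The procedure in records 3 and 4 can be sketched numerically: fit reduced viscosity against concentration, take the intercept at zero concentration as the intrinsic viscosity, then invert the Mark-Houwink relation [η] = K·Mᵃ. The dilution-series numbers are made up, and K and a are values often quoted for chitosan in the literature; treat all of them as assumptions:

```python
def intrinsic_viscosity(concs, eta_red):
    """Least-squares line through reduced viscosity vs concentration;
    the intercept at c = 0 is the intrinsic viscosity [eta]."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(eta_red) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(concs, eta_red)) / \
            sum((x - mx) ** 2 for x in concs)
    return my - slope * mx              # extrapolation to zero concentration

def mark_houwink_mw(eta_intrinsic, K, a):
    """Viscosity-average molecular weight from [eta] = K * M^a."""
    return (eta_intrinsic / K) ** (1.0 / a)

# hypothetical dilution series: concentrations (g/mL) and reduced viscosities (mL/g)
c = [0.001, 0.002, 0.003, 0.004]
eta_r = [505.0, 510.0, 515.0, 520.0]    # linear series with intercept 500
iv = intrinsic_viscosity(c, eta_r)
Mv = mark_houwink_mw(iv, K=1.81e-3, a=0.93)
```

The flow-time measurements in the papers enter through the reduced viscosities; everything after that is the extrapolation and inversion shown here.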

  5. Radon and radon daughters indoors, problems in the determination of the annual average

    International Nuclear Information System (INIS)

    Swedjemark, G.A.

    1984-01-01

    The annual average concentration of radon and radon daughters in indoor air is required both in studies such as determining the collective dose to a population and for comparison with limits. For practical reasons, measurements are often carried out over a period shorter than a year. Methods are presented for estimating the uncertainties due to temporal variations in an annual average calculated from measurements made over sampling periods of various lengths. These methods have been applied to the results of long-term measurements of radon-222 in a few houses. The possibility of using correction factors to obtain a more adequate annual average has also been studied and some examples are given. (orig.)
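The correction-factor idea can be sketched as follows: scale a short-term mean by an assumed season-to-annual factor, and combine the relative uncertainties of the measurement and of the factor in quadrature. The factor and uncertainty values below are hypothetical:

```python
import math

def annual_estimate(measured_mean, season_factor, u_meas, u_factor):
    """Annual mean estimate from a short-term mean, with its relative
    uncertainty obtained by adding the two relative uncertainties in
    quadrature (assumes they are independent)."""
    value = measured_mean * season_factor
    u_rel = math.sqrt(u_meas ** 2 + u_factor ** 2)
    return value, u_rel

# e.g. a winter measurement; indoor radon in winter typically exceeds the
# annual mean, so the assumed correction factor is below 1
winter_mean_bq_m3 = 120.0
annual, u = annual_estimate(winter_mean_bq_m3, season_factor=0.75,
                            u_meas=0.10, u_factor=0.20)
```

The dominant term is usually the correction-factor uncertainty, which is why short sampling periods give poor annual estimates.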

  6. Mercury concentrations in water, and mercury and selenium concentrations in fish from Brownlee Reservoir and selected sites in Boise and Snake Rivers, Idaho and Oregon, 2013

    Science.gov (United States)

    MacCoy, Dorene E.

    2014-01-01

    Mercury (Hg) analyses were conducted on samples of sport fish and water collected from six sampling sites in the Boise and Snake Rivers, and Brownlee Reservoir to meet National Pollutant Discharge Elimination System (NPDES) permit requirements for the City of Boise, Idaho. A water sample was collected from each site during October and November 2013 by City of Boise personnel and was analyzed by the Boise City Public Works Water Quality Laboratory. Total Hg concentrations in unfiltered water samples ranged from 0.73 to 1.21 nanograms per liter (ng/L) at five river sites; the total Hg concentration was highest (8.78 ng/L) in a water sample from Brownlee Reservoir. All Hg concentrations in water samples were less than the EPA Hg chronic aquatic life criterion in Idaho (12 ng/L). The EPA recommended a water-quality criterion of 0.30 milligrams per kilogram (mg/kg) methylmercury (MeHg) expressed as a fish-tissue residue value (wet-weight MeHg in fish tissue). MeHg residue in fish tissue is considered to be equivalent to total Hg in fish muscle tissue and is referred to as Hg in this report. The Idaho Department of Environmental Quality adopted the EPA’s fish-tissue criterion and a reasonable potential to exceed (RPTE) threshold 20 percent lower than the criterion (greater than 0.24 mg/kg), based on an average concentration of 10 fish from a receiving waterbody. NPDES-permitted discharges to waters where fish have Hg concentrations exceeding 0.24 mg/kg are said to have a reasonable potential to exceed the water-quality criterion and thus are subject to additional permit obligations, such as requirements for increased monitoring and the development of a Hg minimization plan. The Idaho Fish Consumption Advisory Program (IFCAP) issues fish advisories to protect general and sensitive populations of fish consumers and has developed an action level of 0.22 mg/kg wet weight Hg in fish tissue.
Fish consumption advisories are water body- and species-specific and are used to

  7. Effects of Dietary Zinc Pectin Oligosaccharides Chelate Supplementation on Growth Performance, Nutrient Digestibility and Tissue Zinc Concentrations of Broilers.

    Science.gov (United States)

    Wang, Zhongcheng; Yu, Huimin; Wu, Xuezhuang; Zhang, Tietao; Cui, Hu; Wan, Chunmeng; Gao, Xiuhua

    2016-10-01

    The experiment was conducted to investigate the effects of zinc pectin oligosaccharides (Zn-POS) chelate on growth performance, nutrient digestibility, and tissue zinc concentrations of Arbor Acre broilers aged from 1 to 42 days. A total of 576 1-day-old broilers were randomly assigned into 4 groups with 9 replicates per group and 16 chicks per replicate. Chicks were fed either a basal diet (control) or the basal diet supplemented with Zn-POS at 300 (Zn-POS-300), 600 (Zn-POS-600), or 900 mg/kg (Zn-POS-900), respectively, for 42 days. A 3-day metabolism trial was conducted during the last week of the feeding experiment. The average daily gain and the average daily feed intake of Zn-POS-600 were significantly higher (P < 0.05) than those of the control group, and there were differences in the apparent digestibility of dry matter, crude protein, and metabolic energy among all groups. The control group had the lowest apparent digestibility of dry matter (P < 0.05), and the apparent digestibility of dry matter in Zn-POS-600 was higher (P < 0.05) than in the other groups. The apparent digestibility of crude protein in Zn-POS-600 or Zn-POS-900 was higher (P < 0.05) than in the control, and the apparent digestibility of metabolic energy in Zn-POS-600 or Zn-POS-900 was higher (P < 0.05) than that of Zn-POS-300. Zn-POS-600 had the highest liver zinc concentrations (P < 0.05), while Zn-POS-900 had the highest pancreatic zinc concentrations (P < 0.05). Our data suggest that supplementation with 600 mg/kg Zn-POS is optimal for improving the average daily gain and the average daily feed intake, the utilization of dietary dry matter and crude protein, and tissue zinc concentrations in the liver and pancreas of broilers.

  8. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
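The pixel-wise conversion can be sketched as follows: invert Beer-Lambert attenuation to get a water thickness, then normalize by the fully saturated image so that multiplicative effects cancel. The attenuation coefficient and intensities below are hypothetical stand-ins, not calibrated beam-line values:

```python
import math

def water_thickness(I, I_dry, mu_w=0.35):
    """Water thickness per pixel from Beer-Lambert attenuation:
    I = I_dry * exp(-mu_w * t)  =>  t = ln(I_dry / I) / mu_w.
    mu_w is an assumed effective attenuation coefficient (1/cm)."""
    return math.log(I_dry / I) / mu_w

def relative_saturation(I, I_dry, I_sat):
    """Normalize by the saturated image; mu_w and other multiplicative
    factors cancel in the ratio, reducing scattering artifacts."""
    return water_thickness(I, I_dry) / water_thickness(I_sat, I_dry)

# hypothetical transmitted intensities for one pixel
t = water_thickness(I=800.0, I_dry=1000.0)
s = relative_saturation(I=800.0, I_dry=1000.0, I_sat=700.0)
```

Averaging `s` over all pixels at each imposed matric potential gives one point on the average retention curve.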

  9. Atmospheric relative concentrations in building wakes

    International Nuclear Information System (INIS)

    Ramsdell, J.V. Jr.; Simonen, C.A.

    1997-05-01

    This report documents the ARCON96 computer code developed for the U.S. Nuclear Regulatory Commission Office of Nuclear Reactor Regulation for potential use in control room habitability assessments. It includes a user's guide to the code, a description of the technical basis for the code, and a programmer's guide to the code. The ARCON96 code uses hourly meteorological data and recently developed methods for estimating dispersion in the vicinity of buildings to calculate relative concentrations at control room air intakes that would be exceeded no more than five percent of the time. The concentrations are calculated for averaging periods ranging from one hour to 30 days in duration. ARCON96 is a revised version of ARCON95, which was developed for the NRC Office of Nuclear Regulatory Research. Changes in the code permit users to simulate releases from area sources as well as point sources. The method of averaging concentrations for periods longer than 2 hours has also been changed. The change in averaging procedures increases relative concentrations for these averaging periods. In general, the increase in concentrations is less than a factor of two. The increase is greatest for relatively short averaging periods, for example, 0 to 8 hours, and diminishes as the duration of the averaging period increases
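The "exceeded no more than five percent of the time" criterion is a 95th percentile of the relative-concentration series, and the longer averaging periods are running means of the hourly values. A minimal sketch with a synthetic stand-in series (this is only the percentile/averaging bookkeeping, not ARCON96's building-wake dispersion model):

```python
def percentile95(values):
    """Value exceeded by no more than 5% of the sample (order statistic)."""
    s = sorted(values)
    return s[int(0.95 * (len(s) - 1))]

def window_averages(values, w):
    """Running means over w consecutive hours (longer averaging periods)."""
    return [sum(values[i:i + w]) / w for i in range(len(values) - w + 1)]

# synthetic hourly relative-concentration series (arbitrary stand-in values)
hourly = [float((7 * i) % 100) for i in range(1000)]
p95_1h = percentile95(hourly)
p95_8h = percentile95(window_averages(hourly, 8))
```

Averaging over longer windows smooths out peaks, so the 95th percentile of the 8-hour means falls below that of the hourly values, which is why the choice of averaging procedure matters for the reported concentrations.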

  10. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
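The pixel-to-glandular-rate conversion can be sketched with a piecewise-linear calibration curve standing in for the neural network, averaged over the breast pixels. The calibration points and pixel values below are hypothetical:

```python
def glandular_rate(pixel, curve):
    """Piecewise-linear conversion from pixel value to glandular rate (%),
    a stand-in for the neural-network conversion curve in the paper."""
    pts = sorted(curve)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= pixel <= x1:
            return y0 + (y1 - y0) * (pixel - x0) / (x1 - x0)
    raise ValueError("pixel value outside calibration range")

# hypothetical calibration from phantom images: (pixel value, glandular rate %)
curve = [(100, 0.0), (150, 50.0), (200, 100.0)]
pixels = [120, 130, 140, 160]            # sample of breast-region pixels
rates = [glandular_rate(p, curve) for p in pixels]
mean_rate = sum(rates) / len(rates)
```

The per-patient average glandular dose then follows by feeding `mean_rate` into the standard dosimetry conversion used for mammography quality control.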

  11. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  12. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A regression model for daily average diffuse sky radiation based on daily global radiation was developed using two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and a standard error of estimate of 0.092. The data were also analyzed for seasonal dependence, and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficients of determination are 0.93, 0.81, 0.94 and 0.93, and the standard errors of estimate are 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed, with a coefficient of determination of 0.92 and a standard error of estimate of 0.083. A seasonal monthly average model was also developed, with a 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs
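The diagnostics quoted above (determination coefficient and standard error of estimate) come from an ordinary least-squares fit; a self-contained sketch with made-up daily global/diffuse radiation pairs:

```python
def fit_line(x, y):
    """Ordinary least squares y = a + b*x, returning the coefficient of
    determination R^2 and the standard error of estimate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    ss_res = sum(r * r for r in resid)
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    see = (ss_res / (n - 2)) ** 0.5       # standard error of estimate
    return a, b, r2, see

# made-up daily global (x) and diffuse (y) radiation pairs, MJ/m^2
xs = [10.0, 14.0, 18.0, 22.0, 26.0]
ys = [4.1, 5.0, 5.8, 7.1, 7.9]
a, b, r2, see = fit_line(xs, ys)
```

Fitting the same form to season- or month-stratified subsets gives the family of seasonal and monthly models the abstract describes.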

  13. Genetic variability of concentration of microelements in wild sunflower species and hybrids

    Directory of Open Access Journals (Sweden)

    Kastori Rudolf R.

    2010-01-01

    Full Text Available The aim of this work was to investigate the genetic specificity of sunflower nutrition with microelements. Concentrations of essential (Zn, B, Mn, Cu, Fe and Ni) and non-essential (Cr, Al, Cd, As, Pb and Ba) micronutrients were analyzed. Five sunflower hybrids most widely grown in Serbia and different populations of wild sunflower species originating from North America, Helianthus neglectus Heiser (3), Helianthus agrophyllus T&G (3), Helianthus petiolaris Nutt. (2) and Helianthus annuus L. (4), were included in the experiment. Populations of wild sunflower species and hybrids differed significantly with respect to the concentrations of the analyzed elements. Manganese concentration was significantly higher in hybrids than in wild species. In all genotypes, Fe, B and Mn had the highest concentrations. The coefficient of variation of microelement concentration depended on the genotype and the particular element. For essential microelements it was between 3.7 and 59.5 in wild populations, whereas in hybrids it varied from 10.0 to 48.8. The coefficient of variation of non-essential microelement concentrations varied from 7.7 to 73.8 in wild populations and from 15.1 to 48.8 in hybrids. The average coefficient of variation in both wild species and hybrids was lowest for Mn and Pb; it was highest for Cr, Ni and Zn in hybrids and for Cd, Ni and Cr in wild species. The results suggest that genetic specificity with respect to the uptake of microelements is highly expressed in both wild species and hybrids. The broad genetic variability of microelement concentrations in wild species and hybrids indicates that their reactions to deficiency and/or excess of those elements are probably not the same either. This finding may be used in breeding aimed specifically at improving tolerance and the capacity to accumulate microelements in sunflower. Phytoremediation technology designed to reduce the amount of microelements in the soil could thus be advanced by utilization of such
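The coefficients of variation quoted above are the sample standard deviation expressed as a percentage of the mean; a minimal sketch with illustrative (not measured) concentrations:

```python
def coefficient_of_variation(values):
    """CV (%) = 100 * sample standard deviation / mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)   # sample variance
    return 100.0 * var ** 0.5 / mean

# illustrative Zn concentrations (mg/kg) across five genotypes
cv = coefficient_of_variation([30.0, 34.0, 26.0, 38.0, 22.0])
```

Computed per element and per genotype group, this is the statistic whose ranges (e.g. 3.7-59.5 for essential microelements in wild populations) the study compares.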

  14. Average cross sections for the 252Cf neutron spectrum

    International Nuclear Information System (INIS)

    Dezso, Z.; Csikai, J.

    1977-01-01

    A number of average cross sections have been measured for 252Cf neutrons for (n,γ), (n,p), (n,2n) and (n,α) reactions by the activation method, and for fission by fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb; as a function of target neutron number the data increase up to about N=60, with minima near closed shells. The (n,p) values lie between 0.3 mb and 113 mb; these cross sections decrease significantly with increasing threshold energy. The (n,2n) values are below 20 mb, and the (n,α) data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Zsup(4/3)/A are shown. The results obtained are summarized in tables
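A spectrum-averaged cross section is the cross section weighted by the fission-neutron spectrum, <σ> = ∫σ(E)χ(E)dE / ∫χ(E)dE. A numerical sketch using a Maxwellian approximation to the 252Cf spectrum (T ≈ 1.42 MeV is a commonly quoted effective temperature, used here as an assumption, and the cross-section shapes are toy examples):

```python
import math

def cf252_spectrum(E, T=1.42):
    """Maxwellian fission-spectrum shape chi(E) ~ sqrt(E)*exp(-E/T), E in MeV."""
    return math.sqrt(E) * math.exp(-E / T)

def spectrum_average(sigma, n=20_000, E_max=20.0):
    """<sigma> = integral(sigma*chi) / integral(chi) by a simple Riemann sum."""
    dE = E_max / n
    num = den = 0.0
    for i in range(1, n):          # skip endpoints, where chi is ~0 anyway
        E = i * dE
        w = cf252_spectrum(E)
        num += sigma(E) * w
        den += w
    return num / den

avg_const = spectrum_average(lambda E: 3.0)   # constant sigma averages to itself
avg_thresh = spectrum_average(lambda E: 1.0 if E > 5.0 else 0.0)
```

The second example shows why threshold reactions have small averages: only the high-energy tail of the spectrum lies above the threshold, which matches the observed decrease of average (n,p) cross sections with threshold energy.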

  15. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  16. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)

  17. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  18. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  19. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Science.gov (United States)

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  20. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in the detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
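The core idea of an azimuthally averaged reference can be sketched in a few lines: collapse a 2D template onto its radial profile, re-expand it into a rotationally symmetric image, and score candidate regions with an ordinary correlation coefficient. This is a hedged illustration of the concept, not Nicholson's exact algorithm; the disk "particle" and noise level are invented.

```python
import numpy as np

def azimuthal_average(img):
    """Rotationally average a square image about its center: mean value in
    integer-radius bins, re-expanded to a symmetric 2D template."""
    n = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    profile = np.bincount(r.ravel(), weights=img.ravel()) / np.bincount(r.ravel())
    return profile[r]

def corr_coeff(a, b):
    """Plain (zero-mean normalized) correlation coefficient of two images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

rng = np.random.default_rng(0)
n = 33
y, x = np.indices((n, n))
disk = (np.hypot(x - n // 2, y - n // 2) < 8).astype(float)  # toy "particle"

template = azimuthal_average(disk)                 # rotationally symmetric reference
noisy = disk + 0.3 * rng.standard_normal((n, n))   # particle present, with noise

score_signal = corr_coeff(template, noisy)                          # high
score_noise = corr_coeff(template, rng.standard_normal((n, n)))     # near zero
print(round(score_signal, 3), round(score_noise, 3))
```

Because the template is rotationally symmetric, the same reference matches a particle in any in-plane orientation, which is what makes this family of methods attractive for single-particle detection.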

  1. A Martian PFS average spectrum: Comparison with ISO SWS

    Science.gov (United States)

    Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.

    2005-08-01

The evaluation of the planetary Fourier spectrometer performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393.]. First, the average conditions of the Mars atmosphere are compared; then the mixing ratios of the major gases are evaluated. Major and minor bands of CO₂ are compared with respect to feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm⁻¹ for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshoot on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm⁻¹ or better. A large number of narrow features, yet to be identified, are discovered.

  2. Size and emotion averaging: costs of dividing attention after all.

    Science.gov (United States)

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  3. Radon Concentration And Dose Assessment In Well Water Samples From Karbala Governorate Of Iraq

    Science.gov (United States)

    Al-Alawy, I. T.; Hasan, A. A.

    2018-05-01

There are numerous studies around the world about radon concentrations and their risks to human health. One of the most important social characteristics is the use of water wells for irrigation, which is a major source of water pollution with radon gas. In the present study, six well water samples were collected from different locations in Karbala governorate to investigate radon concentration levels using the CR-39 technique. The maximum value, 4.112±2.0 Bq/L, was found in the Al-Hurr (Al-Qarih Al-Easariah) region, and the lowest radon concentration, 2.156±1.4 Bq/L, was found in the Hay Ramadan region, with an average value of 2.84±1.65 Bq/L. The highest annual effective dose (AED) was in the Al-Hurr (Al-Qarih Al-Easariah) region, equal to 15.00±3.9 μSv/y, while the minimum, 7.86±2.8 μSv/y, was recorded in Hay Ramadan, with an average value of 10.35±3.1 μSv/y. The current results show that the radon concentrations in the well water samples are lower than the recommended limit of 11.1 Bq/L and that the annual effective doses are lower than the permissible international limit of 1 mSv/y.
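The dose figures quoted above are consistent with the standard ingestion estimate AED = C × V × DCF, using the commonly assumed annual water intake V = 730 L and dose conversion factor DCF = 5×10⁻⁹ Sv/Bq; the authors' exact parameters are an assumption here, but these values reproduce the reported numbers to within rounding.

```python
def annual_effective_dose_uSv(c_bq_per_l, intake_l_per_y=730.0, dcf_sv_per_bq=5e-9):
    """Annual effective dose (µSv/y) from ingesting water containing radon:
    concentration * annual intake * dose conversion factor, converted to µSv."""
    return c_bq_per_l * intake_l_per_y * dcf_sv_per_bq * 1e6

aed_avg = annual_effective_dose_uSv(2.84)    # average concentration in the study
aed_max = annual_effective_dose_uSv(4.112)   # Al-Hurr maximum
print(round(aed_avg, 2), round(aed_max, 2))
```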

  4. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG quantitatively estimates the ensemble-average PG results well. The VPG performs about 20% faster than a single PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
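The Maxwell count described above can be sketched in a few lines. The body-bar convention (6 DOF per body, one DOF removed per bar, 6 global rigid-body motions subtracted) is standard; the numbers below are toy inputs, not the paper's.

```python
def maxwell_floppy_modes(n_bodies, n_bars):
    """Maxwell constraint counting: lower bound on internal degrees of
    freedom of a body-bar network, F >= 6*N - n_bars - 6 (never below 0)."""
    return max(6 * n_bodies - n_bars - 6, 0)

print(maxwell_floppy_modes(10, 30))  # under-constrained network
print(maxwell_floppy_modes(10, 60))  # over-constrained by this count
```

As a mean-field bound this says nothing about where the floppy modes sit, which is exactly the spatial information the PG and VPG recover.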

  5. Exactly averaged equations for flow and transport in random media

    International Nuclear Information System (INIS)

    Shvidler, Mark; Karasaki, Kenzi

    2001-01-01

It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood; for example, the convergence behavior and the accuracy of truncated perturbation series, while the calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact, general, and sufficiently universal forms of averaged equations exist? If the answer is positive, the problem arises of constructing these equations and analyzing them. Many publications are related to these problems and oriented toward different applications: hydrodynamics, flow and transport in porous media, the theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method for finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for the appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of conductivity, porosity, and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases: 1. steady-state flow with sources in porous media with random conductivity; 2. transient flow with sources in compressible media with random conductivity and porosity; 3. non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversely isotropic, orthotropic), and we analyze a hypothesis about the structure of the non-local equations in the general case of stochastically homogeneous fields. (author)

  6. Increase in average foveal thickness after internal limiting membrane peeling

    Directory of Open Access Journals (Sweden)

    Kumagai K

    2017-04-01

Full Text Available Kazuyuki Kumagai,1 Mariko Furukawa,1 Tetsuyuki Suetsugu,1 Nobuchika Ogino2 1Department of Ophthalmology, Kami-iida Daiichi General Hospital, 2Department of Ophthalmology, Nishigaki Eye Clinic, Aichi, Japan Purpose: To report the findings in three cases in which the average foveal thickness was increased after a thin epiretinal membrane (ERM) was removed by vitrectomy with internal limiting membrane (ILM) peeling. Methods: The foveal contour was normal preoperatively in all eyes. All cases underwent successful phacovitrectomy with ILM peeling for a thin ERM. The optical coherence tomography (OCT) images were examined before and after the surgery. The changes in the average foveal (1 mm) thickness and the foveal areas within 500 µm from the foveal center were measured. The postoperative changes in the inner and outer retinal areas determined from the cross-sectional OCT images were analyzed. Results: The average foveal thickness and the inner and outer foveal areas increased significantly after the surgery in each of the three cases. The percentage increase in the average foveal thickness relative to the baseline thickness was 26% in Case 1, 29% in Case 2, and 31% in Case 3. The percentage increase in the foveal inner retinal area was 71% in Case 1, 113% in Case 2, and 110% in Case 3, and the percentage increase in foveal outer retinal area was 8% in Case 1, 13% in Case 2, and 18% in Case 3. Conclusion: The increase in the average foveal thickness and the inner and outer foveal areas suggests that a centripetal movement of the inner and outer retinal layers toward the foveal center probably occurred due to the ILM peeling. Keywords: internal limiting membrane, optical coherence tomography, average foveal thickness, epiretinal membrane, vitrectomy

  7. Positivity of the spherically averaged atomic one-electron density

    DEFF Research Database (Denmark)

    Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas

    2008-01-01

We investigate the positivity of the spherically averaged atomic one-electron density. For a density which stems from a physical ground state we prove that it is positive for r ≥ 0. This article may be reproduced in its entirety for non-commercial purposes.

  8. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf
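The record above carries only keywords, so as a hedged illustration of the technique it names, Bayesian model averaging in a growth-regression setting can be sketched with BIC-approximated posterior model weights. Everything below (the data, the candidate models, the coefficient of 0.5) is synthetic, not from the paper.

```python
import numpy as np

def bic_weights(bics):
    """Approximate posterior model probabilities from BIC (equal model priors)."""
    b = np.asarray(bics, float)
    w = np.exp(-0.5 * (b - b.min()))
    return w / w.sum()

def ols_bic(y, X):
    """Ordinary least squares fit; returns (coefficients, BIC)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    n = len(y)
    rss = float(np.sum((y - X @ beta) ** 2))
    return beta, n * np.log(rss / n) + X.shape[1] * np.log(n)

rng = np.random.default_rng(1)
n = 200
rd = rng.standard_normal(n)              # "R&D" regressor (truly relevant)
junk = rng.standard_normal(n)            # irrelevant regressor
y = 0.5 * rd + rng.standard_normal(n)    # "growth"

const = np.ones(n)
designs = [  # (name, design matrix, column index of the R&D coefficient)
    ("const+rd", np.column_stack([const, rd]), 1),
    ("const+rd+junk", np.column_stack([const, rd, junk]), 1),
    ("const+junk", np.column_stack([const, junk]), None),
]
fits = [(name, *ols_bic(y, X), col) for name, X, col in designs]
w = bic_weights([bic for _, _, bic, _ in fits])
rd_coefs = [beta[col] if col is not None else 0.0 for _, beta, _, col in fits]
avg_rd = float(np.dot(w, rd_coefs))   # model-averaged R&D coefficient
print(np.round(w, 3), round(avg_rd, 3))
```

The model omitting the relevant regressor gets essentially zero posterior weight, so the averaged coefficient stays close to the true effect; this weight-and-average step is the essence of BMA, whatever approximation is used for the marginal likelihoods.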

  9. MAIN STAGES SCIENTIFIC AND PRODUCTION MASTERING THE TERRITORY AVERAGE URAL

    Directory of Open Access Journals (Sweden)

    V.S. Bochko

    2006-09-01

Full Text Available The article considers the formation of the Average Ural (Middle Urals) as an industrial territory on the basis of its scientific study and productive development. It is shown that the resources of the Urals and the particular living conditions of its population attracted the attention of Russian and foreign scientists in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to a systematic organizational-economic study of the productive forces, society, and nature of the Average Ural. More attention is paid to the new problems of the region and to the need for their scientific solution.

  10. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

There is increasing interest in high-average-power (>100 kW), µm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector itself, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space-charge effects within magnetic chicanes.

  11. Non-self-averaging nucleation rate due to quenched disorder

    International Nuclear Information System (INIS)

    Sear, Richard P

    2012-01-01

We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging, in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)

  12. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
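The moving-average construction mentioned above, a Gaussian field built by smoothing noise with a kernel, can be sketched discretely as Z(x) = Σ_u k(x − u) W(u). The isotropic power-type kernel k(r) = (1 + r²)^(−p) below is an assumption standing in for the paper's one-parameter family, and the FFT convolution is just a convenient way to evaluate the sum on a grid.

```python
import numpy as np

def ma_field(n=128, p=1.0, seed=0):
    """2D moving-average Gaussian field: circular FFT convolution of white
    noise with an isotropic power kernel (1 + r^2)^(-p) on an n x n grid."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    y, x = np.indices((n, n))
    r2 = (x - n // 2) ** 2 + (y - n // 2) ** 2
    kernel = (1.0 + r2) ** (-p)
    # shift the kernel's center to index (0, 0) before transforming
    return np.real(np.fft.ifft2(np.fft.fft2(noise) *
                                np.fft.fft2(np.fft.ifftshift(kernel))))

z = ma_field()
# adjacent columns of the smoothed field are strongly correlated,
# unlike the white noise it was built from
c1 = float(np.corrcoef(z[:, :-1].ravel(), z[:, 1:].ravel())[0, 1])
print(z.shape, round(c1, 2))
```

Smaller p gives a longer-range kernel and hence a more strongly correlated field; tuning such a kernel exponent is the kind of single-parameter control the abstract attributes to its power-kernel model.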

  13. Impact of Cabin Ozone Concentrations on Passenger Reported Symptoms in Commercial Aircraft

    DEFF Research Database (Denmark)

    Bekö, Gabriel; Allen, Joseph G.; Weschler, Charles J.

    2015-01-01

    relatively low (median: 9.5 ppb). On thirteen flights (16%) ozone levels exceeded 60 ppb, while the highest peak level reached 256 ppb for a single flight. The most commonly reported symptoms were dry mouth or lips (26%), dry eyes (22.1%) and nasal stuffiness (18.9%). 46% of passengers reported at least one symptom related to the eyes or mouth. A third of the passengers reported at least one upper respiratory symptom. Using multivariate logistic (individual symptoms) and linear (aggregated continuous symptom variables) regression, ozone was consistently associated with symptoms related to the eyes and certain upper respiratory endpoints. A concentration-response relationship was observed for nasal stuffiness and eye and upper respiratory symptom indicators. Average ozone levels, as opposed to peak concentrations, exhibited slightly weaker associations. Medium and long duration flights were...

  14. Determination of Anionic Detergent Concentration of Karasu Stream in Sinop (Turkey

    Directory of Open Access Journals (Sweden)

    Ayşe Gündoğdu

    2018-02-01

Full Text Available The study was conducted between May 2014 and April 2015 at Karasu Creek, located in the province of Sinop, to determine anionic detergent pollution and some physicochemical properties (pH, temperature, conductivity, salinity, dissolved oxygen, total hardness, chemical oxygen demand, phosphate (PO4-3), and total nitrogen). The anionic detergent concentration at the stations was determined on a monthly basis. Seasonally averaged anionic detergent values were highest in the autumn season, while the lowest values were found at the stations in winter and spring. The increase in the concentration of anionic detergent is caused by population growth in residential areas, increased agricultural activity, and rains that carry chemicals from terrestrial areas into the riverbed.

  15. Measurement uncertainties of long-term 222Rn averages at environmental levels using alpha track detectors

    International Nuclear Information System (INIS)

    Nelson, R.A.

    1987-01-01

More than 250 replicate measurements of outdoor Rn concentration integrated over quarterly periods were made to estimate the random component of the measurement uncertainty of Track Etch detectors (type F) under outdoor conditions. The measurements were performed around three U mill tailings piles to provide a range of environmental concentrations. The measurement uncertainty was typically greater than could be accounted for by Poisson counting statistics; average coefficients of variation of the order of 20% were found for all measured concentrations. It is concluded that alpha track detectors can be successfully used to determine annual average outdoor Rn concentrations through the use of careful quality control procedures. These include rapid deployment and collection of detectors to minimize unintended Rn exposure, careful packaging and shipping to and from the manufacturer, use of direct sunlight shields for all detectors, and careful and secure mounting of all detectors in as similar a manner as possible. The use of multiple (at least duplicate) detectors at each monitoring location and an exposure period of no less than one quarter are suggested.
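The comparison underlying the conclusion above, observed spread of replicate detectors versus the spread expected from track-counting statistics alone, can be sketched directly: for a Poisson count N the counting CV is 1/√N, so an observed CV well above that indicates additional sources of variability. The track counts below are invented for illustration.

```python
import math

def observed_cv(values):
    """Sample coefficient of variation: sample standard deviation / mean."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sd / mean

def poisson_cv(mean_track_count):
    """CV attributable to counting statistics alone: sqrt(N)/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(mean_track_count)

replicates = [102, 95, 130, 88, 121, 64, 140, 97]  # tracks per detector (toy data)
cv_obs = observed_cv(replicates)
cv_poisson = poisson_cv(sum(replicates) / len(replicates))
print(round(cv_obs, 3), round(cv_poisson, 3), cv_obs > cv_poisson)
```

Here the observed CV of roughly 20% greatly exceeds the ~10% counting-statistics floor, the same qualitative excess the study reports.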

  16. Measurements of 212Pb concentration in the ground-level air in Beijing area and assessment of ionizing radiation exposure of the population

    International Nuclear Information System (INIS)

    Lin Lianqing; Wen Huifen; Zhou Yuanwen

    1994-01-01

The paper describes the method of measuring 212Pb concentration in ground-level air with a gamma spectrometer and reports measurements of 212Pb concentrations from February 1988 to January 1989. The results showed that the average and standard deviation were 0.54 and 0.40 Bq·m⁻³; the distribution of 212Pb concentrations in air was lognormal, with a geometric mean of 0.44 Bq·m⁻³ and a geometric standard deviation of 2.0. The 212Pb concentration was lowest in the rainy season (0.42 Bq·m⁻³) and highest in the heating season (0.66 Bq·m⁻³). Over the course of a day, the 212Pb concentration was highest between 00:00 and 04:00 and lowest between 12:00 and 18:00, with 212Pb(max)/212Pb(min) = 4.8.
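The geometric statistics quoted above follow from the lognormal fit: the geometric mean is exp(mean(ln x)) and the geometric standard deviation is exp(std(ln x)). A minimal sketch with invented sample concentrations (not the study's data):

```python
import math

def geometric_stats(xs):
    """Geometric mean and geometric standard deviation of positive data:
    GM = exp(mean of logs), GSD = exp(sample std of logs)."""
    logs = [math.log(x) for x in xs]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / (n - 1))
    return math.exp(mu), math.exp(sigma)

sample = [0.22, 0.35, 0.44, 0.51, 0.88, 1.10]  # Bq/m^3, illustrative
gm, gsd = geometric_stats(sample)
print(round(gm, 2), round(gsd, 2))
```

For lognormal data like these air concentrations, the GM and GSD summarize the distribution more faithfully than the arithmetic mean and standard deviation, which is why the paper reports both pairs.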

  17. Correlation Between the Concentration of Lead in the Blood of Dogs and People Living in the Same Environmental Conditions

    Directory of Open Access Journals (Sweden)

    Monkiewicz Jerzy

    2014-10-01

Full Text Available The studies, conducted between 2010 and 2012, involved 102 dogs and 505 people from Lower Silesia (LS), 104 dogs and 578 people from the Legnica-Głogów Copper Mining Region (LGCMR), and 101 dogs and 897 people from the Upper Silesian Industrial Region (USIR). A significant positive correlation was found between the blood lead concentration (BLC) of dogs and of people living in the same environment. Moreover, the data revealed an increase in BLC in both dogs and people with increasing age. The highest average BLC in dogs and humans was recorded in the LGCMR, followed by the USIR and LS.
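The statistic behind the reported finding is an ordinary Pearson correlation between paired blood lead values for dogs and the people sharing their environment. A toy illustration with synthetic values (units assumed µg/dL, not the study's data):

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient of paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# paired values rising together with environmental exposure (synthetic)
dog_blc = [3.1, 4.0, 5.2, 6.8, 8.1, 9.5]
human_blc = [2.5, 3.2, 4.9, 5.9, 7.4, 8.8]
r = pearson(dog_blc, human_blc)
print(round(r, 3))
```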

  18. Electrodynamics at the highest energies

    International Nuclear Information System (INIS)

    Klein, Spencer R.

    2002-01-01

    At very high energies, the bremsstrahlung and pair production cross sections exhibit complex behavior due to the material in which the interactions occur. The cross sections in dense media can be dramatically different than for isolated atoms. This writeup discusses these in-medium effects, emphasizing how the cross section has different energy and target density dependencies in different regimes. Data from SLAC experiment E-146 will be presented to confirm the energy and density scaling. Finally, QCD analogs of the electrodynamics effects will be discussed

  19. Vermont Highest Priority Connectivity Blocks

    Data.gov (United States)

    Vermont Center for Geographic Information — Act 174 requires plans to identify potential areas for the development and siting of renewable energy resources and areas that are unsuitable for siting those...

  20. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...