WorldWideScience

Sample records for time-weighted average exposure

  1. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  2. Comparison of Spot and Time Weighted Averaging (TWA) Sampling with SPME-GC/MS Methods for Trihalomethane (THM) Analysis

    Directory of Open Access Journals (Sweden)

    Don-Roger Parkinson

    2016-02-01

Water samples were collected and analyzed for conductivity, pH, temperature and trihalomethanes (THMs) during the fall of 2014 at two monitored municipal drinking water source ponds. Both spot (or grab) and time weighted average (TWA) sampling methods were assessed over the same two-day sampling period. For spot sampling, replicate samples were taken at each site and analyzed within 12 h of sampling by both headspace (HS) and direct (DI) solid phase microextraction (SPME) sampling/extraction methods followed by gas chromatography/mass spectrometry (GC/MS). For TWA, a two-day passive on-site TWA sampling was carried out at the same sampling points in the ponds. All SPME sampling methods used a 65-µm PDMS/DVB SPME fiber, which was found optimal for THM sampling. Sampling conditions were optimized in the laboratory using calibration standards of chloroform, bromoform, bromodichloromethane, dibromochloromethane, 1,2-dibromoethane and 1,2-dichloroethane, prepared in aqueous solutions from analytical grade samples. Calibration curves for all methods achieved R² values ranging from 0.985 to 0.998 (N = 5) over the linear quantitation range of 3-800 ppb. The different sampling methods were compared for quantification of the water samples, and results showed that the DI and TWA sampling methods gave better data and analytical metrics. Addition of 10% wt./vol. of (NH4)2SO4 salt to the sampling vial was found to aid extraction of THMs by increasing GC peak areas by about 10%, which resulted in lower detection limits for all techniques studied. However, for on-site TWA analysis of THMs in natural waters, the ionic strength conditions of the calibration standard(s) must be carefully matched to natural water conditions to properly quantitate THM concentrations. The data obtained from the TWA method may better reflect actual natural water conditions.
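The quantitation route in this record is conventional external calibration: fit the GC/MS peak area against standard concentration, then invert the fit for unknowns. A minimal Python sketch of that step is below; the peak areas and the unknown are invented numbers, not the study's data.

```python
# External-calibration quantitation: linear fit of GC/MS peak area vs.
# standard concentration, then back-calculation of an unknown sample.
# All numbers are synthetic illustrations, not data from the study.

import numpy as np

conc_ppb = np.array([3.0, 50.0, 200.0, 500.0, 800.0])             # standards (N = 5)
peak_area = np.array([210.0, 3450.0, 13900.0, 34800.0, 55500.0])  # assumed responses

slope, intercept = np.polyfit(conc_ppb, peak_area, 1)   # calibration curve
r2 = np.corrcoef(conc_ppb, peak_area)[0, 1] ** 2        # linearity check

unknown_area = 9200.0
unknown_ppb = (unknown_area - intercept) / slope        # invert the fit
print(f"R^2 = {r2:.4f}; unknown sample = {unknown_ppb:.1f} ppb")
```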

  3. Comparing personal alpha dosimetry with the conventional area monitoring-time weighting methods of exposure estimation: a Canadian assessment

    International Nuclear Information System (INIS)

    Balint, A.B.; Viljoen, J.

    1988-01-01

An experimental personal alpha dosimetry programme for monitoring exposures of uranium mining facility workers in Canada has been completed, with all licensed operating mining facilities participating. Dosimetry techniques, descriptions of the dosimeters used by licensees, performance and problems associated with the implementation of the programme, as well as technical and administrative advantages and difficulties experienced, are discussed. The area monitoring-time weighting methods used, and the results obtained with them, to determine individual radon and thoron daughter exposures are assessed and compared with the exposure results generated by using dosimeters.

  4. Method for sampling and analysis of volatile biomarkers in process gas from aerobic digestion of poultry carcasses using time-weighted average SPME and GC-MS.

    Science.gov (United States)

    Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J

    2017-10-01

A passive sampling method, using retracted solid-phase microextraction (SPME) with gas chromatography-mass spectrometry and time-weighted averaging, was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, and it requires less equipment and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44 ppbv and were statistically equivalent (p > 0.05) to those for active sorbent-tube-based sampling. A sampling time of 30 min and a fiber retraction depth of 5 mm were found to be optimal for the tissue digestion process.
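Record 4 turns the mass collected on the retracted fiber into a concentration through Fick's first law. The sketch below implements the standard retracted-SPME TWA relationship, C = nZ/(D·A·t); the symbol names follow that model, and all numeric values are illustrative assumptions rather than values from the paper.

```python
# Time-weighted average concentration from a retracted-SPME sampler,
# using the standard Fick's-first-law model: C = n * Z / (D_g * A * t).
# Illustrative values only -- not data from the study.

import math

def twa_concentration(n_ng: float, Z_cm: float, A_cm2: float,
                      D_cm2_s: float, t_s: float) -> float:
    """Return the TWA gas concentration in ng/cm^3."""
    return n_ng * Z_cm / (D_cm2_s * A_cm2 * t_s)

# Assumed example: 30 min sampling, 5 mm retraction depth.
n = 0.8                      # ng of analyte extracted onto the fiber
Z = 0.5                      # cm (5 mm retraction depth)
A = math.pi * 0.015 ** 2     # cm^2, assumed needle-opening radius of 0.015 cm
D = 0.088                    # cm^2/s, assumed gas diffusion coefficient
t = 30 * 60                  # s (30 min)

print(f"C_twa = {twa_concentration(n, Z, A, D, t):.3g} ng/cm^3")
```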

  5. Quantification of benzene, toluene, ethylbenzene and o-xylene in internal combustion engine exhaust with time-weighted average solid phase microextraction and gas chromatography mass spectrometry.

    Science.gov (United States)

    Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat

    2015-05-11

A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with the exposed fiber (outside of the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and the fiber retraction depth (Z) were adjusted to quantify a wider range of gas concentrations (Cgas). Extraction and quantification are conducted in a non-equilibrium mode. The effects of Cgas, t, Z and temperature (T) were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without SPME coating was studied, as was the effect of sample storage time on loss of n. Retracted TWA-SPME extractions followed the theoretical model. The extracted n of BTEX was proportional to Cgas, t, the gas diffusion coefficient (Dg) and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m⁻³ (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole-gas collection and direct-injection method.

  6. Evaluation of exposure in mammography: limitations of average glandular dose and proposal of a new quantity

    International Nuclear Information System (INIS)

    Geeraert, N.; Bosmans, H.; Klausz, R.; Muller, S.; Bloch, I.

    2015-01-01

The radiation risk in mammography is traditionally evaluated using the average glandular dose. This quantity for the average breast has proven to be useful for population statistics and for comparing exposure techniques and systems. However, it does not indicate individual radiation risk, which depends on the individual glandular amount and distribution. Simulations of exposures were performed for six appropriate virtual phantoms with varying glandular amount and distribution. The individualised average glandular dose (iAGD), i.e. the individual glandular absorbed energy divided by the mass of the gland, and the glandular imparted energy (GIE), i.e. the glandular absorbed energy, were computed. Both quantities were evaluated for their capability to take into account the glandular amount and distribution. As expected, the results demonstrated that iAGD reflects only the distribution, while GIE reflects both the glandular amount and distribution. Therefore GIE is a good candidate for individual radiation risk assessment. (authors)
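The two quantities compared in this record are simple functions of a simulated absorbed-energy map: GIE is the energy absorbed in glandular tissue, and iAGD is that energy divided by the glandular mass. A sketch with stand-in arrays (the phantom data here are random, purely for illustration):

```python
# Sketch: individualised AGD (iAGD) and glandular imparted energy (GIE)
# from a voxel map of absorbed energy and a glandular-tissue mask.
# Definitions follow the abstract: GIE = total energy absorbed in gland
# tissue; iAGD = GIE / glandular mass. Arrays here are random stand-ins.

import numpy as np

rng = np.random.default_rng(0)
energy_J = rng.random((64, 64, 64)) * 1e-9   # absorbed energy per voxel (J), assumed
is_gland = rng.random((64, 64, 64)) < 0.3    # glandular-voxel mask, assumed
voxel_mass_kg = 1e-6                         # assumed uniform voxel mass

gie_J = energy_J[is_gland].sum()             # glandular imparted energy
gland_mass_kg = is_gland.sum() * voxel_mass_kg
iagd_Gy = gie_J / gland_mass_kg              # individualised average glandular dose

print(f"GIE = {gie_J:.3e} J, iAGD = {iagd_Gy:.3e} Gy")
```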

  7. Time-Weighted Balanced Stochastic Model Reduction

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2011-01-01

A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous- and discrete-time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently...

  8. Estimation of the Relative Contribution of Postprandial Glucose Exposure to Average Total Glucose Exposure in Subjects with Type 2 Diabetes

    Directory of Open Access Journals (Sweden)

    Bo Ahrén

    2016-01-01

We hypothesized that the relative contribution of fasting plasma glucose (FPG) versus postprandial plasma glucose (PPG) to glycated haemoglobin (HbA1c) could be calculated using an algorithm developed by the A1c-Derived Average Glucose (ADAG) study group to make HbA1c values more clinically relevant to patients. The algorithm estimates average glucose (eAG) exposure, which can be used to calculate apparent PPG (aPPG) by subtracting FPG. The hypothesis was tested in a large dataset (comprising 17 studies) from the vildagliptin clinical trial programme. We found that 24 weeks of treatment with vildagliptin monotherapy (n = 2523) reduced the relative contribution of aPPG to eAG from 8.12% to 2.95% (by 64%, p < 0.001). In contrast, when vildagliptin was added to metformin (n = 2752), the relative contribution of aPPG to eAG increased non-significantly, from 1.59% to 2.56%. In conclusion, glucose peaks, which are often prominent in patients with type 2 diabetes, provide a small contribution to the total glucose exposure assessed by HbA1c, and the ADAG algorithm is not robust enough to assess this small relative contribution in patients receiving combination therapy.
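The ADAG study group's linear regression (eAG in mg/dL = 28.7 × HbA1c − 46.7) is the usual way eAG is computed from HbA1c; applying it to form the paper's "apparent PPG" quantity, as sketched below, is our reading of the abstract rather than the paper's published code.

```python
# Sketch of the ADAG-style calculation described in the abstract:
# estimated average glucose (eAG) from HbA1c, and "apparent PPG" (aPPG)
# as eAG minus fasting plasma glucose. The regression constants
# (28.7, -46.7) are from the published ADAG study; their use here to
# reproduce this paper's aPPG quantity is our assumption.

def eag_mg_dl(hba1c_percent: float) -> float:
    """ADAG linear estimate of average glucose (mg/dL) from HbA1c (%)."""
    return 28.7 * hba1c_percent - 46.7

def apparent_ppg(hba1c_percent: float, fpg_mg_dl: float) -> float:
    """Apparent postprandial contribution: eAG minus fasting glucose."""
    return eag_mg_dl(hba1c_percent) - fpg_mg_dl

# Illustrative patient: HbA1c 7.5%, FPG 150 mg/dL (assumed values).
print(f"eAG  = {eag_mg_dl(7.5):.1f} mg/dL")
print(f"aPPG = {apparent_ppg(7.5, 150.0):.1f} mg/dL")
```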

  9. Reduction of Averaging Time for Evaluation of Human Exposure to Radiofrequency Electromagnetic Fields from Cellular Base Stations

    Science.gov (United States)

    Kim, Byung Chan; Park, Seong-Ook

In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated from measurements obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference between average values over 6 min and over shorter periods, and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.
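The comparison at the heart of this record, shorter averaging windows versus the 6-min reference average, can be reproduced on any sampled field record. A sketch with a synthetic signal (the study used measured base-station fields):

```python
# Sketch: compare short-window field averages against the 6-min average,
# as in the study's approach. The synthetic signal is an assumption;
# the study used measured base-station fields.

import numpy as np

rng = np.random.default_rng(1)
fs = 1.0                                       # one sample per second, assumed
field = 1.0 + 0.1 * rng.standard_normal(360)   # 6 min of E-field samples (V/m)

ref = field.mean()                             # 6-min average (reference)
for window_s in (30, 60, 120, 180):
    n = int(window_s * fs)
    short = field[:n].mean()                   # average over a shorter period
    print(f"{window_s:>3d} s window: {short:.4f} V/m "
          f"(deviation {100 * (short - ref) / ref:+.2f}%)")
```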

  10. EXTRAPOLATION METHOD FOR MAXIMAL AND 24-H AVERAGE LTE TDD EXPOSURE ESTIMATION.

    Science.gov (United States)

    Franci, D; Grillo, E; Pavoncello, S; Coltellacci, S; Buccella, C; Aureli, T

    2018-01-01

The Long-Term Evolution (LTE) system represents the evolution of the Universal Mobile Telecommunication System technology. This technology introduces two duplex modes: Frequency Division Duplex and Time Division Duplex (TDD). Although LTE TDD has seen limited expansion in European countries since the debut of LTE, renewed commercial interest in the technology has recently emerged. Therefore, the development of extrapolation procedures optimised for TDD systems becomes crucial, especially for the regulatory authorities. This article presents an extrapolation method for assessing exposure to LTE TDD sources, based on the detection of the Cell-Specific Reference Signal power level. The method introduces a βTDD parameter intended to quantify the fraction of the LTE TDD frame duration reserved for downlink transmission. The method has been validated by experimental measurements performed on signals generated by both a vector signal generator and a test Base Transceiver Station installed at the Linkem S.p.A. facility in Rome.
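The abstract gives the ingredients of the method (measured Cell-Specific Reference Signal power, a full-traffic extrapolation, and a βTDD downlink duty-cycle factor) without the exact formula. The sketch below shows one common form of such an extrapolation, offered only as an illustration of how βTDD would enter; the scaling factors are assumptions, not the paper's equations.

```python
# Illustrative LTE exposure extrapolation (NOT the paper's exact formula).
# Assumed model: scale the measured per-resource-element CSRS field to a
# fully loaded downlink, then weight the time-averaged power by the
# fraction of the TDD frame used for downlink (beta_TDD).

import math

def extrapolated_field(e_csrs_v_m: float, n_subcarriers: int,
                       beta_tdd: float) -> float:
    """Time-averaged field (V/m) under the assumed full-traffic scaling."""
    e_max = e_csrs_v_m * math.sqrt(n_subcarriers)  # full-load field, assumed
    return e_max * math.sqrt(beta_tdd)             # power scales with duty cycle

# Example: 10 MHz LTE carrier (600 subcarriers), 74% of frame on downlink.
print(f"E_avg = {extrapolated_field(0.05, 600, 0.74):.3f} V/m")
```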

  11. Effects of time-variable exposure regimes of the insecticide chlorpyrifos on freshwater invertebrate communities in microcosms

    NARCIS (Netherlands)

    Zafar, M.I.; Wijngaarden, van R.; Roessink, I.; Brink, van den P.J.

    2011-01-01

The present study compared the effects of different time-variable exposure regimes having the same time-weighted average (TWA) concentration of the organophosphate insecticide chlorpyrifos on freshwater invertebrate communities, to enable extrapolation of effects across exposure regimes. The...

  12. Benzene exposure in a Japanese petroleum refinery.

    Science.gov (United States)

    Kawai, T; Yamaoka, K; Uchida, Y; Ikeda, M

    1990-07-01

Time-weighted average (TWA) intensity of exposure of workers to benzene vapor during a shift was monitored by a diffusive sampling technique in a Japanese petroleum refinery. The subjects monitored (83 in total) included refinery operators, laboratory personnel and tanker-loading workers. The results showed that the time-weighted average exposures were well below 1 ppm in most cases. The highest exposure was recorded in one worker involved in bulk loading of tanker ships, where exposures of over 1 ppm might take place depending on operational conditions. The observations were generally in agreement with levels previously reported.
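The 8-h TWA reported in studies like this one is the duration-weighted mean over the shift, TWA = Σ Cᵢtᵢ / 8 h. A sketch with invented task concentrations:

```python
# Sketch of the standard 8-hour time-weighted average used in
# occupational-exposure work: TWA = sum(C_i * t_i) / 8 h. The task
# concentrations and durations below are made-up illustrations.

def twa_8h(samples: list[tuple[float, float]]) -> float:
    """samples: (concentration_ppm, duration_h) pairs; returns the 8-h TWA."""
    return sum(c * t for c, t in samples) / 8.0

shift = [(0.9, 2.0),   # ppm during tanker loading, 2 h (assumed)
         (0.2, 3.0),   # routine operations, 3 h (assumed)
         (0.05, 3.0)]  # office/control room, 3 h (assumed)
print(f"8-h TWA = {twa_8h(shift):.2f} ppm")
```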

  13. A formula for human average whole-body SARwb under diffuse fields exposure in the GHz region

    International Nuclear Information System (INIS)

    Bamba, A; Joseph, W; Vermeeren, G; Thielens, A; Tanghe, E; Martens, L

    2014-01-01

A simple formula to determine the human average whole-body SAR (SARwb) under realistic propagation conditions is proposed for the GHz region, i.e. from 1.45 GHz to 5.8 GHz. The methodology is based on simulations of ellipsoidal human body models. Only the exposure (incident power densities) and the human mass are needed to apply the formula. Diffuse scattered illumination is addressed for the first time, and the possible presence of a Line-of-Sight (LOS) component is addressed as well. As validation, the formula is applied to calculate the average SARwb in 3D heterogeneous phantoms, i.e. the Virtual Family (34 year-old male, 26 year-old female, 11 year-old girl, and 6 year-old boy), and the results are compared with numerical ones, obtained using the Finite-Difference Time-Domain (FDTD) method, at 3 GHz. For LOS exposure, the average relative error varies from 28% to 12% for the vertical polarization (from 14% to 12% for the horizontal polarization), depending on the heterogeneous phantom. Regarding the diffuse illumination, relative errors of −39.40%, −11.70%, 10.70%, and 10.60% are obtained for the 6 year-old boy, 11 year-old girl, 26 year-old female, and 34 year-old male, respectively. The proposed formula estimates well (especially for adults) the SARwb induced by diffuse illumination in realistic conditions. In general, the correctness of the formula improves as the human mass increases. Keeping the uncertainties of the FDTD simulations in mind, the proposed formula may be important for the dosimetry community for rapidly and accurately assessing human absorption of electromagnetic radiation caused by diffuse fields in the GHz region. Finally, we show the applicability of the proposed formula to personal dosimetry for epidemiological research. (paper)

  14. Estimates of Average Glandular Dose with Auto-modes of X-ray Exposures in Digital Breast Tomosynthesis

    Directory of Open Access Journals (Sweden)

    Izdihar Kamal

    2015-05-01

Objectives: The aim of this research was to examine the average glandular dose (AGD) of radiation among different breast compositions of glandular and adipose tissue with auto-modes of exposure factor selection in digital breast tomosynthesis. Methods: This experimental study was carried out in the National Cancer Society, Kuala Lumpur, Malaysia, between February 2012 and February 2013 using a tomosynthesis digital mammography X-ray machine. The entrance surface air kerma and the half-value layer were determined using a 100H thermoluminescent dosimeter on 50% glandular/50% adipose tissue (50/50) and 20% glandular/80% adipose tissue (20/80) commercially available breast phantoms (Computerized Imaging Reference Systems, Inc., Norfolk, Virginia, USA) with auto-time, auto-filter and auto-kilovolt modes. Results: The lowest AGD for the 20/80 phantom with auto-time was 2.28 milligray (mGy) for two-dimensional (2D) imaging and 2.48 mGy for three-dimensional (3D) imaging. The lowest AGD for the 50/50 phantom with auto-time was 0.97 mGy for 2D and 1.0 mGy for 3D. Conclusion: The AGD values for both phantoms were lower at high kilovolt peak settings, and the use of auto-filter mode was more practical for quick acquisition while limiting the probability of operator error.
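AGD in mammography is commonly computed from measured entrance air kerma with conversion factors (the Dance formalism, AGD = K·g·c·s), and the quantities this study measures (entrance surface air kerma and half-value layer) are exactly the inputs of that formalism. The sketch below shows the arithmetic; the factor values are placeholders, taken in practice from published tables, and the formalism itself is our assumption about how the AGDs were derived.

```python
# Sketch of the Dance-formalism AGD calculation: AGD = K * g * c * s,
# where K is the entrance surface air kerma, g converts kerma to glandular
# dose for 50% glandularity, c corrects for the actual glandularity, and
# s corrects for the X-ray spectrum. Factor values below are placeholders;
# real ones are interpolated from published tables using HVL and thickness.

def agd_mgy(k_mgy: float, g: float, c: float, s: float) -> float:
    return k_mgy * g * c * s

K = 5.0    # mGy, assumed entrance surface air kerma
g = 0.25   # placeholder g-factor (depends on HVL and breast thickness)
c = 1.0    # placeholder c-factor (50% glandularity -> 1.0)
s = 1.04   # placeholder s-factor (spectrum dependent)
print(f"AGD = {agd_mgy(K, g, c, s):.2f} mGy")
```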

  15. Impact of average household income and damage exposure on post-earthquake distress and functioning: A community study following the February 2011 Christchurch earthquake.

    OpenAIRE

    Dorahy, Martin J.; Rowlands, Amy; Renouf, Charlotte; Hanna, Donncha; Britt, Eileen; Carter, Janet D.

    2015-01-01

Post-traumatic stress, depression and anxiety symptoms are common outcomes following earthquakes, and may persist for months and years. This study systematically examined the impact of neighbourhood damage exposure and average household income on psychological distress and functioning in 600 residents of Christchurch, New Zealand, 4-6 months after the fatal February 2011 earthquake. Participants were from highly affected and relatively unaffected suburbs in low, medium and high average house...

  16. Comparison of average global exposure of population induced by a macro 3G network in different geographical areas in France and Serbia.

    Science.gov (United States)

    Huang, Yuanyuan; Varsier, Nadège; Niksic, Stevan; Kocan, Enis; Pejanovic-Djurisic, Milica; Popovic, Milica; Koprivica, Mladen; Neskovic, Aleksandar; Milinkovic, Jelena; Gati, Azeddine; Person, Christian; Wiart, Joe

    2016-09-01

This article is the first thorough study of average population exposure to third-generation (3G) network-induced electromagnetic fields (EMFs), from both uplink and downlink radio emissions, in different countries, geographical areas, and for different wireless device usages. Indeed, previous publications on exposure to EMFs generally focused on individual exposure coming from either personal devices or base stations. Results, derived from device usage statistics collected in France and Serbia, show a strong heterogeneity of exposure, both in time, that is, the traffic distribution over 24 h was found to be highly variable, and in space, that is, the exposure to 3G networks in France was found to be roughly two times higher than in Serbia. Such heterogeneity is further explained based on real data and network architecture. Among these results, the authors show that, contrary to popular belief, exposure to 3G EMFs is dominated by uplink radio emissions, resulting from voice and data traffic, and that average population EMF exposure differs from one geographical area to another, as well as from one country to another, due to the different cellular network architectures and the variability of mobile usage. Bioelectromagnetics 37:382-390, 2016.

  17. British Standard method for determination of ISO speed and average gradient of direct-exposure medical and dental radiographic film/process combinations

    International Nuclear Information System (INIS)

    1983-01-01

Under the direction of the Cinematography and Photography Standards Committee, a British Standard method has been prepared for determining the ISO speed and average gradient of direct-exposure medical and dental radiographic film/process combinations. The method determines the speed and gradient, i.e. contrast, of X-ray films processed according to their manufacturers' recommendations. (U.K.)

  18. Effect of the averaging volume and algorithm on the in situ electric field for uniform electric- and magnetic-field exposures

    International Nuclear Information System (INIS)

    Hirata, Akimasa; Takano, Yukinori; Fujiwara, Osamu; Kamimura, Yoshitsugu

    2010-01-01

The present study quantified the volume-averaged in situ electric field in nerve tissues of anatomically based numerical Japanese male and female models for exposure to extremely low-frequency electric and magnetic fields. A quasi-static finite-difference time-domain method was applied to analyze this problem. The motivation for our investigation is that the dependence of the electric field induced in nerve tissue on the averaging volume/distance is not clear, while a cubical volume of 5 × 5 × 5 mm³ or a straight-line segment of 5 mm is suggested in some documents. The influence of non-nerve tissue surrounding nerve tissue is also discussed by considering three algorithms for calculating the averaged in situ electric field in nerve tissue. The computational results obtained herein reveal that the volume-averaged electric field in the nerve tissue decreases with the averaging volume. In addition, the 99th percentile value of the volume-averaged in situ electric field in nerve tissue is more stable than the maximal value across different averaging volumes. When non-nerve tissue surrounding nerve tissue is included in the averaging volume, the resultant in situ electric fields are less dependent on the averaging volume than in the case excluding non-nerve tissue. In situ electric fields averaged over a distance of 5 mm were comparable to or larger than those for a 5 × 5 × 5 mm³ cube, depending on the algorithm, the nerve tissue considered and the exposure scenario. (note)
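The volume-averaging operation this note investigates is a moving mean over a cube, followed by a percentile statistic within nerve tissue. A sketch using random stand-in data (real inputs would come from the quasi-static FDTD solution); note that the moving mean here includes surrounding non-nerve voxels, which is one of the three algorithms the note contrasts:

```python
# Sketch: cubic-volume averaging of an in situ E-field map and the 99th
# percentile metric discussed in the abstract. The field array and the
# nerve mask are random stand-ins, not FDTD output.

import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)
voxel_mm = 1.0
e_field = rng.lognormal(mean=0.0, sigma=0.5, size=(60, 60, 60))  # V/m, assumed
nerve = rng.random((60, 60, 60)) < 0.2                           # nerve mask, assumed

side = int(5 / voxel_mm)                    # 5 x 5 x 5 mm^3 averaging cube
e_avg = uniform_filter(e_field, size=side)  # mean over the cube (all tissues)

print("max volume-averaged field in nerve:", e_avg[nerve].max())
print("99th percentile in nerve:", np.percentile(e_avg[nerve], 99))
```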

  19. 30 CFR 56.5001 - Exposure limits for airborne contaminants.

    Science.gov (United States)

    2010-07-01

... 30 CFR Mineral Resources (2010-07-01): Exposure limits for airborne contaminants. 56... Quality and Physical Agents, Air Quality § 56.5001 Exposure limits for airborne contaminants. Except as... contaminants shall not exceed, on the basis of a time weighted average, the threshold limit values adopted by...

  20. Semen quality in papaya workers with long term exposure to ethylene dibromide.

    OpenAIRE

    Ratcliffe, J M; Schrader, S M; Steenland, K; Clapp, D E; Turner, T; Hornung, R W

    1987-01-01

To examine whether long term occupational exposure to ethylene dibromide (EDB) affects semen quality, a cross sectional study of semen quality was conducted among 46 men employed in the papaya fumigation industry in Hawaii, with an average duration of exposure of five years, a geometric mean breathing zone exposure to airborne EDB of 88 ppb (eight hour time weighted average), and peak exposures of up to 262 ppb. The comparison group consisted of 43 unexposed men from a nearby sugar refinery....

  1. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  2. Investigation of the thermal and optical performance of a spatial light modulator with high average power picosecond laser exposure for materials processing applications

    Science.gov (United States)

    Zhu, G.; Whitehead, D.; Perrie, W.; Allegre, O. J.; Olle, V.; Li, Q.; Tang, Y.; Dawson, K.; Jin, Y.; Edwardson, S. P.; Li, L.; Dearden, G.

    2018-03-01

Spatial light modulators (SLMs) addressed with computer generated holograms (CGHs) can create structured light fields on demand when an incident laser beam is diffracted by a phase CGH. The power handling limitations of these devices, which are based on a liquid crystal layer, have always been of some concern. With careful engineering of chip thermal management, we report the detailed optical phase and temperature response of a liquid-cooled SLM exposed to picosecond laser powers up to 〈P〉 = 220 W at 1064 nm. This information is critical for determining device performance at high laser powers. The SLM chip temperature rose linearly with incident laser exposure, increasing by only 5 °C at 〈P〉 = 220 W incident power, measured with a thermal imaging camera. The thermal response time with continuous exposure was 1-2 s. The optical phase response with incident power approaches 2π radians at average powers up to 〈P〉 = 130 W, hence the operational limit, while above this power, liquid crystal thickness variations limit the phase response to just over π radians. Modelling of the thermal and phase response with exposure is also presented, supporting the experimental observations well. These remarkable performance characteristics show that liquid crystal based SLM technology is highly robust when efficiently cooled. High speed, multi-beam plasmonic surface micro-structuring at a rate R = 8 cm² s⁻¹ is achieved on polished metal surfaces at 〈P〉 = 25 W exposure, while diffractive, multi-beam surface ablation with average power 〈P〉 = 100 W on stainless steel is demonstrated with an ablation rate of ~4 mm³ min⁻¹. However, above 130 W, first order diffraction efficiency drops significantly, in accord with the observed operational limit. Continuous exposure for a period of 45 min at a laser power of 〈P〉 = 160 W did not result in any detectable drop in diffraction efficiency, confirmed afterwards by the efficient...

  3. Impact of average household income and damage exposure on post-earthquake distress and functioning: A community study following the February 2011 Christchurch earthquake.

    Science.gov (United States)

    Dorahy, Martin J; Rowlands, Amy; Renouf, Charlotte; Hanna, Donncha; Britt, Eileen; Carter, Janet D

    2015-08-01

Post-traumatic stress, depression and anxiety symptoms are common outcomes following earthquakes, and may persist for months and years. This study systematically examined the impact of neighbourhood damage exposure and average household income on psychological distress and functioning in 600 residents of Christchurch, New Zealand, 4-6 months after the fatal February 2011 earthquake. Participants were from highly affected and relatively unaffected suburbs in low, medium and high average household income areas. The assessment battery included the Acute Stress Disorder Scale, the depression module of the Patient Health Questionnaire (PHQ-9), and the Generalized Anxiety Disorder Scale (GAD-7), along with single-item measures of substance use, earthquake damage and impact, and disruptions in daily life and relationship functioning. Controlling for age, gender and social isolation, participants from low income areas were more likely to meet diagnostic cut-offs for depression and anxiety, and had more severe anxiety symptoms. Higher probabilities of acute stress, depression and anxiety diagnoses were evident in affected versus unaffected areas, and those in affected areas had more severe acute stress, depression and anxiety symptoms. An interaction between income and earthquake effect was found for depression, with those from the low and medium income affected suburbs more depressed. Those from low income areas were more likely, post-earthquake, to start psychiatric medication and to increase smoking. There was a uniform increase in alcohol use across participants. Those from the low income affected suburb had greater general and relationship disruption post-quake. Average household income and damage exposure made unique contributions to earthquake-related distress and dysfunction.

  4. EPA's program for risk assessment guidelines: Exposure issues

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, M.A. [Environmental Protection Agency, Washington, DC (United States)

    1990-12-31

    Three major issues to be dealt with over the next ten years in the exposure assessment field are: consistency in terminology, the impact of computer technology on the choice of data and modeling, and conceptual issues such as the use of time-weighted averages.

  5. Temporal Variability of Daily Personal Magnetic Field Exposure Metrics in Pregnant Women

    OpenAIRE

    Lewis, Ryan C.; Evenson, Kelly R.; Savitz, David A.; Meeker, John D.

    2014-01-01

    Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) persona...

  6. Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2012-01-01

A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon the time-weighted balanced stochastic model reduction method and the singular perturbation model reduction technique. Compared ... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of a CD player and a model of the atmospheric storm track.

  7. Asbestos exposure of building maintenance personnel.

    Science.gov (United States)

    Mlynarek, S; Corn, M; Blake, C

    1996-06-01

The exposures of building maintenance personnel and occupants to airborne asbestos fibers, and the effects of operations and maintenance programs on those exposures, continue to be an important public health issue. The subject of this investigation was a large metropolitan county with numerous public buildings which routinely conducted air sampling for asbestos. A total of 302 personal air samples in nine task categories, collected during maintenance worker activities in proximity to asbestos-containing materials, were analyzed; 102 environmental air samples in four task categories were also analyzed. The arithmetic means of the 8-hr time-weighted average exposures for personal sampling in each task category were all below the Occupational Safety and Health Administration permissible exposure limit of 0.1 fibers longer than 5 µm per cubic centimeter (f/cc). The highest mean 8-hr time-weighted average exposure was 0.030 f/cc for ceiling tile replacement. The maximum asbestos concentration during sample collection for environmental samples was 0.027 f/cc. All asbestos-related maintenance work was done within the framework of an Operations and Maintenance Program (OMP) which utilized both personal protective equipment and controls against fiber release/dispersion. Results are presented in association with specific OMP procedures or controls. These results support the effectiveness of using Operations and Maintenance Programs to manage asbestos in buildings without incurring unacceptable risk to maintenance workers performing maintenance tasks.

  8. Determining time-weighted average concentrations of nitrate and ammonium in freshwaters using DGT with ion exchange membrane-based binding layers

    DEFF Research Database (Denmark)

    Huang, Jianyin; Bennett, William W.; Welsh, David T.

    2016-01-01

Commercially-available AMI-7001 anion exchange and CMI-7000 cation exchange membranes were utilised as binding layers for DGT measurements of NO3-N and NH4-N in freshwaters. These ion exchange membranes are easier to prepare and handle than DGT binding layers consisting of hydrogels cast with ion exchange resins. The membranes showed good uptake and elution efficiencies for both NO3-N and NH4-N. The membrane-based DGTs are suitable for pH 3.5-8.5 and ionic strength ranges (0.0001-0.014 and 0.0003-0.012 mol L−1 as NaCl for the AMI-7001 and CMI-7000 membranes, respectively) typical of most natural freshwaters. The binding membranes had high intrinsic binding capacities for NO3-N and NH4-N of 911 ± 88 μg and 3512 ± 51 μg, respectively. Interferences from the major competing ions for membrane-based DGTs are similar to DGTs employing resin-based binding layers but with slightly different selectivity...
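DGT converts the mass accumulated on the binding layer into a time-weighted average concentration via the standard DGT equation, C = MΔg/(DAt). A sketch with assumed deployment values (not from this study):

```python
# Standard DGT equation for a time-weighted average concentration:
# C_DGT = M * dg / (D * A * t), where M is the mass bound, dg the
# diffusive-layer thickness, D the diffusion coefficient, A the exposure
# window area and t the deployment time. Values below are illustrative.

def c_dgt_ug_per_L(M_ug: float, dg_cm: float, D_cm2_s: float,
                   A_cm2: float, t_s: float) -> float:
    return (M_ug * dg_cm) / (D_cm2_s * A_cm2 * t_s) * 1000.0  # ug/cm^3 -> ug/L

M = 12.0           # ug NO3-N accumulated on the membrane (assumed)
dg = 0.094         # cm, diffusive gel + filter thickness (typical)
D = 1.7e-5         # cm^2/s, diffusion coefficient (assumed, ~20 C)
A = 3.14           # cm^2, exposure window of a piston-type sampler (typical)
t = 3 * 24 * 3600  # s, 3-day deployment

print(f"C_DGT = {c_dgt_ug_per_L(M, dg, D, A, t):.1f} ug/L")
```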

  9. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  10. Screen-Time Weight-loss Intervention Targeting Children at Home (SWITCH): a randomized controlled trial.

    Science.gov (United States)

    Maddison, Ralph; Marsh, Samantha; Foley, Louise; Epstein, Leonard H; Olds, Timothy; Dewes, Ofa; Heke, Ihirangi; Carter, Karen; Jiang, Yannan; Mhurchu, Cliona Ni

    2014-09-10

Screen-based activities, such as watching television (TV), playing video games, and using computers, are common sedentary behaviours among young people and have been linked with increased energy intake and overweight. Previous home-based sedentary behaviour interventions have been limited by focusing primarily on the child, small sample sizes, and short follow-up periods. The SWITCH (Screen-Time Weight-loss Intervention Targeting Children at Home) study aimed to determine the effect of a home-based, family-delivered intervention to reduce screen-based sedentary behaviour on body composition, sedentary behaviour, physical activity, and diet over 24 weeks in overweight and obese children. A two-arm, parallel, randomized controlled trial was conducted. Children and their primary caregiver living in Auckland, New Zealand were recruited via schools, community centres, and word of mouth. The intervention, delivered over 20 weeks, consisted of a face-to-face meeting with the parent/caregiver and the child to deliver intervention content, which focused on training and educating them to use a wide range of strategies designed to reduce their child's screen time. Families were given Time Machine TV monitoring devices to assist with allocating screen time, activity packages to promote alternative activities, online support via a website, and monthly newsletters. Control participants were given the intervention material on completion of follow-up. The primary outcome was change in children's BMI z-score from baseline to 24 weeks. Children (n = 251) aged 9-12 years and their primary caregiver were randomized to receive the SWITCH intervention (n = 127) or no intervention (controls; n = 124). There was no significant difference in change of zBMI between the intervention and control groups, although a favorable trend was observed (-0.016; 95% CI: -0.084, 0.051; p = 0.64). There were also no significant differences in secondary outcomes, except for a trend towards...

  11. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
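As an illustration of the two approaches the abstract contrasts, the sketch below computes a naive quaternion barycenter (re-normalized) and an intrinsic Riemannian (Karcher) mean via iterated log/exp maps, using scipy's Rotation class. This is a generic textbook construction, not the paper's exact algorithm.

```python
# Barycentric vs. Riemannian (Karcher) averaging of rotations.
# Generic construction for illustration; not the paper's exact method.

import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(3)
rots = R.random(20, random_state=rng)        # rotations to average

# 1) Barycenter of quaternions, re-normalized (signs aligned first,
#    since q and -q represent the same rotation).
quats = rots.as_quat()
quats *= np.sign(quats @ quats[0])[:, None]
q_bar = quats.mean(axis=0)
barycentric_mean = R.from_quat(q_bar / np.linalg.norm(q_bar))

# 2) Karcher mean: iterate the mean of log-maps about the current estimate.
mean = barycentric_mean                      # barycenter is a good start
for _ in range(20):
    delta = (mean.inv() * rots).as_rotvec().mean(axis=0)
    if np.linalg.norm(delta) < 1e-12:
        break
    mean = mean * R.from_rotvec(delta)

diff_deg = np.degrees((barycentric_mean.inv() * mean).magnitude())
print(f"barycentric vs Karcher mean differ by {diff_deg:.4f} degrees")
```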

  12. Impact of a smoking ban in hospitality venues on second hand smoke exposure: a comparison of exposure assessment methods.

    Science.gov (United States)

    Rajkumar, Sarah; Huynh, Cong Khanh; Bauer, Georg F; Hoffmann, Susanne; Röösli, Martin

    2013-06-04

In May 2010, Switzerland introduced a heterogeneous smoking ban in the hospitality sector. While the law leaves room for exceptions in some cantons, it is comprehensive in others. This longitudinal study uses different measurement methods to examine airborne nicotine levels in hospitality venues and the level of personal exposure of non-smoking hospitality workers before and after implementation of the law. Personal exposure to second hand smoke (SHS) was measured by three different methods: we compared a passive sampler, the MoNIC (Monitor of NICotine) badge, to salivary cotinine and nicotine concentrations as well as questionnaire data. Badges allowed the number of passively smoked cigarettes to be estimated. They were placed at the venues as well as distributed to the participants for personal measurements. To assess personal exposure at work, a time-weighted average of the workplace badge measurements was calculated. Prior to the ban, smoke-exposed hospitality venues yielded a mean badge value of 4.48 (95% CI: 3.70 to 5.25; n = 214) cigarette equivalents/day. At follow-up, measurements in venues that had implemented a smoking ban significantly declined to an average of 0.31 (0.17 to 0.45; n = 37) (p = 0.001). Personal badge measurements also significantly decreased, from an average of 2.18 (1.31-3.05; n = 53) to 0.25 (0.13-0.36; n = 41) (p = 0.001). Spearman rank correlations between badge exposure measures and salivary measures were small to moderate (0.3 at maximum). Nicotine levels significantly decreased in all types of hospitality venues after implementation of the smoking ban. In-depth analyses demonstrated that a time-weighted average of the workplace badge measurements represented typical personal SHS exposure at work more reliably than personal exposure measures such as salivary cotinine and nicotine.

  13. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  14. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  15. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  16. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices.

  17. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
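For reference, the average-energy objective can be written as the long-run mean of the running energy level. One standard formalization consistent with the abstract (notation ours), for a play whose j-th step has energy reward w(e_j):

```latex
\mathrm{AE} \;=\; \limsup_{n \to \infty} \frac{1}{n}
\sum_{i=1}^{n} \underbrace{\Bigl( \sum_{j=1}^{i} w(e_j) \Bigr)}_{\text{energy level after } i \text{ steps}}
```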

  18. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  19. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  20. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended to the case in which there is a neutron gas, instead of vacuum, on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, through the neutron-drip line, to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  1. Intra- and inter-basin mercury comparisons: Importance of basin scale and time-weighted methylmercury estimates

    International Nuclear Information System (INIS)

    Bradley, Paul M.; Journey, Celeste A.; Brigham, Mark E.; Burns, Douglas A.; Button, Daniel T.; Riva-Murray, Karen

    2013-01-01

To assess the inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate relations were evaluated at nested-basin locations in the Edisto River, South Carolina and the Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to differences in wetland coverage) but not in the Hudson. Total mercury (THg) concentrations and bivariate relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability across basin sizes if wetland area or related MeHg-source-area metrics are considered. - Highlights: ► National scale mercury assessments integrate small scale study results. ► Basin scale differences and representativeness of fluvial mercury samples are concerns. ► Wetland area, not basin size, predicts inter-basin methylmercury variability. ► Time-weighted methylmercury estimates improve the prediction of mercury in basin fish. - Fluvial methylmercury concentration correlates with wetland area, not basin scale, and time-weighted estimates better predict basin top predator mercury than discrete sample estimates.
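The improved correlation came from replacing discrete-sample means with time-weighted means built from modeled daily concentrations. A sketch of that averaging step (the regression model itself is stood in by a hypothetical placeholder):

```python
# Sketch: time-weighted mean annual concentration from modeled daily
# values, as opposed to a mean of a few discrete samples. daily_conc()
# stands in for a basin-specific LOADEST-type regression (hypothetical).

import numpy as np

rng = np.random.default_rng(4)
flow = rng.lognormal(3.0, 0.6, size=365)         # daily streamflow (assumed)

def daily_conc(q):
    """Placeholder concentration-discharge relation, ng/L (assumed)."""
    return 0.2 * q ** 0.3

conc = daily_conc(flow)
time_weighted_mean = conc.mean()                 # equal daily weights over the year
discrete_mean = conc[rng.choice(365, 8)].mean()  # e.g., 8 sampling visits

print(f"time-weighted: {time_weighted_mean:.3f} ng/L, "
      f"discrete-sample: {discrete_mean:.3f} ng/L")
```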

  2. Sources of Exposure to ELF Fields at Workplaces (invited paper)

    International Nuclear Information System (INIS)

    Vecchia, P.

    1999-01-01

In most workplaces people are exposed to a variety of extremely low frequency (ELF) fields. So-called 'electric jobs' have been identified, where the exposure is likely to significantly exceed average residential and occupational levels. The classification is, however, questionable, and in general no reliable exposure data can be associated with different jobs. Most recent epidemiological studies have focused on areas where ELF fields are quite intense, sources are clearly identified, and actual field levels can be measured. Such areas include electric power generation, transmission and distribution, electric transport, and arc welding. In this paper, typical values of magnetic fields in these workplaces are presented and discussed. The time-weighted magnetic field values for the same jobs are not very consistent across the studies. This indicates that not only job titles, but also measurements of the magnetic field, are poor surrogates for the actual exposure, which is best assessed by the use of dosemeters. (author)

  3. Influence of dispatching rules on average production lead time for multi-stage production systems.

    Science.gov (United States)

    Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus

    2013-08-01

In this paper the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on the covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 analytically links the average production lead time to the "processing time weighted production lead time" for multi-stage production systems. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, is proved theoretically in Theorem 2 for a single-stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant to the applied dispatching rule, although it can be used as a dispatching-rule-independent indicator for single-stage production systems.
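The covariance link referred to in Theorem 1 can be written compactly. One standard identity of this kind (our notation, not a quotation of the theorem) relates the processing-time-weighted average lead time to the plain average:

```latex
\bar{L}_{P} \;=\; \frac{\mathbb{E}[P\,L]}{\mathbb{E}[P]}
\;=\; \bar{L} \;+\; \frac{\operatorname{Cov}(P, L)}{\bar{P}},
```

where L is the production lead time, P the processing time, and the weighted average reduces to the plain average exactly when P and L are uncorrelated.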

  4. The philosophy and assumptions underlying exposure limits for ionising radiation, inorganic lead, asbestos and noise

    International Nuclear Information System (INIS)

    Akber, R.

    1996-01-01

A review of the literature relating to exposure to, and exposure limits for, ionising radiation, inorganic lead, asbestos and noise was undertaken. The four hazards were chosen because they were insidious and ubiquitous, were potential hazards in both occupational and environmental settings, and had early and late effects depending on dose and dose rate. For all four hazards, the effect of the hazard was enhanced by other exposures such as smoking or organic solvents. In the cases of inorganic lead and noise, there were documented health effects affecting a significant percentage of the exposed populations at or below the [effective] exposure limits. This was not the case for ionising radiation and asbestos. None of the exposure limits considered exposure to multiple mutagens/carcinogens in the calculation of risk. Ionising radiation was the only one of the hazards to have a model of all likely exposures - occupational, environmental and medical - as the basis for its exposure limits. The other three considered occupational exposure in isolation from environmental exposure. Inorganic lead and noise had economic considerations underlying their exposure limits, and the exposure limits for asbestos were based on the current limit of detection. All four hazards had many variables associated with exposure, including idiosyncratic factors, that made modelling the risk very complex. The scientific idea of a time weighted average based on an eight-hour day and forty-hour week, on which the exposure limits for lead, asbestos and noise were based, was underpinned by neither empirical evidence nor scientific hypothesis. The methodology of the ACGIH in the setting of limits later brought into law may have been unduly influenced by the industries most closely affected by those limits. Measuring exposure over part of an eight-hour day and extrapolating to model exposure over the longer term is not the most effective way to model exposure. The statistical techniques used...

  5. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of the use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
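Stated symbolically (notation ours): for a variable v and weighting functions w₁, w₂ with ratio r = w₂/w₁,

```latex
\bar{v}_{w_2} - \bar{v}_{w_1}
\;=\; \frac{\operatorname{Cov}_{w_1}(v, r)}{\mathbb{E}_{w_1}[r]},
\qquad \text{where } \bar{v}_{w} := \frac{\int v\,w}{\int w},
```

which follows because the w₁-weighted mean of v·r equals the w₂-weighted mean of v times the w₁-weighted mean of r, together with the definition of the covariance.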

  6. Dose-response relationship between noise exposure and the risk of occupational injury

    Directory of Open Access Journals (Sweden)

    Jin-Ha Yoon

    2015-01-01

Many workers worldwide experience fatality and disability caused by occupational injuries. This study examined the relationship between noise exposure and occupational injuries at factories in Korea. A total of 1790 factories located in northern Gyeonggi Province, Korea, were evaluated. The time-weighted average levels of dust and noise exposure were taken from Workplace Exposure Assessment data. Sports events, traffic accidents, and other accidents occurring outside workplaces were excluded from the occupational injuries considered. The incidences of occupational injury in each factory were calculated from data from the Korea Workers' Compensation and Welfare Services. Workplaces were classified according to the incidence of any occupational injuries (incident or non-incident workplaces, respectively). Workplace dust exposure was classified as high or low, and noise exposure as <80, 80-89, or ≥90 dB. Workplaces with high noise exposure were significantly associated with being incident workplaces, whereas workplaces with high dust exposure were not. The odds ratios (95% confidence intervals) derived from a logistic regression model were 1.68 (1.27-2.24) and 3.42 (2.26-5.17) at 80-89 dB and ≥90 dB versus <80 dB. These associations remained significant in separate analyses according to high or low dust exposure level. Noise exposure increases the risk of occupational injury in the workplace. Furthermore, the risk of occupational injury increases with noise exposure level in a dose-response relationship. Therefore, strategies for reducing noise exposure levels are required to decrease the risk of occupational injury.
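The odds ratios in this record come from a logistic regression of injury status on noise-exposure category. A sketch of that computation on synthetic data (the variable names and data are ours, not the study's):

```python
# Sketch: odds ratios with 95% CIs from a logistic regression of
# workplace injury status on noise-exposure category, mirroring the
# analysis described. The data here are synthetic.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1790
noise = rng.choice(["<80", "80-89", ">=90"], size=n, p=[0.5, 0.35, 0.15])
logit_p = -1.0 + 0.5 * (noise == "80-89") + 1.2 * (noise == ">=90")
injured = rng.random(n) < 1 / (1 + np.exp(-logit_p))

df = pd.DataFrame({"injured": injured.astype(int),
                   "noise": pd.Categorical(noise, ["<80", "80-89", ">=90"])})
fit = smf.logit("injured ~ noise", data=df).fit(disp=False)

odds_ratios = np.exp(fit.params)         # exponentiated coefficients = ORs
ci = np.exp(fit.conf_int())              # 95% confidence intervals
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```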

  7. Personal exposure to ultrafine particles.

    Science.gov (United States)

    Wallace, Lance; Ott, Wayne

    2011-01-01

Personal exposure to ultrafine particles (UFP) can occur while people are cooking, driving, smoking, operating small appliances such as hair dryers, or eating out in restaurants. These exposures can often be higher than outdoor concentrations. For 3 years, portable monitors were employed in homes, cars, and restaurants. More than 300 measurement periods in several homes were documented, along with 25 h of driving two cars, and 22 visits to restaurants. Cooking on gas or electric stoves and in electric toaster ovens was a major source of UFP, with peak personal exposures often exceeding 100,000 particles/cm³ and estimated emission rates in the neighborhood of 10¹² particles/min. Other common sources of high UFP exposures were cigarettes, a vented gas clothes dryer, an air popcorn popper, candles, an electric mixer, a toaster, a hair dryer, a curling iron, and a steam iron. Relatively low indoor UFP emissions were noted for a fireplace, several space heaters, and a laser printer. Driving resulted in moderate exposures averaging about 30,000 particles/cm³ in each of two cars driven on 17 trips on major highways on the East and West Coasts. Most of the restaurants visited maintained consistently high levels of 50,000-200,000 particles/cm³ for the entire length of the meal. The indoor/outdoor ratios of size-resolved UFP were much lower than those for PM₂.₅ or PM₁₀, suggesting that outdoor UFP have difficulty penetrating a home. This in turn implies that outdoor concentrations of UFP have only a moderate effect on personal exposures if indoor sources are present. A time-weighted scenario suggests that for typical suburban nonsmoker lifestyles, indoor sources provide about 47% and outdoor sources about 36% of total daily UFP exposure, with in-vehicle exposures adding the remainder (17%). However, the effect of one smoker in the home results in an overwhelming increase in the importance of indoor sources (77% of the total).
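The 47/36/17% split is time-weighted bookkeeping: each microenvironment contributes concentration × time to the daily total. A sketch with assumed concentrations and durations (not the paper's numbers):

```python
# Sketch of the time-weighted exposure bookkeeping behind the reported
# indoor/outdoor/vehicle split: each microenvironment contributes
# concentration x time. Concentrations and durations are assumed.

envs = {                      # (UFP particles/cm^3, hours/day) -- assumed
    "home (indoor sources)": (15000, 3.0),
    "home (outdoor origin)": (4000, 14.0),
    "vehicle":               (30000, 1.5),
    "other outdoor":         (8000, 5.5),
}

total = sum(c * t for c, t in envs.values())
for name, (c, t) in envs.items():
    print(f"{name:>22s}: {100 * c * t / total:5.1f}% of daily exposure")
```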

  8. Assessment of Human Exposure to Magnetic Fields Produced by Domestic Appliances (invited paper)

    International Nuclear Information System (INIS)

    Preece, A.W.; Kaune, W.T.; Grainger, P.; Golding, J.

    1999-01-01

    A study of 50 homes and their appliances examined whether a detailed appliance-use questionnaire and survey would yield data comparable with direct personal monitoring. This was coupled with direct measurement of the appliances in use to determine the field at 50 cm and 1 m. The findings were that individual time-weighted average (TWA) exposures calculated from questionnaire and activity diaries in conjunction with the appliance magnetic field were unrelated to actual personal exposure measurement. It was concluded that questionnaires are of little or no value for TWA estimation. However, peak exposure and short-term temporal variability could be modelled in subjects spending at least 15 min per day within 1 m of an operating microwave cooker or conventional cooker. This method could be extended to other appliances. (author)

  9. Formaldehyde exposure in U.S. industries from OSHA air sampling data.

    Science.gov (United States)

    Lavoue, Jerome; Vincent, Raymond; Gerin, Michel

    2008-09-01

    National occupational exposure databanks have been cited as sources of exposure data for exposure surveillance and exposure assessment for occupational epidemiology. Formaldehyde exposure data recorded in the U.S. Integrated Management Information System (IMIS) between 1979 and 2001 were collected to elaborate a multi-industry retrospective picture of formaldehyde exposures and to identify exposure determinants. Due to the database design, only detected personal measurement results (n = 5228) were analyzed with linear mixed-effect models, which explained 29% of the total variance. Short-term measurement results were higher than time-weighted average (TWA) data and decreased by 18% per year until 1987 and by 5% per year after that (TWA data: 5% and 4% per year, respectively). Exposure varied across industries, with maximal estimated TWA geometric means (GM) for 2001 in the reconstituted wood products, structural wood members, and wood dimension and flooring industries (GM = 0.20 mg/m³). The highest short-term GMs estimated for 2001 were in the funeral service and crematory and reconstituted wood products industries (GM = 0.35 mg/m³). Exposure levels in IMIS were marginally higher during nonprogrammed inspections compared with programmed inspections. Increasing exterior temperature tended to be associated with decreasing exposure levels for cold temperatures (-5% per 5 °C for T < 15 °C). Concentrations measured during the same inspection were correlated and varied differently across industries and sample type (TWA, short term). Sensitivity analyses using Tobit regression suggested that the average bias caused by excluding non-detects is approximately 30%, being potentially higher for short-term data if many non-detects were actually short-term measurements. Although limited by availability of relevant exposure determinants and potential selection biases in IMIS, these results provide useful insight on formaldehyde occupational exposure in the United States over the last two decades.

  10. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
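
    The underlying effect is Jensen's inequality: exponentiating a mean of logarithms understates the linear mean, increasingly so as variability grows. A small synthetic illustration (a stand-in for, not a reproduction of, the paper's system simulator):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Lognormal "abundances" whose log-space spread mimics local natural
    # variability; purely illustrative parameters.
    for sigma in (0.1, 0.5, 1.0):
        x = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)
        linear_mean = x.mean()                 # average the abundances
        log_mean = np.exp(np.log(x).mean())    # average the log-retrievals
        # For a lognormal, E[x] = exp(mu + sigma^2/2) > exp(E[log x]) = exp(mu),
        # so the two averages diverge as variability grows.
        print(f"sigma={sigma}: linear mean={linear_mean:.3f}, "
              f"exp(mean log)={log_mean:.3f}, "
              f"ratio={linear_mean / log_mean:.2f}")
    ```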

  11. Temporal variability of daily personal magnetic field exposure metrics in pregnant women.

    Science.gov (United States)

    Lewis, Ryan C; Evenson, Kelly R; Savitz, David A; Meeker, John D

    2015-01-01

    Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over 7 consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single-day metric to accurately classify participants into exposure categories based on a weeklong metric depended on the selected exposure threshold, with sensitivity decreasing with increasing exposure threshold. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than 1 day of measurement is needed over the window of disease susceptibility to minimize measurement error, but 1 day may be sufficient for central tendency metrics.
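
    Reliability grades such as "substantial" or "fair" in this literature are typically intraclass correlation coefficients computed over the repeated days. A hedged sketch with simulated daily TWA values (illustrative parameters only, not the study's data) and a one-way random-effects ICC:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_subjects, n_days = 100, 7

    # Simulated daily TWA magnetic-field metrics: a stable between-person
    # component plus day-to-day noise.
    person_mean = rng.lognormal(mean=-0.5, sigma=0.4, size=(n_subjects, 1))
    daily_twa = person_mean + rng.normal(scale=0.15, size=(n_subjects, n_days))

    def icc_oneway(x):
        """One-way random-effects ICC(1): between-subject variance share."""
        n, k = x.shape
        grand = x.mean()
        msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between
        msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    print(f"ICC of daily TWA over {n_days} days: {icc_oneway(daily_twa):.2f}")
    ```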

  12. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  13. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  14. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  15. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables

  16. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  17. Temporal and Other Exposure Aspects of Residential Magnetic Fields Measurement in Relation to Acute Lymphoblastic Leukaemia in Children: The National Cancer Institute Children's Cancer Group Study (invited paper)

    International Nuclear Information System (INIS)

    Baris, D.; Linet, M.; Auvinen, A.; Kaune, W.T.; Wacholder, S.; Kleinerman, R.; Hatch, E.; Robison, L.; Niwa, S.; Haines, C.; Tarone, R.E.

    1999-01-01

    Case-control studies have used a variety of measurements to evaluate the relationship of children's exposure to magnetic fields (50 or 60 Hz) with childhood leukaemia and other childhood cancers. In the absence of knowledge about which exposure metrics may be biologically meaningful, studies during the past 10 years have often used time-weighted average (TWA) summaries of home measurements. Recently, other exposure metrics have been suggested, usually based on theoretical considerations or limited laboratory data. In this paper, the rationale and associated preliminary studies undertaken are described as well as feasibility and validity issues governing the choice of the primary magnetic field exposure assessment methods and summary metric used to estimate children's exposure in the National Cancer Institute/Children's Cancer Group (NCI/CCG) case-control study. Also provided are definitions and discussion of the strengths and weaknesses of the various exposure metrics used in exploratory analyses of the NCI/CCG measurement data. Exposure metrics evaluated include measures of central tendency (mean, median, 30th to 70th percentiles), peak exposures (90th and higher percentiles, peak values of the 24 h measurements), and measurements of short-term temporal variability (rate of change). This report describes correlations of the various metrics with the time-weighted average for the 24 h period (TWA-24-h). Most of the metrics were found to be positively and highly correlated with TWA-24-h, but lower correlations of TWA-24-h with peak exposure and with rate of change were observed. To examine further the relation between TWA and alternative metrics, similar exploratory analysis should be considered for existing data sets and for forthcoming measurement investigations of residential magnetic fields and childhood leukaemia. (author)

  18. Dose — response relationship between noise exposure and the risk of occupational injury

    Science.gov (United States)

    Yoon, Jin-Ha; Hong, Jeong-Suk; Roh, Jaehoon; Kim, Chi-Nyon; Won, Jong-Uk

    2015-01-01

    Many workers worldwide experience fatality and disability caused by occupational injuries. This study examined the relationship between noise exposure and occupational injuries at factories in Korea. A total of 1790 factories located in northern Gyeonggi Province, Korea were evaluated. The time-weighted average levels of dust and noise exposure were taken from Workplace Exposure Assessment data. Sports events, traffic accidents, and other accidents occurring outside workplaces were excluded from occupational injuries. The incidences of occupational injury in each factory were calculated using data from the Korea Workers' Compensation and Welfare Services. Workplaces were classified according to the incidence of any occupational injuries (incident or nonincident workplaces, respectively). Workplace dust exposure was classified as high or low, and noise exposure as <80 dB, 80-89 dB, or ≥90 dB. Workplaces with high noise exposure were significantly associated with being incident workplaces, whereas workplaces with high dust exposure were not. The odds ratios (95% confidence intervals) derived from a logistic regression model were 1.68 (1.27-2.24) and 3.42 (2.26-5.17) at 80-89 dB and ≥90 dB versus <80 dB. These associations remained significant in separate analyses according to high or low dust exposure level. Noise exposure increases the risk of occupational injury in the workplace. Furthermore, the risk of occupational injury increases with noise exposure level in a dose-response relationship. Therefore, strategies for reducing noise exposure level are required to decrease the risk of occupational injury. PMID:25599757

  19. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  20. Assessment of Industrial Exposure to Magnetic Fields (invited paper)

    International Nuclear Information System (INIS)

    Chadwick, P.

    1999-01-01

    Magnetic field strengths produced by industrial processes can be very large, but they often exhibit a marked spatial variation. Whilst there may be the potential for exposures of workers to be high, actual exposure will be determined to a great extent by working practices. Possible metrics for epidemiological studies might be based on the temporal variability of exposure as well as maximum operator exposure or time-weighted average exposure and, whilst it might be possible to estimate these quantities from spot magnetic field strength measurements and observed working practices, this might be very difficult to achieve in practice. An alternative would be the use of a logging dosemeter: this paper describes some of the results of exposure assessments carried out in industrial environments with a modified EMDEX II magnetic field dosemeter. Magnetic fields in industrial environments often have waveforms which are not purely sinusoidal. Distortion can be introduced by the magnetic saturation of transformer and motor cores, by rectification, by poor matching between oscillator circuits and loads and when thyristors are used to control power. The resulting repetitive but non-sinusoidal magnetic field waveforms can be recorded and analysed; the spectral data may be incorporated into possible exposure metrics. It is also important to ensure that measurement instrumentation is responding appropriately in a non-sinusoidal field and this can only be done if the spectral content of the field is characterised fully. Some non-sinusoidal magnetic field waveforms cannot be expressed as a harmonic series. Specialist instrumentation and techniques are needed to assess exposure to such fields. Examples of approaches to the assessment of exposure to repetitive and non-repetitive magnetic fields are also discussed. (author)

  1. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  2. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs is graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
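
    A minimal numerical sketch of a first-order ARMA graph-filter recursion of the kind such a family builds on; the graph, shift operator, and coefficients below are assumptions for illustration, not the paper's design procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Small random undirected graph and its normalized Laplacian.
    n = 20
    A = (rng.random((n, n)) < 0.2).astype(float)
    A = np.triu(A, 1); A = A + A.T
    d = A.sum(axis=1); d[d == 0] = 1.0
    L = np.eye(n) - A / np.sqrt(np.outer(d, d))     # normalized Laplacian
    S = np.eye(n) - L                               # shift operator, |eig| <= 1

    # First-order ARMA recursion: y <- psi * S @ y + phi * x, which for
    # |psi| < 1 converges to y = phi * (I - psi*S)^(-1) x, i.e. a rational
    # graph-frequency response phi / (1 - psi*lambda).
    psi, phi = 0.5, 1.0
    x = rng.normal(size=n)                          # graph signal to filter
    y = np.zeros(n)
    for _ in range(50):
        y = psi * (S @ y) + phi * x

    y_exact = phi * np.linalg.solve(np.eye(n) - psi * S, x)
    print("max |iterative - closed form|:", np.abs(y - y_exact).max())
    ```

    The recursion is attractive for distributed settings because each update needs only one multiplication by the shift operator, i.e. one exchange with graph neighbours.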

  3. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  4. Timing and Duration of Traffic-related Air Pollution Exposure and the Risk for Childhood Wheeze and Asthma.

    Science.gov (United States)

    Brunst, Kelly J; Ryan, Patrick H; Brokamp, Cole; Bernstein, David; Reponen, Tiina; Lockey, James; Khurana Hershey, Gurjit K; Levin, Linda; Grinshpun, Sergey A; LeMasters, Grace

    2015-08-15

    The timing and duration of traffic-related air pollution (TRAP) exposure may be important for childhood wheezing and asthma development. We examined the relationship between TRAP exposure and longitudinal wheezing phenotypes and asthma at age 7 years. Children completed clinical examinations annually from age 1 year through age 4 years and age 7 years. Parental-reported wheezing was assessed at each age, and longitudinal wheezing phenotypes (early-transient, late-onset, persistent) and asthma were defined at age 7 years. Participants' time-weighted exposure to TRAP, from birth through age 7 years, was estimated using a land-use regression model. The relationship between TRAP exposure and wheezing phenotypes and asthma was examined. High TRAP exposure at birth was significantly associated with both transient and persistent wheezing phenotypes (adjusted odds ratio [aOR] = 1.64; 95% confidence interval [CI], 1.04-2.57 and aOR = 2.31; 95% CI, 1.28-4.15, respectively); exposure from birth to age 1 year and age 1 to 2 years was also associated with persistent wheeze. Only children with high average TRAP exposure from birth through age 7 years were at significantly increased risk for asthma (aOR = 1.71; 95% CI, 1.01-2.88). Early-life exposure to TRAP is associated with increased risk for persistent wheezing, but only long-term exposure to high levels of TRAP throughout childhood was associated with asthma development.

  5. Personal exposures to NO2 in the EXPOLIS-study: relation to residential indoor, outdoor and workplace concentrations in Basel, Helsinki and Prague

    International Nuclear Information System (INIS)

    Kousa, A.; Rotko, T.; Alm, S.; Monn, C.

    2001-01-01

    Personal exposures, residential indoor, outdoor and workplace levels of nitrogen dioxide (NO₂) were measured for 262 urban adult (25-55 years) participants in three EXPOLIS centres (Basel, Switzerland; Helsinki, Finland; and Prague, Czech Republic) using passive samplers for 48-h sampling periods during 1996-1997. The average residential outdoor and indoor NO₂ levels were lowest in Helsinki (24 ± 12 and 18 ± 11 μg/m³, respectively), highest in Prague (61 ± 20 and 43 ± 23 μg/m³), with Basel in between (36 ± 13 and 27 ± 13 μg/m³). Average workplace NO₂ levels, however, were highest in Basel (36 ± 24 μg/m³), lowest in Helsinki (27 ± 15 μg/m³), with Prague in between (30 ± 18 μg/m³). A time-weighted microenvironmental exposure model explained 74% of the personal exposure variation across all centres and on average 88% of the exposures. Log-linear regression models, using residential outdoor measurements (fixed site monitoring) combined with residential and work characteristics (i.e. work location, using gas appliances and keeping windows open), explained 48% (37%) of the personal NO₂ exposure variation. Regression models based on ambient fixed site concentrations alone explained only 11-19% of personal NO₂ exposure variation. Thus, ambient fixed site monitoring alone was a poor predictor of personal NO₂ exposure variation, but adding personal questionnaire information can significantly improve the predictive power. (Author)

  6. Risk of Lung Cancer and Indoor Radon Exposure in France

    International Nuclear Information System (INIS)

    Baysson, H.; Tirmarche, M.; Tymen, G.; Ducloy, F.; Laurier, D.

    2004-01-01

    It is well established that radon exposure increases risks of lung cancer among underground miners. To estimate the lung cancer risk linked to indoor radon exposure, a hospital-based case-control study was carried out in France, with a focus on precise reconstruction of past indoor radon exposure over the 30 years preceding the lung cancer diagnosis. The investigation took place from 1992 to 1998 in four regions of France: Auvergne, Brittany, Languedoc and Limousin. During face-to-face interviews a standardized questionnaire was used to ascertain demographic characteristics, information on active and passive smoking, occupational exposure, medical history as well as extensive details on residential history. Radon concentrations were measured in the dwellings where subjects had lived at least one year during the 5-30 year period before interview. Measurements of radon concentrations were performed during a 6-month period, using two Kodalpha LR 115 detectors, one in the living room and one in the bedroom. The time-weighted average (TWA) radon concentration for a subject during the 5-30 year period before interview was based on radon concentrations over all addresses occupied by the subject, weighted by the number of years spent at each address. For the time intervals without available measurements, we imputed the region-specific arithmetic average of radon concentrations for measured addresses of control subjects. Lung cancer risk was examined in relation to indoor radon exposure after adjustment for age, sex, region, cigarette smoking and occupational exposure. The estimated relative risk per 100 Bq/m³ was 1.04, at the borderline of statistical significance (95 percent confidence interval: 0.99, 1.1). These results are in agreement with results from other indoor radon case-control studies and with extrapolations from underground miners studies. (Author) 31 refs
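
    The exposure reconstruction described here is a years-weighted average over the residential history, with a regional mean imputed for unmeasured periods. A minimal sketch with invented numbers:

    ```python
    # Hypothetical residential history over the 5-30 year exposure window:
    # (years at address, measured radon in Bq/m^3, or None when the dwelling
    # could not be measured). Values are illustrative only.
    history = [(10, 120.0), (8, None), (7, 60.0)]

    REGIONAL_MEAN = 90.0   # assumed region-specific mean used for imputation

    # Impute the regional arithmetic mean for unmeasured periods, then
    # weight each concentration by years of residence.
    weighted = sum(years * (conc if conc is not None else REGIONAL_MEAN)
                   for years, conc in history)
    twa = weighted / sum(years for years, _ in history)
    print(f"time-weighted average radon: {twa:.0f} Bq/m^3")
    ```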

  7. Estimation of occupational and nonoccupational nitrogen dioxide exposure for Korean taxi drivers using a microenvironmental model

    International Nuclear Information System (INIS)

    Son, Busoon; Yang, Wonho; Breysse, Patrick; Chung, Taewoong; Lee, Youngshin

    2004-01-01

    Occupational and nonoccupational personal nitrogen dioxide (NO₂) exposures were measured using passive samplers for 31 taxi drivers in Asan and Chunan, Korea. Exposures were also estimated using a microenvironmental time-weighted average model based on indoor, outdoor and inside-the-taxi measurements. Mean NO₂ concentrations inside and outside the taxi drivers' houses were 24.7±10.7 and 23.3±8.3 ppb, respectively, with a mean indoor to outdoor NO₂ ratio of 1.1. Mean personal NO₂ exposure of taxi drivers was 30.3±9.7 ppb. Personal NO₂ exposures for drivers were more strongly correlated with interior vehicle NO₂ levels (r=0.89) than with indoor residential NO₂ levels (r=0.74) or outdoor NO₂ levels (r=0.71). The main source of NO₂ exposure for taxi drivers was considered to be occupational driving. Interestingly, the NO₂ exposures for drivers using LPG-fueled vehicles (26.3±1.3 ppb) were significantly lower than those (38.1±1.3 ppb) using diesel-fueled vehicles (P < 0.05). Personal NO₂ exposure was significantly associated with indoor and outdoor NO₂ levels of the residence and with interior vehicle NO₂ levels (P < 0.05). Taxi drivers may be exposed to elevated NO₂ levels because they drive diesel-using vehicles outdoors in Korea

  8. Screen-time Weight-loss Intervention Targeting Children at Home (SWITCH: A randomized controlled trial study protocol

    Directory of Open Access Journals (Sweden)

    Tsai Midi

    2011-06-01

    Full Text Available Abstract Background Approximately one third of New Zealand children and young people are overweight or obese. A similar proportion (33%) do not meet recommendations for physical activity, and 70% do not meet recommendations for screen time. Increased time being sedentary is positively associated with being overweight. There are few family-based interventions aimed at reducing sedentary behavior in children. The aim of this trial is to determine the effects of a 24 week home-based, family oriented intervention to reduce sedentary screen time on children's body composition, sedentary behavior, physical activity, and diet. Methods/Design The study design is a pragmatic two-arm parallel randomized controlled trial. Two hundred and seventy overweight children aged 9-12 years and primary caregivers are being recruited. Participants are randomized to intervention (family-based screen time intervention) or control (no change). At the end of the study, the control group is offered the intervention content. Data collection is undertaken at baseline and 24 weeks. The primary trial outcome is child body mass index (BMI) and standardized body mass index (zBMI). Secondary outcomes are change from baseline to 24 weeks in child percentage body fat; waist circumference; self-reported average daily time spent in physical and sedentary activities; dietary intake; and enjoyment of physical activity and sedentary behavior. Secondary outcomes for the primary caregiver include change in BMI and self-reported physical activity. Discussion This study provides an excellent example of a theory-based, pragmatic, community-based trial targeting sedentary behavior in overweight children. The study has been specifically designed to allow for estimation of the consistency of effects on body composition for Māori (indigenous), Pacific and non-Māori/non-Pacific ethnic groups. If effective, this intervention is imminently scalable and could be integrated within existing weight management services.

  9. Recommendations concerning an interim annual individual exposure limit for respirable quartz

    International Nuclear Information System (INIS)

    Stocker, H.; Horvath, F.J.; Napier, W.

    1983-07-01

    This paper presents AECB staff recommendations on the desirability of an annual individual occupational exposure limit for respirable quartz and on the magnitude of this limit, for uranium miners. Justifications are presented for the magnitude of this suggested limit for respirable quartz, drawing on experience gained in Ontario uranium and non-uranium mines and on that in other countries. The suggestion is made that an exposure limit be set for an interim period in order that additional information on the adequacy of the magnitude of the limit may be acquired. To complement the suggested exposure limit, it is proposed that a co-existing control program of action levels, to be triggered at various respirable quartz concentrations, be set up. It is the contention of this paper that the degree of protection afforded to individuals by the suggested exposure limit would be equivalent to the time-weighted average threshold limit value derived from recommendations, based on group average exposures, of the American Conference of Governmental Industrial Hygienists

  10. Effect of physical exertion on the biological monitoring of exposure to various solvents following exposure by inhalation in human volunteers: III. Styrene.

    Science.gov (United States)

    Truchon, Ginette; Brochu, Martin; Tardif, Robert

    2009-08-01

    This study evaluated the impact of different work load intensities on biological indicators of styrene exposure. Four adult Caucasian men, aged 20 to 44 years, were recruited. Groups of 2-4 volunteers were exposed to 20 ppm of styrene in an exposure chamber according to scenarios involving either aerobic, muscular, or both types of physical exercise for 3 or 7 hr. The target intensities for each 30-min exercise period (interspaced with 15 min at rest) were the following: REST, 38 watts AERO (time-weighted average intensity), 34 watts AERO/MUSC, 49 watts AERO/MUSC, and 54 watts AERO for 7 hr, and 22 watts MUSC for 3 hr. End-exhaled air samples were collected at 15 time points during and after 7-hr exposures for the determination of styrene concentrations. Urine samples were collected before the start of exposure, after the first 3 hr of exposure, and at the end of exposure for the determination of mandelic acid (MA) and phenylglyoxilic acid (PGA) concentrations. Compared with exposure at rest, styrene in alveolar air increased by a factor of up to 1.7, while the sum of urinary MA and PGA increased by a factor ranging from 1.2 to 3.5, depending on the exposure scenario. Concentrations of biological indicators of styrene fluctuated with physical exertion and were correlated with the magnitude of the physical activity and pulmonary ventilation. Despite the physical exertion effect, urinary concentrations of styrene metabolites after a single-day exposure remain below the current biological exposure index value recommended by ACGIH; therefore, no additional health risk is expected. However, results show that work load intensities must be considered in the interpretation of biological monitoring data and in the evaluation of the health risk associated with styrene exposure.

  11. Workplace exposure to nanoparticles and the application of provisional nanoreference values in times of uncertain risks

    Science.gov (United States)

    van Broekhuizen, Pieter; van Broekhuizen, Fleur; Cornelissen, Ralf; Reijnders, Lucas

    2012-03-01

    Nano reference values (NRVs) for occupational use of nanomaterials were tested as a provisional substitute for Occupational Exposure Limits (OELs). NRVs can be used as provisional limit values until health-based OELs or derived no-effect levels (DNEL) become available. NRVs were defined for 8 h periods (time-weighted average) and for short-term exposure periods (15 min time-weighted average). To assess the usefulness of these NRVs, airborne number concentrations of nanoparticles (NPs) in the workplace environment were measured during paint manufacturing, electroplating, light equipment manufacturing, non-reflective glass production, production of pigment concentrates and car refinishing. Activities monitored were handling of solid engineered NPs (ENP), abrasion, spraying and heating during occupational use of nanomaterials (containing ENPs) and machining nanosurfaces. The measured concentrations are often presumed to contain ENPs as well as process-generated NPs (PGNP). The PGNP are found to be a significant source of potential exposure and cannot be ignored in risk assessment. Levels of NPs identified in workplace air were up to several million nanoparticles/cm³. Conventional components in paint manufacturing like CaCO₃ and talc may contain a substantial amount of nanosized particulates giving rise to airborne nanoparticle concentrations. It is argued that risk assessments carried out for e.g. paint manufacturing processes using conventional non-nano components should take into account potential nanoparticle emissions as well. The concentrations measured were compared with particle-based NRVs and with mass-based values that have also been proposed for workers' protection. It is concluded that NRVs can be used for risk management of handling or processing of nanomaterials at workplaces, provided that the scope of NRVs is not limited to ENPs only, but extended to the exposure to process-generated NPs as well.
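
    As an illustration of how task-based counts roll up into the 8-h TWA that an NRV is compared against, here is a hedged sketch; the task profile, background level and the 40,000 particles/cm³ reference value are assumptions for the example, not values from the article:

    ```python
    # (task, minutes, number concentration in particles/cm^3) - hypothetical
    tasks = [("handling ENP powder", 120, 90_000),
             ("spraying",            60, 150_000),
             ("background/other",   300, 15_000)]

    SHIFT_MIN = 8 * 60
    worked = sum(minutes for _, minutes, _ in tasks)
    assert worked <= SHIFT_MIN

    # Any unsampled remainder of the shift is assigned the background level.
    background = 15_000
    integral = sum(m * c for _, m, c in tasks) + (SHIFT_MIN - worked) * background
    twa_8h = integral / SHIFT_MIN

    NRV_8H = 40_000  # assumed provisional 8-h TWA reference value, #/cm^3
    print(f"8-h TWA = {twa_8h:,.0f} #/cm^3 -> "
          f"{'exceeds' if twa_8h > NRV_8H else 'below'} NRV of {NRV_8H:,}")
    ```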

  12. Workplace exposure to nanoparticles and the application of provisional nanoreference values in times of uncertain risks

    International Nuclear Information System (INIS)

    Broekhuizen, Pieter van; Broekhuizen, Fleur van; Cornelissen, Ralf; Reijnders, Lucas

    2012-01-01

    Nano reference values (NRVs) for occupational use of nanomaterials were tested as a provisional substitute for Occupational Exposure Limits (OELs). NRVs can be used as provisional limit values until health-based OELs or derived no-effect levels (DNEL) become available. NRVs were defined for 8 h periods (time-weighted average) and for short-term exposure periods (15 min time-weighted average). To assess the usefulness of these NRVs, airborne number concentrations of nanoparticles (NPs) in the workplace environment were measured during paint manufacturing, electroplating, light equipment manufacturing, non-reflective glass production, production of pigment concentrates and car refinishing. Activities monitored were handling of solid engineered NPs (ENP), abrasion, spraying and heating during occupational use of nanomaterials (containing ENPs) and machining nanosurfaces. The measured concentrations are often presumed to contain ENPs as well as process-generated NPs (PGNP). The PGNP are found to be a significant source of potential exposure and cannot be ignored in risk assessment. Levels of NPs identified in workplace air were up to several million nanoparticles/cm³. Conventional components in paint manufacturing like CaCO₃ and talc may contain a substantial amount of nanosized particulates giving rise to airborne nanoparticle concentrations. It is argued that risk assessments carried out for e.g. paint manufacturing processes using conventional non-nano components should take into account potential nanoparticle emissions as well. The concentrations measured were compared with particle-based NRVs and with mass-based values that have also been proposed for workers' protection. It is concluded that NRVs can be used for risk management of handling or processing of nanomaterials at workplaces, provided that the scope of NRVs is not limited to ENPs only, but extended to the exposure to process-generated NPs as well.

  13. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  14. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal through adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.
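
    For contrast with the FTDA proposed here, conventional TDA is easy to state: slice the signal into whole periods and average them. A minimal sketch (with an integer number of samples per period, so no PCE arises; the FTDA itself additionally handles non-integer periods via frequency-domain sampling and the CZT):

    ```python
    import numpy as np

    fs = 1000                       # sampling rate, Hz
    period = 0.05                   # true rotation period, s (50 samples)
    t = np.arange(0, 2.0, 1 / fs)
    signal = (np.sin(2 * np.pi * t / period)            # fundamental
              + 0.3 * np.sin(6 * np.pi * t / period)    # 3rd harmonic
              + 1.0 * np.random.default_rng(3).normal(size=t.size))  # noise

    samples_per_rev = round(period * fs)                # exact here: no PCE
    n_revs = signal.size // samples_per_rev
    segments = signal[:n_revs * samples_per_rev].reshape(n_revs, samples_per_rev)
    averaged = segments.mean(axis=0)    # noise power drops by ~1/n_revs

    print(f"averaged one revolution from {n_revs} segments")
    ```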

  15. Radon daughter exposure estimation and its relation to the exposure limit

    International Nuclear Information System (INIS)

    Stocker, H.

    1981-10-01

    Under current Atomic Energy Control Regulations, the annual limit for individual exposure to radon daughters is 4 WLM. The Regulations do not specify how the exposure is to be determined nor to what accuracy the measurements should be made. This paper discusses the historical and conventional grab-sampling and time-weighting methods for assigning exposures to radon daughters in uranium mines in Canada. As a further step in the evolution of exposure assignments, the system of personal radon daughter dosimetry is introduced as the more accurate means of assigning individual exposures and of adhering to the intent of the exposure limit

  16. Temporal and Other Exposure Aspects of Residential Magnetic Fields Measurement in Relation to Acute Lymphoblastic Leukaemia in Children: The National Cancer Institute Children's Cancer Group Study (invited paper)

    Energy Technology Data Exchange (ETDEWEB)

    Baris, D.; Linet, M.; Auvinen, A.; Kaune, W.T.; Wacholder, S.; Kleinerman, R.; Hatch, E.; Robison, L.; Niwa, S.; Haines, C.; Tarone, R.E.

    1999-07-01

    Case-control studies have used a variety of measurements to evaluate the relationship of children's exposure to magnetic fields (50 or 60 Hz) with childhood leukaemia and other childhood cancers. In the absence of knowledge about which exposure metrics may be biologically meaningful, studies during the past 10 years have often used time-weighted average (TWA) summaries of home measurements. Recently, other exposure metrics have been suggested, usually based on theoretical considerations or limited laboratory data. In this paper, the rationale and associated preliminary studies undertaken are described as well as feasibility and validity issues governing the choice of the primary magnetic field exposure assessment methods and summary metric used to estimate children's exposure in the National Cancer Institute/Children's Cancer Group (NCI/CCG) case-control study. Also provided are definitions and discussion of the strengths and weaknesses of the various exposure metrics used in exploratory analyses of the NCI/CCG measurement data. Exposure metrics evaluated include measures of central tendency (mean, median, 30th to 70th percentiles), peak exposures (90th and higher percentiles, peak values of the 24 h measurements), and measurements of short-term temporal variability (rate of change). This report describes correlations of the various metrics with the time-weighted average for the 24 h period (TWA-24-h). Most of the metrics were found to be positively and highly correlated with TWA-24-h, but lower correlations of TWA-24-h with peak exposure and with rate of change were observed. To examine further the relation between TWA and alternative metrics, similar exploratory analysis should be considered for existing data sets and for forthcoming measurement investigations of residential magnetic fields and childhood leukaemia. (author)

  17. Exposure to methyl tert-butyl ether, benzene, and total hydrocarbons at the Singapore-Malaysia causeway immigration checkpoint

    Energy Technology Data Exchange (ETDEWEB)

    Tan, C.; Ong, H.Y.; Kok, P.W. [and others]

    1996-12-31

    The primary aim of this study was to determine the extent and levels of exposure to volatile organic compounds (VOCs) from automobile emissions in a group of immigration officers at a busy cross-border checkpoint. A majority (80%) of the workers monitored were exposed to benzene at levels between 0.01 and 0.5 ppm, with only 1.2% exceeding the current Occupational Safety and Health Administration occupational exposure limit of 1 ppm. The geometric mean (GM) concentrations of 8-hr time-weighted average exposure were 0.03 ppm, 0.09 ppm, and 2.46 ppm for methyl-tert-butyl ether (MTBE), benzene, and total hydrocarbons (THC), respectively. The highest time-weighted average concentrations measured were 1.05 ppm for MTBE, 2.01 ppm for benzene, and 34 ppm for THC. It was found that motorbikes emitted a more significant amount of pollutants compared with motor cars. On average, officers at the motorcycle booths were exposed to four to five times higher levels of VOCs (GMs of 0.07 ppm, 0.23 ppm, and 4.7 ppm for MTBE, benzene, and THC) than their counterparts at the motor car booths (GMs of 0.01 ppm, 0.05 ppm, and 1.5 ppm). The airborne concentrations of all three pollutants correlated with the flow of vehicle traffic. Close correlations were also noted for the concentrations in ambient air for the three pollutants measured. Benzene and MTBE had a correlation coefficient of 0.97. The overall findings showed that the concentrations of various VOCs were closely related to the traffic density, suggesting that they were from a common source, such as exhaust emissions from the vehicles. The results also indicated that although benzene, MTBE, and THC are known to be volatile, a significant amount could still be detected in the ambient environment, thus contributing to our exposure to these compounds. 4 refs., 6 figs.

  18. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  19. Titanium dioxide nanoparticles: occupational exposure assessment in the photocatalytic paving production

    International Nuclear Information System (INIS)

    Spinazzè, Andrea; Cattaneo, Andrea; Limonta, Marina; Bollati, Valentina; Bertazzi, Pier Alberto; Cavallo, Domenico M.

    2016-01-01

    Limited data are available regarding occupational exposure assessment to nano-sized titanium dioxide (nano-TiO₂). The objective of this study is to assess the occupational exposure of workers engaged in the application of nano-TiO₂ onto concrete building materials, by means of a multi-metric approach (mean diameter, number, mass and surface area concentrations). The measurement design consists of the combined use of (i) direct-reading instruments to evaluate the total particle number concentrations relative to the background concentration and the mean size-dependent characteristics of particles (mean diameter and surface area concentration) and to estimate the 8-h time-weighted average (8-h TWA) exposure to nano-TiO₂ for workers involved in different working tasks; and (ii) filter-based air sampling, used for the determination of size-resolved particle mass concentrations. A further estimation was performed to obtain the mean 8-h TWA exposure values expressed as mass concentrations (µg nano-TiO₂/m³). The multi-metric characterization of occupational exposure to nano-TiO₂ was significantly different both for different work environments and for each work task. Generally, workers were exposed to engineered nanoparticles (ENPs; <100 nm) at mean levels lower than the recommended reference values and proposed occupational exposure limits (40,000 particles/cm³; 300 µg/m³), and relevant exposures to peak concentrations were not likely to be expected. The estimated 8-h TWA exposure showed differences between the unexposed and exposed subjects. For the latter, further differences were defined between operators involved in different work tasks. This study provides information on nano-TiO₂ number and mass concentration, size distribution, particle diameter and surface area concentrations, which were used to obtain work shift-averaged exposures.

  20. Titanium dioxide nanoparticles: occupational exposure assessment in the photocatalytic paving production

    Energy Technology Data Exchange (ETDEWEB)

    Spinazzè, Andrea, E-mail: andrea.spinazze@uninsubria.it; Cattaneo, Andrea; Limonta, Marina [Università degli studi dell’Insubria, Dipartimento di Scienza e Alta Tecnologia (Italy); Bollati, Valentina; Bertazzi, Pier Alberto [Università degli Studi di Milano, EPIGET-Epidemiology, Epigenetics and Toxicology Lab, Dipartimento di Scienze Cliniche e di Comunità (Italy); Cavallo, Domenico M. [Università degli studi dell’Insubria, Dipartimento di Scienza e Alta Tecnologia (Italy)

    2016-06-15

    Limited data are available regarding occupational exposure assessment to nano-sized titanium dioxide (nano-TiO₂). The objective of this study is to assess the occupational exposure of workers engaged in the application of nano-TiO₂ onto concrete building materials, by means of a multi-metric approach (mean diameter, number, mass and surface area concentrations). The measurement design consists of the combined use of (i) direct-reading instruments to evaluate the total particle number concentrations relative to the background concentration and the mean size-dependent characteristics of particles (mean diameter and surface area concentration) and to estimate the 8-h time-weighted average (8-h TWA) exposure to nano-TiO₂ for workers involved in different working tasks; and (ii) filter-based air sampling, used for the determination of size-resolved particle mass concentrations. A further estimation was performed to obtain the mean 8-h TWA exposure values expressed as mass concentrations (µg nano-TiO₂/m³). The multi-metric characterization of occupational exposure to nano-TiO₂ was significantly different both for different work environments and for each work task. Generally, workers were exposed to engineered nanoparticles (ENPs; <100 nm) at mean levels lower than the recommended reference values and proposed occupational exposure limits (40,000 particles/cm³; 300 µg/m³), and relevant exposures to peak concentrations were not likely to be expected. The estimated 8-h TWA exposure showed differences between the unexposed and exposed subjects. For the latter, further differences were defined between operators involved in different work tasks. This study provides information on nano-TiO₂ number and mass concentration, size distribution, particle diameter and surface area concentrations, which were used to obtain work shift-averaged exposures.

  1. Smoke exposure at western wildfires.

    Science.gov (United States)

    Timothy E. Reinhardt; Roger D. Ottmar

    2000-01-01

    Smoke exposure measurements among firefighters at wildfires in the Western United States between 1992 and 1995 showed that, although most exposures were not significant, between 3 and 5 percent of the shift-average exposures exceeded occupational exposure limits for carbon monoxide and respiratory irritants. Exposure to benzene and total suspended particulate was not...

  2. Task-based exposure assessment of nanoparticles in the workplace

    International Nuclear Information System (INIS)

    Ham, Seunghon; Yoon, Chungsik; Lee, Euiseung; Lee, Kiyoung; Park, Donguk; Chung, Eunkyo; Kim, Pilje; Lee, Byoungcheun

    2012-01-01

    Although task-based sampling is, theoretically, a plausible approach to the assessment of nanoparticle exposure, few studies using this type of sampling have been published. This study characterized and compared task-based nanoparticle exposure profiles for engineered nanoparticle manufacturing workplaces (ENMW) and workplaces that generated welding fumes containing incidental nanoparticles. Two ENMW and two welding workplaces were selected for exposure assessments. Real-time devices were utilized to characterize the concentration profiles and size distributions of airborne nanoparticles. Filter-based sampling was performed to measure time-weighted average (TWA) concentrations, and off-line analysis was performed using an electron microscope. Workplace tasks were recorded by researchers to determine the concentration profiles associated with particular tasks/events. This study demonstrated that exposure profiles differ greatly in terms of concentrations and size distributions according to the task performed. The size distributions recorded during tasks were different both from those recorded during periods with no activity and from the background. The airborne concentration profiles of the nanoparticles varied according to not only the type of workplace but also the concentration metrics. The concentrations measured by surface area and the number concentrations measured by condensation particle counter, particulate matter 1.0, and TWA mass concentrations all showed a similar pattern, whereas the number concentrations measured by scanning mobility particle sizer indicated that the welding fume concentrations at one of the welding workplaces were unexpectedly higher than those at the workplaces engineering nanoparticles. This study suggests that a task-based exposure assessment can provide useful information regarding the exposure profiles of nanoparticles and can therefore be used as an exposure assessment tool.

  3. Exposure to MRI-related magnetic fields and vertigo in MRI workers.

    Science.gov (United States)

    Schaap, Kristel; Portengen, Lützen; Kromhout, Hans

    2016-03-01

    Vertigo has been reported by people working around magnetic resonance imaging (MRI) scanners and was found to increase with increasing strength of scanner magnets. This suggests an association with exposure to static magnetic fields (SMF) and/or motion-induced time-varying magnetic fields (TVMF). This study assessed the association between various metrics of shift-long exposure to SMF and TVMF and self-reported vertigo among MRI workers. We analysed 358 shifts from 234 employees at 14 MRI facilities in the Netherlands. Participants used logbooks to report vertigo experienced during the work day at the MRI facility. In addition, personal exposure to SMF and TVMF was measured during the same shifts, using portable magnetic field dosimeters. Vertigo was reported during 22 shifts by 20 participants and was significantly associated with peak and time-weighted average (TWA) metrics of SMF as well as TVMF exposure. Associations were most evident with full-shift TWA TVMF exposure. The probability of vertigo occurrence during a work shift exceeded 5% at peak exposure levels of 409 mT and 477 mT/s and at full-shift TWA levels of 3 mT and 0.6 mT/s. These results confirm the hypothesis that vertigo is associated with exposure to MRI-related SMF and TVMF. Strong correlations between various metrics of shift-long exposure make it difficult to disentangle the effects of SMF and TVMF exposure, or identify the most relevant exposure metric. On the other hand, this also implies that several metrics of shift-long exposure to SMF and TVMF should perform similarly in epidemiological studies on MRI-related vertigo.

  4. Assessment of Occupational Noise Exposure among Groundskeepers in North Carolina Public Universities

    Directory of Open Access Journals (Sweden)

    Jo Anne G. Balanay

    2016-01-01

    Groundskeepers may have an increased risk of noise-induced hearing loss due to the performance of excessively noisy tasks. This study assessed the exposure of groundskeepers to noise in multiple universities and determined the association between noise exposure and variables (i.e., university, month, tool used). Personal noise exposures were monitored during the work shift using noise dosimetry. A sound level meter was used to measure the maximum sound pressure levels from groundskeeping equipment. The mean Occupational Safety and Health Administration (OSHA) and National Institute for Occupational Safety and Health (NIOSH) time-weighted average (TWA) noise exposures were 83.0 ± 9.6 and 88.0 ± 6.7 dBA, respectively. About 52% of the OSHA TWAs and 77% of the NIOSH TWAs exceeded 85 dBA. Riding mower use was associated with high TWA noise exposures and with having OSHA TWAs exceeding 85 and 90 dBA. The maximum sound pressure levels of equipment and tools measured ranged from 76 to 109 dBA, 82% of which were >85 dBA. These findings indicate that groundskeepers have excessive noise exposures, which may be effectively reduced through careful scheduling of the use of noisy equipment/tools.

  5. Crystallographic extraction and averaging of data from small image areas

    NARCIS (Netherlands)

    Perkins, GA; Downing, KH; Glaeser, RM

    The accuracy of structure factor phases determined from electron microscope images depends mainly on the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that

  6. Assessing the importance of different exposure metrics and time-activity data to predict 24-H personal PM2.5 exposures.

    Science.gov (United States)

    Chang, Li-Te; Koutrakis, Petros; Catalano, Paul J; Suh, Helen H

    Personal PM(2.5) data from two recent exposure studies, the Scripted Activity Study and the Older Adults Study, were used to develop models predicting 24-h personal PM(2.5) exposures. Both studies were conducted concurrently in the summer of 1998 and the winter of 1999 in Baltimore, MD. In the Scripted Activity Study, 1-h personal PM(2.5) exposures were measured. Data were used to identify significant factors affecting personal exposures and to develop 1-h personal exposure models for five different microenvironments. By incorporating the time-activity diary data, these models were then combined to develop a time-weighted microenvironmental personal model (model M1AD) to predict the 24-h PM(2.5) exposures measured for individuals in the Older Adults Study. Twenty-four-hour time-weighted models were also developed using 1-h ambient PM(2.5) levels and time-activity data (model A1AD) or using 24-h ambient PM(2.5) levels and time-activity data (model A24AD). The performance of these three models was compared to that using 24-h ambient concentrations alone (model A24). Results showed that factors affecting 1-h personal PM(2.5) exposures included air conditioning status and the presence of environmental tobacco smoke (ETS) for indoor microenvironments, consistent with previous studies. ETS was identified as a significant contributor to measured 24-h personal PM(2.5) exposures. Staying in an ETS-exposed microenvironment for 1 h elevated 24-h personal PM(2.5) exposures by approximately 4 microg/m3 on average. Cooking and washing activities were identified in the winter as significant contributors to 24-h personal exposures as well, increasing 24-h personal PM(2.5) exposures by about 4 and 5 microg/m3 per hour of activity, respectively. The ability of 3 microenvironmental personal exposure models to estimate 24-h personal PM(2.5) exposures was generally comparable to and consistently greater than that of model A24. Results indicated that using time-activity data with 1
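
    As a rough illustration of the time-weighted microenvironmental approach behind model M1AD, the sketch below combines hypothetical 1-h microenvironment concentrations with diary-based time allocations; the concentrations, microenvironment names and times are illustrative placeholders, not the study's fitted values.

      # Python sketch: 24-h personal exposure as a time-weighted average
      # over microenvironments (all inputs hypothetical).
      conc = {"home_indoor": 18.0, "outdoor": 25.0, "vehicle": 30.0}  # ug/m3
      hours = {"home_indoor": 20.0, "outdoor": 2.0, "vehicle": 2.0}   # h/day

      assert abs(sum(hours.values()) - 24.0) < 1e-9  # diary must cover the day
      twa_24h = sum(conc[m] * hours[m] for m in hours) / 24.0
      print(f"Predicted 24-h personal PM2.5: {twa_24h:.1f} ug/m3")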

  7. Noise exposure levels for musicians during rehearsal and performance times.

    Science.gov (United States)

    McIlvaine, Devon; Stewart, Michael; Anderson, Robert

    2012-03-01

    The purpose of this study was to determine daily noise doses and 8-hour time weighted averages for rock band musicians, crew members, and spectators during a typical rehearsal and performance using both Occupational Safety and Health Administration (OSHA) and National Institute for Occupational Safety and Health (NIOSH) measurement criteria. Personal noise dosimetry was completed on five members of a rock band during one 2-hr rehearsal and one 4-hr performance. Time-weighted averages (TWA) and daily dose values were calculated using both OSHA and NIOSH criteria and compared to industry guidelines for enrollment in hearing conservation programs and the use of hearing protection devices. TWA values ranged from 84.3 to 90.4 dBA (OSHA) and from 90.0 to 96.4 dBA (NIOSH) during the rehearsal. The same values ranged from 91.0 to 99.7 dBA (OSHA) and 94.0 to 102.8 dBA (NIOSH) for the performance. During the rehearsal, daily noise doses ranged from 45.54% to 106.7% (OSHA) and from 317.74% to 1396.07% (NIOSH). During the performance, doses ranged from 114.66% to 382.49% (OSHA) and from 793.31% to 5970.15% (NIOSH). The musicians in this study were exposed to dangerously high levels of noise and should be enrolled in a hearing conservation program. Hearing protection devices should be worn, especially during performances. The OSHA measurement criteria yielded values significantly more conservative than those produced by NIOSH criteria. Audiologists should counsel musician-patients about the hazards of excessive noise (music) exposure and how to protect their hearing.
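
    The two criteria differ in criterion level and exchange rate (OSHA: 90 dBA criterion with a 5-dB exchange rate; NIOSH: 85 dBA with a 3-dB exchange rate), which is why identical measurements yield very different doses and TWAs. A minimal sketch of the standard dose and TWA formulas, using illustrative levels rather than the study's data:

      import math

      def noise_dose(levels_hours, criterion, exchange):
          # Percent dose: 100 * sum(t_i / T_i), where the allowed time at level
          # L_i is T_i = 8 / 2**((L_i - criterion) / exchange) hours.
          return 100.0 * sum(
              t / (8.0 / 2 ** ((L - criterion) / exchange)) for L, t in levels_hours
          )

      exposure = [(95.0, 2.0), (99.0, 4.0)]  # hypothetical (dBA, hours) pairs

      osha = noise_dose(exposure, criterion=90.0, exchange=5.0)
      niosh = noise_dose(exposure, criterion=85.0, exchange=3.0)
      # 8-h TWAs back-calculated from the percent doses
      print(f"OSHA: dose {osha:.0f}%, TWA {16.61 * math.log10(osha / 100) + 90:.1f} dBA")
      print(f"NIOSH: dose {niosh:.0f}%, TWA {10.0 * math.log10(niosh / 100) + 85:.1f} dBA")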

  8. Exposure levels of farmers and veterinarians to particulate matter and gases during operational tasks in pig-fattening houses

    Directory of Open Access Journals (Sweden)

    Nele Van Ransbeeck

    2014-09-01

    The main objective of the study was to assess particulate matter (PM) exposure levels for both the farmer and the veterinarian during different operational tasks in pig-fattening houses, and to estimate their exposure levels on a daily working basis (time-weighted average, TWA). The measured PM fractions were: inhalable and respirable PM, PM10, PM2.5 and PM1. The effects of pig age, pen floor type (conventional or low emission surface) and cleaning of the pens on the personal PM exposure were also investigated. Indoor concentrations of NH3, CH4 and CO2 were additionally measured during some operational tasks. The results showed that personal exposure levels can become extremely high during some operational tasks performed by the farmer or veterinarian. The highest concentration levels were observed during feed shovelling and blood sampling, the lowest during the weighing of the pigs. For the farmer, the estimated TWA exposure levels of inhalable and respirable PM were 6.0 and 0.29 mg/m3, respectively. These exposure levels for the veterinarian were, respectively, 10.6 and 0.74 mg/m3. The PM concentration levels were mainly determined by the performed operational tasks. There was no significant effect of pig age, pen floor type or cleaning of the pens on the personal exposure levels.

  9. Respiratory health effects of occupational exposure to charcoal dust in Namibia

    Science.gov (United States)

    Kgabi, Nnenesi

    2016-01-01

    Background Charcoal processing activities can increase the risk of adverse respiratory outcomes. Objective To determine dose–response relationships between occupational exposure to charcoal dust, respiratory symptoms and lung function among charcoal-processing workers in Namibia. Methods A cross-sectional study was conducted with 307 workers from charcoal factories in Namibia. All respondents completed interviewer-administered questionnaires. Spirometry was performed, and ambient and respirable dust levels were assessed in different work sections. Multiple logistic regression analysis estimated the overall effect of charcoal dust exposure on respiratory outcomes, while linear regression estimated the exposure-related effect on lung function. Workers were stratified according to cumulative dust exposure category. Results Exposure to respirable charcoal dust was above occupational exposure limits in most sectors, with packing and weighing having the highest dust exposure levels (median 27.7 mg/m3, range: 0.2–33.0 for the 8-h time-weighted average). The high cumulative dust exposure category was significantly associated with usual cough (OR: 2.1; 95% CI: 1.1–4.0), usual phlegm (OR: 2.1; 95% CI: 1.1–4.1), episodes of phlegm and cough (OR: 2.8; 95% CI: 1.1–6.1), and shortness of breath. A non-statistically significant lower adjusted mean-predicted % FEV1 was observed (98.1% for male and 95.5% for female workers) among workers with greater exposure. Conclusions Charcoal dust levels exceeded the US OSHA recommended limit of 3.5 mg/m3 for carbon-black-containing material, and study participants presented with exposure-related adverse respiratory outcomes in a dose–response manner. Our findings suggest that the Namibian Ministry of Labour introduce stronger enforcement strategies of existing national health and safety regulations within the industry. PMID:27687528

  10. Hair Manganese as an Exposure Biomarker among Welders.

    Science.gov (United States)

    Reiss, Boris; Simpson, Christopher D; Baker, Marissa G; Stover, Bert; Sheppard, Lianne; Seixas, Noah S

    2016-03-01

    Quantifying exposure and dose to manganese (Mn) containing airborne particles in welding fume presents many challenges. Common biological markers such as Mn in blood or Mn in urine have not proven to be practical biomarkers even in studies where positive associations were observed. However, hair Mn (MnH) as a biomarker has the advantage over blood and urine that it is less influenced by short-term variability of Mn exposure levels because of its slow growth rate. The objective of this study was to determine whether hair can be used as a biomarker for welders exposed to manganese. Hair samples (1 cm) were collected from 47 welding school students and individual air Mn (MnA) exposures were measured for each subject. MnA levels for all days were estimated with a linear mixed model using welding type as a predictor. A 30-day time-weighted average MnA (MnA30d) exposure level was calculated for each hair sample. The association between MnH and MnA30d levels was then assessed. A linear relationship was observed between log-transformed MnA30d and log-transformed MnH. Doubling MnA30d exposure levels yields a 20% (95% confidence interval: 11-29%) increase in MnH. The association was similar for hair washed following two different wash procedures designed to remove external contamination. Hair shows promise as a biomarker for inhaled Mn exposure given the presence of a significant linear association between MnH and MnA30d levels.
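
    The "doubling yields a 20% increase" statement is the standard interpretation of a log-log regression slope. A short sketch of the conversion in both directions (the slope value is implied by the abstract, not reported directly):

      import math

      # Model: log(MnH) = a + b * log(MnA30d), so doubling MnA30d
      # multiplies MnH by 2**b. A 20% increase per doubling implies:
      b = math.log(1.20, 2)                 # 2**b = 1.20  ->  b ~ 0.263
      print(f"implied slope b = {b:.3f}")

      # Conversely, a fitted slope b maps to percent change per doubling:
      pct = (2 ** b - 1) * 100
      print(f"{pct:.0f}% increase in MnH per doubling of MnA30d")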

  11. [Nanosilver--Occupational exposure limits].

    Science.gov (United States)

    Świdwińska-Gajewska, Anna Maria; Czerczak, Sławomir

    2015-01-01

    Historically, nanosilver has been known as colloidal silver composed of particles with a size below 100 nm. Silver nanoparticles are used in many technologies, creating a wide range of products. Due to antibacterial properties nanosilver is used, among others, in medical devices (wound dressings), textiles (sport clothes, socks), plastics and building materials (paints). Colloidal silver is considered by many as an ideal agent in the fight against pathogenic microorganisms, unlike antibiotics, without side effects. However, in light of toxicological research, nanosilver is not inert to the body. The inhalation of silver nanoparticles has adverse effects mainly on the liver and lungs of rats. The oxidative stress caused by reactive oxygen species is responsible for the toxicity of nanoparticles, contributing to cytotoxic and genotoxic effects. The activity of the readily oxidized nanosilver surface underlies the molecular mechanism of toxicity. This leads to the release of silver ions, a known harmful agent. Occupational exposure to silver nanoparticles may occur in the process of its manufacture, formulation and also usage, during spraying in particular. In Poland, as well as in other countries of the world, there are no separate hygiene standards applicable to nanomaterials. The present study attempts to estimate the value of the MAC-TWA (maximum admissible concentration--time-weighted average) for silver (nano-objects fraction), which amounted to 0.01 mg/m3. The authors are of the opinion that the current value of the MAC-TWA for metallic silver--inhalable fraction (0.05 mg/m3) does not provide sufficient protection against the harmful effects of silver in the form of nano-objects.

  12. Nanosilver – Occupational exposure limits

    Directory of Open Access Journals (Sweden)

    Anna Maria Świdwińska-Gajewska

    2015-07-01

    Historically, nanosilver has been known as colloidal silver composed of particles with a size below 100 nm. Silver nanoparticles are used in many technologies, creating a wide range of products. Due to antibacterial properties nanosilver is used, among others, in medical devices (wound dressings), textiles (sport clothes, socks), plastics and building materials (paints). Colloidal silver is considered by many as an ideal agent in the fight against pathogenic microorganisms, unlike antibiotics, without side effects. However, in light of toxicological research, nanosilver is not inert to the body. The inhalation of silver nanoparticles has adverse effects mainly on the liver and lungs of rats. The oxidative stress caused by reactive oxygen species is responsible for the toxicity of nanoparticles, contributing to cytotoxic and genotoxic effects. The activity of the readily oxidized nanosilver surface underlies the molecular mechanism of toxicity. This leads to the release of silver ions, a known harmful agent. Occupational exposure to silver nanoparticles may occur in the process of its manufacture, formulation and also usage, during spraying in particular. In Poland, as well as in other countries of the world, there are no separate hygiene standards applicable to nanomaterials. The present study attempts to estimate the value of the MAC-TWA (maximum admissible concentration – time-weighted average) for silver (nano-objects fraction), which amounted to 0.01 mg/m3. The authors are of the opinion that the current value of the MAC-TWA for metallic silver – inhalable fraction (0.05 mg/m3) – does not provide sufficient protection against the harmful effects of silver in the form of nano-objects. Med Pr 2015;66(3):429–442

  13. Averaging scheme for atomic resolution off-axis electron holograms.

    Science.gov (United States)

    Niermann, T; Lehmann, M

    2014-08-01

    All micrographs are limited by shot-noise, which is intrinsic to the detection process of electrons. For beam-insensitive specimens this limitation can in principle easily be circumvented by prolonged exposure times. However, in the high-resolution regime several instrumental instabilities limit the applicable exposure time. Particularly in the case of off-axis holography the holograms are highly sensitive to the position and voltage of the electron-optical biprism. We present a novel reconstruction algorithm to average series of off-axis holograms while compensating for specimen drift, biprism drift, drift of biprism voltage, and drift of defocus, which all might cause problematic changes from exposure to exposure. We show an application of the algorithm utilizing also the possibilities of double biprism holography, which results in a high quality exit-wave reconstruction with 75 pm resolution at a very high signal-to-noise ratio.

  14. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  15. The Occupational Exposure Limit for Fluid Aerosol Generated in Metalworking Operations: Limitations and Recommendations

    Directory of Open Access Journals (Sweden)

    Donguk Park

    2012-03-01

    The aim of this review was to assess current knowledge related to the occupational exposure limit (OEL) for fluid aerosols, including either mineral or chemical oil, that are generated in metalworking operations, and to discuss whether their OEL can be appropriately used to prevent several health risks that may vary among metalworking fluid (MWF) types. The OEL (time-weighted average: 5 mg/m3; short-term exposure limit: 15 mg/m3) has been applied to MWF aerosols without consideration of different fluid aerosol-size fractions. The OEL is also based on the assumption that there are no significant differences in risk among fluid types, which may be contentious. Particularly, the health risks from exposure to water-soluble fluids may not have been sufficiently considered. Although adoption of the National Institute for Occupational Safety and Health's recommended exposure limit for MWF aerosol (0.5 mg/m3) would be an effective step towards minimizing and evaluating the upper respiratory irritation that may be caused by neat or diluted MWF, this would fail to address the hazards (e.g., asthma and hypersensitivity pneumonitis) caused by microbial contaminants generated only by the use of water-soluble fluids. The absence of an OEL for the water-soluble fluids used in approximately 80-90% of all applications may limit protection from the health risks caused by exposure to those fluids.

  16. The occupational exposure limit for fluid aerosol generated in metalworking operations: limitations and recommendations.

    Science.gov (United States)

    Park, Donguk

    2012-03-01

    The aim of this review was to assess current knowledge related to the occupational exposure limit (OEL) for fluid aerosols, including either mineral or chemical oil, that are generated in metalworking operations, and to discuss whether their OEL can be appropriately used to prevent several health risks that may vary among metalworking fluid (MWF) types. The OEL (time-weighted average: 5 mg/m(3); short-term exposure limit: 15 mg/m(3)) has been applied to MWF aerosols without consideration of different fluid aerosol-size fractions. The OEL is also based on the assumption that there are no significant differences in risk among fluid types, which may be contentious. Particularly, the health risks from exposure to water-soluble fluids may not have been sufficiently considered. Although adoption of the National Institute for Occupational Safety and Health's recommended exposure limit for MWF aerosol (0.5 mg/m(3)) would be an effective step towards minimizing and evaluating the upper respiratory irritation that may be caused by neat or diluted MWF, this would fail to address the hazards (e.g., asthma and hypersensitivity pneumonitis) caused by microbial contaminants generated only by the use of water-soluble fluids. The absence of an OEL for the water-soluble fluids used in approximately 80-90% of all applications may limit protection from the health risks caused by exposure to those fluids.

  17. Current man-made mineral fibers (MMMF) exposures among ontario construction workers.

    Science.gov (United States)

    Verma, Dave K; Sahai, Dru; Kurtz, Lawrence A; Finkelstein, Murray M

    2004-05-01

    Current occupational exposures to man-made mineral fibers (MMMF), including refractory ceramic fibers (RCF), were measured as part of an exposure assessment program for an epidemiological study pertaining to cancer and mortality patterns of Ontario construction workers. The assessments were carried out at commercial and residential sites. A total of 130 MMMF samples (104 personal and 26 area) was collected and included 21 RCF (16 personal and 5 area). The samples were analyzed by the World Health Organization method in which both respirable and nonrespirable airborne fibers are counted. The results show that Ontario construction workers' full-shift exposure to MMMF (excluding RCF) is generally lower than the American Conference of Governmental Industrial Hygienists' (ACGIH) recommended threshold limit value-time-weighted average (TLV-TWA) of 1 fiber/cc and thus should not present any significant hazard. However, approximately 40% of the occupational exposures to RCF are higher than ACGIH's TLV-TWA of 0.2 fibers/cc and present a significant potential hazard. Workers generally wore adequate approved respiratory protection, especially while performing particularly dusty tasks such as blowing, spraying, and cutting, so the actual exposure received by workers was lower than the reported values. Adequate control measures such as ventilation and respiratory protection should always be used when work involves RCF.

  18. Modeled occupational exposures to gas-phase medical laser-generated air contaminants.

    Science.gov (United States)

    Lippert, Julia F; Lacey, Steven E; Jones, Rachael M

    2014-01-01

    Exposure monitoring data indicate the potential for substantive exposure to laser-generated air contaminants (LGAC); however, the diversity of medical lasers and their applications limits generalization from direct workplace monitoring. Emission rates of seven previously reported gas-phase constituents of medical LGAC were determined experimentally and used in a semi-empirical two-zone model to estimate a range of plausible occupational exposures for health care staff. Single-source emission rates were derived from emission-chamber measurements using a one-compartment mass-balance model at steady state. Clinical facility parameters such as room size and ventilation rate were based on standard ventilation and environmental conditions required for a laser surgical facility in compliance with regulatory agencies. All input variables in the model, including point-source emission rates, were varied over an appropriate distribution in a Monte Carlo simulation to generate a range of time-weighted average (TWA) concentrations in the near- and far-field zones of the room, in a conservative approach inclusive of all contributing factors to inform future predictive models. The concentrations were assessed for risk, and the highest values were shown to be at least three orders of magnitude lower than the relevant occupational exposure limits (OELs). Estimated values do not appear to present a significant exposure hazard within the conditions of our emission rate estimates.
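
    The two-zone model splits the room into a near field around the source and a far field; at steady state the far-field concentration is G/Q and the near field adds a local term G/beta. A minimal sketch with Monte Carlo sampling of the inputs (all distributions below are hypothetical placeholders, not the paper's values):

      import math
      import random

      def two_zone(G, Q, beta):
          # G: emission rate (mg/min), Q: room ventilation (m3/min),
          # beta: near-field/far-field air exchange rate (m3/min)
          c_far = G / Q
          c_near = c_far + G / beta
          return c_near, c_far

      random.seed(1)
      near = []
      for _ in range(10_000):
          G = random.lognormvariate(math.log(0.01), 0.5)
          Q = random.uniform(5.0, 15.0)
          beta = random.uniform(1.0, 5.0)
          near.append(two_zone(G, Q, beta)[0])

      near.sort()
      print(f"near-field concentration: median {near[5000]:.4f} mg/m3, "
            f"95th percentile {near[9500]:.4f} mg/m3")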

  19. Asbestos exposures of mechanics performing clutch service on motor vehicles.

    Science.gov (United States)

    Cohen, Howard J; Van Orden, Drew R

    2008-03-01

    and frequency of this task, the incremental contribution of this task to mechanics' 8-hr time-weighted average (TWA) asbestos exposures was 0.0016 f/cc. Using the range of data inputs that were obtained, the authors calculated a range of TWA exposures of 3.75 x 10(-5) f/cc to 0.03 f/cc. The mean value of 0.0016 f/cc is below background levels of asbestos that have been reported in garages during this time and below the current OSHA PEL of 0.1 f/cc.
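
    The incremental TWA contribution of a short task follows from time-weighting the task concentration over the 480-minute shift. A worked example with hypothetical task values chosen only to be consistent with the reported 0.0016 f/cc increment:

      # 8-h TWA increment = task concentration * task duration / 480 min
      c_task = 0.064   # hypothetical concentration during clutch work (f/cc)
      t_task = 12.0    # hypothetical minutes of clutch work per shift
      print(f"TWA increment: {c_task * t_task / 480:.4f} f/cc")  # -> 0.0016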

  20. Children's exposure assessment of radiofrequency fields: Comparison between spot and personal measurements.

    Science.gov (United States)

    Gallastegi, Mara; Huss, Anke; Santa-Marina, Loreto; Aurrekoetxea, Juan J; Guxens, Mònica; Birks, Laura Ellen; Ibarluzea, Jesús; Guerra, David; Röösli, Martin; Jiménez-Zabala, Ana

    2018-05-24

    Radiofrequency (RF) fields are widely used and, while it is still unknown whether children are more vulnerable to this type of exposure, it is essential to explore their level of exposure in order to conduct adequate epidemiological studies. Personal measurements provide individualized information, but they are costly in terms of time and resources, especially in large epidemiological studies. Other approaches, such as estimation of time-weighted averages (TWAs) based on spot measurements, could simplify the work. The aims of this study were to assess RF exposure in the Spanish INMA birth cohort by spot measurements and by personal measurements in the settings where children tend to spend most of their time, i.e., homes, schools and parks; to identify the settings and sources that contribute most to that exposure; and to explore if exposure assessment based on spot measurements is a valid proxy for personal exposure. When children were 8 years old, spot measurements were conducted in the principal settings of 104 participants: homes (104), schools and their playgrounds (26) and parks (79). At the same time, personal measurements were taken for a subsample of 50 children during 3 days. Exposure assessments based on personal and on spot measurements were compared both in terms of mean exposures and in exposure-dependent categories by means of Bland-Altman plots, Cohen's kappa and the McNemar test. Median exposure levels ranged from 29.73 (in children's bedrooms) to 200.10 μW/m2 (in school playgrounds) for spot measurements and were higher outdoors than indoors. Median personal exposure was 52.13 μW/m2 and median levels of assessments based on spot measurements ranged from 25.46 to 123.21 μW/m2. Based on spot measurements, the sources that contributed most to the exposure were FM radio, mobile phone downlink and Digital Video Broadcasting-Terrestrial, while indoor and personal sources contributed very little (altogether spot measurements, with the latter
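
    A small sketch of the agreement analysis named here, comparing spot-based estimates with personal measurements using a Bland-Altman summary and Cohen's kappa on exposure categories (synthetic data; the median split is a hypothetical cutpoint):

      import numpy as np

      rng = np.random.default_rng(0)
      personal = rng.lognormal(np.log(50), 0.6, size=50)        # uW/m2
      spot_est = personal * rng.lognormal(0.0, 0.4, size=50)    # spot-based proxy

      # Bland-Altman on the log scale: bias and limits of agreement
      diff = np.log(spot_est) - np.log(personal)
      bias, sd = diff.mean(), diff.std(ddof=1)
      print(f"bias {bias:.2f}, limits of agreement "
            f"[{bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f}] (log units)")

      # Cohen's kappa for high/low categories split at the personal median
      cut = np.median(personal)
      a, b = personal >= cut, spot_est >= cut
      po = np.mean(a == b)                                        # observed agreement
      pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
      print(f"Cohen's kappa: {(po - pe) / (1 - pe):.2f}")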

  1. Rapporteur Report: Sources and Exposure Metrics for ELF Epidemiology (Part 1) (invited paper)

    International Nuclear Information System (INIS)

    Matthes, R.

    1999-01-01

    High quality epidemiological studies on the possible link between exposure to non-ionizing radiation and human health effects are of great importance for radiation protection in this area. The main sources of ELF fields are domestic appliances, different electrical energy distribution systems and all kinds of electrical machinery and devices at the workplace. In general, ELF fields present in the environment show complex temporal patterns and spatial distributions, depending on the generating source. The complete characterisation of the different field sources often requires highly sophisticated instrumentation, and this is therefore not feasible within the scope of epidemiological studies. On average, individual exposure from ELF fields is low in both the working environment and in residential areas. Only at certain workplaces are people subject to significant ELF exposure with regard to biological effects. Different methods have been developed to determine levels of exposure received by study subjects, with the aim to rank exposed and non-exposed groups in epidemiological studies. These include spot measurements, calculations or modelling. The different methods used to estimate total exposure in epidemiological studies may result, to a differing extent, in misclassification of the study subjects. Equally important for future studies is the selection of the appropriate exposure metric. The most widely used metric so far is the time-weighted average, which thus represents a quasi-standard metric for use in epidemiological studies. Besides, wire codes have been used for a long time in residential studies and job titles are often used in occupational studies. On the basis of the experience gained in previous studies, it would be desirable to develop standardised, state-of-the-art protocols to improve exposure assessment. New surrogates and metrics were proposed as the basis for further studies, but only a few of these have recently undergone preliminary testing.

  2. Activity pattern and personal exposure to nitrogen dioxide in indoor and outdoor microenvironments.

    Science.gov (United States)

    Kornartit, C; Sokhi, R S; Burton, M A; Ravindra, Khaiwal

    2010-01-01

    People are exposed to air pollution from a range of indoor and outdoor sources. Concentrations of nitrogen dioxide (NO(2)), which is hazardous to health, can be significant in both types of environments. This paper reports on the measurement and analysis of indoor and outdoor NO(2) concentrations and their comparison with measured personal exposure in various microenvironments during winter and summer seasons. Furthermore, the relationship between personal NO(2) exposure in various microenvironments and activity patterns was also studied. Personal, indoor-microenvironment and outdoor NO(2) measurements were conducted using Palmes tubes for 60 subjects. The results showed significant differences between indoor and outdoor NO(2) concentrations in winter but not in summer. In winter, indoor NO(2) concentrations were found to be strongly correlated with personal exposure levels. NO(2) concentrations in houses using a gas cooker were higher in all rooms than in those with an electric cooker during the winter campaign, whereas no significant difference was noticed in summer. The average NO(2) levels in kitchens with a gas cooker were twice as high as those with an electric cooker, with no significant difference in the summer period. A time-weighted average personal exposure was calculated and compared with measured personal exposures in various indoor microenvironments (e.g., front doors, bedroom, living room and kitchen), including non-smokers, passive smokers and smokers. The estimated values correlated closely with, but somewhat underestimated, the measured personal exposures to NO(2). Interestingly, for this particular study higher NO(2) personal exposure levels were found during summer (14.0+/-1.5) than winter (9.5+/-2.4).

  3. Reconstruction of a time-averaged midposition CT scan for radiotherapy planning of lung cancer patients using deformable registration.

    Science.gov (United States)

    Wolthaus, J W H; Sonke, J J; van Herk, M; Damen, E M F

    2008-09-01

    A novel method to create a midposition (MidP) CT scan (a time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represent those of the BH CT scan better than the MidV CT scan does and, therefore, the MidP scan was found to be appropriate for treatment planning.

  4. Characteristics of Occupational Exposure to Benzene during Turnaround in the Petrochemical Industries.

    Science.gov (United States)

    Chung, Eun-Kyo; Shin, Jung-Ah; Lee, Byung-Kyu; Kwon, Jiwoon; Lee, Naroo; Chung, Kwang-Jae; Lee, Jong-Han; Lee, In-Seop; Kang, Seong-Kyu; Jang, Jae-Kil

    2010-09-01

    The level of benzene exposure in the petrochemical industry during regular operation has been well established, but not in turnaround (TA), where high exposure may occur. In this study, the characteristics of occupational exposure to benzene during TA in petrochemical companies were investigated in order to determine the best management strategies and improve the working environment. This was accomplished by evaluating the exposure level for workers in environments where benzene was being produced or used as an ingredient during the unit process. From 2003 to 2008, a total of 705 workers in three petrochemical companies in Korea were studied. Long- and short-term (< 1 hr) samples were taken during TAs. TA was classified into three stages: shut-down, maintenance and start-up. All work was classified into 12 occupation categories. The long-term geometric mean (GM) benzene exposure level was 0.025 ppm (GSD 5.82; range 0.005-42.120 ppm) and the short-term GM exposure concentration during TA was 0.020 ppm (GSD 17.42; range 0.005-61.855 ppm). The proportions of TA samples exceeding the time-weighted average occupational exposure limit (TWA-OEL in Korea: 1 ppm) and the short-term exposure limit (STEL-OEL: 5 ppm) were 4.1% (20 samples of 488) and 6.0% (13 samples of 217), respectively. The results for the benzene exposure levels and the rates of exceeding the OEL were both statistically significant (p < 0.05). Among the 12 job categories of petrochemical workers, mechanical engineers, plumbers, welders, fieldmen and scaffolding workers exhibited long-term samples that exceeded the OEL of benzene, and the rate of exceeding the OEL was statistically significant for the first two occupations (p < 0.05). These findings suggest that the work environment must be assessed periodically during non-routine work such as TA.
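
    A small sketch of the summary statistics used here: the geometric mean (GM) and geometric standard deviation (GSD) of log-normally distributed exposures, plus the fraction of samples exceeding the 1 ppm TWA-OEL (synthetic data, with parameters merely echoing the reported values):

      import numpy as np

      rng = np.random.default_rng(42)
      x = rng.lognormal(np.log(0.025), np.log(5.82), size=488)  # benzene, ppm

      gm = np.exp(np.log(x).mean())          # geometric mean
      gsd = np.exp(np.log(x).std(ddof=1))    # geometric standard deviation
      exceed = np.mean(x > 1.0) * 100        # % of samples above the OEL
      print(f"GM {gm:.3f} ppm, GSD {gsd:.2f}, {exceed:.1f}% of samples > 1 ppm")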

  5. Cobalt exposure and lung disease in tungsten carbide production. A cross-sectional study of current workers

    International Nuclear Information System (INIS)

    Sprince, N.L.; Oliver, L.C.; Eisen, E.A.; Greene, R.E.; Chamberlin, R.I.

    1988-01-01

    A cross-sectional study of 1,039 tungsten carbide (TC) production workers was carried out. The purposes were (1) to evaluate the prevalence of interstitial lung disease (ILD) and work-related wheezing, (2) to assess correlations between cobalt exposure and pulmonary disease, (3) to compare lung disease in grinders of hard carbide versus nongrinders, and (4) to evaluate the effects of new and previous threshold limit values for cobalt of 50 and 100 micrograms/m3. We obtained medical and occupational histories, flow-volume loops, single breath carbon monoxide diffusing capacity (DLCO), and chest radiographs. Time-weighted average cobalt levels were determined at every step in the production process. Work-related wheeze occurred in 113 participants (10.9%). Profusion greater than or equal to 1/0 occurred in 26 (2.6%) and interstitial lung disease (defined as profusion greater than or equal to 1M, FVC or DLCO less than or equal to 70%, and FEV1/FVC% greater than or equal to 75) in 7 (0.7%). The relative odds of work-related wheeze was 2.1 times for present cobalt exposures exceeding 50 micrograms/m3 compared with exposures less than or equal to 50 micrograms/m3. The relative odds of profusion greater than or equal to 1/0 was 5.1 times for average lifetime cobalt exposures exceeding 100 micrograms/m3 compared with exposures less than or equal to 100 micrograms/m3 in those with latency exceeding 10 yr. ILD was found in three workers with very low average lifetime exposures (less than 8 micrograms/m3) and shorter latencies. Grinders of hard carbide had lower mean DLCO than nongrinders, even though their cobalt exposures were lower

  6. Exposures to jet fuel and benzene during aircraft fuel tank repair in the U.S. Air Force.

    Science.gov (United States)

    Carlton, G N; Smith, L B

    2000-06-01

    Jet fuel and benzene vapor exposures were measured during aircraft fuel tank entry and repair at twelve U.S. Air Force bases. Breathing zone samples were collected on the fuel workers who performed the repair. In addition, instantaneous samples were taken at various points during the procedures with SUMMA canisters and subsequent analysis by mass spectrometry. The highest eight-hour time-weighted average (TWA) fuel exposure found was 1304 mg/m3; the highest 15-minute short-term exposure was 10,295 mg/m3. The results indicate workers who repair fuel tanks containing explosion suppression foam have a significantly higher exposure to jet fuel as compared to workers who repair tanks without foam (p < 0.05), apparently due to the tendency of fuel, absorbed by the foam, to volatilize during the foam removal process. Fuel tanks that allow flow-through ventilation during repair resulted in lower exposures compared to those tanks that have only one access port and, as a result, cannot be ventilated efficiently. The instantaneous sampling results confirm that benzene exposures occur during fuel tank repair; levels up to 49.1 mg/m3 were found inside the tanks during the repairs. As with jet fuel, these elevated benzene concentrations were more likely to occur in foamed tanks. The high temperatures associated with fuel tank repair, along with the requirement to wear vapor-permeable cotton coveralls for fire reasons, could result in an increase in the benzene body burden of tank entrants.

  7. Exposure to asbestos during brake maintenance of automotive vehicles by different methods.

    Science.gov (United States)

    Kauppinen, T; Korhonen, K

    1987-05-01

    Asbestos concentrations were measured during the different operations of brake maintenance of passenger cars, trucks and buses in 24 Finnish workplaces. The estimated average asbestos exposure during the workday (8-hr time-weighted average) was 0.1-0.2 fibers/cm3 during brake repair of trucks or buses, and under 0.05 f/cm3 during repair of passenger car brakes when the background concentration was not included in the calculations. The background concentration was estimated to be less than 0.1 f/cm3. During brake maintenance of buses and trucks, heavy exposure, 0.3-125 (mean 56) f/cm3, was observed during machine grinding of new brake linings if local exhaust was not in use. Other short-term operations during which the concentration exceeded 1 f/cm3 were the cleaning of brakes with a brush, wet cloth or compressed air jet. During brake servicing of passenger cars, the concentration of asbestos exceeded 1 f/cm3 only during compressed air blowing without local exhaust. The different methods of decreasing the exposure and the risk of asbestos-related diseases among car mechanics are discussed.

  8. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  9. Noise-induced hearing loss in Korean workers: co-exposure to organic solvents and heavy metals in nationwide industries.

    Directory of Open Access Journals (Sweden)

    Yoon-Hyeong Choi

    BACKGROUND: Noise exposure is a well-known contributor to work-related hearing loss. Recent biological evidence suggests that exposure to ototoxic chemicals such as organic solvents and heavy metals may be additional contributors to hearing loss. However, in industrial settings, it is difficult to determine the risks of hearing loss due to these chemicals in workplaces accompanied by excessive noise exposure. A few studies suggest that the effect of noise may be enhanced by ototoxic chemicals. Therefore, this study investigated whether co-exposure to organic solvents and/or heavy metals in the workplace modifies the risk of noise exposure on hearing loss in a background of excessive noise. METHODS: We examined 30,072 workers nationwide in a wide range of industries from the Korea National Occupational Health Surveillance 2009. Data on industry-based exposure (e.g., occupational noise, heavy metals, and organic solvents) and subject-specific health outcomes (e.g., audiometric examination) were collected. Noise was measured as the daily 8-h time-weighted average level. Air conduction hearing thresholds were measured from 0.5 to 6 kHz, and pure-tone averages (PTA) (i.e., means of 2, 3, and 4 kHz) were computed. RESULTS: In the multivariate linear model, PTA increments with occupational noise were 1.64-fold and 2.15-fold higher in individuals exposed to heavy metals and organic solvents, respectively, than in unexposed individuals. CONCLUSION: This study provides nationwide evidence that co-exposure to heavy metals and/or organic solvents may exacerbate the effect of noise exposure on hearing loss in workplaces. These findings suggest that workers in industries dealing with heavy metals or organic solvents are susceptible to such risks.

  10. Noise-induced hearing loss in Korean workers: co-exposure to organic solvents and heavy metals in nationwide industries.

    Science.gov (United States)

    Choi, Yoon-Hyeong; Kim, KyooSang

    2014-01-01

    Noise exposure is a well-known contributor to work-related hearing loss. Recent biological evidence suggests that exposure to ototoxic chemicals such as organic solvents and heavy metals may be additional contributors to hearing loss. However, in industrial settings, it is difficult to determine the risks of hearing loss due to these chemicals in workplaces accompanied by excessive noise exposure. A few studies suggest that the effect of noise may be enhanced by ototoxic chemicals. Therefore, this study investigated whether co-exposure to organic solvents and/or heavy metals in the workplace modifies the risk of noise exposure on hearing loss in a background of excessive noise. We examined 30,072 workers nationwide in a wide range of industries from the Korea National Occupational Health Surveillance 2009. Data on industry-based exposure (e.g., occupational noise, heavy metals, and organic solvents) and subject-specific health outcomes (e.g., audiometric examination) were collected. Noise was measured as the daily 8-h time-weighted average level. Air conduction hearing thresholds were measured from 0.5 to 6 kHz, and pure-tone averages (PTA) (i.e., means of 2, 3, and 4 kHz) were computed. In the multivariate linear model, PTA increments with occupational noise were 1.64-fold and 2.15-fold higher in individuals exposed to heavy metals and organic solvents, respectively, than in unexposed individuals. This study provides nationwide evidence that co-exposure to heavy metals and/or organic solvents may exacerbate the effect of noise exposure on hearing loss in workplaces. These findings suggest that workers in industries dealing with heavy metals or organic solvents are susceptible to such risks.

  11. Noise-Induced Hearing Loss in Korean Workers: Co-Exposure to Organic Solvents and Heavy Metals in Nationwide Industries

    Science.gov (United States)

    Choi, Yoon-Hyeong; Kim, KyooSang

    2014-01-01

    Background Noise exposure is a well-known contributor to work-related hearing loss. Recent biological evidence suggests that exposure to ototoxic chemicals such as organic solvents and heavy metals may be additional contributors to hearing loss. However, in industrial settings, it is difficult to determine the risks of hearing loss due to these chemicals in workplaces accompanied by excessive noise exposure. A few studies suggest that the effect of noise may be enhanced by ototoxic chemicals. Therefore, this study investigated whether co-exposure to organic solvents and/or heavy metals in the workplace modifies the risk of noise exposure on hearing loss in a background of excessive noise. Methods We examined 30,072 workers nationwide in a wide range of industries from the Korea National Occupational Health Surveillance 2009. Data on industry-based exposure (e.g., occupational noise, heavy metals, and organic solvents) and subject-specific health outcomes (e.g., audiometric examination) were collected. Noise was measured as the daily 8-h time-weighted average level. Air conduction hearing thresholds were measured from 0.5 to 6 kHz, and pure-tone averages (PTA) (i.e., means of 2, 3, and 4 kHz) were computed. Results In the multivariate linear model, PTA increments with occupational noise were 1.64-fold and 2.15-fold higher in individuals exposed to heavy metals and organic solvents, respectively, than in unexposed individuals. Conclusion This study provides nationwide evidence that co-exposure to heavy metals and/or organic solvents may exacerbate the effect of noise exposure on hearing loss in workplaces. These findings suggest that workers in industries dealing with heavy metals or organic solvents are susceptible to such risks. PMID:24870407
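
    The fold-higher increment corresponds to an interaction between noise level and co-exposure in the linear model. A compact sketch with synthetic data (all coefficients are hypothetical) showing how the ratio of noise slopes is obtained:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 1000
      noise = rng.uniform(80, 95, n)            # daily 8-h TWA noise (dBA)
      solvent = rng.integers(0, 2, n)           # organic-solvent co-exposure flag
      # Synthetic truth: hearing threshold rises faster with noise when co-exposed
      pta = (5 + 0.8 * noise + 0.5 * solvent + 0.9 * noise * solvent
             + rng.normal(0, 5, n))

      X = np.column_stack([np.ones(n), noise, solvent, noise * solvent])
      beta, *_ = np.linalg.lstsq(X, pta, rcond=None)
      ratio = (beta[1] + beta[3]) / beta[1]     # co-exposed slope / unexposed slope
      print(f"noise slope ratio, co-exposed vs unexposed: {ratio:.2f}-fold")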

  12. Exposures to asbestos arising from bandsawing gasket material.

    Science.gov (United States)

    Fowler, D P

    2000-05-01

    A simulation of bandsawing sheet asbestos gasket material was performed as part of a retrospective exposure evaluation undertaken to assist in determining causation of a case of mesothelioma. The work was performed by bandsawing a chrysotile asbestos (80%)/neoprene gasket sheet with a conventional 16-inch woodworking bandsaw inside a chamber. Measurements of airborne asbestos were made using conventional area and personal sampling methods, with analysis of collected samples by transmission electron microscopy (TEM) and phase contrast microscopy (PCM). These were supplemented by qualitative scanning electron microscopy (SEM) examinations of some of the airborne particles collected on the filters. In contrast with findings from studies examining manual handling (installation and removal) of gaskets, airborne asbestos concentrations from this operation were found to be well above current Occupational Safety and Health Administration (OSHA) permissible exposure limit (PEL) (eight-hour time-weighted average [TWA]) and excursion limit (30-minute) standards. Although some "encapsulation" effect of the neoprene matrix was seen on the particles in the airborne dust, unencapsulated individual fiber bundles were also seen. Suggestions for the implications of the work are given. In summary, the airborne asbestos concentrations arising from this work were quite high, and point to the need for careful observation of common-sense precautions when manipulation of asbestos-containing materials (even those believed to have limited emissions potential) may involve machining operations.

  13. Associations between Bisphenol A Exposure and Reproductive Hormones among Female Workers

    Directory of Open Access Journals (Sweden)

    Maohua Miao

    2015-10-01

    The associations between Bisphenol-A (BPA) exposure and reproductive hormone levels among women are unclear. A cross-sectional study was conducted among female workers from BPA-exposed and unexposed factories in China. Women's blood samples were collected for assay of follicle-stimulating hormone (FSH), luteinizing hormone (LH), 17β-Estradiol (E2), prolactin (PRL), and progesterone (PROG). Their urine samples were collected for BPA measurement. In the exposed group, time weighted average exposure to BPA for an 8-h shift (TWA8), a measure incorporating historic exposure level, was generated based on personal air sampling. Multiple linear regression analyses were used to examine linear associations between urine BPA concentration and reproductive hormones after controlling for potential confounders. A total of 106 exposed and 250 unexposed female workers were included in this study. Significant positive associations between increased urine BPA concentration and higher PRL and PROG levels were observed. Similar associations were observed after the analysis was carried out separately among the exposed and unexposed workers. In addition, a positive association between urine BPA and E2 was observed among exposed workers with borderline significance, while a statistically significant inverse association between urine BPA and FSH was observed among the unexposed group. The results suggest that BPA exposure may lead to alterations in female reproductive hormone levels.

  14. Associations between Bisphenol A Exposure and Reproductive Hormones among Female Workers.

    Science.gov (United States)

    Miao, Maohua; Yuan, Wei; Yang, Fen; Liang, Hong; Zhou, Zhijun; Li, Runsheng; Gao, Ersheng; Li, De-Kun

    2015-10-22

    The associations between Bisphenol-A (BPA) exposure and reproductive hormone levels among women are unclear. A cross-sectional study was conducted among female workers from BPA-exposed and unexposed factories in China. Women's blood samples were collected for assay of follicle-stimulating hormone (FSH), luteinizing hormone (LH), 17β-Estradiol (E2), prolactin (PRL), and progesterone (PROG). Their urine samples were collected for BPA measurement. In the exposed group, time weighted average exposure to BPA for an 8-h shift (TWA8), a measure incorporating historic exposure level, was generated based on personal air sampling. Multiple linear regression analyses were used to examine linear associations between urine BPA concentration and reproductive hormones after controlling for potential confounders. A total of 106 exposed and 250 unexposed female workers were included in this study. Significant positive associations between increased urine BPA concentration and higher PRL and PROG levels were observed. Similar associations were observed after the analysis was carried out separately among the exposed and unexposed workers. In addition, a positive association between urine BPA and E2 was observed among exposed workers with borderline significance, while a statistically significant inverse association between urine BPA and FSH was observed among the unexposed group. The results suggest that BPA exposure may lead to alterations in female reproductive hormone levels.

  15. Characterization of airborne BTEX exposures during use of lawnmowers and trimmers.

    Science.gov (United States)

    Avens, Heather J; Maskrey, Joshua R; Insley, Allison L; Unice, Kenneth M; Reid, Rachel C D; Sahmel, Jennifer

    2018-02-08

    Few studies have evaluated airborne exposures to benzene, toluene, ethylbenzene, and xylenes (BTEX) during operation of two-stroke and four-stroke small engines, such as those in lawn maintenance equipment. Full-shift, 8-hour personal samples were collected during a simulation study to characterize yard maintenance activities including mowing, trimming, and fueling. Short-term, 15-minute personal samples were collected to separately evaluate mowing and trimming exposures. Mean 8-hour time weighted average (TWA) BTEX concentrations were 2.3, 5.8, 0.91, and 4.6 ppb, respectively (n = 2). Mean 15-minute TWA BTEX concentrations were 1.6, 1.8, 0.22, and 1.3 ppb, respectively, during mowing and 1.2, 3.6, 0.68, and 3.3 ppb, respectively, during trimming (n = 3 per task). Measured BTEX concentrations during fueling were 20-110, 61-310, 8-41, and 40-203 ppb, respectively (n = 2, duration 2-3 minutes). These exposure concentrations were well below applicable US occupational exposure limits.

  16. An evaluation of retrofit engineering control interventions to reduce perchloroethylene exposures in commercial dry-cleaning shops.

    Science.gov (United States)

    Earnest, G Scott; Ewers, Lynda M; Ruder, Avima M; Petersen, Martin R; Kovein, Ronald J

    2002-02-01

    Real-time monitoring was used to evaluate the ability of engineering control devices retrofitted on two existing dry-cleaning machines to reduce worker exposures to perchloroethylene. In one dry-cleaning shop, a refrigerated condenser was installed on a machine that had a water-cooled condenser to reduce the air temperature, improve vapor recovery, and lower exposures. In a second shop, a carbon adsorber was retrofitted on a machine to adsorb residual perchloroethylene not collected by the existing refrigerated condenser to improve vapor recovery and reduce exposures. Both controls were successful at reducing the perchloroethylene exposures of the dry-cleaning machine operator. Real-time monitoring was performed to evaluate how the engineering controls affected exposures during loading and unloading of the dry-cleaning machine, the task generally considered to account for the highest exposures, and showed that the controls markedly reduced these exposures. Peak operator exposures during loading and unloading were reduced by 60 percent in the shop that had a refrigerated condenser installed on the dry-cleaning machine and 92 percent in the shop that had a carbon adsorber installed. Although loading and unloading exposures were dramatically reduced, drops in full-shift time-weighted average (TWA) exposures were less dramatic. TWA exposures to perchloroethylene, as measured by conventional air sampling, showed smaller reductions in operator exposures of 28 percent or less. Differences between exposure results from real-time and conventional air sampling very likely resulted from other uncontrolled sources of exposure, differences in shop general ventilation before and after the control was installed, relatively small sample sizes, and experimental variability inherent in field research. Although there were some difficulties and complications with installation and

  17. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  18. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  19. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  20. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  1. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  2. Occupational exposure to crystalline silica (quartz) and prevalence of lung diseases in Dhand Killi, Mohmand Agency, northern Pakistan

    International Nuclear Information System (INIS)

    Jehan, N.

    2005-01-01

    Occupational exposure to respirable crystalline silica (quartz) has long been known to produce fatal lung diseases, specifically silicosis and pulmonary tuberculosis. In this study, a cohort analysis of occupational exposure to crystalline silica (quartz) and of the mortality and morbidity rates of various lung diseases was carried out among silica miners and millers in Dhand Killi, Mohmand Agency, northern Pakistan. The exposure level of respirable silica (quartz) in the indoor environment ranged from 1-14 mg/m3 per 1 hour, roughly a thousand-fold higher than the internationally recommended exposure limit (0.05 mg/m3) as an 8-hour time-weighted average. The mortality and morbidity rates of silica-related lung diseases were found to be remarkably high among the silica (quartz) miners and millers during the follow-up period (1996 to 2004) in the target area. Overall, the analytical data illustrate that occupational exposure to respirable silica (quartz) and the occurrence of silica-related fatal diseases are remarkably high. (author)

  3. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
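    The refinement idea (drive a starting structure toward the averaged coordinates under a harmonic pseudo-energy) can be pictured with a toy Metropolis loop; a real implementation would also penalize clashes and bond geometry. All parameters below (spring constant, step size, temperature) are our own illustrative choices, not the paper's:

        import numpy as np

        def refine_toward_average(x_start, x_avg, k=1.0, steps=10000, step=0.05, temp=1.0):
            """Toy Metropolis Monte Carlo pulling coordinates x_start toward x_avg
            under the harmonic pseudo-energy E = k * ||x - x_avg||^2."""
            rng = np.random.default_rng(0)
            x = x_start.copy()
            energy = k * np.sum((x - x_avg) ** 2)
            for _ in range(steps):
                trial = x + rng.normal(scale=step, size=x.shape)
                e_trial = k * np.sum((trial - x_avg) ** 2)
                if e_trial < energy or rng.random() < np.exp((energy - e_trial) / temp):
                    x, energy = trial, e_trial
            return x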

  4. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  5. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  6. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
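    The conversion step can be pictured as applying a phantom-derived calibration curve to every pixel and averaging the result over the breast region. The curve values below are hypothetical placeholders standing in for the neural-network calibration described in the paper:

        import numpy as np

        # Hypothetical phantom calibration: pixel value -> glandular rate (%).
        pixel_ref = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0, 2500.0])
        rate_ref = np.array([0.0, 15.0, 35.0, 60.0, 85.0, 100.0])

        def mean_glandular_rate(image):
            """Map each pixel to a glandular rate and average over the image."""
            rates = np.interp(image.ravel(), pixel_ref, rate_ref)
            return rates.mean()

    The individual average glandular dose then follows by feeding this per-patient rate into the usual quality-control dosimetry formula.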

  7. Sensor-triggered sampling to determine instantaneous airborne vapor exposure concentrations.

    Science.gov (United States)

    Smith, Philip A; Simmons, Michael K; Toone, Phillip

    2018-06-01

    It is difficult to measure transient airborne exposure peaks by means of integrated sampling for organic chemical vapors, even with very short-duration sampling. Selection of an appropriate time to measure an exposure peak through integrated sampling is problematic, and short-duration time-weighted average (TWA) values obtained with integrated sampling are not likely to accurately determine actual peak concentrations attained when concentrations fluctuate rapidly. Laboratory analysis for integrated exposure samples is preferred from a certainty standpoint over results derived in the field from a sensor, as a sensor user typically must overcome specificity issues and a number of potential interfering factors to obtain similarly reliable data. However, sensors are currently needed to measure intra-exposure period concentration variations (i.e., exposure peaks). In this article, the digitized signal from a photoionization detector (PID) sensor triggered collection of whole-air samples when toluene or trichloroethylene vapors attained pre-determined levels in a laboratory atmosphere generation system. Analysis by gas chromatography-mass spectrometry of whole-air samples (with both 37 and 80% relative humidity) collected using the triggering mechanism with rapidly increasing vapor concentrations showed good agreement with the triggering set point values. Whole-air samples (80% relative humidity) in canisters demonstrated acceptable 17-day storage recoveries, and acceptable precision and bias were obtained. The ability to determine exceedance of a ceiling or peak exposure standard by laboratory analysis of an instantaneously collected sample, and to simultaneously provide a calibration point to verify the correct operation of a sensor was demonstrated. This latter detail may increase the confidence in reliability of sensor data obtained across an entire exposure period.
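    The trigger itself reduces to a polling loop: read the digitized PID signal and open the canister valve the instant the reading crosses the set point. A schematic sketch, where read_pid_ppm and open_valve are hypothetical hooks into the sensor and sampler hardware:

        import time

        TRIGGER_PPM = 50.0  # hypothetical set point near a ceiling limit

        def watch_and_sample(read_pid_ppm, open_valve, poll_s=0.05):
            """Poll a PID sensor; grab a whole-air sample on first exceedance."""
            while True:
                reading = read_pid_ppm()
                if reading >= TRIGGER_PPM:
                    open_valve()    # instantaneous whole-air canister grab
                    return reading  # concentration at trigger time, for the record
                time.sleep(poll_s)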

  8. The dose-response relationship between in-ear occupational noise exposure and hearing loss.

    Science.gov (United States)

    Rabinowitz, Peter M; Galusha, Deron; Dixon-Ernst, Christine; Clougherty, Jane E; Neitzel, Richard L

    2013-10-01

    Current understanding of the dose-response relationship between occupational noise and hearing loss is based on cross-sectional studies prior to the widespread use of hearing protection, and with limited data regarding noise exposures below 85 dBA. We report on the hearing loss experience of a unique cohort of industrial workers, with daily monitoring of noise inside of hearing protection devices. At an industrial facility, workers exhibiting accelerated hearing loss were enrolled in a mandatory programme to monitor daily noise exposures inside of hearing protection. We compared these noise measurements (as time-weighted LAVG) to interval rates of high-frequency hearing loss over a 6-year period using a mixed-effects model, adjusting for potential confounders. Workers' high-frequency hearing levels at study inception averaged more than 40 dB hearing threshold level (HTL). Most noise exposures were less than 85 dBA (mean LAVG 76 dBA, IQR 74-80 dBA). We found no statistical relationship between LAVG and high-frequency hearing loss (p=0.53). Using a metric for monthly maximum noise exposure did not improve model fit. At-ear noise exposures below 85 dBA did not show an association with risk of high-frequency hearing loss among workers with substantial past noise exposure and hearing loss at baseline. Therefore, effective noise control to below 85 dBA may lead to significant reduction in occupational hearing loss risk in such individuals. Further research is needed on the dose-response relationship of noise and hearing loss in individuals with normal hearing and little prior noise exposure.
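    The time-weighted LAVG used here is computed from sound-level samples with the exchange rate of the criterion in use (5 dB for OSHA, 3 dB for NIOSH, where it closely approximates the energy-equivalent Leq). A sketch for equal-duration samples, with invented readings:

        import math

        def lavg(levels_dba, q=3.0):
            """Time-weighted average level for equal-duration samples.
            q: exchange rate in dB (3 ~ NIOSH Leq, 5 = OSHA LAVG)."""
            qf = q / math.log10(2.0)  # ~9.97 for q=3, ~16.61 for q=5
            mean_dose = sum(10 ** (l / qf) for l in levels_dba) / len(levels_dba)
            return qf * math.log10(mean_dose)

        print(round(lavg([74.0, 76.0, 80.0, 76.0], q=3.0), 1))  # hypothetical shift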

  9. The Dose Response Relationship between In Ear Occupational Noise Exposure and Hearing Loss

    Science.gov (United States)

    Rabinowitz, Peter M.; Galusha, Deron; Dixon-Ernst, Christine; Clougherty, Jane E.; Neitzel, Richard L.

    2014-01-01

    Objectives Current understanding of the dose-response relationship between occupational noise and hearing loss is based on cross-sectional studies prior to the widespread use of hearing protection and with limited data regarding noise exposures below 85dBA. We report on the hearing loss experience of a unique cohort of industrial workers with daily monitoring of noise inside of hearing protection devices. Methods At an industrial facility, workers exhibiting accelerated hearing loss were enrolled in a mandatory program to monitor daily noise exposures inside of hearing protection. We compared these noise measurements (as time-weighted LAVG) to interval rates of high frequency hearing loss over a six-year period using a mixed-effects model, adjusting for potential confounders. Results Workers' high frequency hearing levels at study inception averaged more than 40 dB hearing threshold level (HTL). Most noise exposures were less than 85dBA (mean LAVG 76 dBA, interquartile range 74 to 80 dBA). We found no statistical relationship between LAVG and high frequency hearing loss (p = 0.53). Using a metric for monthly maximum noise exposure did not improve model fit. Conclusion At-ear noise exposures below 85dBA did not show an association with risk of high frequency hearing loss among workers with substantial past noise exposure and hearing loss at baseline. Therefore, effective noise control to below 85dBA may lead to significant reduction in occupational hearing loss risk in such individuals. Further research is needed on the dose response relationship of noise and hearing loss in individuals with normal hearing and little prior noise exposure. PMID:23825197

  10. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  11. Occupational exposure to ionizing radiation

    International Nuclear Information System (INIS)

    Snihs, J.O.

    1985-01-01

    An overview of occupational exposure is presented. Concepts and quantities used for radiation protection are explained, as well as the ICRP system of dose limitation. The risks correlated to the limits are discussed. However, the actual exposures are often much lower than the limits, and the average risk in radiation work is comparable with the average risk in other safe occupations. Actual exposures in various occupations are presented and discussed. (author)

  12. Identification and estimation of survivor average causal effects.

    Science.gov (United States)

    Tchetgen Tchetgen, Eric J

    2014-09-20

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  13. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  14. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  15. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  16. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    No. 304 (2006), pp. 1-65. ISSN 1211-3298. Institutional research plan: CEZ:MSM0021620846. Keywords: tax * labor supply * average tax. Subject RIV: AH - Economics. http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  17. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  18. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  19. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  20. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We verify the model outcome with examples and simulation results obtained using the NS2 simulator.
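    As a rough intuition for such a calculation, a WFQ scheduler is often approximated by a weighted max-min allocation: flows demanding less than their weighted share release the surplus, which is re-split among the rest. The sketch below is that generic approximation, not the authors' iterative model:

        def wfq_share(link_bps, weights, demands_bps):
            """Weighted max-min fair allocation (a common WFQ approximation)."""
            alloc = [0.0] * len(weights)
            active = set(range(len(weights)))
            capacity = float(link_bps)
            while active:
                wsum = sum(weights[i] for i in active)
                share = {i: capacity * weights[i] / wsum for i in active}
                satisfied = {i for i in active if demands_bps[i] <= share[i]}
                if not satisfied:
                    for i in active:
                        alloc[i] = share[i]  # remaining flows split leftover capacity
                    break
                for i in satisfied:
                    alloc[i] = demands_bps[i]
                    capacity -= demands_bps[i]
                active -= satisfied
            return alloc

        print(wfq_share(10e6, [1, 2, 1], [1e6, 9e6, 5e6]))  # [1.0e6, 6.0e6, 3.0e6]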

  1. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  2. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  3. Concentration fluctuations and averaging time in vapor clouds

    CERN Document Server

    Wilson, David J

    2010-01-01

    This book contributes to more reliable and realistic predictions by focusing on sampling times from a few seconds to a few hours. Its objectives include developing clear definitions of statistical terms, such as plume sampling time, concentration averaging time, receptor exposure time, and other terms often confused with each other or incorrectly specified in hazard assessments; identifying and quantifying situations for which there is no adequate knowledge to predict concentration fluctuations in the near-field, close to sources, and far downwind where dispersion is dominated by atmospheric t

  4. The incidence of post-transplant cancer among kidney transplant recipients is associated with the level of tacrolimus exposure during the first year after transplantation.

    Science.gov (United States)

    Lichtenberg, Shelly; Rahamimov, Ruth; Green, Hefziba; Fox, Benjamin D; Mor, Eytan; Gafter, Uzi; Chagnac, Avry; Rozen-Zvi, Benaya

    2017-07-01

    Immunosuppressive therapy plays a major role in the development of post-transplant cancer. In this nested case-control study of kidney transplant recipients (KTRs), we investigated whether the incidence of post-transplant cancer is associated with the level of tacrolimus exposure over time. We screened the Rabin Medical Center database for adults who received kidney transplants between 2001 and 2014 and developed post-transplant cancer (excluding basal and squamous cell skin cancers). They were matched against KTRs without cancer. All patients received a maintenance immunosuppressive treatment with tacrolimus, mycophenolate mofetil and corticosteroids. The degree of exposure to tacrolimus was estimated as the time-weighted average (tTWA) value of tacrolimus blood levels. The tTWA was calculated as the area under the curve divided by time at 1, 6, and 12 months after transplantation and at time of cancer diagnosis. Thirty-two cases were matched against 64 controls. tTWA values above 11 ng/mL at 6 and 12 months after transplantation were associated with odds ratios (ORs) of 3.1 (95% CI 1.1-9) and 11.7 (95% CI 1.3-106), respectively, for post-transplant cancer; and with ORs of 5.2 (95% CI 1.3-20.5) and 14.1 (95% CI 1.5-134.3), respectively, for cancer diagnosed more than 3 years after transplantation. Exposure to a tacrolimus time-weighted average level above 11 ng/mL at 6 or 12 months after kidney transplantation is associated with an increased risk of developing cancer.
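    The tTWA metric defined in this abstract (area under the concentration-time curve divided by elapsed time) is easy to reproduce with the trapezoid rule; the trough levels below are invented for illustration:

        def ttwa(days, levels_ng_ml):
            """Time-weighted average drug level: trapezoidal AUC / elapsed time."""
            auc = 0.0
            for i in range(1, len(days)):
                auc += 0.5 * (levels_ng_ml[i] + levels_ng_ml[i - 1]) * (days[i] - days[i - 1])
            return auc / (days[-1] - days[0])

        # Hypothetical tacrolimus troughs over the first month post-transplant.
        print(round(ttwa([0, 7, 14, 30], [12.0, 10.5, 9.0, 8.0]), 2))  # 9.43 ng/mL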

  5. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
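    Pixel-wise, the strategy can be approximated as: reject any map whose defective area exceeds a threshold, mask unreliable pixels as NaN, then average and estimate variability from what survives. A simplified sketch with our own rejection threshold (the published algorithm also prunes unwrapping artifacts and removes alignment drift):

        import numpy as np

        def robust_phase_average(maps, max_void_frac=0.2):
            """maps: list of 2-D phase arrays, NaN marking voids/defects.
            Drops defect-dominated maps, then averages per pixel."""
            kept = [m for m in maps if np.isnan(m).mean() <= max_void_frac]
            stack = np.array(kept)
            return np.nanmean(stack, axis=0), np.nanstd(stack, axis=0)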

  6. [Study of the effect of occupational exposure to glyphosate on hepatorenal function].

    Science.gov (United States)

    Zhang, F; Pan, L P; Ding, E M; Ge, Q J; Zhang, Z H; Xu, J N; Zhang, L; Zhu, B L

    2017-07-06

    Objective: To explore the effect of occupational exposure to glyphosate on hepatorenal function. Methods: 526 workers occupationally exposed to glyphosate at 5 glyphosate-producing factories were selected as cases, and another 442 administrative staff not exposed to glyphosate were selected as controls, from April to November, 2014. All subjects underwent an occupational health examination. The concentration of glyphosate in the workshop air was measured and the time-weighted average concentration (TWA) was calculated, and the differences in hepatorenal function between the case and control groups were analyzed. Result: The ages of the subjects in the case and control groups were (35.6±10.3) and (34.3±9.7) years old, respectively, with lengths of service of (6.5±5.7) and (7.7±6.8) years. The TWA of glyphosate in the case group was between … Conclusion: Glyphosate can affect hepatic and renal function in an occupationally exposed population, and the effect was associated with the exposure dose.

  7. Distribution of exposure concentrations and doses for constituents of environmental tobacco smoke

    Energy Technology Data Exchange (ETDEWEB)

    LaKind, J.S. [LaKind Associates (United States); Ginevan, M.E. [M.E. Ginevan and Associates (United States); Naiman, D.Q. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Mathematical Sciences; James, A.C. [A.C. James and Associates (United States); Jenkins, R.A. [Oak Ridge National Lab., TN (United States); Dourson, M.L.; Felter, S.P. [TERA (United States); Graves, C.G.; Tardiff, R.G. [Sapphire Group, Inc., Bethesda, MD (United States)

    1999-06-01

    The ultimate goal of the research reported in this series of three articles is to derive distributions of doses of selected environmental tobacco smoke (ETS)-related chemicals for nonsmoking workers. This analysis uses data from the 16-City Study collected with personal monitors over the course of one workday in workplaces where smoking occurred. In this article, the authors describe distributions of ETS chemical concentrations and the characteristics of those distributions for the workplace exposure. Next, they present population parameters relevant for estimating dose distributions and the methods used for estimating those dose distributions. Finally, they derive distributions of doses of selected ETS-related constituents obtained in the workplace for people in smoking work environments. Estimating dose distributions provided information beyond the usual point estimate of dose and showed that the preponderance of individuals exposed to ETS in the workplace were exposed at the low end of the dose distribution curve. The results of this analysis include estimations of hourly maxima and time-weighted average (TWA) doses of nicotine from workplace exposures to ETS and doses derived from modeled lung burdens of ultraviolet-absorbing particulate matter (UVPM) and solanesol resulting from workplace exposures to ETS (extrapolated from 1 day to 1 year).

  8. Occupational Noise Exposure of Employees at Locally-Owned Restaurants in a College Town.

    Science.gov (United States)

    Green, Deirdre R; Anthony, T Renée

    2015-01-01

    While many restaurant employees work in loud environments, in both dining and food preparation areas, little is known about worker exposures to noise. The risk of hearing loss to millions of food service workers around the country is unknown. This study evaluated full-shift noise exposure of workers at six locally-owned restaurants to examine risk factors associated with noise exposures during the day shift. Participants included cooks, counter attendants, bartenders, and waiters at full-service restaurants with bar service and at limited-service restaurants that provided counter service only. Assessments were made on weekdays and weekends, both during the summer and the fall (with a local university in session), to examine whether the time of week or year affects noise exposures to this population in a college town. In addition, the relationships between noise exposures and the type of restaurant and job classification were assessed. One hundred eighty full-shift time-weighted average (TWA) exposures were assessed, using both Occupational Safety and Health Administration (OSHA) and National Institute for Occupational Safety and Health (NIOSH) criteria. No TWA measurements exceeded the 90 dBA OSHA 8-hr permissible exposure limit, although six projected TWAs exceeded the 85 dBA OSHA hearing conservation action limit. Using NIOSH criteria, TWAs ranged from 69-90 dBA with a mean of 80 dBA (SD = 4 dBA). Nearly 8% (14) of the exposures exceeded the NIOSH 8-hr 85 dBA limit. Full-shift exposures were larger for all workers in full-service restaurants (p < …) … restaurant type. The fall semester (p = 0.003) and weekend (p = 0.048) exposures were louder than those in summer and on weekdays. Multiple linear regression analysis suggested that the combination of restaurant type, job classification, and season had a significant effect on restaurant worker noise exposures (p < …). Restaurant type, job classification, time of week, and season significantly affected the noise exposures for day …

  9. Biological exposure assessment to tetrachloroethylene for workers in the dry cleaning industry

    Directory of Open Access Journals (Sweden)

    Ashley David L

    2008-04-01

    Full Text Available Abstract Background The purpose of this study was to assess the feasibility of conducting biological tetrachloroethylene (perchloroethylene, PCE) exposure assessments of dry cleaning employees in conjunction with evaluation of possible PCE health effects. Methods Eighteen women from four dry cleaning facilities in southwestern Ohio were monitored in a pilot study of workers with PCE exposure. Personal breathing zone samples were collected from each employee on two consecutive work days. Biological monitoring included a single measurement of PCE in blood and multiple measurements of pre- and post-shift PCE in exhaled breath and trichloroacetic acid (TCA) in urine. Results Post-shift PCE in exhaled breath gradually increased throughout the work week. Statistically significant correlations were observed among the exposure indices. Decreases in PCE in exhaled breath and TCA in urine were observed after two days without exposure to PCE. A mixed-effects model identified statistically significant associations between PCE in exhaled breath and airborne PCE time-weighted average (TWA) after adjusting for a random participant effect and fixed effects of time and body mass index. Conclusion Although comprehensive, our sampling strategy was challenging to implement due to fluctuating work schedules and the number (pre- and post-shift on three consecutive days) and multiplicity (air, blood, exhaled breath, and urine) of samples collected. PCE in blood is the preferred biological index to monitor exposures, but may make recruitment difficult. PCE TWA sampling is an appropriate surrogate, although more field intensive. Repeated measures of exposure and mixed-effects modeling may be required for future studies due to high within-subject variability. Workers should be monitored over a long enough period of time to allow the use of a lag term.

  10. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
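    The classical gossip scheme referenced here is easy to simulate: at each step two randomly chosen nodes replace their values with the pair's mean, which preserves the sum and converges to the global average. This toy loop does not capture the asynchronous, stochastic-approximation setting whose convergence the authors question:

        import random

        def gossip_average(values, iters=10000, seed=0):
            """Pairwise gossip: every entry converges to the mean of `values`."""
            rng = random.Random(seed)
            x = list(values)
            for _ in range(iters):
                i, j = rng.sample(range(len(x)), 2)
                x[i] = x[j] = 0.5 * (x[i] + x[j])
            return x

        print(gossip_average([1.0, 5.0, 9.0, 13.0]))  # all entries ~= 7.0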

  11. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of details/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  12. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  13. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
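    The smoothing described is a trailing moving average over the previous decade; with the 11-year window at which the authors report the best fit, the transformation looks like this (series values invented):

        def trailing_mean(series, window=11):
            """Trailing moving average: mean of the current and previous 10 values."""
            return [sum(series[i - window + 1 : i + 1]) / window
                    for i in range(window - 1, len(series))]

        misery = [5, 7, 9, 12, 10, 8, 6, 7, 9, 11, 13, 12, 10]  # toy annual index
        print(trailing_mean(misery))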

  14. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  15. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  16. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  17. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.

  18. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that on the smoothed spatial domain $\bar B$ the evaluated cosmological parameters obey $\bar\Omega^{\bar B}_m + \bar\Omega^{\bar B}_R + \bar\Omega^{\bar B}_\Lambda + \bar\Omega^{\bar B}_Q = 1$, where $\bar\Omega^{\bar B}_m$, $\bar\Omega^{\bar B}_R$ and $\bar\Omega^{\bar B}_\Lambda$ correspond to the standard Friedmannian parameters, while $\bar\Omega^{\bar B}_Q$ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  19. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  20. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  1. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can … to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements …
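    The element-wise trimmed average underlying TGA can be illustrated in a few lines: per pixel, sort the values across the image stack, discard a fraction at each extreme, and average the remainder. The trimming fraction below is our own choice, and this sketch omits the Grassmannian subspace machinery of the paper:

        import numpy as np

        def trimmed_average(stack, trim_frac=0.1):
            """Per-pixel trimmed mean over an image stack (robust to outliers)."""
            sorted_stack = np.sort(stack, axis=0)
            k = int(trim_frac * stack.shape[0])
            return sorted_stack[k : stack.shape[0] - k].mean(axis=0)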

  2. Benzene exposure in the shoemaking industry in China, a literature survey, 1978-2004.

    Science.gov (United States)

    Wang, Laiming; Zhou, Yimei; Liang, Youxin; Wong, Otto; Armstrong, Thomas; Schnatter, A Robert; Wu, Qiangen; Fang, Jinbin; Ye, Xibiao; Fu, Hua; Irons, Richard D

    2006-11-01

    This article presents a summary of benzene exposure levels in the shoemaking industry in China reported in the Chinese medical literature between 1978 and 2004. A comprehensive search identified 182 papers reporting such exposure data. These papers could be classified into two categories: benzene poisoning case reports and industrial hygiene surveys. From each paper, the following information was abstracted whenever available: location and year of occurrence, occupation and/or task involved, benzene content in adhesives/solvents, work environment, working conditions, working hours, diagnosis, and air monitoring data of benzene. A total of 333 benzene measurements (88 averages, 116 minimums, 129 maximums) in the shoemaking industry were reported in the 182 papers identified. The data were analyzed in terms of geographical location, time period, type of ownership (state, township, or foreign), type of report (benzene poisoning reports vs. industrial hygiene surveys), and job title (work activity) or process. The reported data covered a wide range; some measurements were in excess of 4500 mg/m³. Thirty-five percent of the reported benzene concentrations were below 40 mg/m³, the national occupational exposure limit (OEL) for benzene between 1979 and 2001. The remaining 65% of measurements exceeded the national OEL in effect at the time and were distributed as follows: 40-100 mg/m³, 11%; 100-300 mg/m³, 21%; 300-500 mg/m³, 13%; and 500+ mg/m³, 20%. However, only 24% of the measurements reported after 2002 were below 6 mg/m³, the Permissible Concentration-Time Weighted Average (PC-TWA), and 10 mg/m³, the Permissible Concentration-Short Term Exposure Limit (PC-STEL), the newly amended benzene OELs in effect after May 2002. The data demonstrated that the majority of the facilities in the shoemaking industry reported in the literature were not in compliance with the OEL for benzene in effect at the time. Overall, the data show a clear downward

  3. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
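    The weighting rule at the core of Bayesian model averaging is simple to state: each model's prediction is weighted by its posterior probability, proportional to its evidence times its prior. A numerical sketch (the log-evidences are invented):

        import math

        def bma_weights(log_evidences, priors=None):
            """Posterior model probabilities from log model evidences."""
            n = len(log_evidences)
            priors = priors or [1.0 / n] * n
            m = max(log_evidences)  # subtract max for numerical stability
            unnorm = [p * math.exp(le - m) for le, p in zip(log_evidences, priors)]
            z = sum(unnorm)
            return [u / z for u in unnorm]

        w = bma_weights([-10.2, -11.0, -14.5])
        # averaged prediction = sum of w_i * model_i prediction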

  4. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  5. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)

  6. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department

  7. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  8. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  9. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  10. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in L 2 -norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies
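    One standard way to realize such a reconstruction: model the noisy local averages as a linear operator A applied to the unknown samples f and solve a Tikhonov-regularized least-squares problem. The averaging operator and regularization parameter below are illustrative choices of ours, not the paper's method:

        import numpy as np

        def reconstruct_from_averages(A, b, lam=1e-3):
            """Solve min ||A f - b||^2 + lam * ||f||^2 (Tikhonov regularization)."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        # Toy example: each datum averages 3 consecutive samples, plus noise.
        n = 30
        A = np.zeros((n - 2, n))
        for i in range(n - 2):
            A[i, i:i + 3] = 1.0 / 3.0
        f_true = np.sin(np.linspace(0.0, 3.0, n))
        b = A @ f_true + 0.01 * np.random.default_rng(0).normal(size=n - 2)
        f_rec = reconstruct_from_averages(A, b)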

  11. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of Physics, July 2007, pp. 31–47. A singularity theorem based on spatial … In this paper I would like to present a result which confirms – at least partially – … A detailed analysis of how the model fits in with the … Further, the statement that the spatial average … Financial support under grants FIS2004-01626 and no. …

  12. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  13. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  14. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  15. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  16. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  17. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  18. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  19. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  20. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  1. Waif goodbye! Average-size female models promote positive body image and appeal to consumers.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2011-10-01

    Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery, by exploring the advertising effectiveness of average-size female fashion models, and their impact on the body image of both women and men. A sample of 171 women and 120 men were assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.

  2. Exposure Forecaster

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Exposure Forecaster Database (ExpoCastDB) is EPA's database for aggregating chemical exposure information and can be used to help with chemical exposure...

  3. Quantifying commuter exposures to volatile organic compounds

    Science.gov (United States)

    Kayne, Ashleigh

    laboratory using standard BTEX gases. The LODs for the Tenax TA sampling tubes (determined with a sample volume of 1,000 standard cubic centimeters, which is close to the approximate commuter sample volumes collected) were orders of magnitude lower (0.04 to 0.7 parts per billion (ppb) for individual compounds of BTEX) than the PIDs' LODs (9.3 to 15 ppb of a BTEX mixture), which makes the Tenax TA sampling method more suitable for measuring BTEX concentrations in the sub-ppb range. PID and Tenax TA data for commuter exposures were inversely related. The concentrations of VOCs measured by the PID were substantially higher than BTEX concentrations measured by collocated Tenax TA samplers. The inverse trend and the large difference in magnitude between PID responses and Tenax TA BTEX measurements indicate the two methods may have been measuring different air pollutants that are negatively correlated. Drivers in Fort Collins, Colorado with closed windows experienced greater time-weighted average BTEX exposures than cyclists (p = 0.04). Commuter BTEX exposures measured in Fort Collins were lower than commuter exposures measured in prior studies that occurred in larger cities (Boston and Copenhagen). Although route and intake may affect a commuter's BTEX dose, these variables are outside the scope of this study. Within the limitations of this study (including: small sample size, small representative area of Fort Collins, and respiration rates not taken into account), it appears health risks associated with traffic-induced BTEX exposures may be reduced by commuting via cycling instead of driving with windows closed and living in a less populous area that has less vehicle traffic. Although the PID did not reliably measure low-level commuter BTEX exposures, the Tenax TA sampling method did. The PID measured BTEX concentrations reliably in a controlled environment, at high concentrations (300-800 ppb), and in the absence of other air pollutants. In

  4. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
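
    As a concrete illustration of the averaging scheme described in this record, the following sketch (Python, with numpy assumed available; the segment length and smoothing constant are arbitrary illustrative choices, not values from the paper) forms a running PSD estimate by exponentially averaging the periodograms of successive signal segments:

      import numpy as np

      def exponentially_averaged_psd(signal, seg_len=256, alpha=0.1):
          """Running PSD estimate: psd <- (1 - alpha)*psd + alpha*periodogram."""
          psd = None
          for k in range(len(signal) // seg_len):
              seg = signal[k*seg_len:(k + 1)*seg_len]
              pgram = np.abs(np.fft.rfft(seg))**2 / seg_len  # raw periodogram
              psd = pgram if psd is None else (1.0 - alpha)*psd + alpha*pgram
          return psd

      # White noise should yield an approximately flat PSD estimate
      rng = np.random.default_rng(0)
      psd = exponentially_averaged_psd(rng.standard_normal(100_000))

    A small alpha corresponds to a large time constant of the averaging process, which is the regime in which the record reports the PDF approaching a Gaussian.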

  5. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  6. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average labour productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of average labour productivity across the factors affecting it is carried out by means of the substitution method.

  7. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Vol. 61, No. 3 (2010), pp. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords: averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  8. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  9. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  10. A population-based exposure assessment methodology for carbon monoxide: Development of a carbon monoxide passive sampler and occupational dosimeter

    Energy Technology Data Exchange (ETDEWEB)

    Apte, Michael G. [Univ. of California, Berkeley, CA (United States)

    1997-09-01

    Two devices, an occupational carbon monoxide (CO) dosimeter (LOCD), and an indoor air quality (IAQ) passive sampler were developed for use in population-based CO exposure assessment studies. CO exposure is a serious public health problem in the U.S., causing both morbidity and mortality (lifetime mortality risk approximately 10^-4). Sparse data from population-based CO exposure assessments indicate that approximately 10% of the U.S. population is exposed to CO above the national ambient air quality standard. No CO exposure measurement technology is presently available for affordable population-based CO exposure assessment studies. The LOCD and IAQ Passive Sampler were tested in the laboratory and field. The palladium-molybdenum based CO sensor was designed into a compact diffusion tube sampler that can be worn. The time-weighted-average (TWA) CO exposure of the device is quantified by a simple spectrophotometric measurement. The LOCD and IAQ Passive Sampler were tested over exposure ranges of 40 to 700 ppm-hours and 200 to 4200 ppm-hours, respectively. Both devices were capable of measuring precisely (relative standard deviation <20%), with low bias (<10%). The LOCD was screened for interferences by temperature, humidity, and organic and inorganic gases. Temperature effects were small in the range of 10°C to 30°C. Humidity effects were low between 20% and 90% RH. Ethylene (200 ppm) caused a positive interference and nitric oxide (50 ppm) caused a negative response in the absence of CO, but not in its presence.
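
    The time-weighted-average exposure that such a dosimeter integrates is simply TWA = Σ cᵢtᵢ / Σ tᵢ, with the cumulative dose in ppm-hours as the numerator. A minimal sketch (Python; the concentration profile below is hypothetical, not data from the study):

      def twa_ppm(concs_ppm, durations_h):
          """Time-weighted average concentration and cumulative ppm-hours."""
          ppm_hours = sum(c * t for c, t in zip(concs_ppm, durations_h))
          return ppm_hours / sum(durations_h), ppm_hours

      # Hypothetical workday: 2 h at 30 ppm, 5 h at 5 ppm, 1 h at 100 ppm
      twa, dose = twa_ppm([30.0, 5.0, 100.0], [2.0, 5.0, 1.0])
      print(f"TWA = {twa:.1f} ppm, cumulative dose = {dose:.0f} ppm-hours")

    The 185 ppm-hours in this example falls within the 40 to 700 ppm-hours range over which the LOCD was tested.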

  11. Exposure to fuel-oil ash and welding emissions during the overhaul of an oil-fired boiler.

    Science.gov (United States)

    Liu, Youcheng; Woodin, Mark A; Smith, Thomas J; Herrick, Robert F; Williams, Paige L; Hauser, Russ; Christiani, David C

    2005-09-01

    The health effects of exposure to vanadium in fuel-oil ash are not well described at levels ranging from 10 to 500 µg/m³. As part of a larger occupational epidemiologic study that assessed these effects during the overhaul of a large oil-fired boiler, this study was designed to quantify boilermakers' exposures to fuel-oil ash particles, metals, and welding gases, and to identify determinants of these exposures. Personal exposure measurements were conducted on 18 boilermakers and 11 utility workers (referents) before and during a 3-week overhaul. Ash particles < 10 µm in diameter (PM10, mg/m³) were sampled over full work shifts using a one-stage personal size-selective sampler containing a polytetrafluoroethylene filter. Filters were digested using the Parr bomb method and analyzed for the metals vanadium (V), nickel (Ni), iron (Fe), chromium (Cr), cadmium (Cd), lead (Pb), manganese (Mn), and arsenic (As) by inductively coupled plasma mass spectrometry. Nitrogen dioxide (NO2) was measured with an Ogawa passive badge-type sampler and ozone (O3) with a personal active pump sampler. Time-weighted average (TWA) exposures were significantly higher (p < 0.05) for boilermakers than for utility workers for PM10 (geometric mean: 0.47 vs. 0.13 mg/m³), V (8.9 vs. 1.4 µg/m³), Ni (7.4 vs. 1.8 µg/m³) and Fe (56.2 vs. 11.2 µg/m³). Exposures were affected by overhaul time periods, tasks, and work locations. No significant increases were found for O3 or NO2 for boilermakers or utility workers regardless of overhaul period or task group. Fuel-oil ash was a major contributor to boilermakers' exposure to PM10 and metals. Vanadium concentrations sometimes exceeded the 2003 American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value.
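
    Because occupational exposure data of this kind are typically treated as log-normal, the record reports geometric means. A brief sketch of computing the geometric mean and geometric standard deviation of a set of TWA samples (Python with numpy assumed; the sample values are hypothetical):

      import numpy as np

      def geometric_stats(samples):
          """Geometric mean and geometric standard deviation (positive data)."""
          logs = np.log(np.asarray(samples, dtype=float))
          return float(np.exp(logs.mean())), float(np.exp(logs.std(ddof=1)))

      # Hypothetical PM10 TWA samples (mg/m^3) for two worker groups
      boilermakers = [0.31, 0.52, 0.44, 0.78, 0.40]
      utility_workers = [0.10, 0.15, 0.12, 0.18, 0.11]
      for name, grp in [("boilermakers", boilermakers),
                        ("utility workers", utility_workers)]:
          gm, gsd = geometric_stats(grp)
          print(f"{name}: GM = {gm:.2f} mg/m^3, GSD = {gsd:.2f}")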

  12. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  13. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  14. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  15. Occupational exposure to ethylene oxide--OSHA. Final rule: supplemental statement of reasons.

    Science.gov (United States)

    1985-01-02

    On June 22, 1984, the Occupational Safety and Health Administration (OSHA) published a final standard for ethylene oxide (EtO) that established a permissible exposure limit of 1 part EtO per million parts of air determined as an 8-hour time-weighted average (TWA) concentration (29 CFR 1910.1047, 49 FR 25734). The standard also includes provisions for methods of exposure control, personal protective equipment, measurement of employee exposure, training, signs and labels, medical surveillance, regulated areas, emergencies, and recordkeeping. The basis for this action was a determination by OSHA, based on human and animal data, that exposure to EtO presents a carcinogenic, mutagenic, genotoxic, reproductive, neurologic, and sensitization hazard to workers. During the rulemaking proceedings that led to the establishment of the 1 ppm TWA, the issue of whether there was a need for a short-term exposure limit (STEL) for worker protection from EtO was raised. OSHA reserved decision on the adoption of a STEL at the conclusion of the rulemaking in order to permit peer review of the available evidence and to review more fully the arguments and pertinent data regarding the STEL issue. Upon receipt of the analyses from most of the peer reviewers, OSHA published a notice to that effect on September 19, 1984 (49 FR 36659) and invited public comment on the pertinent issues addressed in the peer reviews. Based on the entire rulemaking record, including the peer reviews and public comments received since June 22, the Assistant Secretary has determined that adoption of a STEL for EtO is not warranted by the available health evidence, and that a STEL is not reasonably necessary or appropriate for inclusion in the final EtO standard. OSHA has also asked that NIOSH fund certain additional studies related to whether a dose-rate relationship can be established for EtO, and OSHA will review the results of those studies when they become available.
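
    The regulatory distinction at stake, an 8-hour TWA limit as opposed to a short-term exposure limit, is easy to make concrete. In the sketch below (Python; the 15-minute sampling interval and the exposure profile are illustrative assumptions, not part of the standard), a shift complies with a 1 ppm TWA while still containing short excursions well above it:

      def eight_hour_twa(samples_ppm, interval_min=15):
          """8-h TWA from back-to-back equal-interval samples over the shift."""
          return sum(samples_ppm) * interval_min / (8 * 60)

      # 32 fifteen-minute samples spanning an 8-h shift (hypothetical values)
      shift = [0.2] * 28 + [4.0, 6.0, 3.0, 0.5]
      print(f"8-h TWA = {eight_hour_twa(shift):.2f} ppm (TWA limit: 1 ppm)")
      print(f"worst 15-min average = {max(shift)} ppm")

    This is precisely why a STEL is debated separately from a TWA limit: the TWA here is about 0.6 ppm, yet one 15-minute period averages 6 ppm.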

  16. Reconstruction of a time-averaged midposition CT scan for radiotherapy planning of lung cancer patients using deformable registration

    International Nuclear Information System (INIS)

    Wolthaus, J. W. H.; Sonke, J.-J.; Herk, M. van; Damen, E. M. F.

    2008-01-01

    for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than any of the 4D CT frames (including MidV; reduction of 'shape differences' was 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. Conclusions: We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represents that of the BH CT scan better than MidV CT scan and, therefore, was found to be appropriate for treatment planning

  17. Average glandular dose in digital mammography and breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Olgar, T. [Ankara Univ. (Turkey). Dept. of Engineering Physics; Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Kahn, T.; Gosch, D. [Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie

    2012-10-15

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2D imaging mode) and in breast tomosynthesis (3D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2D mammograms and 984 mammograms in 3D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2D imaging mode, whereas a good correlation (coefficient 0.98) was found in 3D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3D imaging mode was on average 34 % higher than in 2D imaging mode for patients examined with the same CBT.

  18. Average glandular dose in digital mammography and breast tomosynthesis

    International Nuclear Information System (INIS)

    Olgar, T.; Universitaetsklinikum Leipzig AoeR; Kahn, T.; Gosch, D.

    2012-01-01

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2D imaging mode) and in breast tomosynthesis (3D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2D mammograms and 984 mammograms in 3D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2D imaging mode, whereas a good correlation (coefficient 0.98) was found in 3D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3D imaging mode was on average 34 % higher than in 2D imaging mode for patients examined with the same CBT.

  19. Risk assessments using the Strain Index and the TLV for HAL, Part I: Task and multi-task job exposure classifications.

    Science.gov (United States)

    Kapellusch, Jay M; Bao, Stephen S; Silverstein, Barbara A; Merryweather, Andrew S; Thiese, Mathew S; Hegmann, Kurt T; Garg, Arun

    2017-12-01

    The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value for Hand Activity Level (TLV for HAL) use different constituent variables to quantify task physical exposures. Similarly, time-weighted-average (TWA), Peak, and Typical exposure techniques to quantify physical exposure from multi-task jobs make different assumptions about each task's contribution to the whole job exposure. Thus, task and job physical exposure classifications differ depending upon which model and technique are used for quantification. This study examines exposure classification agreement, disagreement, correlation, and magnitude of classification differences between these models and techniques. Data from 710 multi-task job workers performing 3,647 tasks were analyzed using the SI and TLV for HAL models, as well as with the TWA, Typical and Peak job exposure techniques. Physical exposures were classified as low, medium, and high using each model's recommended, or a priori limits. Exposure classification agreement and disagreement between models (SI, TLV for HAL) and between job exposure techniques (TWA, Typical, Peak) were described and analyzed. Regardless of technique, the SI classified more tasks as high exposure than the TLV for HAL, and the TLV for HAL classified more tasks as low exposure. The models agreed on 48.5% of task classifications (kappa = 0.28) with 15.5% of disagreement between low and high exposure categories. Between-technique (i.e., TWA, Typical, Peak) agreement ranged from 61-93% (kappa: 0.16-0.92) depending on whether the SI or TLV for HAL was used. There was disagreement between the SI and TLV for HAL and between the TWA, Typical and Peak techniques. Disagreement creates uncertainty for job design, job analysis, risk assessments, and developing interventions. Task exposure classifications from the SI and TLV for HAL might complement each other. However, TWA, Typical, and Peak job exposure techniques all have
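
    The three job exposure techniques compared here can be sketched directly (Python; the task scores, hours, and the low/medium/high cut-points are hypothetical placeholders rather than the actual SI or TLV for HAL limits, and "Typical" is read here as the score of the most time-consuming task):

      def twa_score(scores, hours):
          return sum(s * h for s, h in zip(scores, hours)) / sum(hours)

      def peak_score(scores, hours):
          return max(scores)

      def typical_score(scores, hours):
          return scores[hours.index(max(hours))]

      def classify(score, low=3.0, high=7.0):
          return "low" if score < low else "medium" if score < high else "high"

      # Hypothetical multi-task job: three tasks, exposure scores, daily hours
      scores, hours = [2.0, 9.0, 4.0], [5.0, 1.0, 2.0]
      for name, fn in [("TWA", twa_score), ("Peak", peak_score),
                       ("Typical", typical_score)]:
          s = fn(scores, hours)
          print(f"{name}: score = {s:.2f} -> {classify(s)}")

    The example lands in three different categories under the three techniques, which is exactly the kind of between-technique disagreement the record quantifies.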

  20. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  1. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  2. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  3. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.

  4. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  5. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
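
    In discrete form, the TAMSD of a single trajectory at lag Δ is (1/(N-Δ)) Σ [X(t+Δ) - X(t)]², to be compared with the ensemble result 2Dt for 1D Brownian motion. A minimal sketch (Python with numpy assumed):

      import numpy as np

      def tamsd(traj, lag):
          """Time-averaged mean-square displacement at an integer lag."""
          disp = traj[lag:] - traj[:-lag]
          return float(np.mean(disp**2))

      # Simulated 1D Brownian motion with diffusion coefficient D
      rng = np.random.default_rng(1)
      D, dt, n = 1.0, 0.01, 10_000
      x = np.cumsum(np.sqrt(2 * D * dt) * rng.standard_normal(n))
      for lag in (1, 10, 100):
          print(f"lag {lag*dt:.2f}: TAMSD = {tamsd(x, lag):.4f} "
                f"(ensemble value 2*D*t = {2*D*lag*dt:.2f})")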

  6. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  7. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  8. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust

  9. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated with a statistical model, for systems with the same and different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the a_s surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta stability lines and mass excess, the surface symmetry energy was established. (M.C.K.)

  10. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  11. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares

  12. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  13. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
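
    The core idea, averaging the iterates of a stochastic approximation recursion (often called Polyak-Ruppert averaging), can be illustrated on the classic Robbins-Monro root-finding problem. This is a sketch under simplified assumptions (Python with numpy; the target function, gain sequence, and burn-in choice are illustrative, not the SAMCMC setting of the paper):

      import numpy as np

      rng = np.random.default_rng(2)

      def noisy_h(theta):
          """Noisy observation of h(theta) = theta - 2; the root is 2."""
          return (theta - 2.0) + rng.standard_normal()

      n_iter = 20_000
      theta = 0.0
      iterates = np.empty(n_iter)
      for k in range(1, n_iter + 1):
          gamma = 1.0 / k**0.7              # slowly decaying gain sequence
          theta -= gamma * noisy_h(theta)   # Robbins-Monro update
          iterates[k - 1] = theta

      theta_bar = iterates[n_iter // 2:].mean()   # average after burn-in
      print(f"last iterate: {theta:.3f}, trajectory average: {theta_bar:.3f}")

    The averaged estimate is typically much less noisy than the last iterate, which is the efficiency property the paper establishes in the SAMCMC context.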

  14. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  15. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality
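
    The quantity being approximated is the spectrum-averaged energy <E> = ∫E N(E)dE / ∫N(E)dE. A numerical sketch on a uniform grid (Python with numpy assumed; the spectrum below is a crude allowed-shape approximation with the Fermi function set to 1, i.e., exactly the kind of complication the record's method is designed to avoid):

      import numpy as np

      def average_beta_energy(E, N):
          """<E> = sum(E*N)/sum(N) on a uniform energy grid (dE cancels)."""
          return float((E * N).sum() / N.sum())

      me = 0.511  # electron rest energy, MeV
      E0 = 1.0    # endpoint energy, MeV (illustrative)
      E = np.linspace(1e-4, E0, 2000)
      p = np.sqrt((E + me)**2 - me**2)   # electron momentum
      N = p * (E + me) * (E0 - E)**2     # allowed shape, Fermi function ~ 1
      print(f"<E> = {average_beta_energy(E, N):.3f} MeV (roughly E0/3)")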

  16. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t≥0} is a fixed realization, i.e., sample-path of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.

  17. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solutions depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes that are finite at t → 0 leads to an averaged quasiisotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. The restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  18. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
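
    The bias, and one remedy in the spirit of this record, can be seen in a small simulation: when each measurement's reported error scales with its own measured value, low measurements claim small errors and drag a variance-weighted average downward; re-deriving the weights from the current average removes that pull. A sketch under an illustrative error model (Python with numpy assumed):

      import numpy as np

      rng = np.random.default_rng(3)
      true_value, rel_err, n = 10.0, 0.2, 200

      # Each experiment reports a sigma based on its OWN measured value,
      # so low measurements claim small errors and get large weights.
      x = true_value * (1 + rel_err * rng.standard_normal(n))
      reported_sigma = rel_err * x
      naive = np.average(x, weights=1 / reported_sigma**2)  # biased low

      # Remedy: evaluate the error model at the current average instead,
      # and iterate until the average stabilizes.
      avg = x.mean()
      for _ in range(10):
          sigma = rel_err * avg
          avg = np.average(x, weights=np.full(n, 1 / sigma**2))
      print(f"naive weighted average: {naive:.3f}, iterated: {avg:.3f}")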

  19. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  20. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy, resulting in distinct and independently controlled (by the choice of phase offset) phase-energy correlations, or chirps, on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL, with higher order terms managed.

  1. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  2. [Quetelet, the average man and medical knowledge].

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  3. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
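
    A minimal sketch of a closeness measure in this spirit (Python with numpy assumed; the fixed-point recursion below, in which a node's number is the weighted harmonic average of its neighbors' numbers incremented by the inverse edge weight, is one plausible reading of the construction and not necessarily the authors' exact definition):

      import numpy as np

      def generalized_erdos_numbers(adj, source, n_iter=200):
          """g[source] = 0; for j != source, g[j] is the weighted harmonic
          average of (g[k] + 1/w_jk) over the neighbors k of j."""
          n = len(adj)
          g = np.full(n, float(n))
          g[source] = 0.0
          for _ in range(n_iter):
              for j in range(n):
                  if j == source:
                      continue
                  w = adj[j]
                  nbrs = np.nonzero(w)[0]
                  wts = w[nbrs]
                  g[j] = wts.sum() / np.sum(wts / (g[nbrs] + 1.0 / wts))
          return g

      # Small weighted collaboration graph (symmetric adjacency matrix)
      A = np.array([[0., 3., 1., 0.],
                    [3., 0., 1., 1.],
                    [1., 1., 0., 2.],
                    [0., 1., 2., 0.]])
      print(generalized_erdos_numbers(A, source=0))

    Because the recursion is anchored at a particular source node, the resulting closeness is naturally asymmetric between node pairs, which is the property the record exploits.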

  4. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV

  5. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
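
    The AGDI construction itself is straightforward: accumulate the absolute differences between successive binary silhouettes and normalize. A sketch (Python with numpy assumed; the array shapes and the toy sequence are illustrative):

      import numpy as np

      def average_gait_differential_image(silhouettes):
          """AGDI: mean absolute difference between adjacent silhouette
          frames; input shape (n_frames, height, width), values in {0, 1}."""
          frames = np.asarray(silhouettes, dtype=float)
          diffs = np.abs(frames[1:] - frames[:-1])  # frame-to-frame changes
          return diffs.mean(axis=0)                 # accumulate and normalize

      # Toy example: a 1-pixel-wide bar sweeping across a tiny frame
      n_frames, h, w = 8, 16, 16
      seq = np.zeros((n_frames, h, w))
      for t in range(n_frames):
          seq[t, :, t] = 1.0
      agdi = average_gait_differential_image(seq)
      print(agdi.shape, agdi.max())

    In a real pipeline the AGDI image would then be flattened or fed directly to 2DPCA for feature extraction, as the record describes.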

  6. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better concurrence with available experimental data than has been achieved with steady computation

  7. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  8. Evaluation of occupational exposure to static magnetic field in a chloralkali plant

    Directory of Open Access Journals (Sweden)

    AR Coobineh

    2005-10-01

    Background and Aims: An observational cross-sectional study was conducted to determine if long-term exposure to static magnetic fields could be related to findings of medical examinations. Method: Health data were obtained for 20 workers who spent a major portion of their workdays in the magnetic fields produced by the direct current through large electrolytic cells. These data were compared to those of a control group of 21 workers. Intensities of the magnetic fields were measured in the cell room and the time-weighted average (TWA) exposure to the magnetic field was calculated for each job classification. Results: Maximum and minimum intensities, in mT, were found to be 16.99 and 0.46, respectively, which were well below the permissible level. The maximum TWA exposure to the magnetic field was found to be 47.59 mT. Comparing the ECG, EEG, blood pressure and pulse rates between the two groups showed no statistically significant differences. Clinical findings showed that complaints of fatigue and nervousness were higher in the case group and were significantly different between the two groups. Conclusion: We did not see statistically significant differences between case and control groups in ECG, EEG, blood pressure and pulse rates. We did see statistically significant differences in the fatigue (P<0.01) and nervousness (P<0.01) complaints of these two groups. We suggest that it may be necessary to choose a value lower than 60 mT (TLV-TWA), so that the complaints of fatigue and nervousness will be reduced.

  9. Evaluation of asbestos exposures during firewood-harvesting simulations in Libby, MT, USA--preliminary data.

    Science.gov (United States)

    Hart, Julie F; Ward, Tony J; Spear, Terry M; Crispen, Kelly; Zolnikov, Tara R

    2007-11-01

    Research was conducted in order to assess potential exposure to asbestos while harvesting firewood from amphibole-contaminated trees near Libby, MT, USA. Three firewood-harvesting simulations took place in the summer and fall of 2006 in the Kootenai Forest inside the US Environmental Protection Agency (EPA) restricted zone surrounding the former W.R. Grace vermiculite mine. Another simulation was conducted near Missoula, MT, USA, which served as the control. The work practices followed during each simulation were consistent throughout each trial. Personal breathing zone (PBZ) asbestos concentrations were measured by phase contrast microscopy (PCM) and transmission electron microscopy (TEM). Surface wipe samples of personal protective clothing were measured by TEM. The mean (n = 12) PBZ PCM sample time-weighted average (TWA) concentration was 0.29 fibers per milliliter (SD = 0.54). A substantial portion (more than five fibers per sample) of non-asbestos fibers (cellulose) was reported on all PBZ samples (excluding field blanks) when analyzed by TEM. The mean (n = 12) PBZ TEM sample TWA concentration for amphibole fibers >5 µm long was 0.07 fibers per milliliter (SD = 0.08). Substantial amphibole fiber concentrations were revealed on Tyvek clothing wipe samples; the mean concentration (n = 12) was 29 826 fibers per square centimeter (SD = 37 555), with 91% (27 192 fibers per square centimeter) comprised of fibers <5 µm long. These results indicate that asbestos fibers are released during firewood-harvesting activities in asbestos-contaminated areas and that the potential for exposure exists during such activities.

  10. Assessment of airborne asbestos exposure during the servicing and handling of automobile asbestos-containing gaskets.

    Science.gov (United States)

    Blake, Charles L; Dotson, G Scott; Harbison, Raymond D

    2006-07-01

    Five test sessions were conducted to assess asbestos exposure during the removal or installation of asbestos-containing gaskets on vehicles. All testing took place within an operative automotive repair facility involving passenger cars and a pickup truck ranging in vintage from the late 1960s through the 1970s. A professional mechanic performed all shop work, including engine disassembly and reassembly, gasket manipulation and parts cleaning. Bulk sample analysis of removed gaskets by polarized light microscopy (PLM) revealed asbestos fiber concentrations ranging between 0 and 75%. Personal and area air samples were collected and analyzed using National Institute for Occupational Safety and Health (NIOSH) methods 7400 [phase contrast microscopy (PCM)] and 7402 [transmission electron microscopy (TEM)]. Among all air samples collected, approximately 21% (n = 11) contained chrysotile fibers. The mean PCM and phase contrast microscopy equivalent (PCME) 8-h time-weighted average (TWA) concentrations for these samples were 0.0031 fibers per cubic centimeter (f/cc) and 0.0017 f/cc, respectively. Based on these findings, automobile mechanics who worked with asbestos-containing gaskets may have been exposed to airborne asbestos concentrations approximately 100 times lower than the current Occupational Safety and Health Administration (OSHA) Permissible Exposure Limit (PEL) of 0.1 f/cc.

  11. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
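
    A rough sketch of the core comparison the estimator performs, under simplifying assumptions: the balanced fraction q is taken here as the smaller share of patients with a defined outcome in the two arms, and the paper's bias expressions, sensitivity analyses and bootstrap inference are omitted (all names are illustrative):

      import numpy as np

      def balanced_sace_estimate(surv_t, y_t, surv_c, y_c):
          # surv_*: survival times; y_*: longitudinal outcome at the time point
          # of interest, NaN where the patient died first (outcome undefined).
          # Expects numpy arrays. Balanced fraction q: the smaller share of
          # defined outcomes across the treatment and control arms.
          q = min(np.mean(~np.isnan(y_t)), np.mean(~np.isnan(y_c)))
          k_t = max(1, int(round(q * len(surv_t))))
          k_c = max(1, int(round(q * len(surv_c))))
          top_t = np.argsort(surv_t)[-k_t:]  # longest-surviving fraction, treatment
          top_c = np.argsort(surv_c)[-k_c:]  # longest-surviving fraction, control
          return np.nanmean(y_t[top_t]) - np.nanmean(y_c[top_c])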

  12. Wintertime pollution level, size distribution and personal daily exposure to particulate matters in the northern and southern rural Chinese homes and variation in different household fuels.

    Science.gov (United States)

    Du, Wei; Shen, Guofeng; Chen, Yuanchen; Zhuo, Shaojie; Xu, Yang; Li, Xinyue; Pan, Xuelian; Cheng, Hefa; Wang, Xilong; Tao, Shu

    2017-12-01

    This study investigated and compared wintertime air pollution and personal exposure in rural northern and southern Chinese homes. Daily indoor and outdoor particle samples were collected simultaneously using stationary samplers, and personal exposure was measured directly using portable samplers carried by the participants. The daily average concentrations of indoor and outdoor PM2.5 were 521 ± 234 and 365 ± 185 μg/m3 in the northern village, about 2.3-2.7 times the corresponding indoor and outdoor concentrations of 188 ± 104 and 150 ± 29 μg/m3 in the southern villages. The particle size distribution was similar between indoor and outdoor air, and differed relatively little between the two sites compared with the difference in particle mass concentrations. PM2.5 contributed ∼80% of the TSP mass, and within PM2.5, nearly 90% was PM1.0. In homes using electricity in the southern villages, outdoor air pollution could explain 70-80% of the variation in indoor air pollution. The daily exposure to PM2.5 measured using personal samplers was 451 ± 301 μg/m3 in the northern villages, where traditional solid fuels were used for daily cooking and heating; in the southern villages, which lacked heating, the exposure to PM2.5 was 184 ± 83 and 166 ± 45 μg/m3 for the populations using wood and electricity for daily cooking, respectively. The time-weighted daily average exposure estimated from area concentrations and time spent indoors and outdoors was generally correlated with the directly measured exposure. Copyright © 2017 Elsevier Ltd. All rights reserved.
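
    The time-weighted daily average exposure referred to above has the usual microenvironment form (generic notation, not the paper's):

      E_{TWA} = \frac{C_{in} \, t_{in} + C_{out} \, t_{out}}{t_{in} + t_{out}}

    where C_in and C_out are the measured area concentrations and t_in and t_out are the hours spent indoors and outdoors during the day.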

  13. Effect of physical exertion on the biological monitoring of exposure of various solvents following exposure by inhalation in human volunteers: I. Toluene.

    Science.gov (United States)

    Nadeau, Véronique; Truchon, Ginette; Brochu, Martin; Tardif, Robert

    2006-09-01

    Physical exertion (work load) has been recognized as one of several factors that can influence the kinetics of xenobiotics within the human body. This study was undertaken to evaluate the impact of physical exertion on two exposure indicators of toluene (TOL) in human volunteers exposed under controlled conditions in an inhalation chamber. A group of four volunteers (one woman, three men) were exposed to TOL (50 ppm) according to scenarios involving several periods during which volunteers were asked to perform aerobic (AERO), muscular (MUSC), or both (AERO/MUSC) types of physical exercise (exercise bicycle, treadmills, pulleys). The target intensities (W) for each 30-min exercise period, interspersed with 15 min at rest, were the following: REST; 50 W AERO (time-weighted average intensity [TWAI]: 46 watts); 50 W AERO/MUSC (TWAI: 38 watts) and 100 W AERO (TWAI: 71 watts) for 7 hours; and 50 W MUSC for 3 hours (TWAI: 29 watts). Alveolar air and urine samples were collected at different time intervals before, during, and after exposure for the measurement of unchanged TOL in expired air (TOL-A) and urinary o-cresol (o-CR). Overall, the results showed that TOL-A measured during and after all scenarios involving physical activity was higher (approximately 1.4-2.0 fold) than during exposures at rest. All scenarios involving physical exertion also resulted in increased end-of-exposure urinary o-CR (mean +/- SD): 0.9 +/- 0.1 mg/L (REST) vs. 2.0 +/- 0.1 mg/L (TWAI 46 watts). However, exposure at a TWAI of 71 watts did not further increase o-CR excretion (1.7 +/- 0.2 mg/L). This study confirms the significant effect of work load on TOL kinetics and showed that o-CR excretion increased proportionally with work load expressed as TWAI or with the estimated mean pulmonary ventilation during the period of exposure. This study also shows that exposure to TOL (50 ppm) involving a work load of around 50 W (light intensity) or lower is likely to produce

  14. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m2) processing of materials requires the economical production of laser powers in the tens of kilowatts, and therefore such processes are not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  15. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
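
    In common notation (not necessarily the paper's), the relation described is that the derivative of the free energy A along a generalized coordinate \xi equals minus the conditional average of the instantaneous force F_\xi, so free energy differences follow by integrating the average force:

      \frac{dA}{d\xi} = -\langle F_\xi \rangle_\xi , \qquad \Delta A = A(\xi_2) - A(\xi_1) = -\int_{\xi_1}^{\xi_2} \langle F_\xi \rangle_\xi \, d\xi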

  16. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
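
    For contrast with the geographic scheme, a minimal sketch of the standard pairwise randomized gossip it improves upon (adjacency lists and values are illustrative): at each step a random node averages with a random neighbour, and on a connected graph all values converge to the global mean.

      import random

      def standard_gossip(values, neighbors, steps=10000, seed=0):
          # Pairwise randomized gossip: repeatedly pick a node and one of its
          # neighbors and replace both values with their average. Converges to
          # the global mean on a connected graph.
          rng = random.Random(seed)
          x = list(values)
          nodes = list(neighbors)
          for _ in range(steps):
              i = rng.choice(nodes)
              j = rng.choice(neighbors[i])
              x[i] = x[j] = (x[i] + x[j]) / 2.0
          return x

      # Usage on a 6-node ring with hypothetical sensor readings:
      ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
      print(standard_gossip([3.0, 1.0, 4.0, 1.0, 5.0, 9.0], ring))  # all near 3.83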

  17. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was begun, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  18. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    The concept of determining average LET (linear energy transfer) values, i.e. ordinary moments of LET in the absorbed dose distribution vs. LET, for ionizing radiation of any kind and any spectrum (even unknown ones), is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or of ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on the LET of the radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)

  19. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  20. Residential exposure to traffic noise and risk of incident atrial fibrillation

    DEFF Research Database (Denmark)

    Monrad, Maria; Sajadieh, Ahmad; Christensen, Jeppe Schultz

    2016-01-01

    with adjustment for lifestyle, socioeconomic position and air pollution. Results A 10 dB higher 5-year time-weighted mean exposure to road traffic noise was associated with a 6% higher risk of A-fib (incidence rate ratio (IRR): 1.06; 95% confidence interval (95% CI): 1.00–1.12) in models adjusted for factors...

  1. U.S. dietary exposures to heterocyclic amines.

    Science.gov (United States)

    Bogen, K T; Keating, G A

    2001-01-01

    Heterocyclic amines (HAs) formed in fried, broiled or grilled meats are potent mutagens that increase rates of colon, mammary, prostate and other cancers in bioassay rodents. Studies of how human dietary HA exposures may affect cancer risks have so far relied on fairly crudely defined HA-exposure categories. Recently, an integrated, quantitative approach to HA-exposure assessment (HAEA) was developed to estimate compound-specific intakes for particular individuals based on corresponding HA-concentration estimates that reflect their meat-type, intake-rate, cooking-method and meat-doneness preferences. This method was applied in the present study to U.S. national Continuing Survey of Food Intakes by Individuals (CSFII) data on meats consumed and cooking methods used by >25,000 people, after adjusting for underreported energy intake and conditional on meat-doneness preferences estimated from additional survey data. The U.S. population average lifetime time-weighted average of total HAs consumed was estimated to be approximately 9 ng/kg/day, with 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) estimated to comprise about two thirds of this intake. Pan-fried meats were the largest source of HA in the diet and chicken the largest source of HAs among different meat types. Estimated total HA intakes by male vs. female children were generally similar, with those by (0- to 15-year-old) children approximately 25% greater than those by (16+-year-old) adults. Race-, age- and sex-specific mean HA intakes were estimated to be greatest for African American males, who were estimated to consume approximately 2- and approximately 3-fold more PhIP than white males at ages <16 and 30+ years, respectively, after considering a relatively greater preference for more well-done items among African Americans based on national survey data. This difference in PhIP intakes may at least partly explain why prostate cancer (PC) kills approximately 2-fold more African American than white men

  2. Assessment of occupational exposure of medical personnel to inhalatory anesthetics in Poland

    Directory of Open Access Journals (Sweden)

    Małgorzata Kucharska

    2014-02-01

    Full Text Available Objectives: Despite the common use of inhalatory anesthetics, such as nitrous oxide (N2O), halothane, sevoflurane, and the like, occupational exposure to these substances in operating theatres was not monitored in Poland until 2006. The situation changed when maximum admissible concentration (MAC) values for anesthetics used in Poland were established in 2005 for N2O, and in 2007 for sevoflurane, desflurane and isoflurane. The aim of this work was to assess occupational exposure in operating rooms on the basis of reliable and uniform analytical procedures. Material and Methods: A method for the determination of all anesthetics used in Poland, i.e. nitrous oxide, sevoflurane, isoflurane, desflurane, and halothane, was developed and validated. The measurements were performed in 2006-2010 in 31 hospitals countrywide. The study covered 117 operating rooms; air samples were collected from the breathing zone of 146 anesthesiologists and 154 nurses, mostly anaesthetic. The measurements were carried out during various surgical operations, mostly on adult patients but also in hospitals for children. Results: Time-weighted average concentrations of the anesthetics varied considerably, and the greatest differences were noted for N2O (0.1-1438.5 mg/m3); 40% of the results exceeded the MAC value. Only 3% of halothane and 2% of sevoflurane concentrations exceeded the respective MAC values. Conclusions: Working in operating theatres is dangerous to the health of the operating staff. The coefficient of combined exposure for the anesthesiologists under study exceeded the admissible value in 130 cases, which is over 40% of the whole study population. Most of the excessive exposure values were noted for nitrous oxide. Med Pr 2014;65(1):43–54
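
    The coefficient of combined exposure mentioned in the conclusions follows the usual additive-mixture convention: the sum of each agent's TWA concentration divided by its limit, with values above 1 indicating excessive combined exposure (a sketch; the limit values below are illustrative, not the Polish regulatory figures):

      def combined_exposure_coefficient(twa_by_agent, mac_by_agent):
          # Additive-mixture rule: sum of TWA/MAC ratios across agents.
          # A result > 1 means the combined exposure exceeds the admissible level.
          return sum(twa_by_agent[a] / mac_by_agent[a] for a in twa_by_agent)

      # Hypothetical shift: N2O and sevoflurane TWAs against illustrative limits
      print(combined_exposure_coefficient(
          {"N2O": 240.0, "sevoflurane": 30.0},     # measured TWAs, mg/m3 (hypothetical)
          {"N2O": 180.0, "sevoflurane": 55.0}))    # limit values, mg/m3 (illustrative)
      # ~1.88 > 1: combined exposure excessive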

  3. Dosimetric consequences of planning lung treatments on 4DCT average reconstruction to represent a moving tumour

    International Nuclear Information System (INIS)

    Dunn, L.F.; Taylor, M.L.; Kron, T.; Franich, R.

    2010-01-01

    Full text: Anatomic motion during a radiotherapy treatment is one of the more significant challenges in contemporary radiation therapy. For tumours of the lung, motion due to patient respiration makes both accurate planning and dose delivery difficult. One approach is to use the maximum intensity projection (MIP) obtained from a 4D computed tomography (CT) scan and then use this to determine the treatment volume. The treatment is then planned on a 4DCT average reconstruction, rather than assuming the entire ITV has a uniform tumour density. This raises the question: how well does planning on a 'blurred' distribution of density, with CT values greater than lung density but less than tumour density, match the true case of a tumour moving within lung tissue? The aim of this study was to answer this question, determining the dosimetric impact of using a 4DCT average reconstruction as the basis for a radiotherapy treatment plan. To achieve this, Monte Carlo simulations were undertaken using GEANT4. The geometry consisted of a tumour (diameter 30 mm) moving with a sinusoidal pattern of amplitude = 20 mm. The tumour's excursion occurs within a lung-equivalent volume beyond a chest wall interface. Motion was defined parallel to a 6 MV beam. This was then compared to a single oblate tumour of a magnitude determined by the extremes of the tumour motion. The variable density of the 4DCT average tumour is simulated by a time-weighted average, to achieve the observed density gradient. The generic moving tumour geometry is illustrated in the Figure.
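
    A minimal numeric sketch of the time-weighted density construction described above: for sinusoidal motion, the fraction of the breathing period during which the tumour covers each point along the motion axis sets the blurred value between lung and tumour density (the geometry follows the abstract; the density values are illustrative):

      import numpy as np

      # Geometry from the abstract: 30 mm tumour, sinusoidal motion, 20 mm amplitude
      radius, amplitude = 15.0, 20.0            # mm
      rho_lung, rho_tumour = 0.26, 1.0          # g/cm3, illustrative densities

      z = np.linspace(-60.0, 60.0, 481)         # positions along the motion axis, mm
      t = np.linspace(0.0, 1.0, 2000)           # one motion period, arbitrary units
      centre = amplitude * np.sin(2.0 * np.pi * t)

      # Fraction of the period each position lies inside the tumour (time weighting)
      occupancy = (np.abs(z[:, None] - centre[None, :]) <= radius).mean(axis=1)

      # Time-weighted average density, as seen by a 4DCT average reconstruction
      rho_avg = occupancy * rho_tumour + (1.0 - occupancy) * rho_lung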

  4. Emergency department visits of young children and long-term exposure to neighbourhood smoke from household heating - The Growing Up in New Zealand child cohort study.

    Science.gov (United States)

    Lai, Hak Kan; Berry, Sarah D; Verbiest, Marjolein E A; Tricker, Peter J; Atatoa Carr, Polly E; Morton, Susan M B; Grant, Cameron C

    2017-12-01

    In developed countries, exposure to wood or coal smoke occurs predominantly from neighbourhood emissions arising from household heating. The effect of this exposure on child health is not well characterized. Within a birth cohort study in New Zealand we assessed healthcare events associated with exposure to neighbourhood smoke from household heating. Our outcome measure was non-accidental presentations to hospital emergency departments (ED) before age three years. We matched small area-level census information with the geocoded home locations to measure the density of household heating with wood or coal in the neighbourhood and applied a time-weighted average exposure method to account for residential mobility. We then used hierarchical multiple logistic regression to assess the independence of associations of this exposure with ED presentations adjusted for gender, ethnicity, birth weight, breastfeeding, immunizations, number of co-habiting smokers, wood or coal heating at home, bedroom mold, household- and area-level deprivation and rurality. The adjusted odds ratio of having a non-accidental ED visit was 1.07 [95%CI: 1.03-1.12] per wood or coal heating household per hectare. We found a linear dose-response relationship (p-value for trend = 0.024) between the quartiles of exposure (1st as reference) and the same outcome (odds ratio in 2nd to 4th quartiles: 1.14 [0.95-1.37], 1.28 [1.06-1.54], 1.32 [1.09-1.60]). Exposure to neighbourhoods with higher density of wood or coal smoke-producing households is associated with an increased odds of ED visits during early childhood. Policies that reduce smoke pollution from domestic heating by as little as one household per hectare using solid fuel burners could improve child health. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Effect of physical exertion on the biological monitoring of exposure to various solvents following exposure by inhalation in human volunteers: II. n-Hexane.

    Science.gov (United States)

    Tardif, Robert; Nadeau, Véronique; Truchon, Ginette; Brochu, Martin

    2007-07-01

    This study evaluated the impact of physical exertion on two n-hexane (HEX) exposure indicators in human volunteers exposed under controlled conditions in an inhalation chamber. A group of four volunteers (two women, two men) were exposed to HEX (50 ppm; 176 mg/m3) according to several scenarios involving several periods when volunteers performed either aerobic (AERO), muscular (MUSC), or both AERO/MUSC types of exercise. The target intensities for 30-min exercise periods separated by 15-min rest periods were the following: REST, 50W AERO [time-weighted average intensity including resting period (TWAI): 38W], 50W AERO/MUSC (TWAI: 34W), 100W AERO/MUSC (TWAI: 63W), and 100W AERO (TWAI: 71W) for 7 hr (two 3-hr exposure periods separated by 1 hr without exposure) and 50W MUSC for 3 hr (TWAI: 31W). Alveolar air and urine samples were collected at different time intervals before, during, and after exposure to measure unchanged HEX in expired air (HEX-A) and urinary 2,5-hexanedione (2,5-HD). HEX-A levels during exposures involving AERO activities (TWAI: 38W and 71W) were significantly enhanced (approximately +14%) compared with exposure at rest. MUSC or AERO/MUSC exercises were also associated with higher HEX-A levels but only at some sampling times. In contrast, end-of-exposure (7 hr) urinary 2,5-HD (mean +/- SD) was not modified by physical exertion: 4.14 +/- 1.51 micromol/L (REST), 4.02 +/- 1.52 micromol/L (TWAI 34W), 4.25 +/- 1.53 micromol/L (TWAI 38W), 3.73 +/- 2.09 micromol/L (TWAI 63W), 3.6 +/- 1.34 micromol/L (TWAI 71W), even though a downward trend was observed. Overall, this study showed that HEX kinetics is practically insensitive to moderate variations in workload intensity; only HEX-A levels increased slightly, and urinary 2,5-HD levels remained unchanged despite the fact that all types of physical exercise increased the pulmonary ventilation rate.

  6. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect, we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  7. An evaluation of short-term exposures of brake mechanics to asbestos during automotive and truck brake cleaning and machining activities.

    Science.gov (United States)

    Richter, Richard O; Finley, Brent L; Paustenbach, Dennis J; Williams, Pamela R D; Sheehan, Patrick J

    2009-07-01

    Historically, the greatest contributions to airborne asbestos concentrations during brake repair work were likely due to specific, short-duration, dust-generating activities. In this paper, the available short-term asbestos air sampling data for mechanics collected during the cleaning and machining of vehicle brakes are evaluated to determine their impact on both short-term and daily exposures. The high degree of variability and lack of transparency for most of the short-term samples limit their use in reconstructing past asbestos exposures for brake mechanics. However, the data are useful in evaluating how reducing short-term, dust-generating activities reduced long-term exposures, especially for auto brake mechanics. Using the short-term dose data for grinding brake linings from these same studies, in combination with existing time-weighted average (TWA) data collected in decades after grinding was commonplace in rebuilding brake shoes, an average 8-h TWA of approximately 0.10 f/cc was estimated for auto brake mechanics that performed arc grinding of linings during automobile brake repair (in the 1960s or earlier). In the 1970s and early 1980s, a decline in machining activities led to a decrease in the 8-h TWA to approximately 0.063 f/cc. Improved cleaning methods in the late 1980s further reduced the 8-h TWA for most brake mechanics to about 0.0021 f/cc. It is noteworthy that when compared with the original OSHA excursion level, only 15 of the more than 300 short-term concentrations for brake mechanics measured during the 1970s and 1980s possibly exceeded the standard. Considering exposure duration, none of the short-term exposures were above the current OSHA excursion level.

  8. Aerosol Emission Monitoring and Assessment of Potential Exposure to Multi-walled Carbon Nanotubes in the Manufacture of Polymer Nanocomposites.

    Science.gov (United States)

    Thompson, Drew; Chen, Sheng-Chieh; Wang, Jing; Pui, David Y H

    2015-11-01

    Recent animal studies have shown that carbon nanotubes (CNTs) may pose a significant health risk to those exposed in the workplace. To further understand this potential risk, effort must be taken to measure the occupational exposure to CNTs. Results from an assessment of potential exposure to multi-walled carbon nanotubes (MWCNTs) conducted at an industrial facility where polymer nanocomposites were manufactured by an extrusion process are presented. Exposure to MWCNTs was quantified by the thermal-optical analysis for elemental carbon (EC) of respirable dust collected by personal sampling. All personal respirable samples collected (n = 8) had estimated 8-h time weighted average (TWA) EC concentrations below the limit of detection for the analysis, which was about one-half of the recommended exposure limit for CNTs, 1 µg EC/m3 as an 8-h TWA respirable mass concentration. Potential exposure sources were identified and characterized by direct-reading instruments and area sampling. Area samples analyzed for EC yielded quantifiable mass concentrations inside an enclosure where unbound MWCNTs were handled and near a pelletizer where nanocomposite was cut, while those analyzed by electron microscopy detected the presence of MWCNTs at six locations throughout the facility. Through size-selective area sampling it was identified that the airborne MWCNTs present in the workplace were in the form of large agglomerates. This was confirmed by electron microscopy, where most of the MWCNT structures observed were in the form of micrometer-sized ropey agglomerates. However, a small fraction of single, free MWCNTs was also observed. It was found that the high number concentrations of nanoparticles, ~200,000 particles/cm3, present in the manufacturing facility were likely attributable to polymer fumes produced in the extrusion process. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  9. Building an industry-wide occupational exposure database for respirable mineral dust - experiences from the IMA dust monitoring programme

    International Nuclear Information System (INIS)

    Houba, Remko; Jongen, Richard; Vlaanderen, Jelle; Kromhout, Hans

    2009-01-01

    Building an industry-wide database with exposure measurements of respirable mineral dust is a challenging operation. The Industrial Minerals Association (IMA-Europe) took the initiative to create an exposure database filled with data from a prospective and ongoing dust monitoring programme that was launched in 2000. More than 20 industrial mineral companies have been collecting exposure data following a common protocol since then. In 2007, ArboUnie and IRAS evaluated the quality of the exposure data collected up to winter 2005/2006. The data evaluated were collected in 11 sampling campaigns by 24 companies at 84 different worksites and comprised about 8,500 respirable dust measurements and 7,500 respirable crystalline silica measurements. In the quality assurance exercise, four criteria were used to evaluate the existing measurement data: personal exposure measurements, unique worker identity, sampling duration not longer than one shift, and availability of a limit of detection. Review of the existing exposure data in the IMA dust monitoring programme database showed that 58% of the collected respirable dust measurements and 62% of the collected respirable quartz measurements could be regarded as 'good quality data' meeting the four criteria mentioned above. Only one third of the measurement data included repeated measurements (within a sampling campaign) that would allow advanced statistical analysis incorporating estimates of within- and between-worker variability in exposure to respirable mineral dust. These data came from 7 companies, comprising measurements from 23 sites. Problematic data were collected in some specific countries, and to a large extent this was due to local practices and legislation (e.g. allowing 40-h time weighted averages). It was concluded that the potential of this unique industry-wide exposure database is very high, but that considerable improvements can be made. At the end of 2006, relatively small but essential changes were made in the dust monitoring

  10. Time-varying cycle average and daily variation in ambient air pollution and fecundability.

    Science.gov (United States)

    Nobles, Carrie J; Schisterman, Enrique F; Ha, Sandie; Buck Louis, Germaine M; Sherman, Seth; Mendola, Pauline

    2018-01-01

    Does ambient air pollution affect fecundability? While cycle-average air pollution exposure was not associated with fecundability, we observed some associations for acute exposure around ovulation and implantation with fecundability. Ambient air pollution exposure has been associated with adverse pregnancy outcomes and decrements in semen quality. The LIFE study (2005-2009), a prospective time-to-pregnancy study, enrolled 501 couples who were followed for up to one year of attempting pregnancy. Average air pollutant exposure was assessed for the menstrual cycle before and during the proliferative phase of each observed cycle (n = 500 couples; n = 2360 cycles) and daily acute exposure was assessed for sensitive windows of each observed cycle (n = 440 couples; n = 1897 cycles). Discrete-time survival analysis modeled the association between fecundability and an interquartile range increase in each pollutant, adjusting for co-pollutants, site, age, race/ethnicity, parity, body mass index, smoking, income and education. Cycle-average air pollutant exposure was not associated with fecundability. In acute models, fecundability was diminished with exposure to ozone the day before ovulation and nitrogen oxides 8 days post ovulation (fecundability odds ratio [FOR] 0.83, 95% confidence interval [CI]: 0.72, 0.96 and FOR 0.84, 95% CI: 0.71, 0.99, respectively). However, particulate matter ≤10 microns 6 days post ovulation was associated with greater fecundability (FOR 1.25, 95% CI: 1.01, 1.54). Although our study was unlikely to be biased due to confounding, misclassification of air pollution exposure and the moderate study size may have limited our ability to detect an association between ambient air pollution and fecundability. While no associations were observed for cycle-average ambient air pollution exposure, consistent with past research in the United States, exposure during critical windows of hormonal variability was associated with prospectively measured couple

  11. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  12. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., when the trap is placed on a central node and when the trap is uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
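
    The APL studied above is the mean shortest-path distance over all pairs of nodes. A generic sketch via breadth-first search for an arbitrary connected unweighted graph (not the dendrimer construction itself):

      from collections import deque

      def average_path_length(adj):
          # Mean shortest-path distance over all ordered pairs of distinct
          # nodes in a connected unweighted graph (adjacency-list dict).
          total, pairs = 0, 0
          for source in adj:
              dist = {source: 0}
              queue = deque([source])
              while queue:
                  u = queue.popleft()
                  for v in adj[u]:
                      if v not in dist:
                          dist[v] = dist[u] + 1
                          queue.append(v)
              total += sum(dist.values())
              pairs += len(dist) - 1
          return total / pairs

      # Usage on a small star tree (a toy stand-in for a dendrimer generation):
      tree = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
      print(average_path_length(tree))  # 1.5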

  13. Factors influencing time-location patterns and their impact on estimates of exposure: the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air).

    Science.gov (United States)

    Spalt, Elizabeth W; Curl, Cynthia L; Allen, Ryan W; Cohen, Martin; Williams, Kayleen; Hirsch, Jana A; Adar, Sara D; Kaufman, Joel D

    2016-06-01

    We assessed time-location patterns and the role of individual- and residential-level characteristics on these patterns within the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) cohort and also investigated the impact of individual-level time-location patterns on individual-level estimates of exposure to outdoor air pollution. Reported time-location patterns varied significantly by demographic factors such as age, gender, race/ethnicity, income, education, and employment status. On average, Chinese participants reported spending significantly more time indoors and less time outdoors and in transit than White, Black, or Hispanic participants. Using a tiered linear regression approach, we predicted time indoors at home and total time indoors. Our model, developed using forward-selection procedures, explained 43% of the variability in time spent indoors at home, and incorporated demographic, health, lifestyle, and built environment factors. Time-weighted air pollution predictions calculated using recommended time indoors from USEPA overestimated exposures as compared with predictions made with MESA Air participant-specific information. These data fill an important gap in the literature by describing the impact of individual and residential characteristics on time-location patterns and by demonstrating the impact of population-specific data on exposure estimates.

  14. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.

  15. Evaluation of the toxicity data for peracetic acid in deriving occupational exposure limits: a minireview.

    Science.gov (United States)

    Pechacek, Nathan; Osorio, Magdalena; Caudill, Jeff; Peterson, Bridget

    2015-02-17

    Peracetic acid (PAA) is a peroxide-based chemistry that is highly reactive and can produce strong local effects upon direct contact with the eyes, skin and respiratory tract. Given its increasing prominence in industry, attention has focused on health hazards and associated risks for PAA in the workplace. Occupational exposure limits (OEL) are one means to mitigate risks associated with chemical hazards in the workplace. A mini-review of the toxicity data for PAA was conducted in order to determine if the data were sufficient to derive health-based OELs. The available data for PAA frequently come from unpublished studies that lack sufficient study details, suffer from gaps in available information and often follow unconventional testing methodology. Despite these limitations, animal and human data suggest sensory irritation as the most sensitive endpoint associated with inhalation of PAA. Rodent RD50 data (the concentration estimated to cause a 50% depression in respiratory rate) were selected as the critical studies in deriving OELs. Based on these data, a range of 0.36-0.51 mg/m3 (0.1-0.2 ppm) was calculated for a time-weighted average (TWA), and 1.2-1.7 mg/m3 (0.4-0.5 ppm) as a range for a short-term exposure limit (STEL). These ranges compare favorably to other published OELs for PAA. Considering the applicable health hazards for this chemistry, a joint TWA/STEL OEL approach for PAA is deemed the most appropriate in assessing workplace exposures to PAA, and the selection of specific values within these proposed ranges represents a risk management decision. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Personal exposures to asbestos fibers during brake maintenance of passenger vehicles.

    Science.gov (United States)

    Cely-García, María Fernanda; Sánchez, Mauricio; Breysse, Patrick N; Ramos-Bonilla, Juan P

    2012-11-01

    Brake linings and brake pads are among the asbestos-containing products that are readily available in Colombia. When sold separated from their support, brake linings require extensive manipulation involving several steps that include drilling, countersinking, riveting, bonding, cutting, beveling, and grinding. Without this manipulation, brake linings cannot be installed in a vehicle. The manipulation process may release asbestos fibers, which may expose brake mechanics to the fibers. Three brake repair shops located in Bogotá (Colombia) were sampled for 3 or 4 consecutive days using US National Institute for Occupational Safety and Health (NIOSH) methods 7400 and 7402. Standard procedures for quality control were followed during the sampling process, and asbestos samples were analyzed by an American Industrial Hygiene Association accredited laboratory. Personal samples were collected to assess full-shift and short-term exposures. Area samples were also collected close to the brake-lining manipulation equipment and within office facilities. Activities were documented during the sampling process. Using phase contrast microscopy equivalent counts to estimate air asbestos concentrations, all personal samples [i.e. 8-h time-weighted averages (TWAs) and 30-min personal samples] were in compliance with the US Occupational Safety and Health Administration standards. Personal asbestos concentrations based on transmission electron microscopy counts were extremely high, ranging from 0.006 to 3.493 f cm-3 for 8-h TWA and from 0.015 to 8.835 f cm-3 for 30-min samples. All asbestos fibers detected were chrysotile. Cleaning facilities and grinding linings resulted in the highest asbestos exposures based on transmission electron microscopy counts. There were also some samples that did not comply with the NIOSH recommended exposure limits. The results indicate that the brake mechanics sampled are exposed to extremely high asbestos concentrations (i.e. based on transmission

  17. Occupational exposure to anesthetics leads to genomic instability, cytotoxicity and proliferative changes

    International Nuclear Information System (INIS)

    Souza, Kátina M.; Braz, Leandro G.; Nogueira, Flávia R.; Souza, Marajane B.; Bincoleto, Lahis F.; Aun, Aline G.; Corrente, José E.; Carvalho, Lídia R.; Braz, José Reinaldo C.; Braz, Mariana G.

    2016-01-01

    Highlights: • Anesthesiologists exposed to the most commonly used anesthetic gases were evaluated. • No alterations were detected for lymphocyte DNA damage detected by the comet assay. • Decreased frequencies of basal cells were detected in exfoliated buccal cells (BMCyt). • Increased frequencies of micronucleus and cytotoxicity were observed in the BMCyt assay. • Anesthesiologists have genomic instability due to occupational exposure. - Abstract: Data on the genotoxic and mutagenic effects of occupational exposure to the most frequently used volatile anesthetics are limited and controversial. The current study is the first to evaluate genomic instability, cell death and proliferative index in exfoliated buccal cells (EBC) from anesthesiologists. We also evaluated DNA damage and determined the concentrations of the anesthetic gases most commonly used in operating rooms. This study was conducted on physicians who were allocated into two groups: the exposed group, which consisted of anesthesiologists who had been exposed to waste anesthetic gases (isoflurane, sevoflurane, desflurane and nitrous oxide − N2O) for at least two years; and the control group, which consisted of non-exposed physicians matched for age, sex and lifestyle with the exposed group. Venous blood and EBC samples were collected from all participants. Basal DNA damage was evaluated in lymphocytes by the comet assay, whereas the buccal micronucleus (MN) cytome (BMCyt) assay was applied to evaluate genotoxic and cytotoxic effects. The concentrations of N2O and anesthetics were measured via a portable infrared spectrophotometer. The average concentration of waste gases was greater than 5 parts per million (ppm) for all of the halogenated anesthetics and was more than 170 ppm for N2O, expressed as a time-weighted average. There was no significant difference between the groups in relation to lymphocyte DNA damage. The exposed group had higher frequencies of MN, karyorrhexis and pyknosis, and

  18. Occupational exposure to anesthetics leads to genomic instability, cytotoxicity and proliferative changes

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Kátina M.; Braz, Leandro G.; Nogueira, Flávia R.; Souza, Marajane B.; Bincoleto, Lahis F.; Aun, Aline G. [Faculdade de Medicina de Botucatu, UNESP − Univ Estadual Paulista, Departamento de Anestesiologia, Botucatu (Brazil); Corrente, José E.; Carvalho, Lídia R. [Instituto de Biociências de Botucatu, UNESP − Univ Estadual Paulista, Departamento de Bioestatística, Botucatu (Brazil); Braz, José Reinaldo C. [Faculdade de Medicina de Botucatu, UNESP − Univ Estadual Paulista, Departamento de Anestesiologia, Botucatu (Brazil); Braz, Mariana G., E-mail: mgbraz@hotmail.com [Faculdade de Medicina de Botucatu, UNESP − Univ Estadual Paulista, Departamento de Anestesiologia, Botucatu (Brazil)

    2016-09-15

    Highlights: • Anesthesiologists exposed to the most commonly used anesthetic gases were evaluated. • No alterations were detected for lymphocyte DNA damage detected by the comet assay. • Decreased frequencies of basal cells were detected in exfoliated buccal cells (BMCyt). • Increased frequencies of micronucleus and cytotoxicity were observed in the BMCyt assay. • Anesthesiologists have genomic instability due to occupational exposure. - Abstract: Data on the genotoxic and mutagenic effects of occupational exposure to the most frequently used volatile anesthetics are limited and controversial. The current study is the first to evaluate genomic instability, cell death and proliferative index in exfoliated buccal cells (EBC) from anesthesiologists. We also evaluated DNA damage and determined the concentrations of the anesthetic gases most commonly used in operating rooms. This study was conducted on physicians who were allocated into two groups: the exposed group, which consisted of anesthesiologists who had been exposed to waste anesthetic gases (isoflurane, sevoflurane, desflurane and nitrous oxide − N2O) for at least two years; and the control group, which consisted of non-exposed physicians matched for age, sex and lifestyle with the exposed group. Venous blood and EBC samples were collected from all participants. Basal DNA damage was evaluated in lymphocytes by the comet assay, whereas the buccal micronucleus (MN) cytome (BMCyt) assay was applied to evaluate genotoxic and cytotoxic effects. The concentrations of N2O and anesthetics were measured via a portable infrared spectrophotometer. The average concentration of waste gases was greater than 5 parts per million (ppm) for all of the halogenated anesthetics and was more than 170 ppm for N2O, expressed as a time-weighted average. There was no significant difference between the groups in relation to lymphocyte DNA damage. The exposed group had higher frequencies of MN, karyorrhexis and

  19. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  20. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory is used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion pathways is expected due to the large variation in the local motifs

  1. Doses from radiation exposure

    International Nuclear Information System (INIS)

    Menzel, H-G.; Harrison, J.D.

    2012-01-01

    Practical implementation of the International Commission on Radiological Protection’s (ICRP) system of protection requires the availability of appropriate methods and data. The work of Committee 2 is concerned with the development of reference data and methods for the assessment of internal and external radiation exposure of workers and members of the public. This involves the development of reference biokinetic and dosimetric models, reference anatomical models of the human body, and reference anatomical and physiological data. Following ICRP’s 2007 Recommendations, Committee 2 has focused on the provision of new reference dose coefficients for external and internal exposure. As well as specifying changes to the radiation and tissue weighting factors used in the calculation of protection quantities, the 2007 Recommendations introduced the use of reference anatomical phantoms based on medical imaging data, requiring explicit sex averaging of male and female organ-equivalent doses in the calculation of effective dose. In preparation for the calculation of new dose coefficients, Committee 2 and its task groups have provided updated nuclear decay data (ICRP Publication 107) and adult reference computational phantoms (ICRP Publication 110). New dose coefficients for external exposures of workers are complete (ICRP Publication 116), and work is in progress on a series of reports on internal dose coefficients to workers from inhaled and ingested radionuclides. Reference phantoms for children will also be provided and used in the calculation of dose coefficients for public exposures. Committee 2 also has task groups on exposures to radiation in space and on the use of effective dose.

  2. Occupational risk and lifetime exposure

    International Nuclear Information System (INIS)

    Lapp, R.E.

    1991-01-01

    Any lowering of annual radiation limits for occupational exposure should be based on industry experience with lifetime doses and not on a worst case career exposure of 47 years. Two decades of experience show a lifetime accumulation of less than 1.5 rem for workers with measurable exposure. This is 5% of the normal lifetime exposure of Americans to natural and medical radiation. Any epidemiology of the US nuclear power workforce's two decade long exposure would have to focus on excess leukemia. Application of the Hiroshima and Nagasaki cancer mortality shows that too few leukemias would be expressed to permit a feasible epidemiology. Ionizing radiation appears to be a mild carcinogen as compared to physical and chemical agents present in the occupational environment. A realistic factor in determining any change in occupational exposure limits for ionizing radiation should take into account the past performance of the licensee and potential health effects applicable to the workplace. Specifically, the lifetime exposure data for workers at nuclear power plants and naval shipyards should be considered. The nuclear industry and the US Navy have detailed data on the annual exposure of workers with a combined collective exposure approaching 1 million worker-rem. The lifetime dose for naval personnel and shipyard workers averaged 1.1 rem as of 1990. Shipyard workers have an annual dose of 0.28 rem per work-year and a mean exposure time of 4.4 years (0.28 rem x 4.4 years ≈ 1.2 rem, consistent with the 1.1 rem lifetime average). The data apply to workers with measurable dose

  3. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used in 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
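
    For readers unfamiliar with the model family, a Poisson-lognormal crash-frequency model takes the log of the expected crash count to be linear in covariates plus a normal error term. A minimal simulation sketch follows (Python; the coefficients and covariate distributions are illustrative assumptions, not the study's fitted values):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_crash_counts(log_volume, sd_speed, log_length, diverge,
                          beta=(-8.0, 0.7, 0.15, 0.4, 0.3), sigma=0.4):
    """Poisson-lognormal crash frequency:
    y ~ Poisson(lambda),  log(lambda) = Xb + eps,  eps ~ N(0, sigma^2)."""
    b0, b1, b2, b3, b4 = beta
    eps = rng.normal(0.0, sigma, size=len(log_volume))
    log_lam = (b0 + b1 * log_volume + b2 * sd_speed
               + b3 * log_length + b4 * diverge + eps)
    return rng.poisson(np.exp(log_lam))

n = 200  # illustrative segment sample
y = simulate_crash_counts(rng.normal(10, 1, n), rng.uniform(1, 8, n),
                          rng.normal(0, 1, n), rng.integers(0, 2, n))
```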

  4. Evaluation of occupational exposure to toxic metals released in the process of aluminum welding.

    Science.gov (United States)

    Matczak, Wanda; Gromiec, Jan

    2002-04-01

    The objective of this study was to evaluate occupational exposure to welding fumes and its elements on aluminum welders in Polish industry. The study included 52 MIG/Al fume samples and 18 TIG/Al samples in 3 plants. Air samples were collected in the breathing zone of welders (total and respirable dust). Dust concentration was determined gravimetrically, and the elements in the collected dust were determined by AAS. Mean time-weighted average (TWA) concentrations of the welding dusts/fumes and their components in the breathing zone obtained for different welding processes were, in mg/m3: MIG/Al fumes mean 6.0 (0.8-17.8), Al 2.1 (0.1-7.7), Mg 0.2; TIG/Al fumes 0.7 (0.3-1.4), Al 0.17 (0.07-0.50). A correlation has been found between the concentration of the main components and the fume/dust concentrations in MIG/Al and TIG/Al fumes. Mean percentages of the individual components in MIG/Al fumes/dusts were Al: 30 (9-56) percent; Mg: 3 (1-5.6) percent; Mn: 0.2 (0.1-0.3) percent; Cu: 0.2 percent. Exposure levels depended on the welding methods, the nature of welding-related operations, and work environment conditions.
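
    The TWA figures above follow the conventional occupational-hygiene definition: interval concentrations weighted by interval duration and referred to an 8-h shift. A minimal sketch (Python; the values are illustrative, not the study's data):

```python
def twa_8h(concentrations, durations_h, reference_h=8.0):
    """Time-weighted average: TWA = sum(C_i * t_i) / reference period."""
    if len(concentrations) != len(durations_h):
        raise ValueError("need one duration per concentration")
    return sum(c * t for c, t in zip(concentrations, durations_h)) / reference_h

# e.g. 3 mg/m3 for 4 h, 9 mg/m3 for 2 h, 0 mg/m3 for the remaining 2 h
print(twa_8h([3.0, 9.0, 0.0], [4.0, 2.0, 2.0]))   # -> 3.75 mg/m3
```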

  5. Personal exposure measurements of school-children to fine particulate matter (PM2.5) in winter of 2013, Shanghai, China.

    Science.gov (United States)

    Zhang, Lijun; Guo, Changyi; Jia, Xiaodong; Xu, Huihui; Pan, Meizhu; Xu, Dong; Shen, Xianbiao; Zhang, Jianghua; Tan, Jianguo; Qian, Hailei; Dong, Chunyang; Shi, Yewen; Zhou, Xiaodan; Wu, Chen

    2018-01-01

    The aim of this study was to perform an exposure assessment of PM2.5 (particulate matter less than 2.5μm in aerodynamic diameter) among children and to explore the potential sources of exposure from both indoor and outdoor environments. For real-time exposure measurements of PM2.5, we collected data from 57 children aged 8-12 years (9.64 ± 0.93 years) in two schools in Shanghai, China. Simultaneously, questionnaire surveys and time-activity diaries were used to characterize the home environment and daily time-activity patterns in order to estimate the exposure dose of PM2.5 in these children. Principal component regression analysis was used to explore the influence of potential sources of PM2.5 exposure. All the median personal exposure and microenvironment PM2.5 concentrations greatly exceeded the daily 24-h PM2.5 Ambient Air Quality Standards of China, the USA, and the World Health Organization (WHO). The median Etotal (the sum over microenvironments of the PM2.5 exposure level multiplied by the time spent there) of all students was 3014.13 (μg.h)/m3. The concentration of time-weighted average (TWA) exposure of all students was 137.01 μg/m3. The median TWA exposure level during the on-campus period (135.81 μg/m3) was significantly higher than the off-campus period (115.50 μg/m3, P = 0.013 < 0.05). Besides ambient air pollution and meteorological conditions, storey height of the classroom and mode of transportation to school were significantly correlated with children's daily PM2.5 exposure. Children in the two selected schools were exposed to high concentrations of PM2.5 in winter of 2013 in Shanghai. Their personal PM2.5 exposure was mainly associated with ambient air conditions, storey height of the classroom, and children's transportation mode to school.

  6. Household air pollution and personal inhalation exposure to particles (TSP/PM2.5/PM1.0/PM0.25) in rural Shanxi, North China.

    Science.gov (United States)

    Huang, Ye; Du, Wei; Chen, Yuanchen; Shen, Guofeng; Su, Shu; Lin, Nan; Shen, Huizhong; Zhu, Dan; Yuan, Chenyi; Duan, Yonghong; Liu, Junfeng; Li, Bengang; Tao, Shu

    2017-12-01

    Personal exposure to size-segregated particles among rural residents in Shanxi, China in summer, 2011 was investigated using portable personal samplers (N = 84). Household air pollution was simultaneously studied using stationary samplers in nine homes. Information on household fuel types, cooking activity, smoking behavior, kitchen ventilation conditions, etc., was also collected and discussed. The study found that even in the summer period, the daily average concentrations of PM2.5 and PM1.0 in the kitchen were as high as 376 ± 573 and 288 ± 397 μg/m3 (N = 6), nearly 3 times the 114 ± 81 and 97 ± 77 μg/m3 in the bedroom (N = 8), and significantly higher than the 64 ± 28 and 47 ± 21 μg/m3 in the outdoor air (N = 6). The personal daily exposures to PM2.5 and PM1.0 were 98 ± 52 and 77 ± 47 μg/m3, respectively, lower than the concentrations in the kitchen but higher than the outdoor levels. The mass fractions of PM2.5 in TSP were 90%, 72%, 65% and 68% on average in the kitchen, bedroom, outdoor air and personal inhalation exposure, respectively; moreover, a majority of particles in PM2.5 had diameters less than 1.0 μm. Calculated time-weighted average exposure based on indoor and outdoor air concentrations and time spent indoors and outdoors was positively correlated with, but ∼33% lower than, the directly measured exposure. The daily exposure among those burning traditional solid fuels could be lower by ∼41% if the kitchen was equipped with an outdoor chimney, but was still 8-14% higher than in households using clean energies, like electricity and gas. With a ventilator in the kitchen, the exposure among the population using clean energies could be further reduced by 10-24%. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Digital mammography screening: average glandular dose and first performance parameters

    International Nuclear Information System (INIS)

    Weigel, S.; Girnus, R.; Czwoydzinski, J.; Heindel, W.; Decker, T.; Spital, S.

    2007-01-01

    Purpose: The Radiation Protection Commission demanded structured implementation of digital mammography screening in Germany. The main requirements were the installation of digital reference centers and separate evaluation of the fully digitized screening units. Digital mammography screening must meet the quality standards of the European guidelines and must be compared to analog screening results. We analyzed early surrogate indicators of effective screening and dosage levels for the first German digital screening unit in a routine setting after the first half of the initial screening round. Materials and Methods: We used three digital mammography screening units (one full-field digital scanner [DR] and two computed radiography systems [CR]). Each system has been proven to fulfill the requirements of the National and European guidelines. The radiation exposure levels, the medical workflow and the histological results were documented in a central electronic screening record. Results: In the first year 11,413 women were screened (participation rate 57.5 %). The parenchymal dosages for the three mammographic X-ray systems, averaged for the different breast sizes, were 0.7 (DR), 1.3 (CR), 1.5 (CR) mGy. 7 % of the screened women needed to undergo further examinations. The total number of screen-detected cancers was 129 (detection rate 1.1 %). 21 % of the carcinomas were classified as ductal carcinomas in situ, 40 % of the invasive carcinomas had a histological size ≤ 10 mm and 61 % < 15 mm. The frequency distribution of pT-categories of screen-detected cancer was as follows: pTis 20.9 %, pT1 61.2 %, pT2 14.7 %, pT3 2.3 %, pT4 0.8 %. 73 % of the invasive carcinomas were node-negative. (orig.)

  8. Carbon Nanotube and Nanofiber Exposure Assessments: An Analysis of 14 Site Visits

    Science.gov (United States)

    Dahm, Matthew M.; Schubauer-Berigan, Mary K.; Evans, Douglas E.; Birch, M. Eileen; Fernback, Joseph E.; Deddens, James A.

    2015-01-01

    Recent evidence has suggested the potential for wide-ranging health effects that could result from exposure to carbon nanotubes (CNT) and carbon nanofibers (CNF). In response, the National Institute for Occupational Safety and Health (NIOSH) set a recommended exposure limit (REL) for CNT and CNF: 1 µg m−3 as an 8-h time weighted average (TWA) of elemental carbon (EC) for the respirable size fraction. The purpose of this study was to conduct an industrywide exposure assessment among US CNT and CNF manufacturers and users. Fourteen total sites were visited to assess exposures to CNT (13 sites) and CNF (1 site). Personal breathing zone (PBZ) and area samples were collected for both the inhalable and respirable mass concentration of EC, using NIOSH Method 5040. Inhalable PBZ samples were collected at nine sites while at the remaining five sites both respirable and inhalable PBZ samples were collected side-by-side. Transmission electron microscopy (TEM) PBZ and area samples were also collected at the inhalable size fraction and analyzed to quantify and size CNT and CNF agglomerate and fibrous exposures. Respirable EC PBZ concentrations ranged from 0.02 to 2.94 µg m−3 with a geometric mean (GM) of 0.34 µg m−3 and an 8-h TWA of 0.16 µg m−3. PBZ samples at the inhalable size fraction for EC ranged from 0.01 to 79.57 µg m−3 with a GM of 1.21 µg m−3. PBZ samples analyzed by TEM showed concentrations ranging from 0.0001 to 1.613 CNT or CNF-structures per cm3 with a GM of 0.008 and an 8-h TWA concentration of 0.003. The most common CNT structure sizes were found to be larger agglomerates in the 2–5 µm range as well as agglomerates >5 µm. A statistically significant correlation was observed between the inhalable samples for the mass of EC and structure counts by TEM (Spearman ρ = 0.39, P < 0.05); some samples exceeded the REL of 1 µg m−3. Until more information is known about health effects associated with larger agglomerates, it seems prudent to assess worker exposure to airborne CNT and CNF
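
    The geometric means quoted above are the standard summary for approximately lognormal exposure data; for reference, a minimal computation sketch (Python; the sample values are illustrative, not the study's raw measurements):

```python
import numpy as np

def geometric_mean(x):
    """GM = exp(mean(ln x)), the usual summary for lognormal exposure data."""
    x = np.asarray(x, dtype=float)
    return float(np.exp(np.mean(np.log(x))))

samples = [0.02, 0.11, 0.34, 0.90, 2.94]    # respirable EC, ug/m3 (made up)
print(geometric_mean(samples))              # ~0.29
```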

  9. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  10. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  11. Respirable dust and quartz exposure from three South African farms with sandy, sandy loam, and clay soils.

    Science.gov (United States)

    Swanepoel, Andrew J; Kromhout, Hans; Jinnah, Zubair A; Portengen, Lützen; Renton, Kevin; Gardiner, Kerry; Rees, David

    2011-07-01

    To quantify personal time-weighted average respirable dust and quartz exposure on a sandy, a sandy loam, and a clay soil farm in the Free State and North West provinces of South Africa and to ascertain whether soil type is a determinant of exposure to respirable quartz. Three farms, located in the Free State and North West provinces of South Africa, had their soil type confirmed as sandy, sandy loam, and clay; and, from these, a total of 298 respirable dust and respirable quartz measurements were collected between July 2006-November 2009 during periods of major farming operations. Values below the limit of detection (LOD) (22 μg·m−3) were estimated using multiple imputation. Non-parametric tests were used to compare quartz exposure from the three different soil types. Exposure to respirable quartz occurred on all three farms with the highest individual concentration measured on the sandy soil farm (626 μg·m−3). Fifty-seven, 59, and 81% of the measurements on the sandy soil, sandy loam soil, and clay soil farm, respectively, exceeded the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value (TLV) of 25 μg·m−3. Twelve and 13% of respirable quartz concentrations exceeded 100 μg·m−3 on the sandy soil and sandy loam soil farms, respectively, but none exceeded this level on the clay soil farm. The proportions of measurements >100 μg·m−3 were not significantly different between the sandy and sandy loam soil farms ('prop.test'; P = 0.65), but both were significantly larger than for the clay soil farm ('prop.test'; P = 0.0001). The percentage of quartz in respirable dust was determined for all three farms using measurements above the limit of detection. Percentages ranged from 0.5 to 94.4% with no significant difference in the median quartz percentages across the three farms (Kruskal-Wallis test; P = 0.91). This study demonstrates that there is significant potential for over-exposure to respirable quartz in
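
    The below-LOD treatment mentioned here can be illustrated with a single-imputation stand-in: fit a lognormal to the detected values and draw replacements truncated below the LOD. This is a hedged sketch of the general idea (Python), not the authors' exact multiple-imputation procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def impute_below_lod(values, lod=22.0):
    """Replace non-detects (NaN) with lognormal draws truncated below the LOD,
    the lognormal being fitted to the detected values. A single-imputation
    stand-in; substituting lod / sqrt(2) is a common quick alternative."""
    out = np.asarray(values, dtype=float).copy()
    detected = out[~np.isnan(out)]
    mu, sigma = np.log(detected).mean(), np.log(detected).std(ddof=1)
    for i in np.where(np.isnan(out))[0]:
        draw = np.inf
        while draw >= lod:                  # rejection-sample below the LOD
            draw = rng.lognormal(mu, sigma)
        out[i] = draw
    return out

print(impute_below_lod([np.nan, 30.0, 45.0, 120.0, np.nan]))
```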

  12. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  13. Environmental radioactivity and radiation exposure

    International Nuclear Information System (INIS)

    1980-01-01

    In 1977 population exposure in the Federal Republic of Germany has not changed as compared to the previous years. The main share of the total exposure, nearly two thirds, is attributed to natural radioactive substances and cosmic radiation. The largest part (around 85%) of the artificial radiation exposure is caused by X-ray diagnostics. In comparison to this, radiation exposure from application of ionizing radiation in medical therapy, use of radioactive material in research and technology, or from nuclear facilities is small. As in the years before, population exposure caused by nuclear power plants and other nuclear facilities is distinctly less than 1% of the natural radiation exposure. This is also true for the average radiation exposure within a radius of 3 km around nuclear facilities. On the whole, the report makes clear that the total amount of artificial population exposure will substantially decrease only if one succeeds in reducing the high contribution to the radiation exposure caused by medical measures. (orig.) [de

  14. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  15. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  16. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  17. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  18. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  19. Neutron exposure

    International Nuclear Information System (INIS)

    Prillinger, G.; Konynenburg, R.A. van

    1998-01-01

    As a result of the popularity of the Agency's report 'Neutron Irradiation Embrittlement of Reactor Pressure Vessel Steels' of 1975, it was decided that another report on this broad subject would be of use. In this report, background and contemporary views on specially identified areas of the subject are considered as self-contained chapters, written by experts. In chapter 6, LWR-PV neutron transport calculations and dosimetry methods, and how they are combined to evaluate the neutron exposure of the steel of pressure vessels, are discussed. An effort to correlate neutron exposure parameters with damage is made

  20. Human exposure to nickel

    Energy Technology Data Exchange (ETDEWEB)

    Grandjean, P

    1984-01-01

    In order of abundance in the earth's crust, nickel ranks as the 24th element and has been detected in different media in all parts of the biosphere. Thus, humans are constantly exposed to this ubiquitous element, though in variable amounts. Occupational exposures may lead to the retention of 100 micrograms of nickel per day. Environmental nickel levels depend particularly on natural sources, pollution from nickel-manufacturing industries and airborne particles from combustion of fossil fuels. Absorption from atmospheric nickel pollution is of minor concern. Vegetables usually contain more nickel than do other food items. Certain products, such as baking powder and cocoa powder, have been found to contain excessive amounts of nickel, perhaps related to nickel leaching during the manufacturing process. Soft drinking-water and acid beverages may dissolve nickel from pipes and containers. Scattered studies indicate a highly variable dietary intake of nickel, but most averages are about 200-300 micrograms/day. In addition, skin contact with a multitude of metal objects may be of significance to the large number of individuals suffering from contact dermatitis and nickel allergy. Finally, nickel alloys are often used in nails and prostheses for orthopaedic surgery, and various sources may contaminate intravenous fluids. Thus, human nickel exposure originates from a variety of sources and is highly variable. Occupational nickel exposure is of major significance, and leaching of nickel may add to dietary intakes and to cutaneous exposures. 79 references.

  1. Doses from radiation exposure

    CERN Document Server

    Menzel, H G

    2012-01-01

    Practical implementation of the International Commission on Radiological Protection's (ICRP) system of protection requires the availability of appropriate methods and data. The work of Committee 2 is concerned with the development of reference data and methods for the assessment of internal and external radiation exposure of workers and members of the public. This involves the development of reference biokinetic and dosimetric models, reference anatomical models of the human body, and reference anatomical and physiological data. Following ICRP's 2007 Recommendations, Committee 2 has focused on the provision of new reference dose coefficients for external and internal exposure. As well as specifying changes to the radiation and tissue weighting factors used in the calculation of protection quantities, the 2007 Recommendations introduced the use of reference anatomical phantoms based on medical imaging data, requiring explicit sex averaging of male and female organ-equivalent doses in the calculation of effecti...

  2. Fast exposure time decision in multi-exposure HDR imaging

    Science.gov (United States)

    Piao, Yongjie; Jin, Guang

    2012-10-01

    Currently available imaging and display systems suffer from insufficient dynamic range and cannot capture all the information in a high dynamic range (HDR) scene. The number of low dynamic range (LDR) image samples and the speed of exposure time selection strongly affect the real-time performance of the system. In order to realize a real-time HDR video acquisition system, this paper proposes a fast and robust method for selecting exposure times in under- and over-exposed areas, based on the system response function. The method exploits the monotonicity of the imaging system: the exposure time is first adjusted to an initial value that makes the median value of the image equal to the middle value of the system output range; the exposure time is then adjusted so that the pixel values on the two sides of the histogram reach the middle of the output range. Three low dynamic range images are thus acquired. Experiments show that the proposed method for adjusting the initial exposure time converges in two iterations, faster and more stable than average-gray control, and that the proposed exposure time adjustment in under- and over-exposed areas uses the dynamic range of the system more efficiently than a fixed exposure time.
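
    A minimal sketch of the initial-adjustment step described here, under the stated monotonicity assumption (Python; the `capture` callable, the proportional update rule, and the 8-bit target are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def adjust_exposure(capture, t_init, target=128.0, tol=4.0, max_iter=8):
    """Scale exposure time until the image median reaches mid-range.
    `capture(t)` must return an 8-bit image taken with exposure time t;
    a monotone response (longer exposure -> brighter image) is assumed."""
    t = t_init
    for _ in range(max_iter):
        med = float(np.median(capture(t)))
        if abs(med - target) <= tol:
            break
        t *= target / max(med, 1.0)      # proportional update
    return t

# synthetic monotone camera: brightness grows with t, clipped to 8 bits
fake = lambda t: np.clip(np.random.poisson(40.0 * t, (64, 64)), 0, 255)
print(adjust_exposure(fake, t_init=0.5))
```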

  3. Exposure Prophylaxis

    African Journals Online (AJOL)

    opsig

    health care workers who report exposure to HIV at work whether given PEP or not ... breast milk, amniotic fluid, cerebrospinal fluid, pericardial fluid ... or skin lesions [1]. Other body fluids such as sweat, tears, saliva, urine and stool do not contain significant quantities of HIV unless blood is mixed with them [1,2]. HIV is not ...

  4. Staff radiation exposure in radiation diagnostics

    International Nuclear Information System (INIS)

    Khakimova, N.U.; Malisheva, E.Yu.; Shosafarova, Sh.G.

    2010-01-01

    This article addresses staff radiation exposure in diagnostic radiology. Data on staff radiation exposure obtained during 2005-2008 were analyzed. It was found that average individual doses of staff of various occupations in Dushanbe city for 2008 were in the 0.29-2.16 mSv range: higher than average levels, but lower than the maximum permissible dose. Paramedical personnel were found to receive the highest doses among the various categories of staff.

  5. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
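
    The two standard hourly-value types compared in this analysis are easy to state concretely; a minimal sketch of constructing both from 1-min data follows (Python; illustrative, not the authors' processing chain):

```python
import numpy as np

def hourly_values(x_1min):
    """x_1min: 1-min samples, length a multiple of 60. Returns
    (spot, boxcar): top-of-hour instantaneous values and 1-h means."""
    x = np.asarray(x_1min, dtype=float).reshape(-1, 60)
    spot = x[:, 0]             # instantaneous "spot" measurement
    boxcar = x.mean(axis=1)    # simple 1-h "boxcar" average
    return spot, boxcar
```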

  6. Exposure of miners to diesel exhaust particulates in underground nonmetal mines.

    Science.gov (United States)

    Cohen, H J; Borak, J; Hall, T; Sirianni, G; Chemerynski, S

    2002-01-01

    A study was initiated to examine worker exposures in seven underground nonmetal mines and to examine the precision of the National Institute for Occupational Safety and Health (NIOSH) 5040 sampling and analytical method for diesel exhaust that has recently been adopted for compliance monitoring by the Mine Safety and Health Administration (MSHA). Approximately 1000 air samples using cyclones were taken on workers and in areas throughout the mines. Results indicated that worker exposures were consistently above the MSHA final limit of 160 micrograms/m3 (time-weighted average; TWA) for total carbon as determined by the NIOSH 5040 method and greater than the proposed American Conference of Governmental Industrial Hygienists TLV limit of 20 micrograms/m3 (TWA) for elemental carbon. A number of difficulties were documented when sampling for diesel exhaust using organic carbon: high and variable blank values from filters, a high variability (+/- 20%) from duplicate punches from the same sampling filter, a consistent positive interference (+26%) when open-faced monitors were sampled side-by-side with cyclones, poor correlation (r² = 0.38) to elemental carbon levels, and an interference from limestone that could not be adequately corrected by acid-washing of filters. The sampling and analytical precision (relative standard deviation) was approximately 11% for elemental carbon, 17% for organic carbon, and 11% for total carbon. A hypothesis is presented and supported with data that gaseous organic carbon constituents of diesel exhaust adsorb not only onto the submicron elemental carbon particles found in diesel exhaust, but also onto mining ore dusts. Such mining dusts are mostly nonrespirable and should not be considered equivalent to submicron diesel particulates in their potential for adverse pulmonary effects. It is recommended that size-selective sampling be employed, rather than open-faced monitoring, when using the NIOSH 5040 method.

  7. Average sperm count remains unchanged despite reduction in maternal smoking

    DEFF Research Database (Denmark)

    Priskorn, L; Nordkap, L; Bang, A K

    2018-01-01

    STUDY QUESTION: How are temporal trends in lifestyle factors, including exposure to maternal smoking in utero, associated with semen quality in young men from the general population? SUMMARY ANSWER: Exposure to maternal smoking was associated with lower sperm counts, but no overall increase in sperm counts was observed during the study period despite a decrease in this exposure. Parental age increased, and exposure in utero to maternal smoking declined from 40% among men investigated in 1996-2000 to 18% among men investigated in 2011-2016. WHAT IS KNOWN ALREADY: Meta-analyses suggest a continuous decline in semen quality, but few studies have investigated temporal trends in unselected populations recruited and analysed with the same protocol over a long period...

  8. Radiation exposure during equine radiography

    International Nuclear Information System (INIS)

    Ackerman, N.; Spencer, C.P.; Hager, D.A.; Poulos, P.W. Jr.

    1988-01-01

    All personnel present in the X-ray examination room during equine radiography were monitored using low energy direct reading ionization chambers (pocket dosimeters) worn outside the lead apron at neck level. The individuals' task and dosimeter readings were recorded after each examination. Average doses ranged from 0 to 6 mrad per study. The greatest exposures were associated with radiography of the shoulder and averaged less than 4 mrad. The individual extending the horse's limb was at greatest risk, although the individual holding the horse's halter and the one making the X-ray exposure received similar exposures. A survey of the overhead tube assembly used for some of the X-ray examinations was also performed. Meter readings obtained indicated an asymmetric dose distribution around the tube assembly, with the highest dose occurring on the side to which the exposure cord was attached. Although the exposures observed were within acceptable limits for occupational workers, we have altered our protocol and no longer radiograph the equine shoulder unless the horse is anesthetized. Continued use of the pocket dosimeters and maintenance of a case record of radiation exposure appears to make the technologists more aware of radiation hazards

  9. Exposure Evaluation for Benzene, Lead and Noise in Vehicle and Equipment Repair Shops

    Energy Technology Data Exchange (ETDEWEB)

    Sweeney, Lynn C. [Washington State Univ., Pullman, WA (United States)

    2013-04-01

    An exposure assessment was performed at the equipment and vehicle maintenance repair shops operating at the U. S. Department of Energy Hanford site, in Richland, Washington. The maintenance shops repair and maintain vehicles and equipment used in support of the Hanford cleanup mission. There are three general mechanic shops and one auto body repair shop. The mechanics work on heavy equipment used in construction, cranes, commercial motor vehicles, and passenger-type vehicles, in addition to air compressors, generators, and farm equipment. Services include part fabrication, installation of equipment, repair and maintenance work in the engine compartment, and tire and brake services. Work performed at the auto body shop includes painting and surface preparation, which involves applying body filler and sanding. 8-hour time-weighted-average samples were collected for benzene and noise exposure, and task-based samples were collected for lead dust during work activities involving painted metal surfaces. Benzene samples were obtained using 3M™ 3520 sampling badges and were analyzed for additional volatile organic compounds. These compounds were selected based on material safety data sheet information for the aerosol products used by the mechanics on each day of sampling. The compounds included acetone, ethyl ether, toluene, xylene, VM&P naphtha, methyl ethyl ketone, and trichloroethylene. Laboratory data for benzene, VM&P naphtha, methyl ethyl ketone and trichloroethylene were all below the reporting detection limit. Airborne concentrations for acetone, ethyl ether, toluene and xylene were all less than 10% of their occupational exposure limits. The task-based samples obtained for lead dusts were submitted for a metal scan analysis to identify other metals that might be present. Laboratory results for lead dusts were all below the reporting detection limit, and airborne concentrations for the other metals observed in the samples were less than 10% of their occupational exposure limits.

  10. Influence of exposure assessment and parameterization on exposure response. Aspects of epidemiologic cohort analysis using the Libby Amphibole asbestos worker cohort.

    Science.gov (United States)

    Bateson, Thomas F; Kopylev, Leonid

    2015-01-01

    Recent meta-analyses of occupational epidemiology studies identified two important exposure data quality factors in predicting summary effect measures for asbestos-associated lung cancer mortality risk: sufficiency of job history data and percent coverage of work history by measured exposures. The objective was to evaluate different exposure parameterizations suggested in the asbestos literature using the Libby, MT asbestos worker cohort and to evaluate influences of exposure measurement error caused by historically estimated exposure data on lung cancer risks. Focusing on workers hired after 1959, when job histories were well-known and occupational exposures were predominantly based on measured exposures (85% coverage), we found that cumulative exposure alone, and with allowance of exponential decay, fit lung cancer mortality data similarly. Residence-time-weighted metrics did not fit well. Compared with previous analyses based on the whole cohort of Libby workers hired after 1935, when job histories were less well-known and exposures less frequently measured (47% coverage), our analyses based on higher quality exposure data yielded an effect size as much as 3.6 times higher. Future occupational cohort studies should continue to refine retrospective exposure assessment methods, consider multiple exposure metrics, and explore new methods of maintaining statistical power while minimizing exposure measurement error.
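
    The two better-fitting parameterizations named here, cumulative exposure with and without exponential decay, can be written as a single function; a minimal sketch (Python; the decay rate and variable names are illustrative assumptions):

```python
import numpy as np

def cumulative_exposure(conc, years_since, decay=0.0):
    """conc[i]: exposure concentration in year i; years_since[i]: years from
    year i to the evaluation date. decay=0 gives plain cumulative exposure;
    decay>0 down-weights older increments by exp(-decay * elapsed years)."""
    conc = np.asarray(conc, dtype=float)
    years_since = np.asarray(years_since, dtype=float)
    return float(np.sum(conc * np.exp(-decay * years_since)))
```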

  11. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers' average travel speed over selected sections of the road and is normally called average speed control ... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.

  12. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.

  13. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
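
    For concreteness, the stated minimum average depth evaluates numerically to

```latex
\frac{620160}{8!} = \frac{620160}{40320} \approx 15.38 \ \text{comparisons on average.}
```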

  14. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  15. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  16. Extremely Low Frequency-Magnetic Fields (ELF-EMF) occupational exposure and natural killer activity in peripheral blood lymphocytes

    International Nuclear Information System (INIS)

    Gobba, Fabriziomaria; Bargellini, Annalisa; Scaringi, Meri; Bravo, Giulia; Borella, Paola

    2009-01-01

    Extremely Low Frequency-Magnetic Fields (ELF-MF) are possible carcinogens to humans and some data suggest that they can act as promoters or progressors. Since NK cells play a major role in the control of cancer development, an adverse effect of ELF-MF on NK function has been hypothesized. We examined NK activity in 52 workers exposed to different levels of ELF-MF in various activities. Individual exposure was monitored during 3 complete work-shifts using personal dosimeters. Environmental exposure was also monitored. ELF-MF levels in the workers were expressed as Time-Weighted Average (TWA) values. NK activity was measured in peripheral blood lymphocytes (PBL). In the whole group the median occupational TWA was 0.21 μT. According to the TWA levels, workers were classified as low exposed (26 subjects, TWA ≤ 0.2 μT) and higher exposed workers (26 subjects; TWA > 0.2 μT). In higher exposed workers, we observed a trend toward reduced NK activity compared to low exposed workers, but the difference was not significant. Then we selected a subgroup of highest exposed workers (12 subjects; TWA > 1 μT); no difference was observed between low and highest exposed subjects in the main personal variables. Considering both E:T ratios from 12:1 to 50:1 and Lytic Units (LU), a significant reduction in NK activity was observed in the highest exposed workers compared to the low exposed. Multivariate analysis showed a significant negative correlation between exposure and LU, while no correlation was evidenced with other personal characteristics. ELF-MF are considered possible carcinogens, and existing data suggest that they can act as promoters. Due to the role of NK activity in host defence against cancer, the results obtained in this study in workers exposed to ELF-MF levels exceeding 1 μT are in agreement with this hypothesis, and support the need for further investigation in this field

  17. Ultrafine particle exposure in Danish residencies

    DEFF Research Database (Denmark)

    Bekö, Gabriel; Karottki, Dorina Gabriela; Wierzbicka, Aneta

    2016-01-01

    candle burning, cooking, toasting and unknown activities, were responsible on average for ∼65% of the residential integrated exposure. Residents of another 60 homes were then asked to carry a backpack equipped with a GPS recorder and a portable monitor to measure real-time individual exposure over ~48 h ... Of the personal exposure, indoor environments other than home or vehicles contributed ~40%, and being in transit or outdoors contributed 5% or less.

  18. Exposures series

    OpenAIRE

    Stimson, Blake

    2011-01-01

    Reaktion Books’ Exposures series, edited by Peter Hamilton and Mark Haworth-Booth, is comprised of 13 volumes and counting, each less than 200 pages with 80 high-quality illustrations in color and black and white. Currently available titles include Photography and Australia, Photography and Spirit, Photography and Cinema, Photography and Literature, Photography and Flight, Photography and Egypt, Photography and Science, Photography and Africa, Photography and Italy, Photography and the USA, P...

  19. Exposure to chrysotile asbestos associated with unpacking and repacking boxes of automobile brake pads and shoes.

    Science.gov (United States)

    Madl, A K; Scott, L L; Murbach, D M; Fehling, K A; Finley, B L; Paustenbach, D J

    2008-08-01

    Industrial hygiene surveys and epidemiologic studies of auto mechanics have shown that these workers are not at an increased risk of asbestos-related disease; however, concerns continue to be raised regarding asbestos exposure from asbestos-containing brakes. Handling new asbestos-containing brake components has recently been suggested as a potential source of asbestos exposure. A simulation study involving the unpacking and repacking of 105 boxes of brakes (for vehicles ca. 1946-80), including 62 boxes of brake pads and 43 boxes of brake shoes, was conducted to examine how this activity might contribute to both short-term and 8-h time-weighted average exposures to asbestos. Breathing zone samples on the lapel of a volunteer worker (n = 80) and area samples at bystander (e.g., 1.5 m from worker) (n = 56), remote area (n = 26) and ambient (n = 10) locations collected during the unpacking and repacking of boxes of asbestos-containing brakes were analyzed by phase contrast microscopy and transmission electron microscopy. Exposure to airborne asbestos was characterized for a variety of parameters including the number of boxes handled, brake type (i.e. pads versus shoes) and the distance from the activity (i.e. worker, bystander and remote area). This study also evaluated the fiber size and morphology distribution according to the International Organization for Standardization analytical method for asbestos. It was observed that (i) airborne asbestos concentrations increased with the number of boxes unpacked and repacked, (ii) handling boxes of brake pads resulted in higher worker asbestos exposures compared to handling boxes of brake shoes, (iii) cleanup and clothes-handling tasks produced less airborne asbestos than handling boxes of brakes and (iv) fiber size and morphology analysis showed that while the majority of fibers were free (e.g. not associated with a cluster or matrix), few were of the sizes (e.g. >20 microm length) considered to pose the greatest risk of asbestos-related disease.

  20. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders, Williams syndrome, achondroplasia and Sotos syndrome, as well as a normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face averaging techniques there was not any warping or filling in of spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have a great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
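
    On registered scans resampled to a common (x, y) grid, the z-coordinate averaging described here reduces to a per-pixel mean of depth maps; a minimal sketch (Python; registration and size correction are assumed done upstream):

```python
import numpy as np

def average_face(depth_maps):
    """depth_maps: array of shape (n_faces, H, W) holding registered
    z-coordinate grids; returns the per-pixel mean depth map."""
    return np.asarray(depth_maps, dtype=float).mean(axis=0)
```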

  1. ASSESSMENT OF THE AVERAGE ANNUAL EFFECTIVE DOSES FOR THE INHABITANTS OF THE SETTLEMENTS LOCATED IN THE TERRITORIES CONTAMINATED DUE TO THE CHERNOBYL ACCIDENT

    Directory of Open Access Journals (Sweden)

    N. G. Vlasova

    2012-01-01

    Full Text Available A catalogue of the average annual effective exposure doses for inhabitants of settlements in the territories contaminated due to the Chernobyl accident was developed according to the method for assessing the average annual effective exposure doses of settlement inhabitants. The estimated cost saving from using the average annual effective dose assessment method was 250,000 USD over the current 5 years. The average annual effective dose exceeded 1 mSv/year in 191 of 2613 Belarusian settlements. About 50,000 persons live in these settlements.

  2. Household air pollution and personal inhalation exposure to particles (TSP/PM2.5/PM1.0/PM0.25) in rural Shanxi, North China

    International Nuclear Information System (INIS)

    Huang, Ye; Du, Wei; Chen, Yuanchen; Shen, Guofeng; Su, Shu; Lin, Nan; Shen, Huizhong; Zhu, Dan; Yuan, Chenyi; Duan, Yonghong; Liu, Junfeng; Li, Bengang; Tao, Shu

    2017-01-01

    Personal exposure to size-segregated particles among rural residents in Shanxi, China in summer, 2011 was investigated using portable personal samplers (N = 84). Household air pollution was simultaneously studied using stationary samplers in nine homes. Information on household fuel types, cooking activity, smoking behavior, kitchen ventilation conditions, etc., was also collected and discussed. The study found that even in the summer period, the daily average concentrations of PM2.5 and PM1.0 in the kitchen were as high as 376 ± 573 and 288 ± 397 μg/m3 (N = 6), nearly 3 times the 114 ± 81 and 97 ± 77 μg/m3 in the bedroom (N = 8), and significantly higher than the 64 ± 28 and 47 ± 21 μg/m3 in the outdoor air (N = 6). The personal daily exposures to PM2.5 and PM1.0 were 98 ± 52 and 77 ± 47 μg/m3, respectively, lower than the concentrations in the kitchen but higher than the outdoor levels. The mass fractions of PM2.5 in TSP were 90%, 72%, 65% and 68% on average in the kitchen, bedroom, outdoor air and personal inhalation exposure, respectively; moreover, a majority of particles in PM2.5 had diameters less than 1.0 μm. Calculated time-weighted average exposure based on indoor and outdoor air concentrations and time spent indoors and outdoors was positively correlated with, but ∼33% lower than, the directly measured exposure. The daily exposure among those burning traditional solid fuels could be lower by ∼41% if the kitchen was equipped with an outdoor chimney, but was still 8–14% higher than in households using clean energies, like electricity and gas. With a ventilator in the kitchen, the exposure among the population using clean energies could be further reduced by 10–24%. - Highlights: • High inhalation exposure of fine PM2.5 and PM1.0 among rural residents. • Smoking prevails over cooking in increasing exposure to finer particles. • PM exposure could be reduced by

  3. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  4. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  5. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...

  6. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...

  7. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  8. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  9. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...

  10. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  11. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, after which the average is performed. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)

  12. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees

  13. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  14. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  15. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
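
    A minimal Python sketch of the first-order detrending moving average procedure that this record generalizes (an illustration only; the function names and window grid are arbitrary choices, not from the paper):

        import numpy as np

        def dma_variance(x, n):
            # variance of the series x (1-D array) around its n-point moving average
            ma = np.convolve(x, np.ones(n) / n, mode="valid")
            resid = x[n - 1:] - ma
            return np.mean(resid ** 2)

        def hurst_dma(x, windows=range(4, 200, 4)):
            # sigma_DMA^2 ~ n^(2H): half the slope of the log-log fit estimates H
            v = [dma_variance(x, n) for n in windows]
            slope, _ = np.polyfit(np.log(list(windows)), np.log(v), 1)
            return slope / 2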

  16. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases

  17. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...

  18. A study on the image quality of mammography and the average glandular dose

    International Nuclear Information System (INIS)

    Lee, In Ja; Kim, Hak Sung; Kim, Sung Soo; Huh, Joon

    2002-01-01

    We came to the following conclusions as the result of experiments on the image quality of mammography and the average glandular dose, using 4 apparatuses at 3 hospitals in Seoul. Whereas the measurement of the half value layer showed no differences among the apparatuses, the measurement by an attenuation curve method showed differences of 5.9%, and there were differences of 9.1% in the measurement by the aluminum conversion method. The base density of an automatic exposure control unit must be D = 1.40, but no automatic exposure unit was precisely adjusted at any hospital; the unit at hospital B exceeded the allowable limit of ±0.15. In photography using an automatic exposure control unit and in the management of automatic film processors using a sensitometer, most automatic film processors were well maintained, but in some cases the mean value of the fluctuation coefficient exceeded the allowable limit, so more careful management is needed. The image quality of breast phantom photography was affected by the screen/film system used at each hospital. The average glandular dose for a breast of 4.2 cm thickness depended on the tube voltage; in the case of Mo/Mo it was measured at 0.26 ∼ 1.39 mGy, below the ACR standard of 3.0 mGy

  19. Model averaging in the analysis of leukemia mortality among Japanese A-bomb survivors

    International Nuclear Information System (INIS)

    Richardson, David B.; Cole, Stephen R.

    2012-01-01

    Epidemiological studies often include numerous covariates, with a variety of possible approaches to control for confounding of the association of primary interest, as well as a variety of possible models for the exposure-response association of interest. Walsh and Kaiser (Radiat Environ Biophys 50:21-35, 2011) advocate a weighted averaging of the models, where the weights are a function of overall model goodness of fit and degrees of freedom. They apply this method to analyses of radiation-leukemia mortality associations among Japanese A-bomb survivors. We caution against such an approach, noting that the proposed model averaging approach prioritizes the inclusion of covariates that are strong predictors of the outcome, but which may be irrelevant as confounders of the association of interest, and penalizes adjustment for covariates that are confounders of the association of interest, but may contribute little to overall model goodness of fit. We offer a simple illustration of how this approach can lead to biased results. The proposed model averaging approach may also be suboptimal as a way to handle competing model forms for an exposure-response association of interest, given adjustment for the same set of confounders; alternative approaches, such as hierarchical regression, may provide a more useful way to stabilize risk estimates in this setting. (orig.)

  20. Occupational radiation exposures in Canada - 1980

    International Nuclear Information System (INIS)

    Ashmore, J.P.; Fujimoto, K.R.; Wilson, J.A.; Grogan, D.

    1981-08-01

    This report is the third in a series of annual reports on Occupational Radiation Exposures in Canada. The data are derived from the Radiation Protection Bureau's National Dose Registry, which includes dose records for radiation workers. The report presents average yearly doses by region and occupational category, dose distributions, and the variation of average doses with time. Statistical data concerning investigations of high exposures reported by the National Dosimetry Services are included, and individual cases where the maximum permissible dose was exceeded are briefly summarized. The decreasing trend in overall average doses established over the last 20 years appears to be changing: in some occupational categories a consistent upward trend is observed

  1. Past exposure

    International Nuclear Information System (INIS)

    Dropkin, G.; Clark, D.

    1992-01-01

    Past Exposure uses confidential company documents, obtained by the Namibia Support Committee over several years, to draw attention to risks to workers' health and the environment at the Roessing uranium mine. Particular reference is made to discussion of dust levels, radiation hazards, uranium poisoning, environmental leaks, especially from the tailings dam, and the lack of monitoring of thorium. In relation to agreements between trade unions and mines, the agreements reached by RTZ-owned mines in Canada and by British Nuclear Fuels in the UK are discussed. (UK)

  2. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
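
    In symbols, the lower bound described above reads (an editorial restatement, with p the probability distribution over problem cases and k the number of attribute values):

        \bar{h}_{\min} \;\ge\; \frac{H(p)}{\log_2 k},
        \qquad H(p) = -\sum_i p_i \log_2 p_i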

  3. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate among various processes in studies of plume dispersion

  4. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    The annual refinery or importer average sulfur level is the volume-weighted average Sa = (Σi Vi Si) / (Σi Vi), where Vi is the volume of gasoline produced or imported in batch i, Si is the sulfur content of batch i determined under § 80.330, n is the number of batches of gasoline produced or imported during the averaging period, and i indexes the individual batches. (b) All annual refinery or...
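
    A worked illustration of that volume-weighted average in Python (the batch figures are invented for the example, not from the regulation):

        # (volume in gallons, sulfur in ppm) for each batch -- hypothetical values
        batches = [(50_000, 25.0), (80_000, 32.5), (65_000, 28.0)]
        total_volume = sum(v for v, _ in batches)
        avg_sulfur = sum(v * s for v, s in batches) / total_volume
        print(round(avg_sulfur, 2))  # volume-weighted ppm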

  5. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    Calculation of average fuel economy and average carbon-related exhaust emissions (40 CFR 600.510-12). Title 40, Protection of Environment; Environmental Protection Agency (continued); Energy Policy: Fuel Economy and Carbon-Related Exhaust Emissions of Motor Vehicles. ... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
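
    Fleet average fuel economy under this part is computed as a sales-weighted harmonic mean; a minimal Python illustration (the fleet figures are invented for the example):

        # (units sold, measured mpg) per model -- hypothetical fleet
        fleet = [(120_000, 34.0), (45_000, 27.5), (60_000, 41.2)]
        total_sales = sum(n for n, _ in fleet)
        avg_mpg = total_sales / sum(n / mpg for n, mpg in fleet)
        print(round(avg_mpg, 1))  # harmonic, sales-weighted mpg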

  6. Aircrew radiation exposure: sources-risks-measurement

    International Nuclear Information System (INIS)

    Duftschmid, K.E.

    1994-05-01

    A short review is given of actual aircrew exposure and its sources. The resulting health risks are evaluated and methods for in-flight measurement of exposure are discussed. An idea for a fairly simple and economical approach to a practical, airborne active dosimeter for the assessment of individual crew exposure is presented. The exposure of civil aircrew to cosmic radiation should not be considered a tremendous health risk; there is no reason for panic. However, being significantly higher than the average exposure of radiation workers, it certainly cannot be neglected. As recommended by the ICRP, aircrew exposure has to be considered occupational radiation exposure, and aircrews are certainly entitled to the same degree of protection as other ground-based radiation workers have long obtained by law. (author)

  7. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are addressed.

  8. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization

  9. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family-wise error rate.

  10. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  11. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of the vibration amplitude and reduces the complexity of quantifying the data that is encountered in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.

  12. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. As the energy of the final hadron state increases, the leading contribution to the average multiplicity comes from a parton subprocess, due to the production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e- annihilation at high energies tends to unity

  13. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). ... We apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
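
    The core idea, weighting residuals by the inverse covariance of the averaged data, can be sketched with generalized least squares (an illustration only, not the authors' WLS-ICE software; the linear MSD model and all names are assumptions):

        import numpy as np
        from scipy.optimize import minimize

        def gls_fit_diffusion(t, msd, cov):
            # fit MSD(t) = 2*D*t, weighting residuals by the inverse
            # covariance matrix of the time-dependent ensemble averages
            cov_inv = np.linalg.inv(cov)

            def loss(params):
                r = msd - 2 * params[0] * t
                return r @ cov_inv @ r

            return minimize(loss, x0=[1.0]).x[0]  # estimate of D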

  14. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982-1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975-1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated from the averaged statistics

  15. Adaptation to antifaces and the perception of correct famous identity in an average face

    Directory of Open Access Journals (Sweden)

    Anthony Little

    2012-02-01

    Previous experiments have examined exposure to anti-identities (faces that possess traits opposite to an identity through a population average), finding that exposure to antifaces enhances recognition of the plus-identity images. Here we examine adaptation to antifaces using famous female celebrities. We demonstrate that exposure to a color- and shape-transformed antiface of a celebrity increases the likelihood of perceiving the identity from which the antiface was manufactured in a composite face, and that the effect shows size invariance (Experiment 1); that equivalent effects are seen in internet- and laboratory-based studies (Experiment 2); that adaptation to shape-only antifaces has stronger effects on identity recognition than adaptation to color-only antifaces (Experiment 3); and that exposure to male versions of the antifaces does not influence the perception of female faces (Experiment 4). Across these studies we found an effect of order, whereby aftereffects were more pronounced in early than in later trials. Overall, our studies delineate several aspects of identity aftereffects and support the proposal that identity is coded relative to other faces, with special reference to a relatively sex-specific mean face representation.

  16. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
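
    For reference, the Hargreaves model mentioned above estimates evaporative demand from temperature extremes and extraterrestrial radiation; a sketch using the standard form of the equation (whether the authors used exactly this variant is an assumption):

        def hargreaves_et0(tmax_c, tmin_c, ra_mm_day):
            # reference evapotranspiration in mm/day; Ra expressed as
            # equivalent mm/day of evaporation
            tmean = (tmax_c + tmin_c) / 2.0
            return 0.0023 * ra_mm_day * (tmean + 17.8) * (tmax_c - tmin_c) ** 0.5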

  17. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  18. High Average Power Fiber Laser for Satellite Communications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  19. A time averaged background compensator for Geiger-Mueller counters

    International Nuclear Information System (INIS)

    Bhattacharya, R.C.; Ghosh, P.K.

    1983-01-01

    The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel, providing time-averaged compensation. The method suits portable instruments. (orig.)

  20. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
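
    The central observable, the time averaged MSD along a single trajectory, is straightforward to compute; a minimal sketch (an illustration, not the authors' analysis code):

        import numpy as np

        def time_averaged_msd(x, lag):
            # delta^2(lag): squared increments averaged along one trajectory
            return np.mean((x[lag:] - x[:-lag]) ** 2)

        # e.g. examine scaling over a range of lag times for a price series:
        # msd_curve = [time_averaged_msd(prices, k) for k in range(1, 250)]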

  1. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  2. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  3. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of one point per 2.5 μs and a memory capacity of 256 x 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
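
    The quoted 36 dB is consistent with coherent averaging of the maximum 2^12 sweeps, since the signal-to-noise power ratio grows linearly with the number of averaged sweeps (editorial arithmetic, not from the paper):

        import math

        n_sweeps = 2 ** 12
        snr_gain_db = 10 * math.log10(n_sweeps)
        print(round(snr_gain_db, 1))  # 36.1 dB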

  4. The average-shadowing property and topological ergodicity for flows

    International Nuclear Information System (INIS)

    Gu Rongbao; Guo Wenjing

    2005-01-01

    In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic

  5. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out

  6. Annual average equivalent dose of workers form health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    The personnel monitoring data of personnel working in the health area from 1985 to 1991 were studied, giving a general overview of changes in the annual average equivalent dose. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses for the same sectors in different hospitals. (C.G.C.)

  7. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; 
Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  8. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
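
    For orientation, the EWMA statistic and its run length can be sketched as follows (a generic textbook EWMA chart in Python; the parameter values and normal-theory limits are illustrative assumptions, not the paper's copula-based setup):

        import numpy as np

        def ewma_run_length(x, lam=0.1, L=3.0, mu0=1.0, sigma0=1.0):
            # z_i = lam*x_i + (1 - lam)*z_{i-1}; signal when z leaves the limits
            z = mu0
            for i, xi in enumerate(x, start=1):
                z = lam * xi + (1 - lam) * z
                var = (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * i)) * sigma0 ** 2
                if abs(z - mu0) > L * np.sqrt(var):
                    return i
            return None  # no signal; the ARL is the mean run length over many series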

  9. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.

  10. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.

  11. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    Wetterich, C.

    1991-08-01

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  12. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  13. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is of concern in research on complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique of finite patterns for the integral of geodesic distance with respect to the self-similar measure on the Sierpinski tetrahedron.

  14. Radiation exposure of nursing personnel to brachytherapy patients

    International Nuclear Information System (INIS)

    Cobb, P.D.; Kase, K.R.; Bjaerngard, B.E.

    1978-01-01

    The radiation exposure of nursing personnel to brachytherapy patients has been analyzed from data collected during the years 1973-1976, at four different hospitals. The average annual dose per exposed nurse ranged between 25 and 150 mrem. The radiation exposure per nurse was found to be proportional to the total potential exposure and was uncorrelated with the size of the nursing staff. (author)

  15. Personal exposure versus monitoring station data for respirable particles

    Energy Technology Data Exchange (ETDEWEB)

    Sega, K; Fugas, M

    1982-01-01

    Personal exposure to respirable particles of 12 subjects working at the same location but living in various parts of Zagreb was monitored for 7 consecutive days and compared with simultaneously obtained data from the outdoor network station nearest to each subject's home. Although personal exposure is related to outdoor pollution, other sources play a considerable role. Time spent indoors accounts, on average, for more than 80% of the total time. The ratio between average personal exposure and respirable particle levels in the outdoor air decreases with increasing outdoor concentration (r = -0.93), indicating that this relationship might serve as the basis for a rough estimate of possible personal exposure.

  16. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
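
    The pixel-wise conversion from transmitted intensity to water thickness follows Beer-Lambert's law; a minimal sketch (the attenuation coefficient is a placeholder value, and the beam-hardening and geometric corrections described above are omitted):

        import numpy as np

        def water_thickness_cm(I, I0, mu_w=3.5):
            # I = I0 * exp(-mu_w * t)  =>  t = -ln(I / I0) / mu_w
            # mu_w: assumed effective neutron attenuation coefficient of water (1/cm)
            return -np.log(I / I0) / mu_w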

  17. Assessment of specific absorbed fractions for photons and electrons using average adult Japanese female phantom

    International Nuclear Information System (INIS)

    Manabe, Kentaro; Sato, Kaoru; Takahashi, Fumiaki

    2016-12-01

    In the 2007 Recommendations of the International Commission on Radiological Protection (ICRP), the procedure for calculating effective doses was modified as follows. Equivalent doses are evaluated using the male and female voxel phantoms on the basis of reference anatomical data of Caucasians, and effective doses are calculated using sex-averaged equivalent doses in applying tissue weighting factors. Specific absorbed fractions (SAFs), which are essential data for calculating internal doses, depend on the body weights, organ masses, and positional relations of organs of the phantoms. Then, the dose coefficients, which are committed effective doses per unit intake of radionuclides, developed by ICRP on the basis of the 2007 Recommendations reflect the physical characteristics of Caucasians and are averaged over the sexes. Meanwhile, the physiques of adult Japanese are generally smaller than those of adult Caucasians, and organ masses are also different from each other. Therefore, dose coefficients reflecting Japanese physical characteristics are different from those of ICRP. Knowledge of the influence of race differences on dose coefficients is important to apply the sex averaged dose coefficients of ICRP to the Japanese system of radiation protection. SAF data based on phantoms which have Japanese physical characteristics is essential for assessment of the dose coefficients reflecting Japanese physical characteristics. The Japan Atomic Energy Agency constructed average adult Japanese phantoms, JM-103 (male) and JF-103 (female), and is developing a dose estimation method for internal exposure using these phantoms. This report provides photon and electron SAFs of JF-103. The data of this report and the previously published data of JM-103 are applicable to evaluate sex-specific and sex-averaged dose coefficients reflecting the physical characteristics of the average adult Japanese for intakes of radionuclides emitting photons and electrons. Appendix as CD-ROM. (J.P.N.)

  18. Radon exposure and lung cancer

    International Nuclear Information System (INIS)

    Planinic, J.; Vukovic, B.; Faj, Z.; Radolic, V.; Suveljak, B.

    2003-01-01

    Although studies of radon exposure have established that Rn decay products are a cause of lung cancer among miners, the lung cancer risk to the general population from indoor radon remains unclear and controversial. Our epidemiological investigation of the influence of indoor radon on lung cancer incidence was carried out for 201 patients from the town of Osijek. An ecological method was applied using the town map with square fields of 1 km², dividing the town into 24 fields. A multiple regression study of the lung cancer rate per field on average indoor radon exposure and smoking showed a positive linear double regression for the mentioned variables. A case-control study showed that patients with lung cancer dwelt in homes with significantly higher radon concentrations in comparison to the average indoor radon level of the control sample. (author)

  19. Environmental radioactivity and radiation exposure

    International Nuclear Information System (INIS)

    1976-01-01

    The environmental radioactivity in the Federal Republic of Germany was almost as high in 1976 as in 1975. It only increased temporarily in autumn 1976 as a result of the above-ground nuclear weapons test of the People's Republic of China on September 29th, 1976, and then returned to its previous level. The radioactivity in food showed a slightly decreasing trend in 1976, apart from a temporary increase in the radioactivity in milk, also caused by the nuclear weapons test mentioned. The population exposure in 1976 remained basically unchanged compared with 1975. The artificial radiation exposure is about half as high as the natural radiation exposure to which man has always been exposed. The former is based to 83% on the use of X-rays in medicine, particularly for X-ray diagnostic purposes. The population exposure due to nuclear power plants and other nuclear plants is still well below 1% of the natural radiation exposure, although in 1976 three new nuclear power plants were put into operation. This is also true for the average radiation exposure within an area of 3 km around each nuclear plant. (orig.)

  20. New device for time-averaged measurement of volatile organic compounds (VOCs)

    Energy Technology Data Exchange (ETDEWEB)

    Santiago Sánchez, Noemí; Tejada Alarcón, Sergio; Tortajada Santonja, Rafael; Llorca-Pórcel, Julio, E-mail: julio.llorca@aqualogy.net

    2014-07-01

    ... through a glass cell containing adsorbent material where the VOCs are retained. The adsorbent used, made in LABAQUA, is a mixture of alginic acid and activated carbon. Due to its high permeability it allows the passage and retention of THMs in a suitable way, thus solving many of the problems of other common adsorbents. Also, to avoid degradation of the adsorbent, it is wrapped in a low-density polyethylene (LDPE) membrane. After a sampling period of between 1 and 14 days, the adsorbent is collected and analyzed in the laboratory to quantify the average VOC concentration. This device resolves some of the limitations of the classical sampling system (spot samples), since fluctuations in the concentration of VOCs are taken into account by averaging them over time. This study presents the results obtained by the device for quantifying the VOCs legislated in Directive 2000/60/EC. We present the validation of linearity over time and the limits of quantification, as well as the sampling rates (Rs) obtained for each compound. The results demonstrate the high robustness and high sensitivity of the device. In addition, the system has been validated in real waste water samples, comparing the results obtained with this device with the values from classical spot sampling, with excellent results. - Highlights: • Device to determine time-weighted average concentrations of VOCs in water • This device is presented as an important alternative to spot sampling. • Very low LOD values of VOCs are obtained over 7 days of sampling. • Optimization, validation and application of the device in waters.
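
    A time-weighted average concentration is recovered from such a passive sampler through the compound-specific sampling rate Rs mentioned above; a small sketch (units and example numbers are illustrative assumptions):

        def twa_concentration_ng_per_l(mass_ng, rs_ml_per_min, t_min):
            # C_TWA = m / (Rs * t): mass adsorbed divided by the sampled volume
            sampled_volume_l = rs_ml_per_min * t_min / 1000.0
            return mass_ng / sampled_volume_l

        # e.g. 140 ng adsorbed at Rs = 2.0 mL/min over a 7-day exposure
        print(twa_concentration_ng_per_l(140.0, 2.0, 7 * 24 * 60))  # ~6.9 ng/L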

  1. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  2. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and a standard error of estimate of 0.092. The data were also analyzed for seasonal dependence, and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficients of determination are 0.93, 0.81, 0.94 and 0.93, whereas the standard errors of estimate are 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed, which has a 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly average daily and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs

  3. Exposure of the orthopaedic surgeon to radiation

    Energy Technology Data Exchange (ETDEWEB)

    Katoh, Kiyonobu; Koga, Takamasa; Matsuzaki, Akio; Kido, Masaki; Satoh, Tetsunori [Fukuoka Univ. (Japan). Chikushi Hospital

    1995-09-01

    We monitored the amount of radiation received by surgeons and assistants during surgery carried out with fluoroscopic assistance. The radiation was monitored with the use of a MYDOSE MINIX PDM107 made by Aloka Co. Over a one-year period from Aug 20, 1992 to Aug 19, 1993, a study was undertaken to evaluate exposure at the groin level to radiation with or without use of the lead apron during 106 operations (Group-1). In another group, radiation was monitored at the breast and groin level outside of the lead apron during 39 operations (Group-2). In Group-1, the average exposure per person during one year was 46.0 μSv and the average exposure for each procedure was 1.68 μSv. The use of the lead apron affirmed its protective value; the average radiation dose at the groin level outside of the apron was 9.11 μSv, the measured dose beneath the apron 0.61 μSv. The average doses of exposure to the head, and to the breast and groin level outside of the lead apron, were 7.68 μSv, 16.24 μSv and 32.04 μSv, respectively. This study and a review of the literature indicate that the total amount of radiation exposure during surgery done with fluoroscopic control remains well within maximum exposure limits. (author).

  4. Exposure of the orthopaedic surgeon to radiation

    International Nuclear Information System (INIS)

    Katoh, Kiyonobu; Koga, Takamasa; Matsuzaki, Akio; Kido, Masaki; Satoh, Tetsunori

    1995-01-01

    We monitored the amount of radiation received by surgeons and assistants during surgery carried out with fluoroscopic assistance. The radiation was monitored with the use of a MYDOSE MINIX PDM107 made by Aloka Co. Over a one-year period from Aug 20, 1992 to Aug 19, 1993, a study was undertaken to evaluate exposure at the groin level to radiation with or without use of the lead apron during 106 operations (Group-1). In another group, radiation was monitored at the breast and groin level outside of the lead apron during 39 operations (Group-2). In Group-1, the average exposure per person during one year was 46.0 μSv and the average exposure for each procedure was 1.68 μSv. The use of the lead apron affirmed its protective value; the average radiation dose at the groin level outside of the apron was 9.11 μSv, the measured dose beneath the apron 0.61 μSv. The average doses of exposure to the head, and to the breast and groin level outside of the lead apron, were 7.68 μSv, 16.24 μSv and 32.04 μSv, respectively. This study and a review of the literature indicate that the total amount of radiation exposure during surgery done with fluoroscopic control remains well within maximum exposure limits. (author)

  5. Average cross sections for the 252Cf neutron spectrum

    International Nuclear Information System (INIS)

    Dezso, Z.; Csikai, J.

    1977-01-01

    A number of average cross sections have been measured for 252Cf neutrons in (n,γ), (n,p), (n,2n) and (n,α) reactions by the activation method, and for fission by a fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb. The data as a function of target neutron number increase up to about N=60, with minima near closed shells. The values lie between 0.3 mb and 113 mb. These cross sections decrease significantly with increasing threshold energy. The values are below 20 mb. The data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Z^(4/3)/A are shown. The results obtained are summarized in tables

  6. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  7. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)

  8. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  9. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  10. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Science.gov (United States)

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  11. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
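
    The core operation here, azimuthal averaging, collapses a 2D reference projection into a 1D radial profile before matching. The sketch below illustrates that idea in Python; the array inputs and the plain Pearson-style profile correlation are our own illustrative assumptions, not the paper's exact rotational correlation coefficient.

```python
import numpy as np

def azimuthal_average(image):
    """Average a 2D image over angle, returning a 1D radial profile."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    # Accumulate pixel sums and pixel counts per integer radial bin.
    sums = np.bincount(r.ravel(), weights=image.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

def radial_correlation(window, reference_profile):
    """Correlation between a window's radial profile and a reference profile."""
    profile = azimuthal_average(window)
    n = min(len(profile), len(reference_profile))
    a = profile[:n] - profile[:n].mean()
    b = reference_profile[:n] - reference_profile[:n].mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```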

  12. Measurement of average radon gas concentration at workplaces

    International Nuclear Information System (INIS)

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

    In this paper, results of measurements of the average radon gas concentration at workplaces (schools, kindergartens and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailings pond). Consequently, workplaces where considerable seasonal changes of radon concentration are expected should be measured for 12 months. If that is not possible, the chosen six-month period should contain summer as well as winter months. The average radon concentration during working hours can differ considerably from the average over the whole time when doors and windows are opened frequently or artificial ventilation is used. (authors)

  13. A Martian PFS average spectrum: Comparison with ISO SWS

    Science.gov (United States)

    Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.

    2005-08-01

    The evaluation of the Planetary Fourier Spectrometer performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393.]. First, the average conditions of the Mars atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO₂ are compared from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm⁻¹ for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshooting on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm⁻¹ or better. A large number of narrow features awaiting identification are discovered.

  14. Size and emotion averaging: costs of dividing attention after all.

    Science.gov (United States)

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  15. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test whether a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure with an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is an MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
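
    Maxwell constraint counting, the mean-field baseline named in this record, is simple enough to state in a few lines. The following sketch shows the global count for a body-bar network (6 degrees of freedom per rigid body, minus one per independent bar, minus the 6 trivial rigid-body motions); it illustrates MCC generally and is not the authors' VPG implementation.

```python
def maxwell_count(n_bodies, n_bars):
    """Global Maxwell constraint count (MCC) for a body-bar network.

    Each rigid body carries 6 degrees of freedom (DOF) and each bar removes
    at most one, so this is a lower bound on the internal DOF after the
    6 trivial rigid-body motions are excluded.
    """
    return max(6 * n_bodies - 6 - n_bars, 0)

print(maxwell_count(n_bodies=10, n_bars=60))  # over-constrained  -> 0
print(maxwell_count(n_bodies=10, n_bars=30))  # under-constrained -> 24
```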

  16. Exactly averaged equations for flow and transport in random media

    International Nuclear Information System (INIS)

    Shvidler, Mark; Karasaki, Kenzi

    2001-01-01

    It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood; for example, the convergence behavior and the accuracy of truncated perturbation series are unclear, and the calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist some exact, general and sufficiently universal forms of averaged equations? If the answer is positive, there arises the problem of constructing these equations and analyzing them. There exist many publications related to these problems and oriented to different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method of finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of the conductivity, porosity and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases. 1. Steady-state flow with sources in porous media with random conductivity. 2. Transient flow with sources in compressible media with random conductivity and porosity. 3. Non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversal isotropic, orthotropic), and we analyze the hypothesis on the structure of the non-local equations in the general case of stochastically homogeneous fields. (author)

  17. Increase in average foveal thickness after internal limiting membrane peeling

    Directory of Open Access Journals (Sweden)

    Kumagai K

    2017-04-01

    Full Text Available Kazuyuki Kumagai,1 Mariko Furukawa,1 Tetsuyuki Suetsugu,1 Nobuchika Ogino2 1Department of Ophthalmology, Kami-iida Daiichi General Hospital, 2Department of Ophthalmology, Nishigaki Eye Clinic, Aichi, Japan. Purpose: To report the findings in three cases in which the average foveal thickness was increased after a thin epiretinal membrane (ERM) was removed by vitrectomy with internal limiting membrane (ILM) peeling. Methods: The foveal contour was normal preoperatively in all eyes. All cases underwent successful phacovitrectomy with ILM peeling for a thin ERM. The optical coherence tomography (OCT) images were examined before and after the surgery. The changes in the average foveal (1 mm) thickness and the foveal areas within 500 µm from the foveal center were measured. The postoperative changes in the inner and outer retinal areas determined from the cross-sectional OCT images were analyzed. Results: The average foveal thickness and the inner and outer foveal areas increased significantly after the surgery in each of the three cases. The percentage increase in the average foveal thickness relative to the baseline thickness was 26% in Case 1, 29% in Case 2, and 31% in Case 3. The percentage increase in the foveal inner retinal area was 71% in Case 1, 113% in Case 2, and 110% in Case 3, and the percentage increase in the foveal outer retinal area was 8% in Case 1, 13% in Case 2, and 18% in Case 3. Conclusion: The increase in the average foveal thickness and the inner and outer foveal areas suggests that a centripetal movement of the inner and outer retinal layers toward the foveal center probably occurred due to the ILM peeling. Keywords: internal limiting membrane, optical coherence tomography, average foveal thickness, epiretinal membrane, vitrectomy

  18. Occupational radiation exposure in Slovakia

    International Nuclear Information System (INIS)

    Boehm, K.; Cabanekova, H.

    2014-01-01

    Currently, two nuclear power plants are in operation in the Slovak Republic. Apart from nuclear facilities, there are 450 licensed undertakings with monitored workers. The majority of the licensed undertakings are active in health care. In the Slovak Republic, five dosimetry services perform assessments of personal doses due to external exposure, and two dosimetry services are approved to carry out monitoring of internal exposure. Dosemeters used for the monitoring of external individual exposure include: personal whole-body film dosemeters, thermoluminescence dosemeters (TLD) or optically stimulated luminescence dosemeters (OSL) for measurements of beta and gamma radiation; TLD for measurements of neutron radiation; and TLD for extremities. The measured operational dose quantities are Hp(10), Hp(3) and Hp(0.07). The approved dosimetry services report the measured dose data to the employers and to the Central Register of Occupational Doses (CROD). About 12500 - 16200 active workers are monitored annually. Average effective doses per monitored worker are presented. (author)

  19. Positivity of the spherically averaged atomic one-electron density

    DEFF Research Database (Denmark)

    Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas

    2008-01-01

    We investigate the positivity of the spherically averaged atomic one-electron density ρ̃(r). For a ρ̃ which stems from a physical ground state we prove that ρ̃(r) > 0 for r ≥ 0. This article may be reproduced in its entirety for non-commercial purposes.
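
    For reference, the spherically averaged density discussed here is the angular mean of the one-electron density; since the symbols were lost in extraction, the following LaTeX restatement of the quantity and the positivity claim is our reconstruction:

```latex
\tilde{\rho}(r) \;=\; \frac{1}{4\pi}\int_{\mathbb{S}^{2}} \rho(r\omega)\,\mathrm{d}\omega,
\qquad
\tilde{\rho}(r) > 0 \quad \text{for all } r \ge 0 .
```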

  20. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  1. MAIN STAGES SCIENTIFIC AND PRODUCTION MASTERING THE TERRITORY AVERAGE URAL

    Directory of Open Access Journals (Sweden)

    V.S. Bochko

    2006-09-01

    Full Text Available The article considers the shaping of the Average Ural as an industrial territory, based on its scientific study and production development. It is shown that the resources of the Urals and the particular living conditions of its population were studied by Russian and foreign scientists in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to a systematic organizational-economic study of the productive capacity, society and nature of the Average Ural. More attention is now devoted to the new problems of the region and to the need for their scientific solution.

  2. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), µm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  3. Non-self-averaging nucleation rate due to quenched disorder

    International Nuclear Information System (INIS)

    Sear, Richard P

    2012-01-01

    We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)

  4. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well-known models, such as the Matérn covariance family and the Gaussian covariance, falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
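
    In moving-average form, such a random field is a kernel smoothed over a Lévy basis. A generic sketch of the construction (the paper's specific power kernel is not reproduced here):

```latex
Z(x) \;=\; \int_{\mathbb{R}^{d}} k(x - u)\, L(\mathrm{d}u),
```

    where $k$ is the smoothing kernel and $L$ is a Lévy basis; choosing $L$ Gaussian recovers a Gaussian random field, and particular kernels $k$ yield the Matérn and Gaussian covariance families mentioned above.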

  5. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to solve the intermediate scale, by applying the mass weighted average, when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
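
    The distinction drawn here between the two averages can be made concrete. With $f_i$ the per-realization mean of some quantity and $n_i$ the number of grains in realization $i$, one illustrative way to write the phasic and mass-weighted averages over $N$ realizations is:

```latex
\bar{f}_{\mathrm{phasic}} \;=\; \frac{1}{N}\sum_{i=1}^{N} f_i,
\qquad
\bar{f}_{\mathrm{mass}} \;=\; \frac{\sum_{i=1}^{N} n_i f_i}{\sum_{i=1}^{N} n_i},
```

    which coincide exactly when $n_i$ is the same in every realization (the single-realization case described above) and differ otherwise. The notation is ours, chosen to mirror the argument in the abstract rather than the authors' formal definitions.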

  6. Sources of radiation exposure - an overview

    International Nuclear Information System (INIS)

    Mason, G.C.

    1990-01-01

    Sources of radiation exposure are reviewed from the perspective of mining and milling of radioactive ores in Australia. The major sources of occupational and public exposure are identified and described, and exposures from mining and milling operations are discussed in the context of natural radiation sources and other sources arising from human activities. Most radiation exposure of humans comes from natural sources. About 80% of the world average of the effective dose equivalents received by individual people arises from natural radiation, with a further 15-20% coming from medical exposures. Exposures resulting from human activities, such as mining and milling of radioactive ores, nuclear power generation, fallout from nuclear weapons testing and non-medical use of radioisotopes and X-rays, add less than 1% to the total. 9 refs., 4 tabs., 10 figs

  7. Measurement uncertainties of long-term 222Rn averages at environmental levels using alpha track detectors

    International Nuclear Information System (INIS)

    Nelson, R.A.

    1987-01-01

    More than 250 replicate measurements of outdoor Rn concentration integrated over quarterly periods were made to estimate the random component of the measurement uncertainty of Track Etch detectors (type F) under outdoor conditions. The measurements were performed around three U mill tailings piles to provide a range of environmental concentrations. The measurement uncertainty was typically greater than could be accounted for by Poisson counting statistics. Average coefficients of variation of the order of 20% for all measured concentrations were found. It is concluded that alpha track detectors can be successfully used to determine annual average outdoor Rn concentrations through the use of careful quality control procedures. These include rapid deployment and collection of detectors to minimize unintended Rn exposure, careful packaging and shipping to and from the manufacturer, use of direct sunlight shields for all detectors and careful and secure mounting of all detectors in as similar a manner as possible. The use of multiple (at least duplicate) detectors at each monitoring location and an exposure period of no less than one quarter are suggested

  8. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect

    Directory of Open Access Journals (Sweden)

    Dongxu Ren

    2016-04-01

    Full Text Available A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect, which periodically superposes the light intensity of different locations of pitches in the mask to produce a consistent energy distribution at a specific wavelength; the accuracy of a linear scale can thereby be improved by using the average pitch with different step distances. The method's theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the number of repeated exposures of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the multi-repeated photolithography method readily achieves a pitch accuracy of 43 nm at any 10 locations over 1 m, and the whole-length accuracy of the linear scale is better than 1 µm/m.

  9. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  10. High Average Power UV Free Electron Laser Experiments At JLAB

    International Nuclear Information System (INIS)

    Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn

    2012-01-01

    Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  11. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  12. Establishment of Average Body Measurement and the Development ...

    African Journals Online (AJOL)

    body measurement for height and backneck to waist for ages 2, 3, 4 and 5 years. The ... average measurements of the different parts of the body must be established. ... and OAU Charter on Rights of the child: Lagos: Nigeria Country office.

  13. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  14. Determination of the average lifetime of bottom hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W [Technische Hochschule Aachen (Germany, F.R.). Lehrstuhl fuer Experimentalphysik 1A und 1. Physikalisches Inst.; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e⁺e⁻ annihilation to be τ_B = 1.83 × 10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes.

  15. Determination of the average lifetime of bottom hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e⁺e⁻ annihilation to be τ_B = 1.83 × 10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes. (orig./HSI).

  16. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
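
    The three-stage Box-Jenkins workflow maps directly onto modern statistical libraries. A minimal illustration with the statsmodels package; the GPA series is made up and the order (1, 1, 1) is an arbitrary starting candidate that the identification and diagnosis stages would refine:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical average GPAs observed over ten successive terms.
gpa_series = np.array([2.9, 3.0, 3.1, 3.0, 3.2, 3.3, 3.2, 3.4, 3.5, 3.4])

# Estimation stage: fit a candidate ARIMA(p, d, q) model.
result = ARIMA(gpa_series, order=(1, 1, 1)).fit()

# Diagnosis stage: inspect the fit before accepting the model.
print(result.summary())

# Forecast the next two terms from the fitted model.
print(result.forecast(steps=2))
```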

  17. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
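
    The principle being taught is that averaging N repeated sweeps of the same waveform leaves the signal intact while shrinking zero-mean noise by roughly the square root of N. A self-contained numerical sketch (synthetic data, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
t = np.linspace(0.0, 1.0, 500)
signal = np.sin(2 * np.pi * 5 * t)  # the repeatable underlying waveform

def noisy_trial():
    """One recorded sweep: signal plus zero-mean Gaussian noise."""
    return signal + rng.normal(scale=1.0, size=t.size)

for n_trials in (1, 16, 256):
    averaged = np.mean([noisy_trial() for _ in range(n_trials)], axis=0)
    residual_rms = np.sqrt(np.mean((averaged - signal) ** 2))
    # The residual noise should fall roughly as 1/sqrt(n_trials).
    print(f"{n_trials:4d} trials -> residual RMS {residual_rms:.3f}")
```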

  18. Environmental stresses can alleviate the average deleterious effect of mutations

    Directory of Open Access Journals (Sweden)

    Leibler Stanislas

    2003-05-01

    Full Text Available Abstract. Background: Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results: We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite: that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions: Our results show a qualitative difference between various environmental stresses, ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.

  19. The background effective average action approach to quantum gravity

    DEFF Research Database (Denmark)

    D’Odorico, G.; Codello, A.; Pagani, C.

    2016-01-01

    of a UV-attractive non-Gaussian fixed point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016.

  20. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotics to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth.

  1. Moving average rules as a source of market instability

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets

  2. arXiv Averaged Energy Conditions and Bouncing Universes

    CERN Document Server

    Giovannini, Massimo

    2017-11-16

    The dynamics of bouncing universes is characterized by violating certain coordinate-invariant restrictions on the total energy-momentum tensor, customarily referred to as energy conditions. Although there could be epochs in which the null energy condition is locally violated, it may perhaps be enforced in an averaged sense. Explicit examples of this possibility are investigated in different frameworks.

  3. 26 CFR 1.1301-1 - Averaging of farm income.

    Science.gov (United States)

    2010-04-01

    ... January 1, 2003, rental income based on a share of a tenant's production determined under an unwritten... the Collection of Income Tax at Source on Wages (Federal income tax withholding), or the amount of net... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Averaging of farm income. 1.1301-1 Section 1...

  4. Implications of Methodist clergies' average lifespan and missional ...

    African Journals Online (AJOL)

    2015-06-09

    Jun 9, 2015 ... The author of Genesis 5 paid meticulous attention to the lifespan of several people ... of Southern Africa (MCSA), and to argue that memories of the ... average ages at death were added up and the sum was divided by 12 (which represents the 12 ... not explicit in how the departed Methodist ministers were.

  5. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new datamining problem: which subset of input space has the relatively highest output where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.

  6. Average Distance Travelled To School by Primary and Secondary ...

    African Journals Online (AJOL)

    This study investigated the average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and its effect on attendance. These are among the top ten densely populated and educationally advantaged States in Nigeria. Research evidence reports high dropout rates in ...

  7. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Full Text Available Before 1998, the definition of wages in Poland did not include the value of social security contributions. The changed definition creates a higher level of reported wages, but was expected not to influence take-home pay. Nevertheless, the trend of average wages, after a short period, has returned to its previous line. Such an effect is explained in terms of money illusion.

  8. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model, in which we only consider the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides requires a fine averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, ¹²⁵I and ¹⁰⁹Cd. (Author) 4 refs.

  9. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
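
    For orientation, the textbook relations connecting reported average gate fidelity to the average error rate (these are standard identities, not the paper's tighter worst-case bound) are:

```latex
F_{\mathrm{avg}} \;=\; \int \mathrm{d}\psi\, \langle\psi|\,
\mathcal{E}\!\left(|\psi\rangle\langle\psi|\right) |\psi\rangle,
\qquad
F_{\mathrm{avg}} \;=\; \frac{d\, F_{\mathrm{pro}} + 1}{d + 1},
\qquad
r \;=\; 1 - F_{\mathrm{avg}},
```

    so a reported average fidelity of 99.9% for a single qubit ($d=2$) means an average error rate of only $10^{-3}$, while the worst-case error rate relevant to fault tolerance can be substantially larger; quantifying that gap is the point of the paper.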

  10. 75 FR 78157 - Farmer and Fisherman Income Averaging

    Science.gov (United States)

    2010-12-15

    ... to the averaging of farm and fishing income in computing income tax liability. The regulations...: PART 1--INCOME TAXES 0 Paragraph 1. The authority citation for part 1 continues to read in part as... section 1 tax would be increased if one-third of elected farm income were allocated to each year. The...

  11. Domain-averaged Fermi-hole Analysis for Solids

    Czech Academy of Sciences Publication Activity Database

    Baranov, A.; Ponec, Robert; Kohout, M.

    2012-01-01

    Roč. 137, č. 21 (2012), s. 214109 ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords : bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.164, year: 2012

  12. Characteristics of phase-averaged equations for modulated wave groups

    NARCIS (Netherlands)

    Klopman, G.; Petit, H.A.H.; Battjes, J.A.

    2000-01-01

    The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).

  13. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  14. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their height-to-radius ratio, as well as the average productivity, degree of filling, and filling time of a horizontally ribbed tank of volume 6·10⁻² m³ as the central hole diameter of the ribs is changed. It has been shown that growth of the height/radius ratio in tanks with smooth inner walls up to the limiting values allows significantly increasing the tank average productivity and reducing its filling time. Growth of the H/R ratio of a tank of volume 1.0 m³ to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and a minimum filling time are reached for the tank of volume 6·10⁻² m³ having a central hole diameter of the horizontal ribs of 6.4·10⁻² m.

  15. An averaged polarizable potential for multiscale modeling in phospholipid membranes

    DEFF Research Database (Denmark)

    Witzke, Sarah; List, Nanna Holmgaard; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    A set of average atom-centered charges and polarizabilities has been developed for three types of phospholipids for use in polarizable embedding calculations. The lipids investigated are 1,2-dimyristoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, and 1-palmitoyl...

  16. Understanding coastal morphodynamic patterns from depth-averaged sediment concentration

    NARCIS (Netherlands)

    Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.

    This review highlights the important role of the depth-averaged sediment concentration (DASC) in understanding the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand

  17. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
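
    The model-averaging schemes compared here all share one generic form, a weighted combination of the candidate models' estimates. As a concrete illustration (our example, not one of the paper's schemes), smooth AIC weights look like:

```latex
\hat{\mu}_{\mathrm{avg}} \;=\; \sum_{k=1}^{K} w_k\, \hat{\mu}_k,
\qquad
w_k \;=\; \frac{\exp(-\mathrm{AIC}_k/2)}{\sum_{j=1}^{K}\exp(-\mathrm{AIC}_j/2)},
```

    and the post-model-selection estimator is the degenerate case in which $w_k \in \{0,1\}$, with all weight on the single selected model: precisely the 0-1 random-weights view taken in the paper.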

  18. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    Doczi, R.; Csikai, J.; Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.

    2004-01-01

    A method previously used for the determination of the average neutron flux within bulky samples has been applied to the measurement of the hydrogen content of different samples. An analytical function is given for the description of the correlation between the activity of Dy foils and the hydrogen concentration. Results obtained by the activation and thermal neutron reflection methods are compared.

  19. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
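
    The adaptive window logic described, short windows for early fast-migrating (high-frequency) peaks and longer windows for late broad peaks, can be sketched in a few lines. The linear window-growth rule below is an arbitrary stand-in for the paper's migration-velocity dependence:

```python
import numpy as np

def adaptive_moving_average(signal, base_window=3, growth=0.01):
    """Smooth an electropherogram with a window that widens along the run.

    base_window : window size (in points) at the start of the run
    growth      : assumed linear growth of the window per data point,
                  standing in for the migration-velocity dependence
    """
    n = len(signal)
    smoothed = np.empty(n, dtype=float)
    for i in range(n):
        half = int(base_window + growth * i) // 2
        lo, hi = max(0, i - half), min(n, i + half + 1)
        smoothed[i] = signal[lo:hi].mean()
    return smoothed

# Late, broad peaks receive a wider window than early, narrow ones.
rng = np.random.default_rng(1)
trace = rng.normal(scale=0.05, size=2000)
trace[300:310] += 1.0     # early, narrow peak
trace[1600:1680] += 0.3   # late, broad peak
smoothed = adaptive_moving_average(trace)
```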

  20. Grade Point Average: What's Wrong and What's the Alternative?

    Science.gov (United States)

    Soh, Kay Cheng

    2011-01-01

    Grade point average (GPA) has been around for more than two centuries. However, it has created a lot of confusion, frustration, and anxiety to GPA-producers and users alike, especially when used across-nation for different purposes. This paper looks into the reasons for such a state of affairs from the perspective of educational measurement. It…

  1. The Effect of Honors Courses on Grade Point Averages

    Science.gov (United States)

    Spisak, Art L.; Squires, Suzanne Carter

    2016-01-01

    High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…

  2. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...

  3. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both those used by the official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research is laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as materials of scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. In the process of conducting the research, the following methods were used: analytical, statistical, computational-mathematical and graphical. The main result of the research is an option for supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by introducing a correction factor. Its essence lies in forming separate pay indexes for different categories of employees in enterprises or organizations, mainly those engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions in a wide range of organizations, where an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, a frequent situation arises in which the average salary at the enterprise is difficult to assess objectively, because it is composed of multiple rates per staff member. In other words, the average salary of

  4. A Comparison of "Total Dust" and Inhalable Personal Sampling for Beryllium Exposure

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Colleen M. [Tulane Univ., New Orleans, LA (United States). School of Public Health and Tropical Medicine

    2012-05-09

    In 2009, the American Conference of Governmental Industrial Hygienists (ACGIH) reduced the Beryllium (Be) 8-hr Time Weighted Average Threshold Limit Value (TLV-TWA) from 2.0 μg/m3 to 0.05 μg/m3 with an inhalable 'I' designation in accordance with ACGIH's particle size-selective criterion for inhalable mass. Currently, per the Department of Energy (DOE) requirements, the Lawrence Livermore National Laboratory (LLNL) is following the Occupational Health and Safety Administration (OSHA) Permissible Exposure Limit (PEL) of 2.0 μg/m3 as an 8-hr TWA, which is also the 2005 ACGIH TLV-TWA, and an Action Level (AL) of 0.2 μg/m3 and sampling is performed using the 37mm (total dust) sampling method. Since DOE is considering adopting the newer 2009 TLV guidelines, the goal of this study was to determine if the current method of sampling using the 37mm (total dust) sampler would produce results that are comparable to what would be measured using the IOM (inhalable) sampler specific to the application of high energy explosive work at LLNL's remote experimental test facility at Site 300. Side-by-side personal sampling using the two samplers was performed over an approximately two-week period during chamber re-entry and cleanup procedures following detonation of an explosive assembly containing Beryllium (Be). The average ratio of personal sampling results for the IOM (inhalable) vs. 37-mm (total dust) sampler was 1.1:1 with a P-value of 0.62, indicating that there was no statistically significant difference in the performance of the two samplers. Therefore, for the type of activity monitored during this study, the 37-mm sampling cassette would be considered a suitable alternative to the IOM sampler for collecting inhalable particulate matter, which is important given the many practical and economic advantages that it presents. However, similar comparison studies would be necessary for this conclusion to be
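
    Since the comparison hinges on the 8-hr TWA, it may help to recall how a time-weighted average is assembled from interval samples: each concentration is weighted by its sampling duration and the sum is divided by the 8-hour reference period. A small sketch with invented numbers, not data from this study:

```python
def eight_hour_twa(samples):
    """samples: list of (concentration in ug/m3, duration in hours).

    Returns the 8-hr time-weighted average; any unsampled portion of
    the shift is implicitly counted as zero exposure.
    """
    return sum(conc * hours for conc, hours in samples) / 8.0

# Hypothetical task-based measurements over one shift:
shift = [(0.08, 2.0), (0.02, 4.0), (0.15, 1.5)]
print(f"8-hr TWA = {eight_hour_twa(shift):.3f} ug/m3")  # -> 0.058
```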

  5. High-average-power diode-pumped Yb: YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in polished barrel rods

  6. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  7. Construction of average adult Japanese voxel phantoms for dose assessment

    International Nuclear Information System (INIS)

    Sato, Kaoru; Takahashi, Fumiaki; Satoh, Daiki; Endo, Akira

    2011-12-01

    The International Commission on Radiological Protection (ICRP) adopted the adult reference voxel phantoms based on the physiological and anatomical reference data of Caucasians in October 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In the future, the dose coefficients for internal dose and the dose conversion coefficients for external dose calculated using the adult reference voxel phantoms will be widely used in radiation protection. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasians. In addition, there are cases in which anatomical characteristics such as body size, organ mass and posture influence the organ doses in dose assessments for medical treatments and radiation accidents. Human phantoms with the average anatomical characteristics of Japanese were therefore needed. The authors constructed averaged adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms in three respects: (1) the heights and weights were brought into agreement with the Japanese averages; (2) the masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) the organs and tissues newly added for evaluation of the effective dose in ICRP Publication 103 were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the averaged adult Japanese male and female voxel phantoms developed as reference phantoms for adult Japanese. (author)

  8. Aerosol particles generated by diesel-powered school buses at urban schools as a source of children’s exposure

    Science.gov (United States)

    Hochstetler, Heather A.; Yermakov, Mikhail; Reponen, Tiina; Ryan, Patrick H.; Grinshpun, Sergey A.

    2015-01-01

    Various health effects in children have been associated with exposure to traffic-related particulate matter (PM), including emissions from school buses. In this study, the indoor and outdoor aerosol at four urban elementary schools serviced by diesel-powered school buses was characterized with respect to the particle number concentrations and size distributions as well as the PM2.5 mass concentrations and elemental compositions. It was determined that the presence of school buses significantly affected the outdoor particle size distribution, specifically in the ultrafine fraction. The time-weighted average of the total number concentration measured outside the schools was significantly associated with the bus and car counts. The concentration increase was consistently observed during the morning drop-off hours and, on most days, during the afternoon pick-up period (although to a lesser degree). Outdoor PM2.5 mass concentrations measured at schools ranged from 3.8 to 27.6 µg m^-3. The school with the highest number of operating buses exhibited the highest average PM2.5 mass concentration. The outdoor mass concentrations of elemental carbon (EC) and organic carbon (OC) were also highest at the school with the greatest number of buses. Most (47/55) correlations between traffic-related elements identified in the outdoor PM2.5 and elements identified in the indoor PM2.5 were significant. Significant associations were observed between indoor and outdoor aerosols for EC, EC/OC, and the total particle number concentration. Day-to-day and school-to-school variations in indoor/outdoor (I/O) ratios were related to the observed differences in the opening of windows and doors, which enhanced particle penetration, as well as indoor activities at the schools. Overall, the results on the I/O ratio obtained in this study reflect the sizes of particles emitted by diesel-powered school bus engines (primarily an ultrafine fraction capable of penetrating indoors). PMID:25904818

  9. Aerosol particles generated by diesel-powered school buses at urban schools as a source of children's exposure.

    Science.gov (United States)

    Hochstetler, Heather A; Yermakov, Mikhail; Reponen, Tiina; Ryan, Patrick H; Grinshpun, Sergey A

    2011-03-01

    Various health effects in children have been associated with exposure to traffic-related particulate matter (PM), including emissions from school buses. In this study, the indoor and outdoor aerosol at four urban elementary schools serviced by diesel-powered school buses was characterized with respect to the particle number concentrations and size distributions as well as the PM2.5 mass concentrations and elemental compositions. It was determined that the presence of school buses significantly affected the outdoor particle size distribution, specifically in the ultrafine fraction. The time-weighted average of the total number concentration measured outside the schools was significantly associated with the bus and car counts. The concentration increase was consistently observed during the morning drop-off hours and, on most days, during the afternoon pick-up period (although to a lesser degree). Outdoor PM2.5 mass concentrations measured at schools ranged from 3.8 to 27.6 µg m^-3. The school with the highest number of operating buses exhibited the highest average PM2.5 mass concentration. The outdoor mass concentrations of elemental carbon (EC) and organic carbon (OC) were also highest at the school with the greatest number of buses. Most (47/55) correlations between traffic-related elements identified in the outdoor PM2.5 and elements identified in the indoor PM2.5 were significant. Significant associations were observed between indoor and outdoor aerosols for EC, EC/OC, and the total particle number concentration. Day-to-day and school-to-school variations in indoor/outdoor (I/O) ratios were related to the observed differences in the opening of windows and doors, which enhanced particle penetration, as well as indoor activities at the schools. Overall, the results on the I/O ratio obtained in this study reflect the sizes of particles emitted by diesel-powered school bus engines (primarily an ultrafine fraction capable of penetrating indoors).
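
    The time-weighted averaging used in the two records above is simple to reproduce. Below is a minimal Python sketch, with hypothetical readings, that computes a TWA over irregular sampling intervals and an indoor/outdoor ratio; the function name and the numbers are illustrative only, not the study's data.

        # Minimal sketch: time-weighted average (TWA) of a concentration series
        # sampled at irregular intervals, plus an indoor/outdoor (I/O) ratio.
        # All readings are hypothetical.

        def time_weighted_average(times_h, concentrations):
            """TWA = sum(c_i * dt_i) / sum(dt_i), c_i held over [t_i, t_{i+1})."""
            total, duration = 0.0, 0.0
            for (t0, c), t1 in zip(zip(times_h, concentrations), times_h[1:]):
                dt = t1 - t0
                total += c * dt
                duration += dt
            return total / duration

        times = [8.0, 8.5, 9.0, 12.0, 15.0, 16.0]    # sampling clock times (h)
        outdoor = [25.1, 27.6, 14.2, 9.8, 18.3]      # e.g. PM2.5, ug/m^3
        indoor = [12.0, 14.8, 9.1, 7.5, 10.2]

        twa_out = time_weighted_average(times, outdoor)
        twa_in = time_weighted_average(times, indoor)
        print(f"outdoor TWA = {twa_out:.1f} ug/m^3, "
              f"I/O ratio = {twa_in / twa_out:.2f}")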

  10. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for drawing conclusions in major coordinated studies, e.g. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially prohibitive for regional climate modeling, since model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This gives the new method a theoretical advantage in addition to its reduced computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
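
    The core of the ERF construction, averaging the IBC fields of several GCMs into one forcing set before a single RCM run, can be illustrated in a few lines. A minimal sketch with synthetic stand-in fields; the variable names and array shapes are hypothetical, not actual RegCM inputs:

        # Minimal sketch of the ERF idea: average the initial/boundary
        # condition (IBC) fields of several GCMs into a single forcing set,
        # then drive one RCM run with it. Shapes and values are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        n_gcms, n_time, n_lat, n_lon = 6, 4, 10, 12

        # Stand-in for, e.g., boundary temperature fields from six GCMs (K).
        gcm_ibcs = rng.normal(288.0, 2.0, size=(n_gcms, n_time, n_lat, n_lon))

        # One "reconstructed" forcing set: the ensemble mean over the GCM axis.
        erf_ibc = gcm_ibcs.mean(axis=0)

        print(erf_ibc.shape)   # (4, 10, 12): one IBC set for a single RCM run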

  11. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak (Ū_P), the average (Ū), the effective (U_eff) or the maximum peak (U_P) tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average (Ū) or the average peak (Ū_P) voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak (k_PPV,kVp) and average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement are addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
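
    The conversion itself reduces to multiplying the kV-meter reading by a calibration coefficient and a ripple-dependent conversion factor. A minimal sketch of that step follows; the coefficient and factor values are hypothetical placeholders, which in practice come from the calibration certificate and the paper's regression equations.

        # Minimal sketch of the proposed conversion: a kV-meter reading of
        # the average (or average peak) voltage times a calibration
        # coefficient and a ripple-dependent conversion factor gives the PPV.
        # The numeric values below are hypothetical placeholders.

        def ppv_from_reading(reading_kv, calibration_coeff, k_ppv):
            """PPV = N * k * M, with M the kV-meter reading."""
            return calibration_coeff * k_ppv * reading_kv

        # e.g. an average-peak reading of 81.0 kV on a unit with 4% ripple
        print(ppv_from_reading(81.0, calibration_coeff=1.002, k_ppv=0.985))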

  12. An analysis of collegiate band directors' exposure to sound pressure levels

    Science.gov (United States)

    Roebuck, Nikole Moore

    Noise-induced hearing loss (NIHL) is a significant but unfortunately common occupational hazard. The purpose of the current study was to measure the magnitude of sound pressure levels generated within a collegiate band room and determine whether those sound pressure levels exceed the policy standards and recommendations of the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH). In addition, reverberation times were measured and analyzed in order to determine the appropriateness of the acoustical conditions for the band rehearsal environment. Sound pressure measurements were taken from the rehearsals of seven collegiate marching bands. Single-sample t tests were conducted to compare the sound pressure levels of all bands to the noise exposure standards of OSHA and NIOSH. Multiple regression analyses were conducted in order to determine the effect of the band room's conditions on the sound pressure levels and reverberation times. Time-weighted averages (TWA), noise percentage doses, and peak levels were also collected. The mean Leq for all band directors was 90.5 dBA. The total accumulated noise percentage dose for all band directors was 77.6% of the maximum allowable daily noise dose under the OSHA standard, and the total calculated TWA was 88.2% of the maximum allowable daily dose under that standard. Under the NIOSH standards, the total accumulated noise percentage dose for all band directors was 152.1% of the maximum allowable daily noise dose, and the total calculated TWA was 93 dBA, which exceeds the NIOSH recommended exposure limit. Multiple regression analysis revealed that the room volume, the level of acoustical treatment and the mean room reverberation time predicted 80% of the variance in sound pressure levels in this study.
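
    The dose metrics quoted above follow the standard OSHA and NIOSH formulas: the allowed exposure time halves for every exchange-rate step above the criterion level, the daily dose sums the time fractions, and the 8-hour TWA is recovered from the dose. A minimal sketch with hypothetical rehearsal segments:

        # Minimal sketch of percentage noise dose and 8-h TWA under the OSHA
        # (90 dBA criterion, 5 dB exchange rate) and NIOSH (85 dBA criterion,
        # 3 dB exchange rate) formulas. The exposure segments are hypothetical.
        import math

        def allowed_hours(level_dba, criterion, exchange):
            return 8.0 / 2.0 ** ((level_dba - criterion) / exchange)

        def noise_dose(segments, criterion, exchange):
            """segments: list of (level in dBA, duration in hours); dose in %."""
            return 100.0 * sum(t / allowed_hours(l, criterion, exchange)
                               for l, t in segments)

        def twa_8h(dose_percent, criterion, exchange):
            return (exchange / math.log10(2.0)) * math.log10(dose_percent / 100.0) + criterion

        rehearsal = [(90.5, 2.0), (84.0, 1.5), (95.0, 0.5)]   # hypothetical day
        for name, crit, exch in [("OSHA", 90.0, 5.0), ("NIOSH", 85.0, 3.0)]:
            d = noise_dose(rehearsal, crit, exch)
            print(f"{name}: dose = {d:.1f}%, 8-h TWA = {twa_8h(d, crit, exch):.1f} dBA")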

  13. Evolution of occupational exposure to environmental levels of aromatic hydrocarbons in service stations.

    Science.gov (United States)

    Periago, J F; Prado, C

    2005-04-01

    those obtained in 1995, for similar summer weather conditions (environmental temperature between 28 and 30 °C). A significant relationship between the volume of gasoline sold and the ambient concentration of aromatic hydrocarbons was found for each worker sampled in all three of the years. Furthermore, a significant decrease in the environmental levels of BTXs was observed after January 2000, especially in the case of benzene, with mean time-weighted average concentrations for 8 h of 736 µg/m^3 (range 272-1603) in 1995, 241 µg/m^3 (range 115-453) in 2000 and 163 µg/m^3 (range 36-564) in 2003, despite the high temperatures reached in the last mentioned year.

  14. Beef steers with average dry matter intake and divergent average daily gain have altered gene expression in the jejunum

    Science.gov (United States)

    The objective of this study was to determine the association of differentially expressed genes (DEG) in the jejunum of steers with average DMI and high or low ADG. Feed intake and growth were measured in a cohort of 144 commercial Angus steers consuming a finishing diet containing (on a DM basis) 67...

  15. Is average daily travel time expenditure constant? In search of explanations for an increase in average travel time.

    NARCIS (Netherlands)

    van Wee, B.; Rietveld, P.; Meurs, H.

    2006-01-01

    Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably

  16. Multiple-level defect species evaluation from average carrier decay

    Science.gov (United States)

    Debuf, Didier

    2003-10-01

    An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average or volume-integrated decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of an assessment of the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground-excited-state multiple-level defect system. Also, minority carrier trapping is investigated.

  17. SEASONAL AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN

    Directory of Open Access Journals (Sweden)

    VIGH MELINDA

    2015-03-01

    The Râul Negru hydrographic basin is a well individualised physical-geographical unit inside the Braşov Depression. The flow is controlled by six hydrometric stations placed on the main river and on two important tributaries. The database for the seasonal flow analysis contains the discharges from 1950-2012. The results of the data analysis show that there are significant space-time differences between the multiannual seasonal averages. Some interesting conclusions can be obtained by comparing abundant and scarce periods. The flow analysis was made using seasonal charts Q = f(T). The similarities come from the basin's relative homogeneity, and the differences from the flow's evolution and trend. Flow variation is analysed using the variation coefficient, and in some cases significant differences in Cv values appear. Cv value trends are also analysed in relation to the basins' average altitude.
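
    The variation-coefficient analysis mentioned above amounts to computing, for each season, the mean discharge and Cv = standard deviation / mean. A minimal sketch with hypothetical monthly discharges (not the 1950-2012 record):

        # Minimal sketch: seasonal average flows and their variation
        # coefficients (Cv = stdev / mean) from a monthly discharge record.
        # The discharge values are hypothetical.
        import statistics

        seasons = {"winter": [12, 1, 2], "spring": [3, 4, 5],
                   "summer": [6, 7, 8], "autumn": [9, 10, 11]}

        # (month, discharge in m^3/s) pairs spanning several years
        record = [(m, q) for m in range(1, 13) for q in (8.0 + m % 5, 9.5 + m % 3)]

        for name, months in seasons.items():
            q = [q_ for m, q_ in record if m in months]
            mean = statistics.mean(q)
            cv = statistics.stdev(q) / mean
            print(f"{name}: mean = {mean:.2f} m^3/s, Cv = {cv:.2f}")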

  18. A collisional-radiative average atom model for hot plasmas

    International Nuclear Information System (INIS)

    Rozsnyai, B.F.

    1996-01-01

    A collisional-radiative 'average atom' (AA) model is presented for the calculation of opacities of hot plasmas not in local thermodynamic equilibrium (LTE). The electron impact and radiative rate constants are calculated using the dipole oscillator strengths of the average atom. A key element of the model is the photon escape probability, which at present is calculated for a semi-infinite slab. The Fermi statistics render the rate equation for the AA level occupancies nonlinear, which requires iterating until the steady-state AA level occupancies are found. Detailed electronic configurations are built into the model after the self-consistent non-LTE AA state is found. The model shows a continuous transition from the non-LTE to the LTE state depending on the optical thickness of the plasma. 22 refs., 13 figs., 1 tab

  19. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the double-bounded interval (a,b) and following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields; classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.
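
    To make the model structure concrete, the following sketch simulates a KARMA-like series: the conditional median evolves through an autoregressive recursion on the logit link scale, and each observation is drawn from a Kumaraswamy distribution on (0,1) whose second shape parameter is chosen so that its median matches the dynamic value. Parameter values are hypothetical, and this is a simulator only, not the paper's conditional maximum likelihood estimator.

        # Minimal sketch of the KARMA idea: dynamic median on a logit link,
        # observations from a median-parameterized Kumaraswamy distribution.
        # Parameters are hypothetical.
        import math, random

        def logit(x):  return math.log(x / (1.0 - x))
        def expit(e):  return 1.0 / (1.0 + math.exp(-e))

        def kumaraswamy_sample(median, a, u):
            # choose b so that the distribution's median equals `median`;
            # inverse CDF: x = (1 - (1 - u)^(1/b))^(1/a)
            b = math.log(0.5) / math.log(1.0 - median ** a)
            return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

        random.seed(1)
        alpha, phi, a = 0.1, 0.6, 2.0          # intercept, AR(1) weight, shape
        y = 0.5
        for t in range(10):
            eta = alpha + phi * logit(y)       # dynamic structure on link scale
            median = expit(eta)
            u = min(max(random.random(), 1e-12), 1.0 - 1e-12)
            y = kumaraswamy_sample(median, a, u)
            y = min(max(y, 1e-9), 1.0 - 1e-9)  # keep strictly inside (0,1)
            print(f"t={t}: median={median:.3f}, y={y:.3f}")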

  20. Database of average-power damage thresholds at 1064 nm

    International Nuclear Information System (INIS)

    Rainer, F.; Hildum, E.A.; Milam, D.

    1987-01-01

    We have completed a database of average-power, laser-induced damage thresholds at 1064 nm for a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples, which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from 2 J/cm^2 for some metals to > 46 J/cm^2 for a bare polished glass substrate. 4 refs., 7 figs., 1 tab

  1. Partial Averaged Navier-Stokes approach for cavitating flow

    International Nuclear Information System (INIS)

    Zhang, L; Zhang, Y N

    2015-01-01

    Partial Averaged Navier-Stokes (PANS) is a numerical approach developed for studying practical engineering problems (e.g. cavitating flow inside hydroturbines) with reasonable cost and accuracy. One of the advantages of PANS is that it is suitable for any filter width, providing a bridging method from traditional Reynolds Averaged Navier-Stokes (RANS) to direct numerical simulation through the choice of appropriate parameters. Compared with RANS, the PANS model inherits much of the physics of the parent RANS model but resolves more scales of motion in greater detail, making it superior to RANS. An important step in the PANS approach is to identify appropriate physical filter-width control parameters, e.g. the ratios of unresolved-to-total kinetic energy and dissipation. In the present paper, recent studies of cavitating flow based on the PANS approach are introduced with a focus on the influence of the filter-width control parameters on the simulation results

  2. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth magnetic field is conventionally solved for with complex mathematical models based on a mean-square integral. Depending on the selection of the Earth magnetic model, the average Earth magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. The solution from this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.

  3. Thermal effects in high average power optical parametric amplifiers.

    Science.gov (United States)

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.

  4. Measuring average angular velocity with a smartphone magnetic field sensor

    Science.gov (United States)

    Pili, Unofre; Violanda, Renante

    2018-02-01

    The angular velocity of a spinning object is, by standard practice, measured using a device called a tachometer. However, using one directly in a classroom setting is likely to make the activity less instructive and less engaging, and several alternative classroom-suitable methods for measuring angular velocity have therefore been presented. In this paper, we present a further alternative that is smartphone-based, making use of the real-time magnetic field (simply called B-field in what follows) data-gathering capability of the smartphone's B-field sensor as the timer for measuring the average rotational period and average angular velocity. The built-in B-field sensor in smartphones has already found a number of uses in undergraduate experimental physics. For instance, in elementary electrodynamics it has been used to explore the well-known Biot-Savart law and to measure the permeability of air.
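
    The measurement idea is easy to mimic: a magnet on the rotating object produces one B-field peak per revolution, so the average spacing between peaks gives the period and hence the average angular velocity ω = 2π/T. A minimal sketch on a synthetic signal (the sampling rate and peak threshold are arbitrary choices, not the paper's settings):

        # Minimal sketch: one B-field bump per revolution; the mean spacing
        # between detected peaks gives the average period and omega = 2*pi/T.
        # The signal below is synthetic.
        import math

        dt = 0.01                                   # sample spacing (s)
        true_period = 0.8                           # s per revolution
        signal = [math.exp(-((t * dt) % true_period - 0.1) ** 2 / 0.001)
                  for t in range(1000)]             # one bump per turn

        # crude peak detection: local maxima above a threshold
        peaks = [i for i in range(1, len(signal) - 1)
                 if signal[i] > 0.5 and signal[i] >= signal[i - 1]
                 and signal[i] > signal[i + 1]]

        periods = [(b - a) * dt for a, b in zip(peaks, peaks[1:])]
        T = sum(periods) / len(periods)
        print(f"average period = {T:.3f} s, omega = {2 * math.pi / T:.2f} rad/s")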

  5. Nuclear fuel management via fuel quality factor averaging

    International Nuclear Information System (INIS)

    Mingle, J.O.

    1978-01-01

    The numerical procedure of prime number averaging is applied to the fuel quality factor distribution of once- and twice-burned fuel in order to evolve a fuel management scheme. The resulting fuel shuffling arrangement produces a near-optimal flat power profile under both beginning-of-life and end-of-life conditions. The procedure is easily applied, requiring only the solution of linear algebraic equations. (author)

  6. Modeling methane emission via the infinite moving average process

    Czech Academy of Sciences Publication Activity Database

    Jordanova, D.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Roč. 122, - (2013), s. 40-49 ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords : Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013

  7. Spatial analysis based on variance of moving window averages

    OpenAIRE

    Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J

    2006-01-01

    A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
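
    A minimal sketch of the VMWA idea as the abstract describes it: compute moving-window averages for a range of window sizes and track the variance of those averages, whose decay pattern reflects the mean cluster size. The synthetic one-dimensional "incidence" transect below is illustrative, not the authors' data:

        # Minimal sketch: variance of moving-window averages (VMWA) across
        # window sizes, on a synthetic clustered presence/absence transect.
        import statistics

        data = ([1] * 5 + [0] * 7 + [1] * 5 + [0] * 8 + [1] * 4 + [0] * 6) * 3

        def vmwa(values, window):
            means = [statistics.mean(values[i:i + window])
                     for i in range(len(values) - window + 1)]
            return statistics.variance(means)

        for w in (2, 3, 5, 8, 12, 20):
            print(f"window {w:2d}: variance of window means = {vmwa(data, w):.4f}")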

  8. Forecasting stock market averages to enhance profitable trading strategies

    OpenAIRE

    Haefke, Christian; Helmenstein, Christian

    1995-01-01

    In this paper we design a simple trading strategy to exploit the hypothesized distinct informational content of the arithmetic and geometric mean. The rejection of cointegration between the two stock market indicators supports this conjecture. The profits generated by this cheaply replicable trading scheme cannot be expected to persist. Therefore we forecast the averages using autoregressive linear and neural network models to gain a competitive advantage relative to other investors. Refining...

  9. Application of NMR circuit for superconducting magnet using signal averaging

    International Nuclear Information System (INIS)

    Yamada, R.; Ishimoto, H.; Shea, M.F.; Schmidt, E.E.; Borer, K.

    1977-01-01

    An NMR circuit was used to measure the absolute field values of Fermilab Energy Doubler magnets up to 44 kG. A signal averaging method to improve the S/N ratio was implemented by means of a Tektronix Digital Processing Oscilloscope, followed by the development of an inexpensive microprocessor based system contained in a NIM module. Some of the data obtained from measuring two superconducting dipole magnets are presented
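
    The benefit of signal averaging here is the usual √N improvement: the coherent signal adds linearly over repeated sweeps while uncorrelated noise grows only as √N. A small synthetic demonstration of that scaling (not the Fermilab measurement chain):

        # Minimal sketch: averaging N noisy sweeps of the same waveform
        # improves the S/N ratio roughly as sqrt(N). Data are synthetic.
        import math, random

        random.seed(0)
        n_samples, sigma = 200, 1.0
        template = [math.sin(2 * math.pi * 5 * k / n_samples)
                    for k in range(n_samples)]

        def snr_of_average(n_sweeps):
            avg = [0.0] * n_samples
            for _ in range(n_sweeps):
                for k in range(n_samples):
                    avg[k] += (template[k] + random.gauss(0, sigma)) / n_sweeps
            noise = [a - s for a, s in zip(avg, template)]
            rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
            return rms(template) / rms(noise)

        for n in (1, 4, 16, 64):
            print(f"N = {n:3d}: S/N ~ {snr_of_average(n):.1f}")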

  10. Average Case Analysis of Java 7's Dual Pivot Quicksort

    OpenAIRE

    Wild, Sebastian; Nebel, Markus E.

    2013-01-01

    Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting method for Oracle's Java 7 runtime library. The decision for the change was based on empirical studies showing that on average, the new algorithm is faster than the formerly used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot approach, an idea that was considered not promising by several theoretical studies in the past. In this paper, we identify the reason for this unexpe...
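
    For reference, a compact Python rendering of Yaroslavskiy's dual-pivot partitioning scheme that the paper analyzes is given below; the JDK implementation adds insertion-sort cutoffs and other engineering refinements omitted here.

        # Sketch of Yaroslavskiy's dual-pivot quicksort: two pivots p <= q
        # split the array into three parts (< p, between, > q).

        def dual_pivot_quicksort(a, lo=0, hi=None):
            if hi is None:
                hi = len(a) - 1
            if lo >= hi:
                return
            if a[lo] > a[hi]:
                a[lo], a[hi] = a[hi], a[lo]
            p, q = a[lo], a[hi]                 # two pivots, p <= q
            l, g, k = lo + 1, hi - 1, lo + 1    # < p zone, > q zone, scan index
            while k <= g:
                if a[k] < p:
                    a[k], a[l] = a[l], a[k]
                    l += 1
                elif a[k] > q:
                    while a[g] > q and k < g:
                        g -= 1
                    a[k], a[g] = a[g], a[k]
                    g -= 1
                    if a[k] < p:
                        a[k], a[l] = a[l], a[k]
                        l += 1
                k += 1
            l -= 1
            g += 1
            a[lo], a[l] = a[l], a[lo]               # pivots into final places
            a[hi], a[g] = a[g], a[hi]
            dual_pivot_quicksort(a, lo, l - 1)      # elements < p
            dual_pivot_quicksort(a, l + 1, g - 1)   # p <= elements <= q
            dual_pivot_quicksort(a, g + 1, hi)      # elements > q

        xs = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
        dual_pivot_quicksort(xs)
        print(xs)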

  11. The definition and computation of average neutron lifetimes

    International Nuclear Information System (INIS)

    Henry, A.F.

    1983-01-01

    A precise physical definition is offered for a class of average lifetimes for neutrons in an assembly of materials, either multiplying or not, or if the former, critical or not. A compact theoretical expression for the general member of this class is derived in terms of solutions to the transport equation. Three specific definitions are considered. Particular exact expressions for these are derived and reduced to simple algebraic formulas for one-group and two-group homogeneous bare-core models

  12. Marginal versus Average Beta of Equity under Corporate Taxation

    OpenAIRE

    Lund, Diderik

    2009-01-01

    Even for fully equity-financed firms there may be substantial effects of taxation on the after-tax cost of capital. Among the few studies of these effects, even fewer identify all effects correctly. When marginal investment is taxed together with inframarginal investment, the marginal beta differs from the average if there are investment-related deductions like depreciation. To calculate asset betas, one should not only 'unlever' observed equity betas, but 'untax' and 'unaverage' them. Risky tax claims are value...

  13. High average power solid state laser power conditioning system

    International Nuclear Information System (INIS)

    Steinkraus, R.F.

    1987-01-01

    The power conditioning system for the High Average Power Laser program at Lawrence Livermore National Laboratory (LLNL) is described. The system has been operational for two years. It is high voltage, high power, fault protected, and solid state. The power conditioning system drives flashlamps that pump solid state lasers. Flashlamps are driven by silicon control rectifier (SCR) switched, resonant charged, (LC) discharge pulse forming networks (PFNs). The system uses fiber optics for control and diagnostics. Energy and thermal diagnostics are monitored by computers

  14. Minimal average consumption downlink base station power control strategy

    OpenAIRE

    Holtkamp H.; Auer G.; Haas H.

    2011-01-01

    We consider single cell multi-user OFDMA downlink resource allocation on a flat-fading channel such that average supply power is minimized while fulfilling a set of target rates. Available degrees of freedom are transmission power and duration. This paper extends our previous work on power optimal resource allocation in the mobile downlink by detailing the optimal power control strategy investigation and extracting fundamental characteristics of power optimal operation in cellular downlink. W...

  15. Cosmological measure with volume averaging and the vacuum energy problem

    Science.gov (United States)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume averaging measure, instead of volume weighting can explain why the cosmological constant is non-zero.

  16. Cosmological measure with volume averaging and the vacuum energy problem

    International Nuclear Information System (INIS)

    Astashenok, Artyom V; Del Popolo, Antonino

    2012-01-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume averaging measure, instead of volume weighting can explain why the cosmological constant is non-zero. (paper)

  17. Average radiation weighting factors for specific distributed neutron spectra

    International Nuclear Information System (INIS)

    Ninkovic, M.M.; Raicevic, J.J.

    1993-01-01

    Spectrum-averaged radiation weighting factors for six specific neutron fields in the environment of three categories of neutron sources (fission, spontaneous fission and (α,n)) are determined in this paper. The values obtained for these factors are 1.5 to 2 times greater than the corresponding quality factors used for the same purpose until a few years ago. This fact is important to bear in mind when converting neutron fluence into neutron dose equivalent. (author)
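
    Spectrum averaging of a weighting factor is the fluence-weighted mean, w_avg = ∫w(E)φ(E)dE / ∫φ(E)dE. A minimal numerical sketch with a hypothetical step-like w(E) and a toy fluence spectrum; real evaluations use the ICRP w_R(E) curve and measured spectra:

        # Minimal sketch: spectrum-averaged weighting factor
        #   w_avg = integral(w(E) * phi(E) dE) / integral(phi(E) dE).
        # Both w(E) and phi(E) below are hypothetical stand-ins.
        import numpy as np

        E = np.logspace(-8, 1, 400)                  # neutron energy grid (MeV)

        def w_of_E(e):                               # hypothetical step-like w(E)
            return np.where(e < 1e-2, 5.0, np.where(e < 2.0, 20.0, 10.0))

        phi = np.exp(-((np.log10(E) + 1.0) ** 2))    # toy fluence spectrum

        def trapz(y, x):                             # simple trapezoid rule
            return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

        w_avg = trapz(w_of_E(E) * phi, E) / trapz(phi, E)
        print(f"spectrum-averaged weighting factor: {w_avg:.2f}")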

  18. Occupational exposure in interventional radiology

    International Nuclear Information System (INIS)

    Oh, H.J.; Lee, K.Y.; Cha, S.H.; Kang, Y.K.; Kim, H.J.; Oh, H.J.

    2003-01-01

    This study was conducted to survey radiation safety control and to measure the occupational radiation exposure of staff in interventional radiology in Korea. Interventional radiology requires the operator and assisting personnel to remain close to the patient, and thus close to the primary beams of radiation; the exposure doses of these personnel are therefore significant from a radiological protection point of view. We surveyed the status of radiation safety in the interventional radiology departments of 72 hospitals. The results showed that 119 radiation units are in use in interventional radiology and that the 744 staff members comprise 307 radiologists, 116 radiology residents, 5 general physicians, 171 radiologic technologists and 145 nurses. 81.4% and 20.2% of operating physicians use a neck (thyroid) collar protector and goggles, respectively. The average radiation dose inside the examination room, within the radiation protection facilities, was measured at 0.46±0.15 mSv per 10 hours of fluoroscopy. Occupational radiation exposure data for staff were assessed for interventional radiology procedures on 8 interventional radiology units at 6 university hospitals. The dose measurements were made by placing thermoluminescent dosimeters (TLDs) on various body surfaces of the operating and assisting staff during actual interventional procedures. The measured points were the corner of the eyes, the neck (on the thyroid), the wrists, the chest (outside and inside the protector), and the back. The average equivalent doses at the corner of the left eye and the left wrist of operating physicians were 1.19 mSv (range 0.11-4.13 mSv) and 4.32 mSv (range 0.16-11.0 mSv) per 100 minutes of fluoroscopy, respectively. The average exposure dose may vary depending on the type of procedure, personal skill and the quality of the equipment. These results will contribute to the preparation of guidelines for interventional radiology in Korea

  19. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Directory of Open Access Journals (Sweden)

    Luis C González

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible numbers of distance constraints (or bars) that can form between a pair of rigid bodies are replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  20. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  1. High Average Power, High Energy Short Pulse Fiber Laser System

    Energy Technology Data Exchange (ETDEWEB)

    Messerly, M J

    2007-11-13

    Recently, continuous wave fiber laser systems with output powers in excess of 500 W and good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of highly efficient, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front-end systems for high energy pulsed lasers (such as petawatt systems), and laser-based sources of high spatial coherence, high flux x-rays all require high energy short pulses, and two of these three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  2. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can construct classifiers at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes in interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization in order to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California, Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  3. Quantized Average Consensus on Gossip Digraphs with Reduced Computation

    Science.gov (United States)

    Cai, Kai; Ishii, Hideaki

    The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm whose primary feature is reducing both computation and communication effort: each node needs to update fewer local variables and can transmit surplus using only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the network to have the special structure known as balance. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.

  4. Role of spatial averaging in multicellular gradient sensing.

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  5. Fitting a function to time-dependent ensemble averaged data.

    Science.gov (United States)

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., it rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
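
    The key step of WLS-ICE can be sketched for a linear model: the point estimate is ordinary weighted least squares, but the parameter covariance is built from the full data covariance matrix C via the sandwich A C Aᵀ rather than from the diagonal of C alone. A minimal sketch on synthetic, temporally correlated ensemble data (not the authors' software):

        # Minimal sketch: WLS point estimate with a correlation-aware
        # ("sandwich") parameter covariance. Model and data are synthetic.
        import numpy as np

        rng = np.random.default_rng(2)
        n, n_traj = 50, 200
        t = np.linspace(0.1, 5.0, n)

        # ensemble of noisy, temporally correlated trajectories around 1 + 2t
        noise = np.cumsum(rng.normal(0, 0.05, size=(n_traj, n)), axis=1)
        ensemble = 1.0 + 2.0 * t + noise
        y = ensemble.mean(axis=0)
        C = np.cov(ensemble, rowvar=False) / n_traj   # covariance of the mean

        X = np.column_stack([np.ones(n), t])          # fit y = a + b t
        W = np.diag(1.0 / np.diag(C))                 # weights: inverse variances
        A = np.linalg.solve(X.T @ W @ X, X.T @ W)     # theta_hat = A @ y
        theta = A @ y
        cov_theta = A @ C @ A.T                       # full-covariance errors
        print("a, b =", theta, "+/-", np.sqrt(np.diag(cov_theta)))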

  6. Time-dependence and averaging techniques in atomic photoionization calculations

    International Nuclear Information System (INIS)

    Scheibner, K.F.

    1984-01-01

    Two distinct problems in the development and application of averaging techniques to photoionization calculations are considered. The first part of the thesis is concerned with the specific problem of near-resonant three-photon ionization in hydrogen, a process for which no cross section exists. Effects of the inclusion of the laser pulse characteristics (both temporal and spatial) on the dynamics of the ionization probability and of the metastable state probability are examined. It is found, for example, that the ionization probability can decrease with increasing field intensity. The temporal profile of the laser pulse is found to affect the dynamics very little, whereas the spatial character of the pulse can affect the results drastically. In the second part of the thesis techniques are developed for calculating averaged cross sections directly without first calculating a detailed cross section. Techniques are developed whereby the detailed cross section never has to be calculated as an intermediate step, but rather, the averaged cross section is calculated directly. A variation of the moment technique and a new method based on the stabilization technique are applied successfully to atomic hydrogen and helium

  7. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes, and an optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with the FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of delivering the noise improvement of hardware averaging for neural recording with cuff electrodes, and can accommodate the high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
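
    The 1/√N claim is easy to illustrate: averaging N parallel channels that share the same source noise but have independent amplifier noise reduces only the independent part, which is why the improvement is "1/√N or less" when the common source-resistance noise dominates. A synthetic sketch (all values are arbitrary):

        # Minimal sketch: averaging N parallel channels reduces independent
        # amplifier noise by sqrt(N), but not the shared source noise.
        import math, random

        random.seed(3)
        n_samples = 5000
        source_noise = [random.gauss(0, 0.5) for _ in range(n_samples)]  # common

        def rms_noise(n_channels):
            out = []
            for i in range(n_samples):
                chans = [source_noise[i] + random.gauss(0, 1.0)  # independent
                         for _ in range(n_channels)]
                out.append(sum(chans) / n_channels)
            return math.sqrt(sum(x * x for x in out) / n_samples)

        for n in (1, 2, 4, 8):
            print(f"N = {n}: noise RMS = {rms_noise(n):.3f}")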

  8. Occupational radiation exposures in canada-1983

    International Nuclear Information System (INIS)

    Fujimoto, K.; Wilson, J.A.; Ashmore, J.P.; Grogan, D.

    1984-08-01

    This is the sixth in a series of annual reports on Occupational Radiation Exposures in Canada. The information is derived from the National Dose Registry of the Radiation Protection Bureau, Department of National Health and Welfare. As in the past this report presents by occupation: average yearly whole body doses by region, dose distributions, and variations of the average doses with time. The format has been changed to provide more detailed information regarding the various occupations. Statistical data concerning investigations of high exposures reported by the National Dosimetry Services are tabulated in summary form

  9. Occupational radiation exposures in Canada - 1982

    International Nuclear Information System (INIS)

    Fujimoto, K.R.; Wilson, J.A.; Ashmore, J.P.; Grogan, D.

    1983-12-01

    This report is the fifth in a series of annual reports in Occupational Radiation Exposures in Canada. The data is derived from the Radiation Protection Bureau's National Dose Registry which contains dose records for radiation workers. The report presents average yearly doses by region and occupational category, dose distributions, and variation of average doses with time. Statistical data concerning investigations of high exposures reported by the National Dosimetry Services are included, and individual cases are briefly summarized where the maximum permissible dose is exceeded

  10. Occupational radiation exposures in Canada - 1979

    International Nuclear Information System (INIS)

    Ashmore, J.P.; Fujimoto, K.R.; Wilson, J.A.; Grogan, D.

    1980-12-01

    This report is the second in a series of annual reports on Occupational Radiation Exposures in Canada. The data is derived from the Radiation Protection Bureau's National Dose Registry which includes dose records for radiation workers in Canada. The report presents average yearly doses by region and occupational category, dose distributions, and variation of average doses with time. Statistical data concerning investigations of high exposures are included and individual cases are briefly summarized where the maximum permissible dose is exceeded. The 1979 data indicate that the gradually decreasing trend of the last two decades may be changing. In a number of areas the overall average doses and the averages for some job categories have increased over the corresponding values for 1977 and 1978

  11. [Algorithm for taking into account the average annual background of air pollution in the assessment of health risks].

    Science.gov (United States)

    Fokin, M V

    2013-01-01

    Assessing health risks from air pollution caused by emissions from industrial facilities without accounting for the average annual background of air pollution does not comply with sanitary legislation. However, the Russian Federal Service for Hydrometeorology and Environmental Monitoring issues official background certificates only for the limited number of areas covered by full-program observations at stationary monitoring points. Questions of accounting for the average annual background of air pollution in the evaluation of health risks from exposure to industrial emissions are considered.

  12. Exposure to background radiation in Australia

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, S.B. [Australian Radiation Lab., Melbourne, VIC (Australia)

    1997-12-31

    The average effective dose received by the Australian population is estimated to be approximately 1.8 mSv/year. One half of this exposure arises from terrestrial radiation and cosmic rays; the remainder comes from radionuclides within the body and from inhalation of radon progeny. This paper reviews a number of research programmes carried out by the Australian Radiation Laboratory to study radiation exposure from natural background, particularly in the workplace, and illustrates approaches to the quantification and management of exposure to natural radiation. The average radiation doses to the Australian population are relatively low; the average annual radon concentration ranged from 6 Bq m^-3 in Queensland to 16 Bq m^-3 in the Australian Capital Territory (ACT). Of more importance is the emerging issue of exposure to elevated background radiation in the workplace. Two situations are presented: the radiation exposure of air crews and of show cave tour guides. Annual doses of up to 3.8 mSv were estimated for international crew members, while the highest estimate for show cave tour guides was 9 mSv per year. 9 refs., 2 tabs., 4 figs.

  13. Exposure to background radiation in Australia

    International Nuclear Information System (INIS)

    Solomon, S.B.

    1997-01-01

    The average effective dose received by the Australian population is estimated to be approximately 1.8 mSv/year. One half of this exposure arises from terrestrial radiation and cosmic rays; the remainder comes from radionuclides within the body and from inhalation of radon progeny. This paper reviews a number of research programmes carried out by the Australian Radiation Laboratory to study radiation exposure from natural background, particularly in the workplace, and illustrates approaches to the quantification and management of exposure to natural radiation. The average radiation doses to the Australian population are relatively low; the average annual radon concentration ranged from 6 Bq m^-3 in Queensland to 16 Bq m^-3 in the Australian Capital Territory (ACT). Of more importance is the emerging issue of exposure to elevated background radiation in the workplace. Two situations are presented: the radiation exposure of air crews and of show cave tour guides. Annual doses of up to 3.8 mSv were estimated for international crew members, while the highest estimate for show cave tour guides was 9 mSv per year

  14. Eighth annual occupational radiation exposure report, 1975

    International Nuclear Information System (INIS)

    Brooks, B.G.

    1976-10-01

    This is a report by the U.S. Nuclear Regulatory Commission on the operation of the Commission's centralized repository of personnel occupational radiation exposure information. Annual reports were received from 387 covered licensees indicating that some 78,713 individuals, having an average exposure of 0.36 rems, were monitored for exposure to radiation during 1975 and that 21,601 individuals terminated their employment or work assignment with covered licensees in 1975. The number of personnel overexposures reported in 1975 decreased from previous years. The most significant overexposures which occurred in 1975 are summarized

  15. Development of retrospective quantitative and qualitative job-exposure matrices for exposures at a beryllium processing facility.

    Science.gov (United States)

    Couch, James R; Petersen, Martin; Rice, Carol; Schubauer-Berigan, Mary K

    2011-05-01

    To construct a job-exposure matrix (JEM) for an Ohio beryllium processing facility between 1953 and 2006 and to evaluate temporal changes in airborne beryllium exposures. Quantitative area- and breathing-zone-based exposure measurements of airborne beryllium were made between 1953 and 2006 and used by plant personnel to estimate daily weighted average (DWA) exposure concentrations for sampled departments and operations. These DWA measurements were used to create a JEM with 18 exposure metrics, which was linked to the plant cohort consisting of 18,568 unique job, department and year combinations. The exposure metrics ranged from quantitative metrics (annual arithmetic/geometric average DWA exposures, maximum DWA and peak exposures) to descriptive qualitative metrics (chemical beryllium species and physical form) to qualitative assignment of exposure to other risk factors (yes/no). Twelve collapsed job titles with long-term consistent industrial hygiene samples were evaluated using regression analysis for time trends in DWA estimates. Annual arithmetic mean DWA estimates (overall plant-wide exposures including administration, non-production, and production estimates) for the data by decade ranged from a high of 1.39 μg/m^3 in the 1950s to a low of 0.33 μg/m^3 in the 2000s. Of the 12 jobs evaluated for temporal trend, the average arithmetic DWA mean was 2.46 μg/m^3 and the average geometric mean DWA was 1.53 μg/m^3. After the DWA calculations were log-transformed, 11 of the 12 had a statistically significant (p < 0.05) decrease in reported exposure over time. The constructed JEM successfully differentiated beryllium exposures across jobs and over time. This is the only quantitative JEM containing exposure estimates (average and peak) for the entire plant history.
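
    Two of the building blocks described above, the daily weighted average and the log-linear test for a temporal trend, can be sketched as follows. All concentrations, hours and annual values are hypothetical, not the facility's data:

        # Minimal sketch: (1) a daily weighted average (DWA) exposure from
        # task-level samples; (2) a log-linear trend in annual DWA estimates.
        import math

        # (beryllium concentration ug/m^3, hours at task) for one worker-day
        tasks = [(2.5, 3.0), (0.8, 4.0), (6.0, 1.0)]
        dwa = sum(c * h for c, h in tasks) / 8.0          # 8-h day
        print(f"DWA = {dwa:.2f} ug/m^3")

        # annual arithmetic-mean DWAs for one job title (hypothetical)
        years = list(range(1955, 2005, 5))
        annual_dwa = [3.1, 2.6, 2.8, 1.9, 1.6, 1.2, 0.9, 0.8, 0.5, 0.4]

        # least-squares slope of log(DWA) vs year: negative = declining exposure
        n = len(years)
        xbar = sum(years) / n
        ybar = sum(math.log(d) for d in annual_dwa) / n
        slope = (sum((x - xbar) * (math.log(d) - ybar)
                     for x, d in zip(years, annual_dwa))
                 / sum((x - xbar) ** 2 for x in years))
        print(f"trend: {100 * (math.exp(slope) - 1):.1f}% change per year")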

  16. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
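
    The variance decomposition that the HBMA tree generalizes can be seen in miniature in plain Bayesian model averaging. The sketch below uses made-up log-evidences and per-model predictions for three hypothetical candidate models; the HBMA method itself layers this decomposition hierarchically over each uncertain model component.

```python
import numpy as np

# Hypothetical log-evidences and predictions for three candidate models
log_evidence = np.array([-120.3, -118.9, -121.7])
pred_mean = np.array([4.2, 3.8, 4.6])    # each model's point prediction
pred_var = np.array([0.30, 0.25, 0.40])  # each model's within-model variance

# Posterior model probabilities under a uniform prior, computed stably
w = np.exp(log_evidence - log_evidence.max())
w /= w.sum()

bma_mean = np.sum(w * pred_mean)                       # model-averaged prediction
within_var = np.sum(w * pred_var)                      # averaged within-model variance
between_var = np.sum(w * (pred_mean - bma_mean) ** 2)  # spread between models
total_var = within_var + between_var

print(w.round(3), f"mean={bma_mean:.3f}",
      f"within={within_var:.3f}", f"between={between_var:.3f}",
      f"total={total_var:.3f}")
```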

  17. Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.

    Science.gov (United States)

    Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D

    2018-04-19

    The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013 to June 2016. The ANN was constructed using temporal (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018 ± 0.002; validation r = 0.8899 ± 0.005; testing r = 0.8940 ± 0.006). We were able to successfully predict trauma volume, emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10) and a mean ISS score of 15.99 (actual: 13.12); see figure 3. This may prove useful for predicting trauma needs across the system and for hospital administration when allocating limited resources. Level III STUDY TYPE: Prognostic/Epidemiological.
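
    scikit-learn offers no Levenberg-Marquardt trainer, so the sketch below stands in with L-BFGS while keeping the paper's topology of one hidden layer of 10 sigmoid units; the feature matrix and daily counts are synthetic stand-ins, not the TRACS/NOAA data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the paper's features: hour, day of week,
# daily high temperature, active-precipitation flag.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 24, 1096),   # hour
    rng.integers(0, 7, 1096),    # day of week
    rng.normal(20, 8, 1096),     # daily high (deg C)
    rng.integers(0, 2, 1096),    # precipitation flag
])
y = 5 + 0.2 * X[:, 2] + rng.normal(0, 2, 1096)  # fake daily trauma counts

# One hidden layer of 10 logistic (sigmoid) units, echoing the paper's
# topology; L-BFGS replaces the unavailable Levenberg-Marquardt solver.
net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                   solver="lbfgs", max_iter=2000, random_state=0)
scores = cross_val_score(net, X, y, cv=10, scoring="r2")
print(f"mean 10-fold CV R^2: {scores.mean():.3f}")
```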

  18. Exploring JLA supernova data with improved flux-averaging technique

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Shuang; Wen, Sixiang; Li, Miao, E-mail: wangshuang@mail.sysu.edu.cn, E-mail: wensx@mail2.sysu.edu.cn, E-mail: limiao9@mail.sysu.edu.cn [School of Physics and Astronomy, Sun Yat-Sen University, University Road (No. 2), Zhuhai (China)

    2017-03-01

    In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the figure of merit (FoM) criterion and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) The best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to a specific DE parameterization. (2) Flux-averaging JLA samples at z_cut ≥ 0.4 will yield tighter DE constraints than the case without using FA. (3) Using FA can significantly reduce the redshift evolution of β. (4) The best FA recipe favors a larger fractional matter density Ω_m. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
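
    A minimal version of the flux-averaging recipe reads roughly as below; the binning scheme is simplified, covariance propagation (which the real analysis requires) is omitted, and the function name and mock data are ours, not the paper's.

```python
import numpy as np

def flux_average(z, mu, z_cut=0.6, dz=0.06):
    """Flux-average distance moduli above z_cut (simplified FA recipe).

    SNe below z_cut are kept as-is; above it, mu is converted to a
    relative flux f ~ 10**(-0.4*mu), fluxes are averaged in redshift
    bins of width dz, and each bin mean is converted back to an
    effective mu at the bin's mean redshift.
    """
    keep = z < z_cut
    z_out, mu_out = list(z[keep]), list(mu[keep])
    zb, fb = z[~keep], 10 ** (-0.4 * mu[~keep])
    if zb.size:
        edges = np.arange(z_cut, zb.max() + dz, dz)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (zb >= lo) & (zb < hi)
            if sel.any():
                z_out.append(zb[sel].mean())
                mu_out.append(-2.5 * np.log10(fb[sel].mean()))
    return np.array(z_out), np.array(mu_out)

z = np.linspace(0.01, 1.3, 740)
mu = 5 * np.log10((1 + z) * 3000 * z) + 25   # crude mock distance moduli
zf, muf = flux_average(z, mu)
print(len(z), "SNe ->", len(zf), "points after flux averaging")
```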

  19. Radial behavior of the average local ionization energies of atoms

    International Nuclear Information System (INIS)

    Politzer, P.; Murray, J.S.; Grice, M.E.; Brinck, T.; Ranganathan, S.

    1991-01-01

    The radial behavior of the average local ionization energy Ī(r) has been investigated for the atoms He–Kr, using ab initio Hartree–Fock atomic wave functions. Ī(r) is found to decrease in a stepwise manner with the inflection points serving effectively to define boundaries between electronic shells. There is a good inverse correlation between polarizability and the ionization energy in the outermost region of the atom, suggesting that Ī(r) may be a meaningful measure of local polarizabilities in atoms and molecules

  20. Average-case analysis of incremental topological ordering

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated...... experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime of under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce...

  1. Fast Decentralized Averaging via Multi-scale Gossip

    Science.gov (United States)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n log ε⁻¹) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
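
    For intuition, plain randomized gossip (the baseline that Multi-scale Gossip improves on) can be sketched in a few lines; this toy version lets any two nodes exchange, ignoring the geometric-graph constraint and the hierarchical decomposition of the actual algorithm.

```python
import random

def gossip_average(values, rounds=5000, seed=0):
    """Plain pairwise randomized gossip: two random nodes repeatedly
    replace their values with the pair's mean, so all values converge
    to the global average while the sum is conserved."""
    random.seed(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i, j = random.sample(range(n), 2)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

vals = [float(k) for k in range(10)]
print(sum(vals) / len(vals), gossip_average(vals)[:3])  # both near 4.5
```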

  2. Edgeworth expansion for the pre-averaging estimator

    DEFF Research Database (Denmark)

    Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro

    In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second order expansion for the joint density of the estimators of quadratic variation and its...... asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions....

  3. Analysis of nonlinear systems using ARMA [autoregressive moving average] models

    International Nuclear Information System (INIS)

    Hunter, N.F. Jr.

    1990-01-01

    While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and model testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models, and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs
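
    In Python, fitting a linear ARMA model of the kind reviewed here is routine; the sketch below simulates an ARMA(2,1)-like response and recovers its coefficients with statsmodels, purely as an illustration of the model class (the paper's nonlinear ARMA extensions go beyond this).

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate a lightly damped vibration-like response: an AR(2) system
# driven by MA(1)-coloured noise. Parameters are illustrative only.
rng = np.random.default_rng(1)
e = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = 1.5 * y[t-1] - 0.75 * y[t-2] + e[t] + 0.4 * e[t-1]

# Fit an ARMA(2,1) model (ARIMA with no differencing) and predict ahead
model = ARIMA(y, order=(2, 0, 1)).fit()
print(model.params)             # AR, MA coefficients and noise variance
print(model.forecast(steps=5))  # short-horizon response prediction
```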

  4. Application of autoregressive moving average model in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1993-01-01

    The application of an autoregressive (AR) model to estimating noise measurements has achieved many successes in reactor noise analysis in the last ten years. The physical processes that take place in the nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently more correct results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of the generalised Yule-Walker equations is derived from the equation of an ARMA model, then a method for its solution is given. Numerical results show the applications of the method proposed. (author)
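
    The ordinary Yule-Walker system, of which the paper derives a generalised ARMA version, can be solved directly from sample autocovariances; the sketch below handles only the AR special case, as an illustration under our own simplifications rather than the author's method.

```python
import numpy as np

def yule_walker_ar(x, p):
    """Solve the ordinary Yule-Walker equations for an AR(p) model.

    Builds sample autocovariances r_0..r_p, assembles the Toeplitz
    system R phi = r, and solves for the AR coefficients and the
    innovation variance. The generalised ARMA system adds MA terms.
    """
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n-k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - phi @ r[1:]
    return phi, sigma2

rng = np.random.default_rng(0)
e = rng.normal(size=5000)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.6 * x[t-1] - 0.3 * x[t-2] + e[t]
print(yule_walker_ar(x, 2))  # coefficients near (0.6, -0.3)
```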

  5. Effect of random edge failure on the average path length

    Energy Technology Data Exchange (ETDEWEB)

    Guo Dongchao; Liang Mangui; Li Dandan; Jiang Zhongyuan, E-mail: mgliang58@gmail.com, E-mail: 08112070@bjtu.edu.cn [Institute of Information Science, Beijing Jiaotong University, 100044, Beijing (China)

    2011-10-14

    We study the effect of random removal of edges on the average path length (APL) in a large class of uncorrelated random networks in which vertices are characterized by hidden variables controlling the attachment of edges between pairs of vertices. A formula for approximating the APL of networks suffering random edge removal is derived first. Then, the formula is confirmed by simulations for classical ER (Erdős–Rényi) random graphs, BA (Barabási–Albert) networks, networks with exponential degree distributions as well as random networks with asymptotic power-law degree distributions with exponent α > 2. (paper)
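
    Such approximation formulas are easy to sanity-check by brute force; the sketch below (using networkx, our choice rather than the paper's) removes a random fraction of edges and measures the APL of the surviving giant component.

```python
import random
import networkx as nx

def apl_after_edge_removal(G, frac, seed=0):
    """Average path length of the giant component after removing a
    random fraction `frac` of edges; a brute-force check of the kind
    of approximation formula derived in the paper."""
    random.seed(seed)
    H = G.copy()
    removed = random.sample(list(H.edges()), int(frac * H.number_of_edges()))
    H.remove_edges_from(removed)
    giant = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.average_shortest_path_length(giant)

G = nx.erdos_renyi_graph(500, 0.02, seed=1)
print(apl_after_edge_removal(G, 0.0), apl_after_edge_removal(G, 0.3))
```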

  6. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    OpenAIRE

    Samir Khaled Safi

    2014-01-01

    The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases: firstly, when the disturbance terms follow the general covariance matrix structure Cov(w_i, w_j) = S with s_ij ≠ 0 ∀ i ≠ j; secondly, when the diagonal elements of S are not all identical but s_ij = 0 ∀ i ≠ j, i.e. S = diag(s_11, s_22, …

  7. Nongeostrophic theory of zonally averaged circulation. I - Formulation

    Science.gov (United States)

    Tung, Ka Kit

    1986-01-01

    A nongeostrophic theory of zonally averaged circulation is formulated using the nonlinear primitive equations (mass conservation, thermodynamics, and zonal momentum) on a sphere. The relationship between the mean meridional circulation and diabatic heating rate is studied. Differences between results of nongeostropic theory and the geostrophic formulation concerning the role of eddy forcing of the diabatic circulation and the nonlinear nearly inviscid limit versus the geostrophic limit are discussed. Consideration is given to the Eliassen-Palm flux divergence, the Eliassen-Palm pseudodivergence, the nonacceleration theorem, and the nonlinear nongeostrophic Taylor relationship.

  8. Average methods and their applications in Differential Geometry I

    OpenAIRE

    Vincze, Csaba

    2013-01-01

    In Minkowski geometry the metric features are based on a compact convex body containing the origin in its interior. This body works as a unit ball, with its boundary formed by the unit vectors. Using one-homogeneous extension we have a so-called Minkowski functional to measure the length of vectors. The half of its square is called the energy function. Under some regularity conditions we can introduce an average Euclidean inner product by integrating the Hessian matrix of the energy function o...

  9. Matrix product approach for the asymmetric random average process

    International Nuclear Information System (INIS)

    Zielen, F; Schadschneider, A

    2003-01-01

    We consider the asymmetric random average process which is a one-dimensional stochastic lattice model with nearest-neighbour interaction but continuous and unbounded state variables. First, the explicit functional representations, so-called beta densities, of all local interactions leading to steady states of product measure form are rigorously derived. This also completes an outstanding proof given in a previous publication. Then we present an alternative solution for the processes with factorized stationary states by using a matrix product ansatz. Due to continuous state variables we obtain a matrix algebra in the form of a functional equation which can be solved exactly
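
    A quick numerical feel for the model: in one common parallel-update variant (a simplification of the process studied here), every site keeps a uniform random fraction of its mass and ships the rest to its right neighbour. The sketch below conserves total mass and relaxes toward a factorized stationary profile.

```python
import numpy as np

def arap_step(x, rng):
    """One parallel update of an asymmetric random average process on
    a periodic ring: site i keeps a fraction r_i of its mass and sends
    the remainder (1 - r_i) x_i to its right neighbour."""
    r = rng.uniform(size=x.size)
    kept = r * x
    shipped = np.roll((1 - r) * x, 1)  # mass arriving from the left
    return kept + shipped

rng = np.random.default_rng(0)
x = np.ones(1000)
for _ in range(10000):
    x = arap_step(x, rng)
print(x.mean(), x.var())  # mean stays 1: total mass is conserved
```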

  10. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated due to modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced for the aperture distribution due to modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. Separable Taylor distribution with nbar=4 and 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E
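
    The Monte Carlo tolerance step mentioned above can be mimicked crudely: perturb element amplitudes and phases, form the array factor, and track the peak sidelobe. The sketch below uses a generic Hamming taper, half-wavelength spacing, and an arbitrary main-beam exclusion region; it is not the paper's Taylor distribution or array architecture.

```python
import numpy as np

def average_sidelobe_db(excitations, n_trials=500, amp_sigma=0.05,
                        phase_sigma_deg=3.0, seed=0):
    """Monte Carlo estimate of sidelobe degradation for a linear array:
    perturb each element's amplitude and phase, evaluate the array
    factor over angle, and average the peak sidelobe level (dB)."""
    rng = np.random.default_rng(seed)
    n = len(excitations)
    theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
    phases = np.outer(np.arange(n), np.pi * np.sin(theta))  # d = lambda/2
    levels = []
    for _ in range(n_trials):
        a = excitations * (1 + amp_sigma * rng.normal(size=n))
        p = np.deg2rad(phase_sigma_deg) * rng.normal(size=n)
        af = np.abs((a * np.exp(1j * p)) @ np.exp(1j * phases))
        af_db = 20 * np.log10(af / af.max())
        # crude sidelobe proxy: everything outside a fixed main-beam region
        levels.append(af_db[np.abs(theta) > 0.15].max())
    return np.mean(levels)

print(f"average peak sidelobe: {average_sidelobe_db(np.hamming(32)):.1f} dB")
```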

  11. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  12. Glycogen with short average chain length enhances bacterial durability

    Science.gov (United States)

    Wang, Liang; Wise, Michael J.

    2011-09-01

    Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability, that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call that a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology will be assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.

  13. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

    Full Text Available Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, 2Eliassen Group, New London, CT, 3Outcomes Research Consultant, New York, NY, 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness) for mild vs moderate pain and the highest probability was 76.4% (on cold/heat) for mild vs severe pain. The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain

  14. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

    Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppress noise from the image is conducted by applying the interquartile range (IQR), which is one of the statistical methods used to detect outlier effect from a dataset. A window of size k×k was implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimation of the noisy pixels was obtained by local averaging. The essential...
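
    A minimal sketch of the approach, with two assumptions of ours: Tukey's 1.5×IQR fences as the outlier rule and mean-of-inliers replacement (the paper's exact thresholding may differ).

```python
import numpy as np

def iqr_denoise(img, k=3):
    """Sliding-window IQR filter: flag pixels outside
    [Q1 - 1.5*IQR, Q3 + 1.5*IQR] of their k x k neighbourhood as
    noisy and replace them with the local mean of the inlier pixels."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i+k, j:j+k]
            q1, q3 = np.percentile(win, [25, 75])
            lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
            if not (lo <= img[i, j] <= hi):
                inliers = win[(win >= lo) & (win <= hi)]
                out[i, j] = inliers.mean() if inliers.size else win.mean()
    return out

rng = np.random.default_rng(0)
img = rng.integers(100, 156, (32, 32)).astype(float)
img[5, 5] = 255.0  # inject one salt-noise pixel
print(img[5, 5], "->", iqr_denoise(img)[5, 5])
```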

  15. Updated precision measurement of the average lifetime of B hadrons

    CERN Document Server

    Abreu, P; Adye, T; Agasi, E; Ajinenko, I; Aleksan, Roy; Alekseev, G D; Alemany, R; Allport, P P; Almehed, S; Amaldi, Ugo; Amato, S; Andreazza, A; Andrieux, M L; Antilogus, P; Apel, W D; Arnoud, Y; Åsman, B; Augustin, J E; Augustinus, A; Baillon, Paul; Bambade, P; Barate, R; Barbi, M S; Barbiellini, Guido; Bardin, Dimitri Yuri; Baroncelli, A; Bärring, O; Barrio, J A; Bartl, Walter; Bates, M J; Battaglia, Marco; Baubillier, M; Baudot, J; Becks, K H; Begalli, M; Beillière, P; Belokopytov, Yu A; Benvenuti, Alberto C; Berggren, M; Bertrand, D; Bianchi, F; Bigi, M; Bilenky, S M; Billoir, P; Bloch, D; Blume, M; Blyth, S; Bolognese, T; Bonesini, M; Bonivento, W; Booth, P S L; Borisov, G; Bosio, C; Bosworth, S; Botner, O; Boudinov, E; Bouquet, B; Bourdarios, C; Bowcock, T J V; Bozzo, M; Branchini, P; Brand, K D; Brenke, T; Brenner, R A; Bricman, C; Brillault, L; Brown, R C A; Brückman, P; Brunet, J M; Bugge, L; Buran, T; Burgsmüller, T; Buschmann, P; Buys, A; Cabrera, S; Caccia, M; Calvi, M; Camacho-Rozas, A J; Camporesi, T; Canale, V; Canepa, M; Cankocak, K; Cao, F; Carena, F; Carroll, L; Caso, Carlo; Castillo-Gimenez, M V; Cattai, A; Cavallo, F R; Cerrito, L; Chabaud, V; Charpentier, P; Chaussard, L; Chauveau, J; Checchia, P; Chelkov, G A; Chen, M; Chierici, R; Chliapnikov, P V; Chochula, P; Chorowicz, V; Chudoba, J; Cindro, V; Collins, P; Contreras, J L; Contri, R; Cortina, E; Cosme, G; Cossutti, F; Crawley, H B; Crennell, D J; Crosetti, G; Cuevas-Maestro, J; Czellar, S; Dahl-Jensen, Erik; Dahm, J; D'Almagne, B; Dam, M; Damgaard, G; Dauncey, P D; Davenport, Martyn; Da Silva, W; Defoix, C; Deghorain, A; Della Ricca, G; Delpierre, P A; Demaria, N; De Angelis, A; de Boer, Wim; De Brabandere, S; De Clercq, C; La Vaissière, C de; De Lotto, B; De Min, A; De Paula, L S; De Saint-Jean, C; Dijkstra, H; Di Ciaccio, Lucia; Djama, F; Dolbeau, J; Dönszelmann, M; Doroba, K; Dracos, M; Drees, J; Drees, K A; Dris, M; Dufour, Y; Edsall, D M; Ehret, R; Eigen, G; Ekelöf, T J C; Ekspong, Gösta; Elsing, M; Engel, J P; Ershaidat, N; Erzen, B; Espirito-Santo, M C; Falk, E; Fassouliotis, D; Feindt, Michael; Fenyuk, A; Ferrer, A; Filippas-Tassos, A; Firestone, A; Fischer, P A; Föth, H; Fokitis, E; Fontanelli, F; Formenti, F; Franek, B J; Frenkiel, P; Fries, D E C; Frodesen, A G; Frühwirth, R; Fulda-Quenzer, F; Fuster, J A; Galloni, A; Gamba, D; Gandelman, M; García, C; García, J; Gaspar, C; Gasparini, U; Gavillet, P; Gazis, E N; Gelé, D; Gerber, J P; Gibbs, M; Gokieli, R; Golob, B; Gopal, Gian P; Gorn, L; Górski, M; Guz, Yu; Gracco, Valerio; Graziani, E; Grosdidier, G; Grzelak, K; Gumenyuk, S A; Gunnarsson, P; Günther, M; Guy, J; Hahn, F; Hahn, S; Hajduk, Z; Hallgren, A; Hamacher, K; Hao, W; Harris, F J; Hedberg, V; Henriques, R P; Hernández, J J; Herquet, P; Herr, H; Hessing, T L; Higón, E; Hilke, Hans Jürgen; Hill, T S; Holmgren, S O; Holt, P J; Holthuizen, D J; Hoorelbeke, S; Houlden, M A; Hrubec, Josef; Huet, K; Hultqvist, K; Jackson, J N; Jacobsson, R; Jalocha, P; Janik, R; Jarlskog, C; Jarlskog, G; Jarry, P; Jean-Marie, B; Johansson, E K; Jönsson, L B; Jönsson, P E; Joram, Christian; Juillot, P; Kaiser, M; Kapusta, F; Karafasoulis, K; Karlsson, M; Karvelas, E; Katsanevas, S; Katsoufis, E C; Keränen, R; Khokhlov, Yu A; Khomenko, B A; Khovanskii, N N; King, B J; Kjaer, N J; Klein, H; Klovning, A; Kluit, P M; Köne, B; Kokkinias, P; Koratzinos, M; Korcyl, K; Kourkoumelis, C; Kuznetsov, O; Kramer, P H; Krammer, Manfred; Kreuter, C; Kronkvist, I J; Krumshtein, Z; Krupinski, W; Kubinec, P; Kucewicz, W; Kurvinen, K L; 
Lacasta, C; Laktineh, I; Lamblot, S; Lamsa, J; Lanceri, L; Lane, D W; Langefeld, P; Last, I; Laugier, J P; Lauhakangas, R; Leder, Gerhard; Ledroit, F; Lefébure, V; Legan, C K; Leitner, R; Lemoigne, Y; Lemonne, J; Lenzen, Georg; Lepeltier, V; Lesiak, T; Liko, D; Lindner, R; Lipniacka, A; Lippi, I; Lörstad, B; Loken, J G; López, J M; Loukas, D; Lutz, P; Lyons, L; MacNaughton, J N; Maehlum, G; Maio, A; Malychev, V; Mandl, F; Marco, J; Marco, R P; Maréchal, B; Margoni, M; Marin, J C; Mariotti, C; Markou, A; Maron, T; Martínez-Rivero, C; Martínez-Vidal, F; Martí i García, S; Masik, J; Matorras, F; Matteuzzi, C; Matthiae, Giorgio; Mazzucato, M; McCubbin, M L; McKay, R; McNulty, R; Medbo, J; Merk, M; Meroni, C; Meyer, S; Meyer, W T; Michelotto, M; Migliore, E; Mirabito, L; Mitaroff, Winfried A; Mjörnmark, U; Moa, T; Møller, R; Mönig, K; Monge, M R; Morettini, P; Müller, H; Mundim, L M; Murray, W J; Muryn, B; Myatt, Gerald; Naraghi, F; Navarria, Francesco Luigi; Navas, S; Nawrocki, K; Negri, P; Neumann, W; Nicolaidou, R; Nielsen, B S; Nieuwenhuizen, M; Nikolaenko, V; Niss, P; Nomerotski, A; Normand, Ainsley; Novák, M; Oberschulte-Beckmann, W; Obraztsov, V F; Olshevskii, A G; Onofre, A; Orava, Risto; Österberg, K; Ouraou, A; Paganini, P; Paganoni, M; Pagès, P; Palka, H; Papadopoulou, T D; Papageorgiou, K; Pape, L; Parkes, C; Parodi, F; Passeri, A; Pegoraro, M; Peralta, L; Pernegger, H; Pernicka, Manfred; Perrotta, A; Petridou, C; Petrolini, A; Petrovykh, M; Phillips, H T; Piana, G; Pierre, F; Pimenta, M; Pindo, M; Plaszczynski, S; Podobrin, O; Pol, M E; Polok, G; Poropat, P; Pozdnyakov, V; Prest, M; Privitera, P; Pukhaeva, N; Pullia, Antonio; Radojicic, D; Ragazzi, S; Rahmani, H; Ratoff, P N; Read, A L; Reale, M; Rebecchi, P; Redaelli, N G; Regler, Meinhard; Reid, D; Renton, P B; Resvanis, L K; Richard, F; Richardson, J; Rídky, J; Rinaudo, G; Ripp, I; Romero, A; Roncagliolo, I; Ronchese, P; Ronjin, V M; Roos, L; Rosenberg, E I; Rosso, E; Roudeau, Patrick; Rovelli, T; Rückstuhl, W; Ruhlmann-Kleider, V; Ruiz, A; Rybicki, K; Saarikko, H; Sacquin, Yu; Sadovskii, A; Sajot, G; Salt, J; Sánchez, J; Sannino, M; Schimmelpfennig, M; Schneider, H; Schwickerath, U; Schyns, M A E; Sciolla, G; Scuri, F; Seager, P; Sedykh, Yu; Segar, A M; Seitz, A; Sekulin, R L; Shellard, R C; Siccama, I; Siegrist, P; Simonetti, S; Simonetto, F; Sissakian, A N; Sitár, B; Skaali, T B; Smadja, G; Smirnov, N; Smirnova, O G; Smith, G R; Solovyanov, O; Sosnowski, R; Souza-Santos, D; Spassoff, Tz; Spiriti, E; Sponholz, P; Squarcia, S; Stanescu, C; Stapnes, Steinar; Stavitski, I; Stichelbaut, F; Stocchi, A; Strauss, J; Strub, R; Stugu, B; Szczekowski, M; Szeptycka, M; Tabarelli de Fatis, T; Tavernet, J P; Chikilev, O G; Tilquin, A; Timmermans, J; Tkatchev, L G; Todorov, T; Toet, D Z; Tomaradze, A G; Tomé, B; Tonazzo, A; Tortora, L; Tranströmer, G; Treille, D; Trischuk, W; Tristram, G; Trombini, A; Troncon, C; Tsirou, A L; Turluer, M L; Tyapkin, I A; Tyndel, M; Tzamarias, S; Überschär, B; Ullaland, O; Uvarov, V; Valenti, G; Vallazza, E; Van der Velde, C; van Apeldoorn, G W; van Dam, P; Van Doninck, W K; Van Eldik, J; Vassilopoulos, N; Vegni, G; Ventura, L; Venus, W A; Verbeure, F; Verlato, M; Vertogradov, L S; Vilanova, D; Vincent, P; Vitale, L; Vlasov, E; Vodopyanov, A S; Vrba, V; Wahlen, H; Walck, C; Weierstall, M; Weilhammer, Peter; Weiser, C; Wetherell, Alan M; Wicke, D; Wickens, J H; Wielers, M; Wilkinson, G R; Williams, W S C; Winter, M; Witek, M; Woschnagg, K; Yip, K; Yushchenko, O P; Zach, F; Zaitsev, A; Zalewska-Bak, A; 
Zalewski, Piotr; Zavrtanik, D; Zevgolatakos, E; Zimin, N I; Zito, M; Zontar, D; Zuberi, R; Zucchelli, G C; Zumerle, G; Belokopytov, Yu; Charpentier, Ph; Gavillet, Ph; Gouz, Yu; Jarlskog, Ch; Khokhlov, Yu; Papadopoulou, Th D

    1996-01-01

    The measurement of the average lifetime of B hadrons using inclusively reconstructed secondary vertices has been updated using both an improved processing of previous data and additional statistics from new data. This has reduced the statistical and systematic uncertainties and gives τ_B = 1.582 ± 0.011 (stat.) ± 0.027 (syst.) ps. Combining this result with the previous result based on charged particle impact parameter distributions yields τ_B = 1.575 ± 0.010 (stat.) ± 0.026 (syst.) ps.

  16. Compilation of summary statistics for radiation worker exposure for the 200 Areas: 1978–1993

    International Nuclear Information System (INIS)

    Brown, R.C.

    1994-01-01

    This document provides estimates of average annual radiation worker exposures for the 200 Areas of the Hanford Site for various facilities. The period of exposures extends from calendar year 1978 through 1993. These estimates were extracted from annual dosimetry reports

  17. Natural background radiation exposures world-wide

    International Nuclear Information System (INIS)

    Bennett, B.G.

    1993-01-01

    The average radiation dose to the world's population from natural radiation sources has been assessed by UNSCEAR to be 2.4 mSv per year. The components of this exposure, methods of evaluation and, in particular, the variations in the natural background levels are presented in this paper. Exposures to cosmic radiation range from 0.26 mSv per year at sea level to 20 times more at an altitude of 6000 m. Exposures to cosmogenic radionuclides (³H, ¹⁴C) are relatively insignificant and vary little. The terrestrial radionuclides ⁴⁰K, ²³⁸U, and ²³²Th and the decay products of the latter two constitute the remainder of the natural radiation exposure. Wide variations in exposure occur for these components, particularly for radon and its decay products, which can accumulate to relatively high levels indoors. Unusually high exposures to uranium and thorium series radionuclides characterize the high natural background areas which occur in several localized regions in the world. Extreme values in natural radiation exposures have been estimated to range up to 100 times the average values. (author). 15 refs, 3 tabs

  18. On the average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1978-03-01

    Over 3000 hours of IMP-6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5 minute averages of B_Z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks than near midnight. The tail field projected in the solar magnetospheric equatorial plane deviates from the X axis due to flaring and solar wind aberration by an angle α = −0.9 y_SM − 1.7, where y_SM is in Earth radii and α is in degrees. After removing these effects, the Y component of the tail field is found to depend on interplanetary sector structure. During an away sector the B_Y component of the tail field is on average 0.5 gamma greater than that during a toward sector, a result that is true in both tail lobes and is independent of location across the tail

  19. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    Directory of Open Access Journals (Sweden)

    Samir Khaled Safi

    2014-02-01

    Full Text Available The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases: firstly, when the disturbance terms follow the general covariance matrix structure Cov(w_i, w_j) = S with s_ij ≠ 0 ∀ i ≠ j; secondly, when the diagonal elements of S are not all identical but s_ij = 0 ∀ i ≠ j, i.e. S = diag(s_11, s_22, …, s_tt). The forms of the explicit equations depend essentially on the moving average coefficients and the covariance structure of the disturbance terms.

  20. Eighth CW and High Average Power RF Workshop

    CERN Document Server

    2014-01-01

    We are pleased to announce the next Continuous Wave and High Average RF Power Workshop, CWRF2014, to take place at Hotel NH Trieste, Trieste, Italy from 13 to 16 May, 2014. This is the eighth in the CWRF workshop series and will be hosted by Elettra - Sincrotrone Trieste S.C.p.A. (www.elettra.eu). CWRF2014 will provide an opportunity for designers and users of CW and high average power RF systems to meet and interact in a convivial environment to share experiences and ideas on applications which utilize high-power klystrons, gridded tubes, combined solid-state architectures, high-voltage power supplies, high-voltage modulators, high-power combiners, circulators, cavities, power couplers and tuners. New ideas for high-power RF system upgrades and novel ways of RF power generation and distribution will also be discussed. CWRF2014 sessions will start on Tuesday morning and will conclude on Friday lunchtime. A visit to Elettra and FERMI will be organized during the workshop. ORGANIZING COMMITTEE (OC): Al...