WorldWideScience

Sample records for maximum average concentration

  1. Verification of average daily maximum permissible concentration of styrene in the atmospheric air of settlements based on the results of epidemiological studies of the children’s population

    Directory of Open Access Journals (Sweden)

    М.А. Zemlyanova

    2015-03-01

    Full Text Available We present materials on the verification of the average daily maximum permissible concentration of styrene in the atmospheric air of settlements, performed on the basis of our own in-depth epidemiological studies of the children’s population according to the principles of international risk assessment practice. It was established that children aged 4–7 years exposed to styrene at levels above 1.2 times the threshold level value for continuous exposure develop negative effects in the form of disorders of hormonal regulation, pigment exchange, antioxidative activity, cytolysis, immune reactivity and cytogenetic imbalance, which contribute to an increased incidence of diseases of the central nervous system, endocrine system, respiratory organs, digestive system and skin. Based on the proven cause-and-effect relationships between the biomarkers of negative effects and the styrene concentration in blood, it was demonstrated that the benchmark styrene concentration in blood is 0.002 mg/dm³. The justified value complies with and confirms the average daily styrene concentration in the air of settlements of 0.002 mg/m³ accepted in Russia (1 threshold level value for continuous exposure), which provides safety for the health of the population.

  2. The moving-window Bayesian maximum entropy framework: estimation of PM2.5 yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity.
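
    The key device in this record is local re-estimation: statistics are computed inside a window around each prediction point rather than once for the whole domain, so the spatial mean can drift across space. The sketch below illustrates only that moving-window idea, with simple inverse-distance weighting standing in for the authors' BME machinery; all names, the window radius and the toy data are illustrative assumptions.

    ```python
    # Minimal moving-window estimator (illustrative; NOT the authors' BME code).
    import numpy as np

    def moving_window_estimate(monitor_xy, monitor_vals, target_xy, radius_km):
        """Estimate a yearly-average field at target points from nearby monitors."""
        estimates = np.full(len(target_xy), np.nan)
        for i, p in enumerate(target_xy):
            d = np.hypot(*(monitor_xy - p).T)        # distances to all monitors
            in_window = d <= radius_km               # local (moving) window
            if not in_window.any():
                continue                             # no local data: leave NaN
            w = 1.0 / np.maximum(d[in_window], 1e-6) # inverse-distance weights
            estimates[i] = np.average(monitor_vals[in_window], weights=w)
        return estimates

    # Toy usage: 200 monitors with a non-stationary mean, 300 km window.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 1000, size=(200, 2))
    vals = 10 + 0.01 * xy[:, 0] + rng.normal(0, 1, 200)
    targets = np.array([[100.0, 100.0], [500.0, 500.0], [900.0, 900.0]])
    print(moving_window_estimate(xy, vals, targets, 300.0))
    ```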

  3. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  4. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  5. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or genome mutations. (VT) [de]

  6. Maximum Permissible Concentrations and Negligible Concentrations for pesticides

    NARCIS (Netherlands)

    Crommentuijn T; Kalf DF; Polder MD; Posthumus R; Plassche EJ van de; CSR

    1997-01-01

    Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) derived for a series of pesticides are presented in this report. These MPCs and NCs are used by the Ministry of Housing, Spatial Planning and the Environment (VROM) to set Environmental Quality Objectives. For some of the

  7. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns in India for the period 1981-2015 through a suitable seasonal autoregressive integrated moving average (SARIMA) model. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true of the minimum temperature series, so the two series are modelled separately. Candidate SARIMA models were chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model is selected for both the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are estimated by the maximum likelihood method, together with the standard errors of the residuals. The adequacy of the selected model is assessed by correlation diagnostics (ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and by normality diagnostics (kernel and normal density curves over the histogram and a Q-Q plot). Finally, monthly maximum and minimum temperature patterns of India for the next 3 years are forecast with the selected model.
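
    As a rough illustration of the workflow this abstract describes, the sketch below fits a SARIMA(1,0,0)×(0,1,1)₁₂ model to a synthetic log-transformed monthly series with statsmodels, checks the residuals with the Ljung-Box test, and produces a 3-year forecast. The data and settings are placeholders, not the study's.

    ```python
    # Hedged sketch of the SARIMA workflow (synthetic data, illustrative only).
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX
    from statsmodels.stats.diagnostic import acorr_ljungbox

    rng = np.random.default_rng(1)
    idx = pd.date_range("1981-01", periods=420, freq="MS")  # 1981-2015, monthly
    season = 10 * np.sin(2 * np.pi * idx.month / 12)        # toy seasonality
    temps = pd.Series(30 + season + rng.normal(0, 1, 420), index=idx)

    model = SARIMAX(np.log(temps),                          # log transform, as in the paper
                    order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
    res = model.fit(disp=False)                             # maximum likelihood fit
    print(res.summary().tables[1])

    print(acorr_ljungbox(res.resid, lags=[12, 24]))         # residual diagnostics
    print(np.exp(res.forecast(steps=36)).head())            # 3-year forecast
    ```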

  8. Measurement of average radon gas concentration at workplaces

    International Nuclear Information System (INIS)

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

    In this paper, results of measurements of average radon gas concentration at workplaces (schools, kindergartens and ventilated workplaces) are presented. It can be stated that one-month measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailings pond). Consequently, at workplaces where considerable seasonal changes in radon concentration are expected, measurements should run for 12 months. If this is not possible, the chosen six-month period should contain both summer and winter months. The average radon concentration during working hours can differ considerably from the average over the whole time in cases of frequent opening of doors and windows or use of artificial ventilation. (authors)

  9. Understanding coastal morphodynamic patterns from depth-averaged sediment concentration

    NARCIS (Netherlands)

    Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.

    This review highlights the important role of the depth-averaged sediment concentration (DASC) to understand the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand

  10. Accurate computations of monthly average daily extraterrestrial irradiation and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1985-12-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. They are generally calculated by solar scientists and engineers each time they are needed, often with approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations of these parameters have been made for all latitudes from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell–Stokes sunshine recorder are also computed and presented. These tables avoid the need for repetitive and approximate calculations and serve as a useful ready reference providing accurate values to solar energy scientists and engineers.
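
    For readers who want to reproduce such a table, the sketch below implements Spencer's series for the declination and the eccentricity correction factor, and from them the daily extraterrestrial irradiation H₀ and the maximum possible sunshine duration N. The constants are the commonly published ones, but treat this as an illustrative reimplementation rather than the paper's code.

    ```python
    # Daily H0 (Wh/m^2) and day length N (hours) from Spencer's series (illustrative).
    import numpy as np

    I_SC = 1367.0  # solar constant in W/m^2 (assumed value)

    def spencer_terms(day_of_year):
        g = 2 * np.pi * (day_of_year - 1) / 365.0            # day angle, radians
        decl = (0.006918 - 0.399912 * np.cos(g) + 0.070257 * np.sin(g)
                - 0.006758 * np.cos(2 * g) + 0.000907 * np.sin(2 * g)
                - 0.002697 * np.cos(3 * g) + 0.001480 * np.sin(3 * g))
        e0 = (1.000110 + 0.034221 * np.cos(g) + 0.001280 * np.sin(g)
              + 0.000719 * np.cos(2 * g) + 0.000077 * np.sin(2 * g))
        return decl, e0                                      # declination, eccentricity factor

    def h0_and_daylength(lat_deg, day_of_year):
        phi = np.radians(lat_deg)
        decl, e0 = spencer_terms(day_of_year)
        cos_ws = np.clip(-np.tan(phi) * np.tan(decl), -1.0, 1.0)  # handles polar day/night
        ws = np.arccos(cos_ws)                               # sunset hour angle, radians
        h0 = (24.0 / np.pi) * I_SC * e0 * (np.cos(phi) * np.cos(decl) * np.sin(ws)
                                           + ws * np.sin(phi) * np.sin(decl))
        return h0, 24.0 * ws / np.pi                         # N = (2/15) * ws in degrees

    print(h0_and_daylength(45.0, 172))                       # ~June 21 at 45 deg N
    ```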

  11. Scientific substantination of maximum allowable concentration of fluopicolide in water

    Directory of Open Access Journals (Sweden)

    Pelo I.М.

    2014-03-01

    Full Text Available Research was carried out to substantiate the maximum allowable concentration of fluopicolide in the water of water reservoirs. Methods of study: laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The results of fluopicolide's influence on the organoleptic properties of water and on the sanitary regimen of reservoirs for household purposes are given, and its subthreshold concentration in water by the sanitary-toxicological hazard index was calculated. The threshold concentration of the substance by the main hazard criteria was established, and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the fluopicolide threshold concentration in water by the organoleptic hazard index (limiting criterion: odour) is 0.15 mg/dm³; by the general sanitary hazard index (limiting criteria: impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification), 0.015 mg/dm³; the maximum non-effective concentration is 0.14 mg/dm³; and the maximum allowable concentration is 0.015 mg/dm³.

  12. Concentration fluctuations and averaging time in vapor clouds

    CERN Document Server

    Wilson, David J

    2010-01-01

    This book contributes to more reliable and realistic predictions by focusing on sampling times from a few seconds to a few hours. Its objectives include developing clear definitions of statistical terms, such as plume sampling time, concentration averaging time, receptor exposure time, and other terms often confused with each other or incorrectly specified in hazard assessments; identifying and quantifying situations for which there is no adequate knowledge to predict concentration fluctuations in the near-field, close to sources, and far downwind where dispersion is dominated by atmospheric t

  13. Table for monthly average daily extraterrestrial irradiation on horizontal surface and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1984-01-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal surface (H₀) and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. They are generally calculated by scientists each time they are needed, using approximate short-cut methods. Computations of these values have been made once and for all for latitudes from 60 deg. N to 60 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Values of the maximum possible sunshine duration as recorded on a Campbell–Stokes sunshine recorder are also computed and presented. These tables should avoid the need for repetitive and approximate calculations and serve as a useful ready reference for solar energy scientists and engineers. (author)

  14. Maximum permissible concentrations and negligible concentrations for antifouling substances. Irgarol 1051, dichlofluanid, ziram, chlorothalonil and TCMTB

    NARCIS (Netherlands)

    Wezel AP van; Vlaardingen P van; CSR

    2001-01-01

    This report presents maximum permissible concentrations and negligible concentrations that have been derived for various antifouling substances used as substitutes for TBT: Irgarol 1051, dichlofluanid, ziram, chlorothalonil and TCMTB.

  15. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
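
    The model-averaging step described here rests on standard information-criterion weights. A minimal sketch, with made-up log-likelihoods, of how AIC, AICc, BIC and the resulting Akaike weights would be computed:

    ```python
    # Information criteria and Akaike weights (hypothetical numbers, illustration only).
    import numpy as np

    def aic(loglik, k):
        return 2 * k - 2 * loglik

    def aicc(loglik, k, n):
        return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction

    def bic(loglik, k, n):
        return k * np.log(n) - 2 * loglik

    def akaike_weights(ic_values):
        delta = ic_values - np.min(ic_values)     # differences from the best model
        w = np.exp(-0.5 * delta)
        return w / w.sum()                        # normalized model weights

    logliks = np.array([-120.3, -118.9, -118.2])  # hypothetical candidate fits
    ks, n = np.array([2, 4, 6]), 200              # parameter counts, sample size
    print(akaike_weights(aicc(logliks, ks, n)))   # weights for averaging profiles
    ```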

  16. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  17. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call it disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...

  18. Maximum permissible concentration (MPC) values for spontaneously fissioning radionuclides

    International Nuclear Information System (INIS)

    Ford, M.R.; Snyder, W.S.; Dillman, L.T.; Watson, S.B.

    1976-01-01

    The radiation hazards involved in handling certain transuranic nuclides that exhibit spontaneous fission as a mode of decay were reassessed using recent advances in dosimetry and metabolic modeling. Maximum permissible concentration (MPC) values in air and water for occupational exposure (168 hr/week) were calculated for ²⁴⁴Pu, ²⁴⁶Cm, ²⁴⁸Cm, ²⁵⁰Cf, ²⁵²Cf, ²⁵⁴Cf, ²⁵⁴ᵐEs, ²⁵⁵Es, ²⁵⁴Fm, and ²⁵⁶Fm. The half-lives, branching ratios, and principal modes of decay of the parent-daughter chain members, down to a member that makes a negligible contribution to the dose, are given, and all daughters that make a significant contribution to the dose to body organs following inhalation or ingestion are included in the calculations. Dose commitments for body organs are also given

  19. Scale dependence of the average potential around the maximum in φ⁴ theories

    International Nuclear Information System (INIS)

    Tetradis, N.; Wetterich, C.

    1992-04-01

    The average potential describes the physics at a length scale k⁻¹ by averaging out the degrees of freedom with characteristic momenta larger than k. The dependence on k can be described by differential evolution equations. We solve these equations for the nonconvex part of the potential around the origin in φ⁴ theories, in the phase with spontaneous symmetry breaking. The average potential is real and approaches the convex effective potential in the limit k → 0. Our calculation is relevant for processes for which the shape of the potential at a given scale is important, such as tunneling phenomena or inflation. (orig.)

  20. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    International Nuclear Information System (INIS)

    Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie

    2009-01-01

    development of sound criteria for data interpretation when the exposure of organisms has exceeded the MTD. While the MTD approach is well established for oral, topical, inhalational or injection exposure routes in mammalian toxicology, we propose that for exposure of aquatic organisms via immersion, the term Maximum Tolerated Concentration (MTC) is more appropriate

  1. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  2. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  3. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data into molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  4. A discussion about maximum uranium concentration in digestion solution of U₃O₈ type uranium ore concentrate

    International Nuclear Information System (INIS)

    Xia Dechang; Liu Chao

    2012-01-01

    On the basis of a discussion of the influence of single factors on the maximum uranium concentration in digestion solution, the degree of influence of factors such as U content, H₂O content and the P/U mass ratio was compared and analyzed. The results indicate that the maximum uranium concentration in digestion solution is directly proportional to the U content: as the U content increases by 1%, the maximum uranium concentration in digestion solution increases by 4.8%-5.7%. It is inversely related to the H₂O content: the maximum uranium concentration in digestion solution decreases by 46.1-55.2 g/L as the H₂O content increases by 1%. It is likewise inversely related to the P/U mass ratio: the maximum uranium concentration in digestion solution decreases by 116.0-181.0 g/L as the P/U mass ratio increases by 0.1%. When the U content equals 62.5% and the influence of the P/U mass ratio is not considered, the maximum uranium concentration in digestion solution equals 1 578 g/L; when the P/U mass ratio equals 0.35%, the maximum uranium concentration decreases to 716 g/L, a decrease of 54.6%. The P/U mass ratio in U₃O₈ type uranium ore concentrate is therefore the main controlling factor. (authors)

  5. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Science.gov (United States)

    2010-07-01

    ... concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported... percent benzene). i = Individual batch of gasoline produced at the refinery or imported during the applicable averaging period. n = Total number of batches of gasoline produced at the refinery or imported...
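
    The formula itself is truncated in this excerpt. On the usual reading of such provisions it is a volume-weighted average over the n batches of the averaging period, B_avg = Σᵢ Vᵢ·Bᵢ / Σᵢ Vᵢ; the sketch below encodes that assumed reading and should be checked against the full text of 40 CFR 80.1238.

    ```python
    # Assumed volume-weighted reading of the truncated CFR formula (verify before use).
    def average_benzene(volumes_gal, benzene_vol_pct):
        """B_avg = sum(V_i * B_i) / sum(V_i) over all batches in the period."""
        total = sum(volumes_gal)
        return sum(v * b for v, b in zip(volumes_gal, benzene_vol_pct)) / total

    print(average_benzene([10_000, 25_000], [0.62, 1.10]))  # hypothetical batches
    ```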

  6. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, the initial vertical radioactivity distribution, the time after detonation, and the rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  7. 38 CFR 4.76a - Computation of average concentric contraction of visual fields.

    Science.gov (United States)

    2010-07-01

    ... concentric contraction of visual fields. 4.76a Section 4.76a Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS SCHEDULE FOR RATING DISABILITIES Disability Ratings The Organs of Special Sense § 4.76a Computation of average concentric contraction of visual fields. Table III—Normal Visual...

  8. Phantom and Clinical Study of Differences in Cone Beam Computed Tomographic Registration When Aligned to Maximum and Average Intensity Projection

    Energy Technology Data Exchange (ETDEWEB)

    Shirai, Kiyonori [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Nishiyama, Kinji, E-mail: sirai-ki@mc.pref.osaka.jp [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Katsuda, Toshizo [Department of Radiology, National Cerebral and Cardiovascular Center, Osaka (Japan); Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan)

    2014-01-01

    Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging from 1.7 mm to the superior side to 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error.

  9. Phantom and Clinical Study of Differences in Cone Beam Computed Tomographic Registration When Aligned to Maximum and Average Intensity Projection

    International Nuclear Information System (INIS)

    Shirai, Kiyonori; Nishiyama, Kinji; Katsuda, Toshizo; Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo

    2014-01-01

    Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging from 1.7 mm to the superior side to 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error

  10. Variation in the annual average radon concentration measured in homes in Mesa County, Colorado

    International Nuclear Information System (INIS)

    Rood, A.S.; George, J.L.; Langner, G.H. Jr.

    1990-04-01

    The purpose of this study is to examine the variability in the annual average indoor radon concentration. The TMC has been collecting annual average radon data for the past 5 years in 33 residential structures in Mesa County, Colorado. This interim report presents the data collected to date; the study is planned to continue. 62 refs., 3 figs., 12 tabs

  11. Parameterization of Time-Averaged Suspended Sediment Concentration in the Nearshore

    Directory of Open Access Journals (Sweden)

    Hyun-Doug Yoon

    2015-11-01

    Full Text Available To quantify the effect of wave breaking turbulence on sediment transport in the nearshore, the vertical distribution of time-averaged suspended sediment concentration (SSC) in the surf zone was parameterized in terms of the turbulent kinetic energy (TKE) at different cross-shore locations, including the bar crest, bar trough, and inner surf zone. Using data from a large-scale laboratory experiment, a simple relationship was developed between the time-averaged SSC and the time-averaged TKE. The vertical variation of the time-averaged SSC was fitted to an equation analogous to the turbulent dissipation rate term. At the bar crest, the proposed equation was slightly modified to incorporate the effect of near-bed sediment processes and yielded reasonable agreement. The parameterization yielded the best agreement at the bar trough, with a coefficient of determination R² ≥ 0.72 above the bottom boundary layer. The time-averaged SSC in the inner surf zone showed good agreement near the bed but poor agreement near the water surface, suggesting that a different sedimentation mechanism controls the SSC in the inner surf zone.

  12. Protocol for the estimation of average indoor radon-daughter concentrations: Second edition

    International Nuclear Information System (INIS)

    Langner, G.H. Jr.; Pacer, J.C.

    1988-05-01

    The Technical Measurements Center has developed a protocol which specifies the procedures to be used for determining indoor radon-daughter concentrations in support of Department of Energy remedial action programs. This document is the central part of the protocol and is to be used in conjunction with the individual procedure manuals. The manuals contain the information and procedures required to implement the proven methods for estimating average indoor radon-daughter concentration. Proven in this case means that these methods have been determined to provide reasonable assurance that the average radon-daughter concentration within a structure is either above, at, or below the standards established for remedial action programs. This document contains descriptions of the generic aspects of methods used for estimating radon-daughter concentration and provides guidance with respect to method selection for a given situation. It is expected that the latter section of this document will be revised whenever another estimation method is proven to be capable of satisfying the criteria of reasonable assurance and cost minimization. 22 refs., 6 figs., 3 tabs

  13. Risk-informed Analytical Approaches to Concentration Averaging for the Purpose of Waste Classification

    International Nuclear Information System (INIS)

    Esh, D.W.; Pinkston, K.E.; Barr, C.S.; Bradford, A.H.; Ridge, A.Ch.

    2009-01-01

    Nuclear Regulatory Commission (NRC) staff has developed a concentration averaging approach and guidance for the review of Department of Energy (DOE) non-HLW determinations. Although the approach was focused on this specific application, concentration averaging is generally applicable to waste classification and thus has implications for waste management decisions as discussed in more detail in this paper. In the United States, radioactive waste has historically been classified into various categories for the purpose of ensuring that the disposal system selected is commensurate with the hazard of the waste such that public health and safety will be protected. However, the risk from the near-surface disposal of radioactive waste is not solely a function of waste concentration but is also a function of the volume (quantity) of waste and its accessibility. A risk-informed approach to waste classification for near-surface disposal of low-level waste would consider the specific characteristics of the waste, the quantity of material, and the disposal system features that limit accessibility to the waste. NRC staff has developed example analytical approaches to estimate waste concentration, and therefore waste classification, for waste disposed in facilities or with configurations that were not anticipated when the regulation for the disposal of commercial low-level waste (i.e. 10 CFR Part 61) was developed. (authors)

  14. Field test analysis of concentrator photovoltaic system focusing on average photon energy and temperature

    Science.gov (United States)

    Husna, Husyira Al; Ota, Yasuyuki; Minemoto, Takashi; Nishioka, Kensuke

    2015-08-01

    The concentrator photovoltaic (CPV) system is unique and differs from the common flat-plate PV system. It uses multi-junction solar cells and Fresnel lenses to concentrate direct solar radiation onto the cells while tracking the sun throughout the day. Cell efficiency can exceed 40% under a high concentration ratio. In this study, we analyzed one year of environmental condition data from the University of Miyazaki, Japan, where a CPV system was installed. The performance ratio (PR) was used to describe the system's performance, while the average photon energy (APE) was used to describe the spectral distribution at the installation site. A circuit network simulator was used to simulate the CPV system's electrical characteristics under various environmental conditions. We found that the PR of the CPV system depends on the APE level rather than on cell temperature.

  15. Double-tailored nonimaging reflector optics for maximum-performance solar concentration.

    Science.gov (United States)

    Goldstein, Alex; Gordon, Jeffrey M

    2010-09-01

    A nonimaging strategy that tailors two mirror contours for concentration near the étendue limit is explored, prompted by solar applications where a sizable gap between the optic and absorber is required. Subtle limitations of this simultaneous multiple surface method approach are derived, rooted in the manner in which phase space boundaries can be tailored according to the edge-ray principle. The fundamental categories of double-tailored reflective optics are identified, only a minority of which can pragmatically offer maximum concentration at high collection efficiency. Illustrative examples confirm that acceptance half-angles as large as 30 mrad can be realized at a flux concentration of approximately 1000.

  16. Average concentrations of FSH and LH in seminal plasma as determined by radioimmunoassay

    International Nuclear Information System (INIS)

    Milbradt, R.; Linzbach, P.; Feller, H.

    1979-01-01

    In 322 males, 25 to 50 years of age, levels of LH and FSH were determined in seminal plasma by radioimmunoassay. Average values of 0.78 ng/ml for FSH and 3.95 ng/ml for LH were found. Sperm count and motility were not related to FSH levels, but were related to LH levels: a high sperm count corresponded to a high concentration of LH, and normal motility was associated with higher levels of LH than those associated with asthenozoospermia. With respect to the sperm count of a single patient or the average patient, it is suggested that the FSH/LH ratio is more meaningful than the LH level alone. (orig.) [de]

  17. Maximum solid concentrations of coal water slurries predicted by neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jun; Li, Yanchang; Zhou, Junhu; Liu, Jianzhong; Cen, Kefa

    2010-12-15

    Nonlinear back-propagation (BP) neural network models were developed to predict the maximum solid concentration of coal water slurry (CWS), a substitute for oil fuel, based on the physicochemical properties of 37 typical Chinese coals. The Levenberg-Marquardt algorithm was used to train five BP neural network models with different input factors. The data pretreatment method, learning rate and number of hidden neurons were optimized during training. It was found that the Hardgrove grindability index (HGI), moisture and coalification degree of the parent coal are three indispensable factors for predicting the CWS maximum solid concentration. Each BP neural network model gives a more accurate prediction than the traditional polynomial regression equation. The BP neural network model with the 3 input factors HGI, moisture and oxygen/carbon ratio gives the smallest mean absolute error, 0.40%, which is much lower than the 1.15% given by the traditional polynomial regression equation. (author)
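
    As an illustrative stand-in for the paper's BP networks, the sketch below trains a small multilayer perceptron on the three inputs the abstract singles out (HGI, moisture, oxygen/carbon ratio). The data are synthetic, and scikit-learn's L-BFGS solver replaces the Levenberg-Marquardt training used in the paper.

    ```python
    # Toy MLP analogue of the paper's BP model (synthetic data, illustrative only).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    X = np.column_stack([rng.uniform(40, 100, 200),     # HGI
                         rng.uniform(0.5, 12, 200),     # moisture, %
                         rng.uniform(0.05, 0.3, 200)])  # oxygen/carbon ratio
    y = 75 + 0.05 * X[:, 0] - 0.8 * X[:, 1] - 30 * X[:, 2] + rng.normal(0, 0.4, 200)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                                       max_iter=5000, random_state=0))
    model.fit(X, y)
    print(model.predict([[80.0, 3.0, 0.12]]))           # hypothetical coal sample
    ```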

  18. Evaluation of maximum radionuclide concentration from decay chains migration in aquifers

    International Nuclear Information System (INIS)

    Aquino Branco, O.E. de.

    1983-01-01

    The mathematical formulation of the mechanisms involved in the transport of contaminants in aquifers is presented, and the methodology employed is described. A method for calculating the maximum concentration of radionuclides from a decay chain migrating in groundwater is then proposed. As an example, the methodology is applied to a waste basin built to receive effluents from a hypothetical uranium ore mining and milling facility. (M.A.C.) [pt]

  19. ANALYSIS OF THE STATISTICAL BEHAVIOUR OF DAILY MAXIMUM AND MONTHLY AVERAGE RAINFALL ALONG WITH RAINY DAYS VARIATION IN SYLHET, BANGLADESH

    Directory of Open Access Journals (Sweden)

    G. M. J. HASAN

    2014-10-01

    Full Text Available Climate, one of the major controlling factors for the well-being of the inhabitants of the world, has been changing in accordance with natural forcing and man-made activities. Bangladesh, one of the most densely populated countries in the world, is under threat from climate change caused by excessive use or abuse of ecology and natural resources. This study examines rainfall patterns and their associated changes in the north-eastern part of Bangladesh, mainly Sylhet city, through statistical analysis of daily rainfall data for the period 1957-2006. A good correlation was observed between the monthly mean and the daily maximum rainfall. A linear regression analysis of the data was found to be significant for all months. Key statistical parameters, namely the mean values of the Coefficient of Variability (CV), Relative Variability (RV) and Percentage Inter-annual Variability (PIV), were studied and found to vary. Monthly, yearly and seasonal variations in the number of rainy days were also analysed to check for significant changes.
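
    Definitions of the variability statistics named above differ between authors, so the sketch below computes one plausible set (CV from the standard deviation, RV from the mean absolute deviation, PIV as the year-to-year percentage change) on synthetic annual rainfall totals, purely for illustration.

    ```python
    # One plausible reading of CV, RV and PIV (definitions assumed; synthetic data).
    import numpy as np

    rng = np.random.default_rng(3)
    monthly = rng.gamma(shape=2.0, scale=150.0, size=(50, 12))  # 50 years x 12 months
    annual = monthly.sum(axis=1)

    cv = 100 * annual.std(ddof=1) / annual.mean()               # coefficient of variability
    rv = 100 * np.mean(np.abs(annual - annual.mean())) / annual.mean()  # relative variability
    piv = 100 * np.abs(np.diff(annual)) / annual[:-1]           # inter-annual change, %

    print(f"CV={cv:.1f}%  RV={rv:.1f}%  mean PIV={piv.mean():.1f}%")
    ```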

  20. The maximum ground level concentration of air pollutant and the effect of plume rise on concentration estimates

    International Nuclear Information System (INIS)

    Mayhoub, A.B.; Azzam, A.

    1991-01-01

    The emission of an air pollutant from an elevated point source according to the Gaussian plume model is presented. An elementary theoretical treatment of both the highest possible ground-level concentration and the downwind distance at which this maximum occurs has been constructed for different stability classes. Modification of the effective release height was taken into consideration. An illustrative case study, namely the emission from the research reactor at Inchas, is presented. The results of these analytical treatments and of the derived semi-empirical formulae are discussed and presented in a few illustrative diagrams
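
    For orientation, the textbook Gaussian-plume relations behind such a treatment are sketched below; the paper's own semi-empirical formulae may differ in detail. The ground-level centerline concentration from a source of strength Q at effective height H in wind speed u is

    $$ C(x,0,0) = \frac{Q}{\pi u \sigma_y \sigma_z} \exp\!\left(-\frac{H^2}{2\sigma_z^2}\right), $$

    and, if the ratio σ_z/σ_y is roughly constant with downwind distance, the maximum occurs where σ_z = H/√2, giving

    $$ C_{\max} = \frac{2Q}{\pi e u H^2}\,\frac{\sigma_z}{\sigma_y}. $$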

  1. An improved procedure for determining grain boundary diffusion coefficients from averaged concentration profiles

    Science.gov (United States)

    Gryaznov, D.; Fleig, J.; Maier, J.

    2008-03-01

    Whipple's solution of the problem of grain boundary diffusion and Le Claire's relation, which is often used to determine grain boundary diffusion coefficients, are examined for a broad range of ratios of grain boundary to bulk diffusivities Δ and diffusion times t. Different sources of error in determining the grain boundary diffusivity (D_GB) when using Le Claire's relation are discussed. It is shown that nonlinearities of the diffusion profiles in ln C_av versus y^(6/5) plots and deviations from "Le Claire's constant" (-0.78) are the major error sources (C_av = averaged concentration, y = coordinate in the diffusion direction). An improved relation (replacing Le Claire's constant) is suggested for analyzing diffusion profiles, particularly suited for small diffusion lengths (short times) as often required in diffusion experiments on nanocrystalline materials.
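
    For context, the relation under examination is usually written in the form below (assuming the standard Le Claire expression for B-regime diffusion):

    $$ s\,\delta\,D_{GB} = 1.322\,\sqrt{\frac{D_v}{t}}\,\left(-\frac{\partial \ln \bar{C}}{\partial y^{6/5}}\right)^{-5/3}, $$

    where D_v is the bulk diffusivity, s the segregation factor and δ the boundary width. The numerical factor 1.322 follows from taking the reduced slope of the averaged profile equal to Le Claire's constant of −0.78, which is precisely the approximation the improved procedure replaces.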

  2. Soil and Water Assessment Tool model predictions of annual maximum pesticide concentrations in high vulnerability watersheds.

    Science.gov (United States)

    Winchell, Michael F; Peranginangin, Natalia; Srinivasan, Raghavan; Chen, Wenlin

    2018-05-01

    Recent national regulatory assessments of potential pesticide exposure of threatened and endangered species in aquatic habitats have led to increased need for watershed-scale predictions of pesticide concentrations in flowing water bodies. This study was conducted to assess the ability of the uncalibrated Soil and Water Assessment Tool (SWAT) to predict annual maximum pesticide concentrations in the flowing water bodies of highly vulnerable small- to medium-sized watersheds. The SWAT was applied to 27 watersheds, largely within the midwest corn belt of the United States, ranging from 20 to 386 km², and evaluated using consistent input data sets and an uncalibrated parameterization approach. The watersheds were selected from the Atrazine Ecological Exposure Monitoring Program and the Heidelberg Tributary Loading Program, both of which contain high temporal resolution atrazine sampling data from watersheds with exceptionally high vulnerability to atrazine exposure. The model performance was assessed based upon predictions of annual maximum atrazine concentrations in 1-d and 60-d durations, predictions critical in pesticide-threatened and endangered species risk assessments when evaluating potential acute and chronic exposure to aquatic organisms. The simulation results showed that for nearly half of the watersheds simulated, the uncalibrated SWAT model was able to predict annual maximum pesticide concentrations within a narrow range of uncertainty resulting from atrazine application timing patterns. An uncalibrated model's predictive performance is essential for the assessment of pesticide exposure in flowing water bodies, the majority of which have insufficient monitoring data for direct calibration, even in data-rich countries. In situations in which SWAT over- or underpredicted the annual maximum concentrations, the magnitude of the over- or underprediction was commonly less than a factor of 2, indicating that the model and uncalibrated parameterization

  3. Influence of the composition of radionuclide mixtures on the maximum permissible concentration

    International Nuclear Information System (INIS)

    Schillinger, K.; Schuricht, V.

    1975-08-01

    By dividing radionuclides according to their formation mechanisms, it is possible to assess the influence of separate partial mixtures on the maximum permissible concentration (MPC) of the total mixture without knowing exactly their contribution to the total activity. Calculations showed that the MPC of a total mixture of insoluble radionuclides, which may occur in all fields of the peaceful uses of nuclear energy, depends on the gastrointestinal tract as the critical organ and on the composition of the fission product mixture. The influence of fractionation on the MPC can be neglected in such a case, whereas for soluble radionuclides this is not possible

  4. Uncertainties of estimating average radon and radon decay product concentrations in occupied houses

    International Nuclear Information System (INIS)

    Ronca-Battista, M.; Magno, P.; Windham, S.

    1986-01-01

    Radon and radon decay product measurements made in up to 68 Butte, Montana homes over a period of 18 months were used to estimate the uncertainty in estimating long-term average radon and radon decay product concentrations from a short-term measurement. This analysis was performed in support of the development of radon and radon decay product measurement protocols by the Environmental Protection Agency (EPA). The results of six measurement methods were analyzed: continuous radon and working level monitors, radon progeny integrating sampling units, alpha-track detectors, and grab radon and radon decay product techniques. Uncertainties were found to decrease with increasing sampling time and to be smaller when measurements were conducted during the winter months. In general, radon measurements had a smaller uncertainty than radon decay product measurements. As a result of this analysis, the EPA measurement protocols specify that all measurements be made under closed-house (winter) conditions, and that sampling times of at least 24 hours be used when the measurement will be the basis for a decision about remedial action or long-term health risks. 13 references, 3 tables

  5. SU-E-T-174: Evaluation of the Optimal Intensity Modulated Radiation Therapy Plans Done On the Maximum and Average Intensity Projection CTs

    Energy Technology Data Exchange (ETDEWEB)

    Jurkovic, I [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States); Stathakis, S; Li, Y; Patel, A; Vincent, J; Papanikolaou, N; Mavroidis, P [Cancer Therapy and Research Center University of Texas Health Sciences Center at San Antonio, San Antonio, TX (United States)

    2014-06-01

    Purpose: To determine the difference in coverage between plans done on average intensity projection and maximum intensity projection CT data sets for lung patients, and to establish correlations between the factors influencing the coverage. Methods: For six lung cancer patients, 10 phases of equal duration through the respiratory cycle and the maximum and average intensity projections (MIP and AIP) were obtained from their 4DCT datasets. The MIP and AIP datasets had three GTVs delineated (GTVaip, delineated on AIP; GTVmip, delineated on MIP; and GTVfus, delineated on each of the 10 phases and summed). From each GTV, planning target volumes (PTVs) were then created by adding additional margins. For each of the PTVs an IMRT plan was developed on the AIP dataset. The plans were then copied to the MIP data set and recalculated. Results: The effective depths in AIP cases were significantly smaller than in MIP (p < 0.001). A Pearson correlation coefficient of r = 0.839 indicates a strong positive linear relationship between the average percentage difference in effective depths and the average PTV coverage on the MIP data set. The V20Gy of the involved lung depends on the PTV coverage. The relationship between the PTVaip mean CT number difference and the PTVaip coverage on the MIP data set gives r = 0.830; when the plans are produced on MIP and copied to AIP, r equals −0.756. Conclusion: The correlation between the AIP and MIP data sets indicates that the selection of the data set for developing the treatment plan affects the final outcome (cases with a high average percentage difference in effective depths between AIP and MIP should be calculated on AIP). The percentage of the lung volume receiving a higher dose depends on how well the PTV is covered, regardless of which data set the plan is done on.

  6. A comparison of muscle activity in concentric and counter movement maximum bench press.

    Science.gov (United States)

    van den Tillaar, Roland; Ettema, Gertjan

    2013-01-01

    The purpose of this study was to compare the kinematics and muscle activation patterns of the regular free-weight bench press (counter movement) with pure concentric lifts in the ascending phase of a successful one repetition maximum (1-RM) attempt in the bench press. Our aim was to evaluate whether diminishing potentiation could be the cause of the sticking region. Since diminishing potentiation cannot occur in pure concentric lifts, the occurrence of a sticking region in this type of muscle action would support the hypothesis that the sticking region is due to a poor mechanical position. Eleven male participants (age 21.9 ± 1.7 yrs, body mass 80.7 ± 10.9 kg, body height 1.79 ± 0.07 m) conducted 1-RM lifts in counter movement and in pure concentric bench presses in which kinematics and EMG activity were measured. In both conditions, a sticking region occurred; however, the start of the sticking region differed between the two bench presses. In addition, in four of six muscles, muscle activity was higher in the counter movement bench press than in the concentric one. Considering the muscle activity findings for the six muscles during the maximal lifts, it was concluded that the diminishing effect of force potentiation in the counter movement bench press, in combination with delayed muscle activation, is unlikely to explain the existence of the sticking region in a 1-RM bench press. Most likely, the sticking region is the result of a poor mechanical force position.

  7. Predicting long-term average concentrations of traffic-related air pollutants using GIS-based information

    Science.gov (United States)

    Hochadel, Matthias; Heinrich, Joachim; Gehring, Ulrike; Morgenstern, Verena; Kuhlbusch, Thomas; Link, Elke; Wichmann, H.-Erich; Krämer, Ursula

    Global regression models were developed to estimate individual levels of long-term exposure to traffic-related air pollutants. The models are based on data from a one-year measurement programme together with geographic data on traffic and population densities. This investigation is part of a cohort study on the impact of traffic-related air pollution on respiratory health, conducted at the westerly end of the Ruhr area in North-Rhine Westphalia, Germany. Concentrations of NO2, fine particle mass (PM2.5) and filter absorbance of PM2.5 as a marker for soot were measured at 40 sites spread throughout the study region. Fourteen-day samples were taken between March 2002 and March 2003 for each season and site. Annual average concentrations for the sites were determined after adjustment for temporal variation. Information on traffic counts on major roads, building densities and community population figures was collected in a geographical information system (GIS). This information was used to calculate different potential traffic-based predictors: (a) daily traffic flow and maximum traffic intensity within buffers of radii from 50 to 10 000 m and (b) distances to main roads and highways. NO2 concentration and PM2.5 absorbance were strongly correlated with the traffic-based variables. Linear regression prediction models involving predictors with radii of 50 to 1000 m were developed for the Wesel region, where most of the cohort members lived. They reached a model fit (R²) of 0.81 and 0.65 for NO2 and PM2.5 absorbance, respectively. Regression models for the whole area required larger spatial scales and reached R² = 0.90 and 0.82. Comparison of predicted values with NO2 measurements at independent public monitoring stations showed a satisfactory association (r = 0.66). PM2.5 concentration, however, was only slightly correlated with, and thus poorly predicted by, the traffic-based variables. GIS-based regression models offer a promising approach to assess individual levels of exposure to traffic-related air pollutants.
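
    A land-use-regression model of the kind described reduces, at its core, to an ordinary linear regression of measured annual concentrations on GIS-derived buffer predictors. A minimal, hypothetical sketch with synthetic data and invented coefficients:

    ```python
    # Minimal land-use-regression-style fit (hypothetical predictors and data).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    n_sites = 40
    X = np.column_stack([rng.lognormal(8, 1, n_sites),   # traffic flow, 50 m buffer
                         rng.lognormal(9, 1, n_sites),   # traffic flow, 250 m buffer
                         rng.uniform(0, 1, n_sites)])    # building density, 1000 m buffer
    no2 = 18 + 1.5e-3 * X[:, 0] + 4e-4 * X[:, 1] + 6 * X[:, 2] + rng.normal(0, 2, n_sites)

    lur = LinearRegression().fit(X, no2)
    print("R^2 =", round(lur.score(X, no2), 2))          # cf. 0.81-0.90 in the study
    ```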

  8. Average daily and annual courses of ²²²Rn concentration in some natural media

    International Nuclear Information System (INIS)

    Holy, K.; Bohm, R.; Polaskova, A.; Stelina, J.; Sykora, I.; Hola, O.

    1996-01-01

    Simultaneous measurements of the ²²²Rn concentration in the outdoor atmosphere of Bratislava and in the soil air over a one-year period have been made. Daily and seasonal variations of the ²²²Rn concentration in both media were found. Some attributes of these variations as well as the methods of measurement are presented in this work. (author). 17 refs., 6 figs.

  9. Association between average daily gain, faecal dry matter content and concentration of Lawsonia intracellularis in faeces

    DEFF Research Database (Denmark)

    Pedersen, Ken Steen; Skrubel, Rikke; Stege, Helle

    2012-01-01

    Background The objective of this study was to investigate the association between average daily gain and the number of Lawsonia intracellularis bacteria in faeces of growing pigs with different levels of diarrhoea. Methods A longitudinal field study (n = 150 pigs) was performed in a Danish herd...

  10. Sources Contributing to the Average Extracellular Concentration of Dopamine in the Nucleus Accumbens

    OpenAIRE

    Owesson-White, CA; Roitman, MF; Sombers, LA; Belle, AM; Keithley, RB; Peele, JL; Carelli, RM; Wightman, RM

    2012-01-01

    Mesolimbic dopamine neurons fire in both tonic and phasic modes resulting in detectable extracellular levels of dopamine in the nucleus accumbens (NAc). In the past, different techniques have targeted dopamine levels in the NAc to establish a basal concentration. In this study we used in vivo fast scan cyclic voltammetry (FSCV) in the NAc of awake, freely moving rats. The experiments were primarily designed to capture changes in dopamine due to phasic firing – that is, the measurement of dopa...

  11. Secondary poisoning of cadmium, copper and mercury: implications for the Maximum Permissible Concentrations and Negligible Concentrations in water, sediment and soil

    NARCIS (Netherlands)

    Smit CE; Wezel AP van; Jager T; Traas TP; CSR

    2000-01-01

    The impact of secondary poisoning on the Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) of cadmium, copper and mercury in water, sediment and soil have been evaluated. Field data on accumulation of these elements by fish, mussels and earthworms were used to derive

  12. Metallurgical source-contribution analysis of PM10 annual average concentration: A dispersion modeling approach in moravian-silesian region

    Directory of Open Access Journals (Sweden)

    P. Jančík

    2013-10-01

    The goal of the article is to present an analysis of the metallurgical industry's contribution to annual average PM10 concentrations in the Moravian-Silesian region, based on air pollution modelling in accordance with the Czech reference methodology SYMOS'97.

  13. The average concentrations of 226Ra and 210Pb in foodstuff cultivated in the Pocos de Caldas plateau

    International Nuclear Information System (INIS)

    Hollanda Vasconcellos, L.M. de.

    1984-01-01

    The average concentrations of 226 Ra and 210 Pb in vegetables cultivated in the Pocos de Caldas plateau (mainly potatoes, carrots, beans and corn) were determined, and the average soil-to-foodstuff transfer factors for both radionuclides were estimated. The total 226 Ra and 210 Pb content in the soil was determined by gamma spectrometry. The exchangeable fraction was obtained by the classical radon emanation procedure, and the 210 Pb was isolated by a radiochemical procedure and determined by radiometry of the beta emissions of its daughter 210 Bi with a Geiger-Muller counter. (M.A.C.) [pt

  14. 40 CFR Table C-1 to Subpart C of... - Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specification

    Science.gov (United States)

    2010-07-01

    Table C-1 to Subpart C of Part 53: Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specification (Protection of Environment, Reference Methods). [Only the table caption is recoverable from this record; an editorial note indicates that on June 22, 2010, table C-1 to subpart C was revised, effective Aug. 23, 2010.]

  15. How precise is the determination of the average radon concentration in buildings from measurements lasting only a few days

    International Nuclear Information System (INIS)

    Janik, M.; Loskiewicz, J.; Olko, P.; Swakon, J.

    1998-01-01

    Radon concentration in outdoor air and in buildings is highly variable, showing diurnal and seasonal variations. Long-term track-etch detector measurements lasting up to one year give the most precise one-year averages. It happens, however, that results are needed much sooner, e.g. for screening measurements. How long should we measure to obtain reliable results? We studied the problem of selecting a proper time interval on the basis of five long-term (ca. 30 days) measurements in Cracow using an AlphaGUARD ionization chamber detector. The mean radon concentration ranged from 543 to 1107 Bq/m 3 . It was found that the relative error of the k-day average decreased exponentially with a time constant of 4 days. We therefore recommend a minimum measuring time of four days (k = 4), and preferably six. (author)
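
    The reported exponential decay of the averaging error translates directly into a duration estimate. A minimal sketch, assuming the relative error of a 1-day average is 50% (an illustrative value, not from the paper) and decays with the reported 4-day time constant:

```python
# Sketch: if the relative error of a k-day radon average decays roughly
# exponentially with a 4-day time constant (as reported), estimate how the
# error shrinks with measurement duration. ERR_1DAY is an assumed value.
import math

TAU_DAYS = 4.0      # time constant reported in the study
ERR_1DAY = 0.50     # assumed relative error of a 1-day average (50%)

def relative_error(k_days: float) -> float:
    """Approximate relative error of a k-day average (k >= 1)."""
    return ERR_1DAY * math.exp(-(k_days - 1.0) / TAU_DAYS)

for k in (1, 2, 4, 6, 10):
    print(f"{k:2d}-day average -> ~{relative_error(k):.0%} relative error")
```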

  16. Analysis of compound parabolic concentrators and aperture averaging to mitigate fading on free-space optical links

    Science.gov (United States)

    Wasiczko, Linda M.; Smolyaninov, Igor I.; Davis, Christopher C.

    2004-01-01

    Free space optics (FSO) is one solution to the bandwidth bottleneck resulting from increased demand for broadband access. It is well known that atmospheric turbulence distorts the wavefront of a laser beam propagating through the atmosphere. This research investigates methods of reducing the effects of intensity scintillation and beam wander on the performance of free-space optical communication systems, characterizing the system enhancement obtained with either aperture-averaging techniques or nonimaging optics. Compound Parabolic Concentrators, nonimaging optics made famous by Winston and Welford, are inexpensive elements that may be easily integrated into intensity-modulation/direct-detection receivers to reduce fading caused by beam wander and spot breakup in the focal plane. Aperture averaging provides a methodology for quantifying the improvement a given receiver aperture diameter offers in averaging out optical scintillations over the received wavefront.
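
    For the aperture-averaging side, a commonly used weak-turbulence approximation (the Andrews plane-wave formula, an assumption here rather than the paper's own model) gives the factor by which a finite aperture suppresses scintillation:

```python
# Sketch: plane-wave aperture-averaging factor in weak turbulence, using the
# Andrews approximation A = [1 + 1.062 * k * D^2 / (4 L)]^(-7/6). The formula
# is an assumption here; the paper characterizes the averaging experimentally.
import math

def aperture_averaging_factor(D_m: float, L_m: float, wavelength_m: float) -> float:
    """Ratio of scintillation index for an aperture of diameter D to a point receiver."""
    k = 2.0 * math.pi / wavelength_m          # optical wavenumber
    return (1.0 + 1.062 * k * D_m**2 / (4.0 * L_m)) ** (-7.0 / 6.0)

# Example: 1550 nm link over 1 km with a 10 cm receive aperture.
print(aperture_averaging_factor(D_m=0.10, L_m=1000.0, wavelength_m=1550e-9))
```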

  17. Procedure for the characterization of radon potential in existing dwellings and to assess the annual average indoor radon concentration

    International Nuclear Information System (INIS)

    Collignan, Bernard; Powaga, Emilie

    2014-01-01

    Risk assessment due to radon exposure indoors is based on the annual average indoor radon activity concentration. To assess the radon exposure in a building, measurement is generally performed over at least two months during the heating period in order to be representative of the annual average value, because radon presence indoors can be highly variable over time. This measurement protocol is fairly reliable but may be limiting in radon risk management, particularly during a real estate transaction, owing to the duration of the measurement and the restriction of the measurement period. A previous field study defined a rapid methodology to characterize radon entry in dwellings. The objective of this study was, first, to test this methodology in various dwellings and assess its relevance with a daily test; and second, to use a ventilation model to assess numerically the air renewal of a building, indoor air quality throughout the year, and the annual average indoor radon activity concentration, based on local meteorological conditions, building characteristics and in-situ characterization of indoor pollutant emission laws. Experimental results obtained in thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of the annual average indoor radon activity concentration generally showed good agreement with measured values. These results are encouraging for allowing a procedure with a short measurement time to be used to characterize long-term radon potential in dwellings. - Highlights: • Test of a daily procedure to characterize radon potential in dwellings. • Numerical assessment of the annual radon concentration. • Procedure applied on thirteen dwellings, characterization generally satisfactory. • Procedure useful to manage radon risk in dwellings, for real

  18. Comparison of depth-averaged concentration and bed load flux sediment transport models of dam-break flow

    Directory of Open Access Journals (Sweden)

    Jia-heng Zhao

    2017-10-01

    This paper presents numerical simulations of dam-break flow over a movable bed. Two different mathematical models were compared: a fully coupled formulation of shallow water equations with erosion and deposition terms (a depth-averaged concentration flux model), and shallow water equations with a fully coupled Exner equation (a bed load flux model). Both models were discretized using the cell-centered finite volume method, and a second-order Godunov-type scheme was used to solve the equations. The numerical flux was calculated using a Harten, Lax, and van Leer approximate Riemann solver with the contact wave restored (HLLC). A novel slope source term treatment that considers the density change was introduced to the depth-averaged concentration flux model to obtain higher-order accuracy. A source term that accounts for the sediment flux was added to the bed load flux model to reflect the influence of sediment movement on the momentum of the water. In a one-dimensional test case, a sensitivity study on different model parameters was carried out. For the depth-averaged concentration flux model, Manning's coefficient and sediment porosity values showed an almost linear relationship with the bottom change, and for the bed load flux model, the sediment porosity was identified as the most sensitive parameter. The capabilities and limitations of both model concepts are demonstrated in a benchmark experimental test case dealing with dam-break flow over variable bed topography.

  19. Performance of a geostationary mission, geoCARB, to measure CO2, CH4 and CO column-averaged concentrations

    Directory of Open Access Journals (Sweden)

    I. N. Polonsky

    2014-04-01

    GeoCARB is a proposed instrument to measure column-averaged concentrations of CO2, CH4 and CO from geostationary orbit using reflected sunlight in near-infrared absorption bands of the gases. The scanning options, spectral channels and noise characteristics of geoCARB and two descope options are described. The accuracy of concentrations from geoCARB data is investigated using end-to-end retrievals; spectra at the top of the atmosphere in the geoCARB bands are simulated with realistic trace gas profiles, meteorology, aerosol, cloud and surface properties, and then the concentrations of CO2, CH4 and CO are estimated from the spectra after addition of noise characteristic of geoCARB. The sensitivity of the algorithm to aerosol, the prior distributions assumed for the gases and the meteorology are investigated. The contiguous spatial sampling and fine temporal resolution of geoCARB open the possibility of monitoring localised sources such as power plants. Simulations of emissions from a power plant with a Gaussian plume are conducted to assess the accuracy with which the emission strength may be recovered from geoCARB spectra. Scenarios for "clean" and "dirty" power plants are examined. It is found that a reliable estimate of the emission rate is possible, especially for power plants that have particulate filters, by averaging emission rates estimated from multiple snapshots of the CO2 field surrounding the plant. The result holds even in the presence of partial cloud cover.

  20. Procedure manual for the estimation of average indoor radon-daughter concentrations using the filtered alpha-track method

    International Nuclear Information System (INIS)

    George, J.L.

    1988-04-01

    One of the measurement needs of US Department of Energy (DOE) remedial action programs is the estimation of the annual-average indoor radon-daughter concentration (RDC) in structures. The filtered alpha-track method, using a 1-year exposure period, can be used to accomplish RDC estimations for the DOE remedial action programs. This manual describes the procedure used to obtain filtered alpha-track measurements and to derive average RDC estimates from the measurements. Appropriate quality-assurance and quality-control programs are also presented. The ''prompt'' alpha-track method of exposing monitors for 2 to 6 months during specific periods of the year is also briefly discussed in this manual. However, the prompt alpha-track method has been validated only for use in the Mesa County, Colorado, area. 3 refs., 3 figs

  1. [Estimation of maximum acceptable concentration of lead and cadmium in plants and their medicinal preparations].

    Science.gov (United States)

    Zitkevicius, Virgilijus; Savickiene, Nijole; Abdrachmanovas, Olegas; Ryselis, Stanislovas; Masteiková, Rūta; Chalupova, Zuzana; Dagilyte, Audrone; Baranauskas, Algirdas

    2003-01-01

    Heavy metals (lead, cadmium) are possible impurities whose quantities are restricted by maximum acceptable content levels. Various drug preparations (infusions, decoctions, tinctures, extracts, etc.) are produced from medicinal plants. The objective of this research was to study heavy metal (lead, cadmium) impurities in medicinal plants and some drug preparations. We investigated liquid extracts of the fruits of Crataegus monogyna Jacq. and herbs of Echinacea purpurea Moench., and tinctures of herbs of Leonurus cardiaca L. The raw materials were imported from Poland. Investigations were carried out in cooperation with the Laboratory of Anthropogenic Factors of the Institute for Biomedical Research. Amounts of lead and cadmium were established after "dry" mineralisation using a Perkin-Elmer Zeeman/3030 electrothermal atomic absorption spectrophotometer (ETG AAS/Zeeman). Assessment of the absorption capacity of cellular fibres showed that lead is retained most efficiently: only about 10.73% of the lead passes into tinctures and extracts, whereas cadmium passes more readily, at 49.63%. Herbs of Leonurus cardiaca L. are the best at holding back lead and cadmium; about 14.5% of the lead and cadmium passes into the tincture of herbs of Leonurus cardiaca L. From these measurements we estimated extraction factors for the heavy metals (lead, cadmium) in the liquid extracts of Crataegus monogyna Jacq. and Echinacea purpurea Moench. and the tincture of Leonurus cardiaca L. Taking into account the lead and cadmium extraction factors, the maximum acceptable daily intake and the daily consumption of the drugs, the amounts of heavy metals (lead, cadmium) do not exceed the allowable norms in fruits of Crataegus monogyna Jacq. or herbs of Leonurus cardiaca L. and Echinacea purpurea Moench.

  2. Analysis for average heat transfer empirical correlation of natural convection on the concentric vertical cylinder modelling of APWR

    International Nuclear Information System (INIS)

    Daddy Setyawan

    2011-01-01

    There are several passive safety systems in the APWR reactor design. One of them is a cooling system using natural air circulation over the surface of the concentric vertical cylinder containment wall. Since the performance of natural air circulation in the Passive Containment Cooling System (PCCS) is related to safety, the cooling characteristics of natural air circulation over the concentric vertical cylinder containment wall should be studied experimentally. This paper focuses on an experimental study of the heat transfer coefficient of natural air circulation, with the heat flux level varied, for the characteristic geometry of the APWR concentric vertical cylinder containment wall. The experimental study comprised four stages: design of the APWR containment at 1:40 scale, assembly of the containment and its instrumentation, calibration, and experimentation. Experiments were conducted in transient and steady state with the heat flux varied from 119 W/m 2 to 575 W/m 2 . From the experimental results, the average heat transfer empirical correlation for natural convection, Nu_L = 0.008(Ra*_L)^0.68, was obtained for the concentric vertical cylinder geometry modelling of the APWR. (author)
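
    Reconstructed in standard notation, and with a flux-based modified Rayleigh number (a standard definition which the record itself does not spell out, so it is an assumption here), the reported correlation reads:

```latex
% Reconstructed form of the reported correlation. The definition of the
% flux-based modified Rayleigh number Ra*_L is a standard one and is an
% assumption here, as is the conversion to the average coefficient h.
\[
  \overline{Nu}_L = 0.008\,\bigl(Ra^{*}_{L}\bigr)^{0.68},
  \qquad
  Ra^{*}_{L} = \frac{g\,\beta\,q''\,L^{4}}{k\,\nu\,\alpha},
  \qquad
  \bar{h} = \frac{\overline{Nu}_L\,k}{L}
\]
```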

  3. Analysis of the distributions of hourly NO2 concentrations contributing to annual average NO2 concentrations across the European monitoring network between 2000 and 2014

    Directory of Open Access Journals (Sweden)

    C. S. Malley

    2018-03-01

    Exposure to nitrogen dioxide (NO2) is associated with negative human health effects, both for short-term peak concentrations and from long-term exposure to a wider range of NO2 concentrations. For the latter, the European Union has established an air quality limit value of 40 µg m−3 as an annual average. However, factors such as proximity and strength of local emissions, atmospheric chemistry, and meteorological conditions mean that there is substantial variation in the hourly NO2 concentrations contributing to an annual average concentration. The aim of this analysis was to quantify the nature of this variation at thousands of monitoring sites across Europe through the calculation of a standard set of chemical climatology statistics. Specifically, at each monitoring site that satisfied data capture criteria for inclusion in this analysis, annual NO2 concentrations, as well as the percentage contribution from each month, hour of the day, and hourly NO2 concentrations divided into 5 µg m−3 bins, were calculated. Across Europe, 2010–2014 average annual NO2 concentrations (NO2AA) exceeded the annual NO2 limit value at 8 % of > 2500 monitoring sites. The application of this chemical climatology approach showed that sites with distinct monthly, hour-of-day, and hourly NO2 concentration bin contributions to NO2AA were not grouped into specific regions of Europe; furthermore, within relatively small geographic regions there were sites with similar NO2AA but with differences in these contributions. Specifically, at sites with the highest NO2AA, there were generally similar contributions from across the year, but there were also differences in the contribution of peak vs. moderate hourly NO2 concentrations to NO2AA, and from different hours across the day. Trends between 2000 and 2014 for 259 sites indicate that, in general, the contribution to NO2AA from winter months has increased, as has the contribution from the rush-hour periods of

  4. Analysis of the distributions of hourly NO2 concentrations contributing to annual average NO2 concentrations across the European monitoring network between 2000 and 2014

    Science.gov (United States)

    Malley, Christopher S.; von Schneidemesser, Erika; Moller, Sarah; Braban, Christine F.; Hicks, W. Kevin; Heal, Mathew R.

    2018-03-01

    Exposure to nitrogen dioxide (NO2) is associated with negative human health effects, both for short-term peak concentrations and from long-term exposure to a wider range of NO2 concentrations. For the latter, the European Union has established an air quality limit value of 40 µg m-3 as an annual average. However, factors such as proximity and strength of local emissions, atmospheric chemistry, and meteorological conditions mean that there is substantial variation in the hourly NO2 concentrations contributing to an annual average concentration. The aim of this analysis was to quantify the nature of this variation at thousands of monitoring sites across Europe through the calculation of a standard set of chemical climatology statistics. Specifically, at each monitoring site that satisfied data capture criteria for inclusion in this analysis, annual NO2 concentrations, as well as the percentage contribution from each month, hour of the day, and hourly NO2 concentrations divided into 5 µg m-3 bins, were calculated. Across Europe, 2010-2014 average annual NO2 concentrations (NO2AA) exceeded the annual NO2 limit value at 8 % of > 2500 monitoring sites. The application of this chemical climatology approach showed that sites with distinct monthly, hour-of-day, and hourly NO2 concentration bin contributions to NO2AA were not grouped into specific regions of Europe; furthermore, within relatively small geographic regions there were sites with similar NO2AA but with differences in these contributions. Specifically, at sites with the highest NO2AA, there were generally similar contributions from across the year, but there were also differences in the contribution of peak vs. moderate hourly NO2 concentrations to NO2AA, and from different hours across the day. Trends between 2000 and 2014 for 259 sites indicate that, in general, the contribution to NO2AA from winter months has increased, as has the contribution from the rush-hour periods of the day, while the contribution from
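
    The chemical-climatology statistics described in these two records are simple to compute from an hourly series. A sketch on synthetic data (the gamma-distributed NO2 values are illustrative only):

```python
# Sketch of the chemical-climatology statistics: annual mean NO2 plus the
# percentage contribution of each month, hour of day, and 5 ug/m3 bin to it.
# The hourly series is synthetic, for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2014-01-01", "2014-12-31 23:00", freq="h")
no2 = pd.Series(rng.gamma(shape=2.0, scale=12.0, size=len(idx)), index=idx)

annual_mean = no2.mean()
# A period's contribution to the annual mean equals its share of the total sum.
by_month = 100 * no2.groupby(no2.index.month).sum() / no2.sum()
by_hour = 100 * no2.groupby(no2.index.hour).sum() / no2.sum()
bins = pd.cut(no2, bins=np.arange(0.0, float(no2.max()) + 5.0, 5.0))
by_bin = 100 * no2.groupby(bins, observed=True).sum() / no2.sum()

print(f"annual mean: {annual_mean:.1f} ug/m3")
print(by_month.round(1), by_hour.round(1), by_bin.round(1), sep="\n\n")
```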

  5. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    Science.gov (United States)

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.

  6. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    International Nuclear Information System (INIS)

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell

  7. Influence of the turbulence typing scheme upon the cumulative frequency distribution of the calculated relative concentrations for different averaging times

    Energy Technology Data Exchange (ETDEWEB)

    Kretzschmar, J.G.; Mertens, I.

    1984-01-01

    Over the period 1977-1979, hourly meteorological measurements at the Nuclear Energy Research Centre, Mol, Belgium, and simultaneous synoptic observations at the nearby military airport of Kleine Brogel were compiled as input data for a bi-Gaussian dispersion model. The available information was first used to determine hourly stability classes in ten widely used turbulent diffusion typing schemes. Systematic correlations between different schemes were rare. Twelve different combinations of diffusion typing scheme and dispersion parameters were then used to calculate cumulative frequency distributions of 1 h, 8 h, 16 h, 3 d, and 26 d average ground-level concentrations at receptors 500 m, 1 km, 2 km, 4 km and 8 km from a continuous ground-level release and an elevated release at 100 m height. Major differences were noted in the extreme values and higher percentiles as well as in the annual mean concentrations. These differences are almost entirely due to differences in the numerical values (as a function of distance) of the various sets of dispersion parameters actually in use for impact assessment studies. Dispersion parameter sets giving the lowest normalized ground-level concentrations for ground-level releases give the highest results for elevated releases, and vice versa. While it was illustrated once again that the applicability of a given set of dispersion parameters is restricted by the specific conditions under which the set was derived, it was also concluded that systematic experimental work to validate certain assumptions is urgently needed.
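
    The effect of averaging time on the cumulative frequency distribution can be sketched by block-averaging an hourly series of normalized concentrations (chi/Q) over the windows used above; the lognormal synthetic series below is illustrative, not the Mol data:

```python
# Sketch: how averaging time reshapes the cumulative frequency distribution of
# normalized ground-level concentrations (chi/Q, in s/m3). Synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
idx = pd.date_range("1977-01-01", periods=3 * 8760, freq="h")
chi_q = pd.Series(rng.lognormal(mean=-13.0, sigma=1.2, size=len(idx)), index=idx)

for window in ("1h", "8h", "16h", "3D", "26D"):
    avg = chi_q.resample(window).mean().dropna()
    p50, p98 = avg.quantile([0.50, 0.98])
    print(f"{window:>4}: median {p50:.2e}  98th pct {p98:.2e}  max {avg.max():.2e}")
```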

  8. Development of a stacked ensemble model for forecasting and analyzing daily average PM2.5 concentrations in Beijing, China.

    Science.gov (United States)

    Zhai, Binxu; Chen, Jianguo

    2018-04-18

    A stacked ensemble model is developed for forecasting and analyzing the daily average concentrations of fine particulate matter (PM2.5) in Beijing, China. Special feature extraction procedures, including those of simplification, polynomial, transformation and combination, are conducted before modeling to identify potentially significant features based on an exploratory data analysis. Stability feature selection and tree-based feature selection methods are applied to select important variables and evaluate the degrees of feature importance. Single models including LASSO, Adaboost, XGBoost and a multi-layer perceptron optimized by the genetic algorithm (GA-MLP) are established in the level 0 space and are then integrated by support vector regression (SVR) in the level 1 space via stacked generalization. A feature importance analysis reveals that nitrogen dioxide (NO2) and carbon monoxide (CO) concentrations measured in the city of Zhangjiakou are the most important pollution factors for forecasting PM2.5 concentrations. Local extreme and maximum wind speeds are found to exert the strongest meteorological influence on the cross-regional transport of contaminants. Pollutants found in the cities of Zhangjiakou and Chengde have a stronger impact on air quality in Beijing than other surrounding factors. Our model evaluation shows that the ensemble model generally performs better than a single nonlinear forecasting model when applied to new data, with a coefficient of determination (R2) of 0.90 and a root mean squared error (RMSE) of 23.69 μg/m3. For single pollutant grade recognition, the proposed model performs better when applied to days characterized by good air quality than when applied to days registering high levels of pollution. The overall classification accuracy level is 73.93%, with most misclassifications made among adjacent categories. The results demonstrate the interpretability and generalizability of
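
    A minimal stacked-generalization sketch in this spirit (scikit-learn only; gradient boosting stands in for XGBoost and the MLP is untuned, with no genetic algorithm, so this illustrates the architecture rather than the authors' exact pipeline):

```python
# Stacked generalization sketch: level-0 learners combined by an SVR
# meta-learner. Data are synthetic; feature meanings are illustrative.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 8))                # e.g. upwind NO2/CO, wind speeds
y = 60.0 + X @ (10.0 * rng.normal(size=8)) + rng.normal(0.0, 15.0, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
stack = StackingRegressor(
    estimators=[
        ("lasso", Lasso(alpha=0.1)),
        ("ada", AdaBoostRegressor(random_state=0)),
        ("gbt", GradientBoostingRegressor(random_state=0)),   # stand-in for XGBoost
        ("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    ],
    final_estimator=SVR(kernel="rbf"),        # level-1 meta-learner
)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {mean_squared_error(y_te, pred) ** 0.5:.1f}")
```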

  9. A meta-analysis of cortisol concentration, vocalization, and average daily gain associated with castration in beef cattle.

    Science.gov (United States)

    Canozzi, Maria Eugênia Andrighetto; Mederos, America; Manteca, Xavier; Turner, Simon; McManus, Concepta; Zago, Daniele; Barcellos, Júlio Otávio Jardim

    2017-10-01

    A systematic review and meta-analysis (MA) were performed to summarize all scientific evidence for the effects of castration in male beef cattle on welfare indicators based on cortisol concentration, average daily gain (ADG), and vocalization. We searched five electronic databases and conference proceedings, and experts were contacted electronically. The main inclusion criteria involved completed studies using beef cattle up to one year of age undergoing surgical and non-surgical castration that presented cortisol concentration, ADG, or vocalization as an outcome. A random-effects MA was conducted for each indicator separately with the means of the control and treated groups. A total of 20 publications reporting 26 studies and 162 trials were included in the MA, involving 1814 cattle. Between-study heterogeneity was observed when analysing cortisol (I2 = 56.7%) and ADG (I2 = 79.6%). Surgical and non-surgical castration without drug administration, compared to uncastrated animals, showed no change (P ≥ 0.05) in cortisol level. Multimodal therapy for pain did not decrease (P ≥ 0.05) cortisol concentration after 30 min when non-surgical castration was performed. Comparison between surgical castration with and without anaesthesia showed a tendency (P = 0.077) toward decreased cortisol levels 120 min after intervention. Non-surgical and surgical castration, performed with no pain mitigation, increased and tended to increase the ADG by 0.814 g/d (P = 0.001) and by 0.140 g/d (P = 0.091), respectively, when compared to a non-castrated group. Our MA yields inconclusive results from which to draw recommendations on preferred castration practices to minimize pain in beef cattle.

  10. Global Estimates of Average Ground-Level Fine Particulate Matter Concentrations from Satellite-Based Aerosol Optical Depth

    Science.gov (United States)

    Van Donkelaar, A.; Martin, R. V.; Brauer, M.; Kahn, R.; Levy, R.; Verduzco, C.; Villeneuve, P.

    2010-01-01

    Exposure to airborne particles can cause acute or chronic respiratory disease and can exacerbate heart disease, some cancers, and other conditions in susceptible populations. Ground stations that monitor fine particulate matter in the air (smaller than 2.5 microns, called PM2.5) are positioned primarily to observe severe pollution events in areas of high population density; coverage is very limited, even in developed countries, and is not well designed to capture long-term, lower-level exposure that is increasingly linked to chronic health effects. In many parts of the developing world, air quality observation is absent entirely. Instruments aboard NASA Earth Observing System satellites, such as the MODerate resolution Imaging Spectroradiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR), monitor aerosols from space, providing once-daily and about once-weekly coverage, respectively. However, these data are only rarely used for health applications, in part because they can retrieve the amount of aerosols only summed over the entire atmospheric column, rather than focusing just on the near-surface component, in the airspace humans actually breathe. In addition, air quality monitoring often includes detailed analysis of particle chemical composition, impossible from space. In this paper, near-surface aerosol concentrations are derived globally from the total-column aerosol amounts retrieved by MODIS and MISR. Here a computer aerosol simulation is used to determine how much of the satellite-retrieved total-column aerosol amount is near the surface. The five-year average (2001-2006) global near-surface aerosol concentration shows that World Health Organization Air Quality standards are exceeded over parts of central and eastern Asia for nearly half the year.
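
    The core satellite-to-surface step reduces to scaling the column AOD by a model-derived near-surface ratio. A sketch with illustrative numbers (none of the values below are from the study):

```python
# Sketch: scale total-column AOD to surface PM2.5 using a model-derived ratio
# eta = (modelled surface PM2.5 / modelled column AOD), i.e. PM2.5 ~= eta * AOD.
aod_satellite = 0.35      # MODIS/MISR column AOD (unitless) for one grid cell
pm25_model = 18.0         # chemical-transport-model surface PM2.5 (ug/m3)
aod_model = 0.30          # modelled column AOD for the same cell and time

eta = pm25_model / aod_model        # fraction of the column expressed at the surface
pm25_estimate = eta * aod_satellite
print(f"estimated surface PM2.5 ~ {pm25_estimate:.1f} ug/m3")
```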

  11. Determination of seasonal, diurnal, and height resolved average number concentration in a pollution impacted rural continental location

    Science.gov (United States)

    Bullard, Robert L.; Stanier, Charles O.; Ogren, John A.; Sheridan, Patrick J.

    2013-05-01

    The impact of aerosols on Earth's radiation balance and the associated climate forcing effects of aerosols represent significant uncertainties in assessment reports. The main source of ultrafine aerosols in the atmosphere is the nucleation and subsequent growth of gas-phase aerosol precursors into liquid- or solid-phase particles. Long-term records of aerosol number, nucleation event frequency, and vertical profiles of number concentration are rare. The data record from multiagency monitoring assets at Bondville, IL can contribute important information on long-term and vertically resolved patterns. Although particle number size distribution data are only occasionally available at Bondville, highly time-resolved particle number concentration data have been measured for nearly twenty years by the NOAA ESRL Global Monitoring Division. Furthermore, vertically resolved aerosol counts and other aerosol physical parameters are available from more than 300 flights of the NOAA Airborne Aerosol Observatory (AAO). These data sources are used to better understand the seasonal, diurnal, and vertical variation and trends in atmospheric aerosols. The highest peaks in condensation nuclei greater than 14 nm occur during the spring months (May, April), with slightly lower peaks during the fall months (September, October). The diurnal pattern of aerosol number has a midday peak, and the timing of the peak has seasonal patterns (earlier during warm months and later during colder months). The seasonal and diurnal patterns of high particle number peaks correspond to seasons and times of day associated with low aerosol mass and surface area. Average vertical profiles show a nearly monotonic decrease with altitude in all months, with peak magnitudes occurring in the spring and fall. Individual flight tracks show evidence of plumes (i.e., enhanced aerosol number is limited to a small altitude range, is not homogeneous horizontally, or both) as well as periods with enhanced particle number

  12. 2004 list of MAK (maximum work place concentration) and BAT (biological workplace tolerance) values; MAK- und BAT-Werte-Liste 2004

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    The MAK value (maximum workplace concentration) is the highest permissible concentration of a working material present in the ambient air of the workplace as a gas, vapour or suspended matter which, according to the present state of knowledge, does not, in general, impair the health of employees or cause them unreasonable annoyance (for example through repulsive odour), even in the case of repeated, long-term exposure, that is, as a rule, 8 hours daily, assuming an average working week of no more than 40 hours. As a rule, MAK values are quoted as average values over a period of up to one working day or shift. They are defined primarily in consideration of the effect characteristics of the substances in question, but also, as far as possible, of the practical conditions attending the work processes or the exposure patterns they entail. This is done on the basis of scientifically founded criteria of health protection, not on whether providing such protection is technically or economically feasible. In addition, substances are assessed in terms of carcinogenicity, sensitising effects, contribution to systemic toxicity following cutaneous resorption, hazards during pregnancy and germ-cell mutagenicity, and are classified or marked accordingly. The Commission's procedures for assessing substances with respect to these criteria are described in the corresponding sections of the MAK and BAT values list, in the ''Toxicological and occupational medical explanations of MAK values'' and in scientific journals.

  13. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    Science.gov (United States)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    A formalism is developed to quantify the sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates (optimal estimates computed with approximate signal and error statistics) are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions, over realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
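
    The contrast between the two estimators can be made concrete with an assumed covariance model. In the sketch below, the exponential signal covariance, noise level and all other parameters are illustrative assumptions, not values from the paper:

```python
# Sketch: composite (simple) average of irregular samples versus minimum-MSE
# ("optimal") weights built from assumed signal/noise covariances.
import numpy as np

rng = np.random.default_rng(4)
T, tau, sig2, noise2 = 30.0, 5.0, 1.0, 0.25       # period, corr. scale, variances
t_obs = np.sort(rng.uniform(0.0, T, 6))           # irregular observation times

cov = lambda dt: sig2 * np.exp(-np.abs(dt) / tau) # assumed signal covariance
C = cov(t_obs[:, None] - t_obs[None, :]) + noise2 * np.eye(len(t_obs))
tq = np.linspace(0.0, T, 2001)                    # quadrature grid over the period
c = np.array([cov(ti - tq).mean() for ti in t_obs])   # Cov(sample, true average)
var_avg = cov(tq[:, None] - tq[None, :]).mean()       # Var(true average)

w_opt = np.linalg.solve(C, c)                     # optimal (minimum-MSE) weights
mse_opt = var_avg - c @ w_opt
w_cmp = np.full(len(t_obs), 1.0 / len(t_obs))     # composite-average weights
mse_cmp = var_avg - 2.0 * w_cmp @ c + w_cmp @ C @ w_cmp
print(f"MSE composite = {mse_cmp:.3f}, MSE optimal = {mse_opt:.3f}")
```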

  14. The influence of poly(acrylic) acid number average molecular weight and concentration in solution on the compressive fracture strength and modulus of a glass-ionomer restorative.

    LENUS (Irish Health Repository)

    Dowling, Adam H

    2011-06-01

    The aim was to investigate the influence of number average molecular weight and concentration of the poly(acrylic) acid (PAA) liquid constituent of a GI restorative on the compressive fracture strength (σ) and modulus (E).

  15. The average concentrations of As, Cd, Cr, Hg, Ni and Pb in residential soil and drinking water obtained from springs and wells in Rosia Montana area.

    Data.gov (United States)

    U.S. Environmental Protection Agency — The average concentrations of As, Cd, Cr, Hg, Ni and Pb in n = 84 residential soil samples from the Rosia Montana area, analyzed by X-ray fluorescence spectrometry, are...

  16. Maximum permissible body burdens and maximum permissible concentrations of radionuclides in air and in water for occupational exposure. Recommendations of the National Committee on Radiation Protection. Handbook 69

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1959-06-05

    The present Handbook and its predecessors stem from the Second International Congress of Radiology, held in Stockholm in 1928. At that time, under the auspices of the Congress, the International Commission on Radiological Protection (ICRP) was organized to deal initially with problems of X-ray protection and later with radioactivity protection. At that time 'permissible' doses of X-rays were estimated primarily in terms of exposures which produced erythema, the amount of exposure which would produce a defined reddening of the skin. Obviously a critical problem in establishing criteria for radiation protection was one of developing useful standards and techniques of physical measurement. For this reason two of the organizations in this country with a major concern for X-ray protection, the American Roentgen Ray Society and the Radiology Society of North America, suggested that the National Bureau of Standards assume responsibility for organizing representative experts to deal with the problem. Accordingly, early in 1929, an Advisory Committee on X-ray and Radium Protection was organized to develop recommendations on the protection problem within the United States and to formulate United States points of view for presentation to the International Commission on Radiological Protection. The organization of the U.S. Advisory Committee included experts from both the medical and physical science fields. The recommendations of this Handbook take into consideration the NCRP statement entitled 'Maximum Permissible Radiation Exposures to Man', published as an addendum to Handbook 59 on April 15, 1958. As noted above this study was carried out jointly by the ICRP and the NCRP, and the complete report is more extensive than the material contained in this Handbook.

  17. Maximum permissible body burdens and maximum permissible concentrations of radionuclides in air and in water for occupational exposure. Recommendations of the National Committee on Radiation Protection. Handbook 69

    International Nuclear Information System (INIS)

    1959-01-01

    The present Handbook and its predecessors stem from the Second International Congress of Radiology, held in Stockholm in 1928. At that time, under the auspices of the Congress, the International Commission on Radiological Protection (ICRP) was organized to deal initially with problems of X-ray protection and later with radioactivity protection. At that time 'permissible' doses of X-rays were estimated primarily in terms of exposures which produced erythema, the amount of exposure which would produce a defined reddening of the skin. Obviously a critical problem in establishing criteria for radiation protection was one of developing useful standards and techniques of physical measurement. For this reason two of the organizations in this country with a major concern for X-ray protection, the American Roentgen Ray Society and the Radiology Society of North America, suggested that the National Bureau of Standards assume responsibility for organizing representative experts to deal with the problem. Accordingly, early in 1929, an Advisory Committee on X-ray and Radium Protection was organized to develop recommendations on the protection problem within the United States and to formulate United States points of view for presentation to the International Commission on Radiological Protection. The organization of the U.S. Advisory Committee included experts from both the medical and physical science fields. The recommendations of this Handbook take into consideration the NCRP statement entitled 'Maximum Permissible Radiation Exposures to Man', published as an addendum to Handbook 59 on April 15, 1958. As noted above this study was carried out jointly by the ICRP and the NCRP, and the complete report is more extensive than the material contained in this Handbook

  18. Maximum permissible concentration of radon 222Rn in air

    Energy Technology Data Exchange (ETDEWEB)

    Hamard, J; Beau, P G; Ergas, A [Commissariat a l'Energie Atomique, Fontenay-aux-Roses (France). Centre d'Etudes Nucleaires, departement de la protection sanitaire, service d'hygiene atomique]

    1968-09-01

    In order to verify the validity of the values proposed for the maximum permissible concentration of 222Rn in air, the problem can be approached in two ways: by epidemiological studies tending to determine the dose-effect relation both quantitatively and qualitatively; or by choosing a lung model and clearance constants allowing a more accurate determination of the delivered dose and the localisation of the most severely irradiated portions of the bronchial tree. The radon MPCs have been calculated using the model and the respiration constants set up by the I.C.R.P. Task Group on Lung Dynamics. Two cases have been considered, i.e. when the radon daughter products behave as soluble materials and when they behave as insoluble ones. The values found have been compared with those given up to now by several national and international bodies. (authors)

  19. Relation of average and highest solvent vapor concentrations in workplaces in small to medium enterprises and large enterprises.

    Science.gov (United States)

    Ukai, Hirohiko; Ohashi, Fumiko; Samoto, Hajime; Fukui, Yoshinari; Okamoto, Satoru; Moriguchi, Jiro; Ezaki, Takafumi; Takada, Shiro; Ikeda, Masayuki

    2006-04-01

    The present study was initiated to examine the relationship between workplace concentrations and the estimated highest concentrations in solvent workplaces (SWPs), with special reference to enterprise size and type of solvent work. Results of a survey conducted in 1010 SWPs in 156 enterprises were taken as a database. Workplace air was sampled at >= 5 points in each SWP following a grid sampling strategy. An additional air sample was grab-sampled at the site where the worker's exposure was estimated to be highest (the estimated highest concentration, or EHC). The samples were analyzed for 47 solvents designated by regulation, and solvent concentrations in each sample were summed by use of the additiveness formula. From the workplace concentrations at >= 5 points, the geometric mean and geometric standard deviation were calculated as the representative workplace concentration (RWC) and the indicator of variation in workplace concentration (VWC), respectively. Comparison between RWC and EHC in the total of 1010 SWPs showed that EHC was 1.2 times (in large enterprises with > 300 employees) to 1.7 times (in small to medium (SM) enterprises with < 300 employees) the RWC. Comparing SM enterprises and large enterprises, both RWC and EHC were significantly higher in SM enterprises than in large enterprises. Further comparison by type of solvent work showed that the difference was more marked in printing, surface coating and degreasing/cleaning/wiping SWPs, whereas it was less remarkable in painting SWPs and essentially nil in testing/research laboratories. In conclusion, the present observations, discussed in reference to previous publications, suggest that RWC, EHC and the EHC/RWC ratio vary substantially among different types of solvent work as well as with enterprise size, and are typically highest in printing SWPs in SM enterprises.
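
    The two statistics, and the additiveness formula for solvent mixtures, are easy to state in code. Sample concentrations and the occupational exposure limits below are illustrative, not survey values:

```python
# Sketch: RWC as a geometric mean with its geometric standard deviation, and
# the additiveness formula for co-occurring solvents (sum of C_i / limit_i).
import numpy as np

samples_ppm = np.array([42.0, 55.0, 31.0, 70.0, 48.0])   # >= 5 grid points in one SWP
rwc = np.exp(np.mean(np.log(samples_ppm)))               # geometric mean
gsd = np.exp(np.std(np.log(samples_ppm), ddof=1))        # geometric std deviation
print(f"RWC = {rwc:.1f} ppm, GSD (VWC) = {gsd:.2f}")

# Additiveness for a mixture in one air sample: an index > 1 means the
# combined exposure limit is exceeded. Limits are assumed example OELs.
conc = {"toluene": 20.0, "xylene": 15.0, "2-butanone": 40.0}    # ppm, example
limit = {"toluene": 50.0, "xylene": 50.0, "2-butanone": 200.0}  # ppm, assumed OELs
index = sum(conc[s] / limit[s] for s in conc)
print(f"mixture index = {index:.2f}")
```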

  20. Greater-than-Class C low-level waste characterization. Appendix I: Impact of concentration averaging low-level radioactive waste volume projections

    International Nuclear Information System (INIS)

    Tuite, P.; Tuite, K.; O'Kelley, M.; Ely, P.

    1991-08-01

    This study provides a quantitative framework for bounding unpackaged greater-than-Class C low-level radioactive waste types as a function of concentration averaging. The study defines the three concentration averaging scenarios that lead to base, high, and low volumetric projections; identifies those waste types that could be greater-than-Class C under the high volume, or worst case, concentration averaging scenario; and quantifies the impact of these scenarios on identified waste types relative to the base case scenario. The base volume scenario was assumed to reflect current requirements at the disposal sites as well as the regulatory views. The high volume scenario was assumed to reflect the most conservative criteria as incorporated in some compact host state requirements. The low volume scenario was assumed to reflect the 10 CFR Part 61 criteria as applicable to both shallow land burial facilities and to practices that could be employed to reduce the generation of Class C waste types

  1. On the applicability of short time measurements to the determination of annual average of radon concentration in dwelling

    International Nuclear Information System (INIS)

    Loskiewicz, J.; Olko, P.; Swakon, J.; Bogacz, J.; Janik, M.; Mazur, D.; Mazur, J.

    1998-01-01

    The variation of radon concentration in some houses in the Krakow region was investigated in order to compare results obtained using various measuring techniques. It is concluded that short-term measurements should last at least 4 days to avoid errors exceeding 30%; that weather parameters and human activity during the measurement should be recorded; that measurements should be repeated several times under various weather conditions; that seasonal variation in the region should be taken into account. (A.K.)

  2. MAK and BAT values list 2016. Maximum permissible concentrations at the place of work and biological tolerance values for working materials

    International Nuclear Information System (INIS)

    2016-01-01

    The MAK and BAT values list 2016 includes the maximum permissible concentrations at the place of work and biological tolerance values for working materials. The following working materials are covered: carcinogenic working materials, sensitizing materials and aerosols. The report discusses the restriction of exposure peaks, skin resorption, MAK (maximum working place concentration) values during pregnancy, germ cell mutagens and specific working materials. Importance and application of BAT (biological working material tolerance) values, list of materials, carcinogens, biological guide values and reference values are also included.

  3. MAK and BAT values list 2014. Maximum permissible concentrations at the place of work and biological tolerance values for working materials

    International Nuclear Information System (INIS)

    2014-01-01

    The book on the MAK (maximum permissible concentrations at the place of work) and BAT (biological tolerance values for working materials) values list 2014 includes the following chapters: (a) Maximum permissible concentrations at the place of work: definition, application and determination of MAK values, list of materials; carcinogenic working materials, sensitizing working materials, aerosols, limitation of exposure peaks, skin resorption, MAK values during pregnancy, germ cell mutagens, specific working materials; (b) Biological tolerance values for working materials: definition and application of BAT values, list of materials, carcinogenic working materials, biological guide values, biological working material reference values.

  4. MAK and BAT values list 2015. Maximum permissible concentrations at the place of work and biological tolerance values for working materials

    International Nuclear Information System (INIS)

    2015-01-01

    The book on the MAK (maximum permissible concentrations at the place of work) and BAT (biological tolerance values for working materials) values list 2015 includes the following chapters: (a) Maximum permissible concentrations at the place of work: definition, application and determination of MAK values, list of materials; carcinogenic working materials, sensitizing working materials, aerosols, limitation of exposure peaks, skin resorption, MAK values during pregnancy, germ cell mutagens, specific working materials; (b) Biological tolerance values for working materials: definition and application of BAT values, list of materials, carcinogenic working materials, biological guide values, biological working material reference values.

  5. MAK and BAT values list 2017. Maximum permissible concentrations at the place of work and biological tolerance values for working materials

    International Nuclear Information System (INIS)

    2017-01-01

    The MAK and BAT values list 2017 includes the maximum permissible concentrations at the place of work and biological tolerance values for working materials. The following working materials are covered: carcinogenic working materials, sensitizing materials and aerosols. The report discusses the restriction of exposure peaks, skin resorption, MAK (maximum working place concentration) values during pregnancy, germ cell mutagens and specific working materials. Importance and application of BAT (biological working material tolerance) values, list of materials, carcinogens, biological guide values and reference values are also included.

  6. Secondary poisoning of cadmium, copper and mercury: implications for the Maximum Permissible Concentrations and Negligible Concentrations in water, sediment and soil

    NARCIS (Netherlands)

    Smit CE; van Wezel AP; Jager T; Traas TP; CSR

    2000-01-01

    The significance of secondary poisoning for the Maximum Permissible Risk levels (MTRs) and Negligible Risk levels (VRs) of cadmium, copper and mercury in water, sediment and soil has been evaluated. Field data on the accumulation of these elements by fish, mussels and

  7. Air Pollution Modelling to Predict Maximum Ground Level Concentration for Dust from a Palm Oil Mill Stack

    Directory of Open Access Journals (Sweden)

    Regina A. A.

    2010-12-01

    This study models stack emissions from a palm oil mill to estimate ground-level dust concentrations. The case study is a mill located in Kuala Langat, Selangor; the emission sources are the boiler stacks. Modelling software was used to estimate the ground-level dust concentrations in the surrounding areas. The surrounding area is relatively flat: an industrial area surrounded by factories, with oil palm plantations on the outskirts. The model was applied to gauge the worst-case scenario, and ambient air concentrations were gathered to calculate the increment over localized background conditions. Keywords: emission, modelling, palm oil mill, particulate, POME
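
    A screening version of such a calculation is a Gaussian plume scanned downwind for its maximum ground-level concentration. The sketch below uses Briggs-type rural near-neutral dispersion curves and made-up source parameters, so it illustrates the method rather than the study's model:

```python
# Sketch: centreline ground-level concentration from a Gaussian plume,
# chi = Q / (pi * u * sy * sz) * exp(-H^2 / (2 sz^2)), scanned for its maximum.
# All source parameters and the sigma curves are illustrative assumptions.
import numpy as np

Q = 5.0      # dust emission rate (g/s), assumed
H = 25.0     # effective stack height (m), assumed
u = 3.0      # wind speed at stack height (m/s), assumed

def sigma_yz(x):
    """Briggs-type rural, near-neutral dispersion curves (m); an assumption."""
    sy = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sz = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    return sy, sz

x = np.linspace(50.0, 5000.0, 500)        # downwind distances (m)
sy, sz = sigma_yz(x)
# Centreline ground-level concentration (g/m3) with full ground reflection.
chi = Q / (np.pi * u * sy * sz) * np.exp(-H**2 / (2.0 * sz**2))
i = int(np.argmax(chi))
print(f"max GLC ~ {chi[i] * 1e6:.1f} ug/m3 at {x[i]:.0f} m downwind")
```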

  8. Results from transcranial Doppler examination on children and adolescents with sickle cell disease and correlation between the time-averaged maximum mean velocity and hematological characteristics: a cross-sectional analytical study

    Directory of Open Access Journals (Sweden)

    Mary Hokazono

    CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sß0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sß+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. The middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. Time-averaged maximum mean velocity was significantly different between the genotypes and correlated with hematological characteristics.

  9. Satellite-derived ice data sets no. 2: Arctic monthly average microwave brightness temperatures and sea ice concentrations, 1973-1976

    Science.gov (United States)

    Parkinson, C. L.; Comiso, J. C.; Zwally, H. J.

    1987-01-01

    A summary data set for four years (mid-1970s) of Arctic sea ice conditions is available on magnetic tape. The data include monthly and yearly averaged Nimbus 5 electrically scanning microwave radiometer (ESMR) brightness temperatures, an ice concentration parameter derived from the brightness temperatures, monthly climatological surface air temperatures, and monthly climatological sea level pressures. All data matrices are given on 293 by 293 grids that cover a polar stereographic map enclosing the 50 deg N latitude circle. The grid size varies from about 32 x 32 km at the pole to about 28 x 28 km at 50 deg N. The ice concentration parameter is calculated assuming that the field of view contains only open water and first-year ice with an ice emissivity of 0.92. To account for the presence of multiyear ice, a nomogram is provided relating the ice concentration parameter, the total ice concentration, and the fraction of the ice cover which is multiyear ice.
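
    The single-channel retrieval described here is a linear interpolation between open-water and first-year-ice brightness temperatures. A sketch with assumed tie points (only the 0.92 emissivity comes from the record; the other values are illustrative):

```python
# Sketch: ESMR-style ice concentration from brightness temperature, assuming
# a field of view containing only open water and first-year ice (emissivity
# 0.92, per the record). Tie-point values are illustrative assumptions.
def ice_concentration(tb, tb_water=138.0, t_ice_phys=250.0, emissivity=0.92):
    """Fraction of the field of view covered by first-year ice, clipped to [0, 1]."""
    tb_ice = emissivity * t_ice_phys          # brightness temperature of 100% ice
    c = (tb - tb_water) / (tb_ice - tb_water)
    return min(max(c, 0.0), 1.0)

for tb in (140.0, 180.0, 220.0):
    print(f"TB = {tb:.0f} K -> ice concentration ~ {ice_concentration(tb):.2f}")
```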

  10. Maximum permissible concentrations and negligible concentrations for phthalates (dibutylphthalate and di(2-ethylhexyl)phthalate), with emphasis on endocrine disruptive properties

    NARCIS (Netherlands)

    Wezel AP van; Posthumus R; Vlaardingen P van; Crommentuijn T; Plassche EJ van de; CSR

    This report presents maximal permissible concentrations (MPCs) and negligible concentrations (NCs) derived for di-n-butylphthalate (DBP) and di(2-ethylhexyl)phthalate (DEHP). Phthalates are often mentioned as suspected endocrine disrupters. Data with endpoints related to the endocrine or …

  11. Use of MICRAS code on the evaluation of the maximum radionuclides concentrations due to transport/migration of decay chain in groundwaters

    International Nuclear Information System (INIS)

    Aquino Branco, O.E. de

    1995-01-01

    This paper presents a methodology for the evaluation of the maximum radionuclide concentrations in groundwaters due to the transport/migration of decay chains. An analytical solution of the equation system is difficult, even if only three elements of the decay chain are considered; a numerical solution is therefore more convenient. An application of the MICRAS code, developed to assess the maximum concentration of each radionuclide starting with the initial concentrations, is presented. The maximum concentration profile for 226 Ra, calculated using MICRAS, is compared with the results obtained through an analytical and a numerical model, and the agreement between results is considered good. Simplified models, like the one represented by the application of MICRAS, are largely employed in the selection and characterization of sites for radioactive waste repositories and in safety evaluation studies for the same purpose. A detailed analysis of the transport/migration of contaminants in aquifers requires a large quantity of data from the site, as well as from the installation, which makes this analysis expensive and unfeasible during the preliminary phases of the studies. (author). 6 refs, 1 fig, 1 tab
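
A minimal numerical stand-in for this kind of calculation (not the MICRAS code itself) is to integrate the Bateman equations for a short decay chain and read off each nuclide's maximum; the three-member chain and decay constants below are hypothetical, and transport/migration terms are omitted.

```python
# Numerical sketch: maximum concentration of each member of a decay chain.
# The chain and decay constants are hypothetical; transport is omitted.
import numpy as np
from scipy.integrate import solve_ivp

lam = np.array([1e-2, 5e-2, 2e-2])  # decay constants (1/yr), illustrative

def chain(t, n):
    # dN1/dt = -l1*N1 ; dNi/dt = l(i-1)*N(i-1) - li*Ni
    dn = -lam * n
    dn[1:] += lam[:-1] * n[:-1]
    return dn

sol = solve_ivp(chain, (0.0, 500.0), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 500.0, 2000)
n = sol.sol(t)
for i, ni in enumerate(n, start=1):
    print(f"nuclide {i}: max concentration {ni.max():.4f} at t = {t[ni.argmax()]:.1f} yr")
```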

  12. CORRELATION BETWEEN PATHOLOGY AND EXCESS OF MAXIMUM CONCENTRATION LIMIT OF POLLUTANTS IN THE ENVIRONMENT OF THE REPUBLIC OF DAGESTAN

    Directory of Open Access Journals (Sweden)

    G. M. Abdurakhmanov

    2013-01-01

    Full Text Available Abstract. Statistical data from "Indicators of health status of the Republic of Dagestan" for 1999–2010 are presented in this work. The aim of this work was to identify cause-effect correlations between non-communicable diseases (ischemic heart disease, neuropsychiatric disease, endemic goiter, diabetes, congenital anomalies) and environmental factors in the Republic of Dagestan. Statistical data processing was carried out using the software packages Statistica and Microsoft Excel. The Spearman rank correlation coefficient (ρ) was used to identify correlations between indicators of environmental quality and the health of the population. A moderate positive correlation is observed between the development of pathology and excess concentrations of contaminants in drinking water sources. Direct correlations are found between the development of the studied pathologies and excess concentrations of heavy metals and their mobile forms in the soils of the region. A direct correlation is found between excess concentrations of heavy metals in the pasture vegetation (factorial character) and the morbidity of the population (effective character).
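
The rank-correlation step described above is straightforward to reproduce; the sketch below uses SciPy's spearmanr on made-up stand-ins for a contaminant-excess indicator and a morbidity indicator.

```python
# Spearman rank correlation between two indicators. The series are invented
# stand-ins, not data from the Dagestan study.
from scipy.stats import spearmanr

excess = [1.2, 0.8, 2.5, 3.1, 1.9, 2.8, 0.5, 1.6]           # factorial character
morbidity = [14.0, 9.5, 22.1, 25.4, 18.2, 21.0, 7.3, 15.8]  # effective character

rho, p_value = spearmanr(excess, morbidity)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```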

  13. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  14. Effect of Chinese traditional medicine anti-fatigue prescription on the concentration of the serum testosterone and cortisol in male rats under stress of maximum intensive training

    International Nuclear Information System (INIS)

    Dong Ling; Si Xulan

    2008-01-01

    Objective: To study the effect of a Chinese traditional medicine anti-fatigue prescription on the concentrations of serum testosterone (T) and cortisol (C) in male rats under the stress of maximum intensive training. Methods: Wistar male rat models of stress under maximum intensity training were established (n=40) and half of them were treated with the Chinese traditional medicine anti-fatigue prescription; twenty undisturbed rats served as controls. Testosterone and cortisol serum levels were determined with RIA at the end of the seven weeks' experiment. Results: Maximum intensive training lowered the level of serum testosterone, elevated the concentration of cortisol and reduced the T/C ratio. The serum T levels and T/C ratio were significantly lower and cortisol levels significantly higher in the untreated models than in the treated models and controls (P<0.01). The levels of the two hormones were markedly corrected in the treated models, with no significant differences from those in the controls. However, the T/C ratio was still significantly lower than that in the controls (P<0.05) due to a relatively greater degree of reduction of T levels. Conclusion: The anti-fatigue prescription can not only promote recovery from fatigue after maximum intensive training but also strengthen the anabolism of the rats. (authors)

  15. Estimation of Radionuclide Concentrations and Average Annual Committed Effective Dose due to Ingestion for the Population in the Red River Delta, Vietnam.

    Science.gov (United States)

    Van, Tran Thi; Bat, Luu Tam; Nhan, Dang Duc; Quang, Nguyen Hao; Cam, Bui Duy; Hung, Luu Viet

    2018-02-16

    Radioactivity concentrations of nuclides of the 232Th and 238U radioactive chains and 40K, 90Sr, 137Cs, and 239+240Pu were surveyed for raw and cooked food of the population in the Red River delta region, Vietnam, using α-, γ-spectrometry, and liquid scintillation counting techniques. The concentration of 40K in the cooked food was the highest compared to those of the other radionuclides, ranging from (23 ± 5) (rice) to (347 ± 50) Bq kg⁻¹ dw (tofu). The 210Po concentration in the cooked food ranged from its limit of detection (LOD) of 5 mBq kg⁻¹ dw (rice) to (4.0 ± 1.6) Bq kg⁻¹ dw (marine bivalves). The concentrations of other nuclides of the 232Th and 238U chains in the food were low, ranging from the LOD of 0.02 Bq kg⁻¹ dw to (1.1 ± 0.3) Bq kg⁻¹ dw. The activity concentrations of 90Sr, 137Cs, and 239+240Pu in the food were minor compared to that of the natural radionuclides. The average annual committed effective dose to adults in the study region was estimated and it ranged from 0.24 to 0.42 mSv a⁻¹ with an average of 0.32 mSv a⁻¹, out of which rice, leafy vegetable, and tofu contributed up to 16.2%, 24.4%, and 21.3%, respectively. The committed effective doses to adults due to ingestion of a regular diet in the Red River delta region, Vietnam are within the range determined in other countries worldwide. This finding suggests that Vietnamese food is safe for human consumption with respect to radiation exposure.
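
The dose summation underlying such estimates is a simple product-and-sum over foods and nuclides; the sketch below assumes illustrative intakes and uses ICRP-style ingestion dose coefficients, and its numbers are not those of the survey.

```python
# Sketch of an ingestion-dose estimate: annual committed effective dose =
# sum over foods and nuclides of (activity concentration x annual intake x
# ingestion dose coefficient). All numbers are illustrative placeholders.
ingestion_dose_coeff = {"K-40": 6.2e-9, "Po-210": 1.2e-6}  # Sv/Bq (ICRP-style)

diet = {  # food: (annual intake kg/a, {nuclide: Bq/kg})
    "rice": (120.0, {"K-40": 23.0, "Po-210": 0.005}),
    "tofu": (15.0, {"K-40": 347.0, "Po-210": 0.02}),
}

dose_sv = sum(
    intake * conc * ingestion_dose_coeff[nuc]
    for intake, concs in diet.values()
    for nuc, conc in concs.items()
)
print(f"annual committed effective dose ~ {dose_sv * 1e3:.3f} mSv/a")
```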

  16. Greater-than-Class C low-level radioactive waste characterization. Appendix E-5: Impact of the 1993 NRC draft Branch Technical Position on concentration averaging of greater-than-Class C low-level radioactive waste

    International Nuclear Information System (INIS)

    Tuite, P.; Tuite, K.; Harris, G.

    1994-09-01

    This report evaluates the effects of concentration averaging practices on the disposal of greater-than-Class C low-level radioactive waste (GTCC LLW) generated by the nuclear utility industry and sealed sources. Using estimates of the number of waste components that individually exceed Class C limits, this report calculates the proportion that would be classified as GTCC LLW after applying concentration averaging; this proportion is called the concentration averaging factor. The report uses the guidance outlined in the 1993 Nuclear Regulatory Commission (NRC) draft Branch Technical Position on concentration averaging, as well as waste disposal experience at nuclear utilities, to calculate the concentration averaging factors for nuclear utility wastes. The report uses the 1993 NRC draft Branch Technical Position and the criteria from the Barnwell, South Carolina, LLW disposal site to calculate concentration averaging factors for sealed sources. The report addresses three waste groups: activated metals from light-water reactors, process wastes from light-water reactors, and sealed sources. For each waste group, three concentration averaging cases are considered: high, base, and low. The base case, the most likely to occur, assumes use of the specific guidance given in the 1993 NRC draft Branch Technical Position on concentration averaging. To project future GTCC LLW generation, each waste category is assigned a concentration averaging factor for the high, base, and low cases.
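
The bookkeeping described here can be sketched as follows; the component counts and averaging factors are hypothetical, chosen only to show how the high, base, and low cases propagate.

```python
# Hypothetical sketch of concentration-averaging-factor bookkeeping: the
# fraction of components still classified GTCC after averaging, per case.
components_exceeding_class_c = {
    "activated metals": 400, "process wastes": 250, "sealed sources": 1200,
}

averaging_factor = {"high": 0.9, "base": 0.5, "low": 0.2}  # fraction still GTCC

for case, factor in averaging_factor.items():
    gtcc = {group: n * factor for group, n in components_exceeding_class_c.items()}
    print(case, gtcc)
```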

  17. Spatiotemporal modeling of PM2.5 concentrations at the national scale combining land use regression and Bayesian maximum entropy in China.

    Science.gov (United States)

    Chen, Li; Gao, Shuang; Zhang, Hui; Sun, Yanling; Ma, Zhenxing; Vedal, Sverre; Mao, Jian; Bai, Zhipeng

    2018-05-03

    Concentrations of particulate matter with aerodynamic diameter below 2.5 μm (PM2.5) were estimated at the national scale in China using land use regression (LUR) models combined with Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals. This hybrid model could potentially provide more valid predictions than a commonly-used LUR model. The LUR/BME model had good performance characteristics, with R² = 0.82 and root mean square error (RMSE) of 4.6 μg/m³. Prediction errors of the LUR/BME model were reduced by incorporating soft data accounting for data uncertainty, with the R² increasing by 6%. The performance of LUR/BME is better than that of OK/BME. The LUR/BME model is the most accurate fine spatial scale PM2.5 model developed to date for China. Copyright © 2018. Published by Elsevier Ltd.
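
The validation metrics quoted above (R² and RMSE between observed and predicted PM2.5) can be computed as in the sketch below, which uses synthetic stand-ins for monitor observations and model predictions.

```python
# Cross-validation metrics of the kind reported for the LUR/BME model.
# The arrays are synthetic stand-ins, not study data.
import numpy as np

obs = np.array([35.0, 52.0, 48.0, 61.0, 40.0, 55.0, 70.0, 44.0])
pred = np.array([38.0, 50.0, 45.0, 63.0, 43.0, 52.0, 66.0, 47.0])

rmse = np.sqrt(np.mean((obs - pred) ** 2))
ss_res = np.sum((obs - pred) ** 2)
ss_tot = np.sum((obs - obs.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.1f} ug/m3")
```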

  18. Selection of suitable mineral acid and its concentration for biphasic dilute acid hydrolysis of the sodium dithionite delignified Prosopis juliflora to hydrolyze maximum holocellulose.

    Science.gov (United States)

    Naseeruddin, Shaik; Desai, Suseelendra; Venkateswar Rao, L

    2016-02-01

    Two grams of delignified substrate at the 10% (w/v) level were subjected to biphasic dilute acid hydrolysis using phosphoric acid, hydrochloric acid and sulfuric acid separately at 110 °C for 10 min in phase-I and 121 °C for 15 min in phase-II. Combinations of acid concentrations in the two phases were varied to select the acid and concentration that hydrolyzed the most holocellulose while releasing the fewest inhibitors. Among the three acids, sulfuric acid at a combination of 1 and 2% (v/v) hydrolyzed the maximum holocellulose, 25.44±0.44%, releasing 0.51±0.02 g/L of phenolics and 0.12±0.002 g/L of furans, respectively. Further hydrolysis of the delignified substrate using the selected acid, varying reaction time and temperature, hydrolyzed 55.58±1.78% of the holocellulose, releasing 2.11±0.07 g/L and 1.37±0.03 g/L of phenolics and furans, respectively, at conditions of 110 °C for 45 min in phase-I and 121 °C for 60 min in phase-II. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Time to reach tacrolimus maximum blood concentration,mean residence time, and acute renal allograft rejection: an open-label, prospective, pharmacokinetic study in adult recipients.

    Science.gov (United States)

    Kuypers, Dirk R J; Vanrenterghem, Yves

    2004-11-01

    The aims of this study were to determine whether disposition-related pharmacokinetic parameters such as T(max) and mean residence time (MRT) could be used as predictors of clinical efficacy of tacrolimus in renal transplant recipients, and to what extent these parameters would be influenced by clinical variables. We previously demonstrated, in a prospective pharmacokinetic study in de novo renal allograft recipients, that patients who experienced early acute rejection did not differ from patients free from rejection in terms of tacrolimus pharmacokinetic exposure parameters (dose interval AUC, preadministration trough blood concentration, C(max), dose). However, recipients with acute rejection reached mean (SD) tacrolimus T(max) significantly faster than those who were free from rejection (0.96 [0.56] hours vs 1.77 [1.06] hours; P < 0.05). Because neither clearance nor T(1/2) could explain this unusual finding, we used data from the previous study to calculate MRT from the concentration-time curves. As part of the previous study, 100 patients (59 male, 41 female; mean [SD] age, 51.4 [13.8] years; age range, 20-75 years) were enrolled. The calculated MRT was significantly shorter in recipients with acute allograft rejection (11.32 [0.31] hours vs 11.52 [0.28] hours; P = 0.02), and, like T(max), it was an independent risk factor for acute rejection in a multivariate logistic regression model (odds ratio, 0.092 [95% CI, 0.014-0.629]; P = 0.01). Analyzing the impact of demographic, transplantation-related, and biochemical variables on MRT, we found that increasing serum albumin and hematocrit concentrations were associated with a prolonged MRT (P < 0.05). Shorter T(max) and shorter calculated MRT were associated with a higher incidence of early acute graft rejection. These findings suggest that a shorter transit time of tacrolimus in certain tissue compartments, rather than failure to obtain a maximum absolute tacrolimus blood concentration, might lead to inadequate immunosuppression early after transplantation.
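
The noncompartmental quantities at issue, AUC, AUMC, MRT = AUMC/AUC, and Tmax, can be computed from a sampled concentration-time curve as sketched below; the tacrolimus profile is invented.

```python
# AUC and AUMC by the trapezoidal rule, MRT = AUMC/AUC, and Tmax read off
# the sampled curve. The concentration-time profile below is invented.
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # h
c = np.array([2.0, 14.0, 18.0, 12.0, 7.0, 4.0, 2.5])  # ng/mL

auc = np.trapz(c, t)       # area under the concentration-time curve
aumc = np.trapz(t * c, t)  # area under the first-moment curve
print(f"Tmax = {t[c.argmax()]:.1f} h, AUC = {auc:.1f} ng*h/mL, MRT = {aumc / auc:.2f} h")
```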

  20. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  1. Time required to achieve maximum concentration of amikacin in synovial fluid of the distal interphalangeal joint after intravenous regional limb perfusion in horses.

    Science.gov (United States)

    Kilcoyne, Isabelle; Nieto, Jorge E; Knych, Heather K; Dechant, Julie E

    2018-03-01

    OBJECTIVE To determine the maximum concentration (Cmax) of amikacin and time to Cmax (Tmax) in the distal interphalangeal (DIP) joint in horses after IV regional limb perfusion (IVRLP) by use of the cephalic vein. ANIMALS 9 adult horses. PROCEDURES Horses were sedated and restrained in a standing position and then subjected to IVRLP (2 g of amikacin sulfate diluted to 60 mL with saline [0.9% NaCl] solution) by use of the cephalic vein. A pneumatic tourniquet was placed 10 cm proximal to the accessory carpal bone. Perfusate was instilled with a peristaltic pump over a 3-minute period. Synovial fluid was collected from the DIP joint 5, 10, 15, 20, 25, and 30 minutes after IVRLP; the tourniquet was removed after the 20-minute sample was collected. Blood samples were collected from the jugular vein 5, 10, 15, 19, 21, 25, and 30 minutes after IVRLP. Amikacin was quantified with a fluorescence polarization immunoassay. Median Cmax of amikacin and Tmax in the DIP joint were determined. RESULTS 2 horses were excluded because an insufficient volume of synovial fluid was collected. Median Cmax for the DIP joint was 600 μg/mL (range, 37 to 2,420 μg/mL). Median Tmax for the DIP joint was 15 minutes. CONCLUSIONS AND CLINICAL RELEVANCE Tmax of amikacin was 15 minutes after IVRLP in horses and Cmax did not increase > 15 minutes after IVRLP despite maintenance of the tourniquet. Application of a tourniquet for 15 minutes should be sufficient for completion of IVRLP when attempting to achieve an adequate concentration of amikacin in the synovial fluid of the DIP joint.
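
Reading Cmax and Tmax off the sampling schedule is a one-liner; the synovial amikacin concentrations below are illustrative, not study data.

```python
# Cmax and Tmax from a sampled synovial-fluid schedule (minutes -> ug/mL).
# The concentrations are illustrative placeholders.
samples = {5: 120.0, 10: 410.0, 15: 600.0, 20: 560.0, 25: 380.0, 30: 240.0}

tmax, cmax = max(samples.items(), key=lambda kv: kv[1])
print(f"Cmax = {cmax:.0f} ug/mL at Tmax = {tmax} min")
```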

  2. Applying tracer techniques to NPP liquid effluents for estimating the maximum concentration of soluble pollutants in a man-made canal

    International Nuclear Information System (INIS)

    Varlam, Carmen; Stefanescu, Ioan; Varlam, Mihai; Raceanu, Mircea; Enache, Adrian; Faurescu, Ionut; Patrascu, Vasile; Bucur, Cristina

    2006-01-01

    …October 2002. We established the tritium level and the tritium concentrations significant for the edge and the tail of the tritiated wastewater evacuations. We obtained a unit-peak-attenuation (UPA) curve as related to different mixing times, using three locations in which we measured tracer-response curves. The UPA curve, along with the time-of-travel curves, provides a ready means of predicting the maximum soluble contaminant levels that could be experienced at any location in the investigated area. (authors)

  3. Maximum a posteriori Bayesian estimation of mycophenolic Acid area under the concentration-time curve: is this clinically useful for dosage prediction yet?

    Science.gov (United States)

    Staatz, Christine E; Tett, Susan E

    2011-12-01

    This review seeks to summarize the available data about Bayesian estimation of the area under the plasma concentration-time curve (AUC) and dosage prediction for mycophenolic acid (MPA), and to evaluate whether sufficient evidence is available for routine use of Bayesian dosage prediction in clinical practice. A literature search identified 14 studies that assessed the predictive performance of maximum a posteriori Bayesian estimation of MPA AUC and one report that retrospectively evaluated how closely dosage recommendations based on Bayesian forecasting achieved targeted MPA exposure. Studies to date have mostly been undertaken in renal transplant recipients, with limited investigation in patients treated with MPA for autoimmune disease or haematopoietic stem cell transplantation. All of these studies have involved use of the mycophenolate mofetil (MMF) formulation of MPA, rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation. Bias associated with estimation of MPA AUC using Bayesian forecasting was generally less than 10%. However, some imprecision was evident, with values ranging from 4% to 34% (based on estimation involving two or more concentration measurements). Evaluation of whether MPA dosing decisions based on Bayesian forecasting (by the free website service https://pharmaco.chu-limoges.fr) achieved target drug exposure has only been undertaken once. When MMF dosage recommendations were applied by clinicians, a higher proportion (72-80%) of subsequent estimated MPA AUC values were within the 30-60 mg · h/L target range, compared with when dosage recommendations were not followed (only 39-57% within target range). Such findings provide evidence that Bayesian dosage prediction is clinically useful for achieving target MPA AUC. This study, however, was retrospective and focussed only on adult renal transplant recipients. Furthermore, in this study, Bayesian-generated AUC estimations and dosage predictions were not compared …
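
The predictive-performance metrics summarized above are commonly reported as mean percentage error (bias) and root-mean-square percentage error (imprecision); the sketch below computes both for fabricated pairs of reference and Bayesian AUC estimates.

```python
# Bias and imprecision of Bayesian AUC estimates versus a reference
# full-profile AUC. The paired values are fabricated for illustration.
import numpy as np

auc_ref = np.array([42.0, 55.0, 38.0, 61.0, 47.0])    # mg*h/L, full sampling
auc_bayes = np.array([45.0, 52.0, 41.0, 57.0, 50.0])  # mg*h/L, Bayesian estimate

pct_err = 100.0 * (auc_bayes - auc_ref) / auc_ref
bias = pct_err.mean()                        # mean percentage error
imprecision = np.sqrt(np.mean(pct_err**2))   # RMS percentage error
print(f"bias = {bias:+.1f}%, imprecision = {imprecision:.1f}%")
```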

  4. MAK and BAT values list 2014. Maximum permissible concentrations at the place of work and biological tolerance values for working materials; MAK- und BAT-Werte-Liste 2014. Maximale Arbeitsplatzkonzentrationen und Biologische Arbeitsstofftoleranzwerte

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-11-01

    The book on the MAK (maximum permissible concentrations at the place of work) and BAT (biological tolerance values for working materials) value list 2014 includes the following chapters: (a) Maximum permissible concentrations at the place of work: definition, application and determination of MAK values, list of materials; carcinogenic working materials, sensitizing working materials, aerosols, limitation of exposure peaks, skin resorption, MAK values during pregnancy, germ cell mutagens, specific working materials; (b) Biological tolerance values for working materials: definition and application of BAT values, list of materials, carcinogenic working materials, biological guide values, biological working material reference values.

  5. MAK and BAT values list 2013. Maximum permissible concentrations at the place of work and biological tolerance values for working materials; MAK- und BAT-Werte-Liste 2013. Maximale Arbeitsplatzkonzentrationen und Biologische Arbeitsstofftoleranzwerte

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-01

    The book on the MAK (maximum permissible concentrations at the place of work) and BAT (biological tolerance values for working materials) value list 2013 includes the following chapters: (a) Maximum permissible concentrations at the place of work: definition, application and determination of MAK values, list of materials; carcinogenic working materials, sensitizing working materials, aerosols, limitation of exposure peaks, skin resorption, MAK values during pregnancy, germ cell mutagens, specific working materials; (b) Biological tolerance values for working materials: definition and application of BAT values, list of materials, carcinogenic working materials, biological guide values, biological working material reference values.

  6. MAK and BAT values list 2017. Maximum permissible concentrations at the place of work and biological tolerance values for working materials; MAK- und BAT-Werte-Liste 2017. Maximale Arbeitsplatzkonzentrationen und Biologische Arbeitsstofftoleranzwerte

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2017-08-01

    The MAK and BAT values list 2017 includes the maximum permissible concentrations at the place of work and biological tolerance values for working materials. The following working materials are covered: carcinogenic working materials, sensitizing materials and aerosols. The report discusses the restriction of exposure peaks, skin resorption, MAK (maximum working place concentration) values during pregnancy, germ cell mutagens and specific working materials. Importance and application of BAT (biological working material tolerance) values, list of materials, carcinogens, biological guide values and reference values are also included.

  7. MAK and BAT values list 2015. Maximum permissible concentrations at the place of work and biological tolerance values for working materials; MAK- und BAT-Werte-Liste 2015. Maximale Arbeitsplatzkonzentrationen und Biologische Arbeitsstofftoleranzwerte

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2015-11-01

    The book on the MAK (maximum permissible concentrations at the place of work) and BAT (biological tolerance values for working materials) value list 2015 includes the following chapters: (a) Maximum permissible concentrations at the place of work: definition, application and determination of MAK values, list of materials; carcinogenic working materials, sensitizing working materials, aerosols, limitation of exposure peaks, skin resorption, MAK values during pregnancy, germ cell mutagens, specific working materials; (b) Biological tolerance values for working materials: definition and application of BAT values, list of materials, carcinogenic working materials, biological guide values, biological working material reference values.

  8. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of the one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
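
A toy simulation makes the size of this effect concrete: for log-normally distributed abundances, the linear mean and the exponential of the mean logarithm differ systematically, as below.

```python
# Linear versus logarithmic averaging of variable abundances: the mean of the
# values and exp(mean(log(values))) can differ by ten percent or more.
import numpy as np

rng = np.random.default_rng(0)
abundances = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)

linear_mean = abundances.mean()
log_mean = np.exp(np.log(abundances).mean())
print(f"linear average = {linear_mean:.3f}")
print(f"logarithmic average = {log_mean:.3f}")
print(f"relative difference = {100 * (linear_mean - log_mean) / linear_mean:.1f}%")
```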

  9. Method of estimating maximum VOC concentration in void volume of vented waste drums using limited sampling data: Application in transuranic waste drums

    International Nuclear Information System (INIS)

    Liekhus, K.J.; Connolly, M.J.

    1995-01-01

    A test program has been conducted at the Idaho National Engineering Laboratory to demonstrate that the concentration of volatile organic compounds (VOCs) within the innermost layer of confinement in a vented waste drum can be estimated using a model incorporating diffusion and permeation transport principles as well as limited waste drum sampling data. The model consists of a series of material balance equations describing steady-state VOC transport from each distinct void volume in the drum. The primary model input is the measured drum headspace VOC concentration. Model parameters are determined or estimated based on available process knowledge. The model effectiveness in estimating VOC concentration in the headspace of the innermost layer of confinement was examined for vented waste drums containing different waste types and configurations. This paper summarizes the experimental measurements and model predictions in vented transuranic waste drums containing solidified sludges and solid waste
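
A steady-state caricature of such a series of material balances treats each confinement layer as a transport resistance, so a constant generation rate produces a concentration step per layer; the generation rate and layer conductances below are hypothetical, and this is not the paper's model itself.

```python
# Steady-state sketch: a constant VOC generation rate escapes outward through
# confinement layers in series; each layer adds a concentration step equal to
# generation rate divided by that layer's conductance. All values hypothetical.
gen_rate = 2.0e-6        # mol/s of VOC generated inside innermost confinement
conductance = [5.0e-5, 8.0e-5, 2.0e-4]  # mol/s per (mol/m^3), inner -> outer
c_drum_headspace = 0.04  # mol/m^3, the measured model input

c = c_drum_headspace
for k in reversed(conductance):  # walk inward, adding each layer's step
    c += gen_rate / k
print(f"estimated innermost-layer concentration: {c:.3f} mol/m^3")
```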

  10. Experimental study, in rat wistar, of cadmium distribution and elimination as a function of administration route. Cadmium 109 maximum permissible concentration

    International Nuclear Information System (INIS)

    Valero, Marc.

    1979-01-01

    The absorption and elimination of cadmium were investigated in Wistar rats after oral administration or after inhalation. Before studying gastro-intestinal absorption, it appeared necessary to determine the acute toxicity of orally administered cadmium. The distribution of cadmium within organs was determined following single or multiple oral doses, and we specifically studied the retention of an ingested Cd dose after several weeks of treatment with Cd-acetate. Pulmonary and gastro-intestinal absorption of cadmium after inhalation of Cd microparticles were also studied. Data obtained from these studies on rats and extrapolated to man were used to calculate the maximum permissible concentration (M.P.C.) of Cd-109 in water and in air.

  11. Concentrations and uncertainties of stratospheric trace species inferred from limb infrared monitor of the stratosphere data. I - Methodology and application to OH and HO2. II - Monthly averaged OH, HO2, H2O2, and HO2NO2

    Science.gov (United States)

    Kaye, J. A.; Jackman, C. H.

    1986-01-01

    Difficulties arise in connection with the verification of multidimensional chemical models of the stratosphere. The present study shows that LIMS data, together with a photochemical equilibrium model, may be used to infer concentrations of a variety of zonally averaged trace Ox, OHx, and NOx species over much of the stratosphere. In the lower stratosphere, where the photochemical equilibrium assumption for HOx species breaks down, inferred concentrations should still be accurate to about a factor of 2 for OH and 2.5 for HO2. The algebraic nature of the considered model makes it possible to easily see, to first order, the effect of variation of any model input parameter or its uncertainty on the inferred concentration of the HOx species and their uncertainties.

  12. Maximum permissible concentrations of uranium in air

    CERN Document Server

    Adams, N

    1973-01-01

    The retention of uranium by bone and kidney has been re-evaluated taking account of recently published data for a man who had been occupationally exposed to natural uranium aerosols and for adults who had ingested uranium at the normal dietary levels. For life-time occupational exposure to uranium aerosols the new retention functions yield a greater retention in bone and a smaller retention in kidney than the earlier ones, which were based on acute intakes of uranium by terminal patients. Hence bone replaces kidney as the critical organ. The (MPC)a for uranium 238 on radiological considerations using the current (1959) ICRP lung model for the new retention functions is slightly smaller than for earlier functions, but the (MPC)a determined by chemical toxicity remains the most restrictive.

  13. MAK- and BAT values list 2003. Maximum permissible concentrations at the place of work and biological tolerance values for working materials; MAK- und BAT-Werte-Liste 2003. Maximale Arbeitsplatzkonzentrationen und Biologische Arbeitsstofftoleranzwerte

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-07-01

    The importance, application and derivation of the maximum workplace concentrations (MAK values) are explained. A materials list contains the presently valid MAK values, supplemented by a list of materials for which no such values have yet been determined. Furthermore, there is a list of working materials clearly identified as carcinogenic, and of working materials with a sensitizing effect; aerosols and some specific working materials are discussed. Finally, the importance and application of the biological tolerance values (BAT values) are explained, supplemented by a materials list. (orig.)

  14. Evaluation of the external radiation exposure dosimetry and calculation of maximum permissible concentration values for airborne materials containing 18F, 15O, 13N, 11C and 133Xe

    International Nuclear Information System (INIS)

    Piltingsrud, H.V.; Gels, G.L.

    1985-01-01

    To better understand the dose equivalent (D.E.) rates produced by airborne releases of gaseous positron-emitting radionuclides under various conditions of cloud size, a study of the external radiation exposure dosimetry of these radionuclides, as well as negatron, gamma and x-ray emitting 133Xe, was undertaken. This included a calculation of the contributions to D.E. as a function of cloud radii, at tissue depths of 0.07 mm (skin), 3 mm (lens of eye) and 10 mm (whole body) from both the particulate and photon radiations emitted by these radionuclides. Estimates of maximum permissible concentration (MPC) values were also calculated based on the calculated D.E. rates and current regulations for personnel radiation protection (CFR84). Three continuous air monitors, designed for use with 133Xe, were evaluated for applications in monitoring air concentrations of the selected positron emitters. The results indicate that for a given radionuclide and for a cloud greater than a certain radius, personnel radiation dosimeters must respond acceptably to only the photon radiations emitted by the radionuclide to provide acceptable personnel dosimetry. For clouds under that radius, personnel radiation dosimeters must also respond acceptably to the positron or negatron radiations to provide acceptable personnel dosimetry. It was found that two out of the three air concentration monitors may be useful for monitoring air concentrations of the selected positron emitters

  15. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  16. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA (Ramsay, 1997) to functional maximum autocorrelation factors (MAF) (Switzer, 1985; Larsen, 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between … Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially varying data; MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects.

  17. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  18. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  19. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  20. Critical analysis of the maximum non inhibitory concentration (MNIC) method in quantifying sub-lethal injury in Saccharomyces cerevisiae cells exposed to either thermal or pulsed electric field treatments.

    Science.gov (United States)

    Kethireddy, V; Oey, I; Jowett, Tim; Bremer, P

    2016-09-16

    Sub-lethal injury within a microbial population, due to processing treatments or environmental stress, is often assessed as the difference in the number of cells recovered on non-selective media compared to numbers recovered on a "selective medium" containing a predetermined maximum non-inhibitory concentration (MNIC) of a selective agent. However, as knowledge of cell metabolic responses to injury, population diversity and dynamics has increased, the rationale behind the conventional approach of quantifying sub-lethal injury must be scrutinized further. This study reassessed the methodology used to quantify sub-lethal injury for Saccharomyces cerevisiae cells (≈ 4.75 log CFU/mL) exposed to either a mild thermal (45 °C for 0, 10 and 20 min) or a mild pulsed electric field treatment (field strengths of 8.0-9.0 kV/cm and energy levels of 8, 14 and 21 kJ/kg). Treated cells were plated onto either Yeast Malt agar (YM) or YM containing NaCl as a selective agent at 5-15% in 1% increments. The impact of sub-lethal stress due to initial processing, the stress due to selective agents in the plating media, and the subsequent variation of inhibition following the treatments was assessed based on the CFU count (cell numbers). ANOVA and a generalised least squares model indicated significant effects of media, treatments, and their interactions (P<0.05) on cell numbers. It was shown that the concentration of the selective agent used dictated the extent of sub-lethal injury recorded, owing to the interaction effects of the selective component (NaCl) in the recovery media. Our findings highlight a potential common misunderstanding of how culture conditions impact sub-lethal injury. Interestingly, for S. cerevisiae the number of cells recovered at different NaCl concentrations in the media appears to provide valuable information about the mode of injury, the comparative efficacy of different processing regimes and the inherent degree of resistance within a population. …
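
The conventional injury calculation the study scrutinizes is the percentage difference between counts on the two media; the sketch below applies it to illustrative log CFU/mL values.

```python
# Percent sub-lethal injury from counts on non-selective versus selective
# (NaCl-amended) plates. The counts are illustrative, not study data.
log_nonselective = 4.70  # log CFU/mL on YM agar
log_selective = 4.20     # log CFU/mL on YM + NaCl at the chosen MNIC

n_ns = 10 ** log_nonselective
n_sel = 10 ** log_selective
injury_pct = 100.0 * (n_ns - n_sel) / n_ns
print(f"sub-lethal injury = {injury_pct:.1f}%")  # ~68% for these counts
```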

  1. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the two common approaches are approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
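
The barycenter approach the paper compares against can be sketched as follows: average sign-aligned unit quaternions and renormalize; the quaternion convention (w, x, y, z) and the sample rotations are assumptions for illustration.

```python
# Barycenter-style rotation averaging: average unit quaternions (sign-aligned
# to one hemisphere) and project back to the unit sphere. Convention: (w,x,y,z).
import numpy as np

def average_quaternions(quats):
    q = np.array(quats, dtype=float)
    q /= np.linalg.norm(q, axis=1, keepdims=True)  # ensure unit quaternions
    q[q @ q[0] < 0] *= -1.0                         # fix double-cover sign ambiguity
    mean = q.mean(axis=0)
    return mean / np.linalg.norm(mean)              # renormalize the barycenter

rots = [(0.99, 0.0, 0.1, 0.0), (0.98, 0.05, 0.1, 0.0), (-0.99, 0.0, -0.12, 0.0)]
print(average_quaternions(rots))
```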

  2. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  3. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  4. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. — Tanvi Jain, Averaging operations on matrices …

  5. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  6. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the two common approaches are approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  7. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises of a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over" . In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.

  8. Prevention of coronary and stroke events with atorvastatin in hypertensive patients who have average or lower-than-average cholesterol concentrations, in the Anglo-Scandinavian Cardiac Outcomes Trial--Lipid Lowering Arm (ASCOT-LLA): a multicentre randomised controlled trial

    DEFF Research Database (Denmark)

    Sever, Peter S; Dahlöf, Björn; Poulter, Neil R

    2003-01-01

    The lowering of cholesterol concentrations in individuals at high risk of cardiovascular disease improves outcome. No study, however, has assessed benefits of cholesterol lowering in the primary prevention of coronary heart disease (CHD) in hypertensive patients who are not conventionally deemed ...

  9. Prevention of coronary and stroke events with atorvastatin in hypertensive patients who have average or lower-than-average cholesterol concentrations, in the Anglo-Scandinavian Cardiac Outcomes Trial--Lipid Lowering Arm (ASCOT-LLA): a multicentre randomised controlled trial

    DEFF Research Database (Denmark)

    Sever, Peter S; Dahlöf, Björn; Poulter, Neil R

    2004-01-01

    The lowering of cholesterol concentrations in individuals at high risk of cardiovascular disease improves outcome. No study, however, has assessed benefits of cholesterol lowering in the primary prevention of coronary heart disease (CHD) in hypertensive patients who are not conventionally deemed ...

  10. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, costs less, and avoids recording and analysis of data in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities are subjected during transportation on trucks.

  11. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, beyond the neutron-drip line, to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  12. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  13. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  14. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  15. Analysis of a linear solar concentrator with stationary reflector and movable focus for medium-temperature applications; Analisis de un concetrador solar lineal con reflector estacionario y foco movil para aplicaciones de media temperatura

    Energy Technology Data Exchange (ETDEWEB)

    Pujol, R.; Moia, A.; Martinez, V.

    2008-07-01

    Three different geometries of a fixed solar mirror concentrator with tracking absorber have been analyzed for medium-temperature applications: FSMC with flat mirrors, FSMC with parabolic mirrors, and a one-parabolic-mirror design (OPMSC). These designs track the sun by moving the receiver around a static reflector in a circular path. A forward ray-tracing procedure was implemented by the authors to analyze the influence of the collector parameters on optical efficiency. Various combinations of D/W ratios and geometric concentration ratios C were studied. The analysis showed that the efficiency increases as D/W increases. Annual efficiencies of 40% can be reached, compared with the 35% estimated for commercial evacuated tubes at 120 °C. (Author)

  16. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed
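
The headline quantity reduces to charge per pulse times beam energy times repetition rate; the sketch below uses placeholder numbers, not official LCLS parameters.

```python
# Back-of-envelope maximum credible average beam power:
# charge per pulse x beam energy x repetition rate. Values are placeholders.
charge_per_pulse_c = 1.0e-9  # C, assumed maximum credible bunch charge
beam_energy_ev = 14.0e9      # eV, assumed final beam energy
rep_rate_hz = 120.0          # Hz

power_w = charge_per_pulse_c * beam_energy_ev * rep_rate_hz  # (C * V * Hz) = W
print(f"maximum credible average beam power ~ {power_w:.0f} W")
```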

  17. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  18. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation …

  19. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly; however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  20. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel, or wind generator, to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small-rating Remote Area Power Supply systems. The advantages at larger temperature variations and larger power rated systems are much higher. Other advantages include optimal sizing and system monitoring and control.
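
A minimal perturb-and-observe (hill-climbing) loop of the kind described can be sketched as below; the toy PV curve with a single maximum near 17 V is an invented placeholder.

```python
# Perturb-and-observe MPPT: nudge the operating voltage, keep the direction if
# output power rose, reverse it if power fell. The panel model is a toy.
def panel_power(v):
    """Toy PV curve with a single maximum near 17 V (illustrative only)."""
    return max(0.0, v * (3.0 - 0.09 * (v - 17.0) ** 2 / 17.0))

v, step, p_prev = 12.0, 0.25, 0.0
for _ in range(200):
    p = panel_power(v)
    if p < p_prev:
        step = -step  # power dropped: reverse the perturbation direction
    p_prev = p
    v += step
print(f"operating point ~ {v:.1f} V, {panel_power(v):.1f} W")
```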

  1. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  2. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  3. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fireweather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  4. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over

  5. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.

  6. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  7. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  8. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  9. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  10. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  11. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e., by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory, with some simplifications which make it appropriate, from the maximum-principle point of view, for this application. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  12. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
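
    For the special case of comparing a w-weighted mean with an unweighted mean, the stated relationship reduces to x_bar_w - x_bar = Cov(x, w) / w_bar, which a few lines of code can verify (a sketch with synthetic data, not the authors' demographic examples):

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.normal(10.0, 2.0, size=1000)   # variable of interest
      w = rng.uniform(0.5, 3.0, size=1000)   # weighting function

      weighted_mean = np.sum(w * x) / np.sum(w)
      plain_mean = x.mean()

      # Population covariance: mean(xw) - mean(x) * mean(w)
      cov_xw = (x * w).mean() - x.mean() * w.mean()

      # Identity: weighted mean - unweighted mean = Cov(x, w) / mean(w)
      print(weighted_mean - plain_mean, cov_xw / w.mean())   # the two agree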

  13. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  14. Maximum likelihood Bayesian averaging of airflow models in unsaturated fractured tuff using Occam and variance windows

    NARCIS (Netherlands)

    Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.

    2010-01-01

    We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power,

  15. An Idea on the Maximum Permissible Concentrations of Radioactive Materials in Sea Water

    Energy Technology Data Exchange (ETDEWEB)

    Hiyama, Yoshio [University of Tokyo (Japan)

    1960-07-01

    The author of the present paper has tried to find a relationship between the level of sea-water contamination by several radionuclides and the level of the radiation dose-rate inside the human body caused by these radionuclides of marine origin, using the ICRP recommendation and the few data on the chemical analysis of sea water and of the human body at present available to the author. However, even if the idea of this calculation were to be recognized by scientists in the fields of oceanography, public health, nutrition, radiation biology, and others, it would still be necessary to get further data on the amounts of trace elements in sea-water, in marine products, and in the human body in order to complete a table of the maximum permissible concentrations in sea-water of various radionuclides. Here the idea is merely advanced and a few examples are described. (author)

  16. Maximum Recommended Dosage of Lithium for Pregnant Women Based on a PBPK Model for Lithium Absorption

    Directory of Open Access Journals (Sweden)

    Scott Horton

    2012-01-01

    Full Text Available Treatment of bipolar disorder with lithium therapy during pregnancy is a medical challenge. Bipolar disorder is more prevalent in women and its onset is often concurrent with peak reproductive age. Treatment typically involves administration of the element lithium, which has been classified as a class D drug (legal to use during pregnancy, but may cause birth defects) and is one of only thirty known teratogenic drugs. There is no clear recommendation in the literature on the maximum acceptable dosage regimen for pregnant, bipolar women. We recommend a maximum dosage regimen based on a physiologically based pharmacokinetic (PBPK) model. The model simulates the concentration of lithium in the organs and tissues of a pregnant woman and her fetus. First, we modeled time-dependent lithium concentration profiles resulting from lithium therapy known to have caused birth defects. Next, we identified maximum and average fetal lithium concentrations during treatment. Then, we developed a lithium therapy regimen to maximize the concentration of lithium in the mother’s brain, while maintaining the fetal concentration low enough to reduce the risk of birth defects. The maximum dosage regimen suggested by the model was 400 mg lithium three times per day.
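
    For flavor only, here is a drastically simplified sketch of the kind of simulation involved: a one-compartment model with first-order absorption under the 400 mg three-times-daily regimen. The paper's PBPK model tracks many organs plus the fetus; every parameter below is an illustrative assumption, not a value from the study:

      import numpy as np

      # Hypothetical one-compartment PK sketch (NOT the paper's multi-organ PBPK model):
      # first-order absorption (ka) and elimination (ke), 400 mg three times daily.
      ka, ke = 1.0, 0.06           # 1/h, illustrative values only
      V = 40.0                     # L, apparent volume of distribution (assumed)
      dose_mg, interval_h = 400.0, 8.0

      t = np.linspace(0.0, 72.0, 721)
      conc = np.zeros_like(t)
      for k in range(int(t[-1] // interval_h) + 1):
          td = t - k * interval_h
          mask = td > 0
          # Bateman equation for a single oral dose
          conc[mask] += (dose_mg / V) * ka / (ka - ke) * (
              np.exp(-ke * td[mask]) - np.exp(-ka * td[mask]))

      print(f"peak ~{conc.max():.1f} mg/L, trough ~{conc[-1]:.1f} mg/L")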

  17. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

      We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  18. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  19. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  20. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982, can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007 can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  1. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables

  2. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  3. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  4. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  5. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10^-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3 × 10^-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl^- and SO4^2-) and cations (Na^+, Mg^2+, Ca^2+, and K^+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4^2-/Cl^- and Mg^2+/Na^+, and 0.4% for Ca^2+/Na^+ and K^+/Na^+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl^-, and used to calculate HCO3^- and CO3^2-. Apparent partial molar densities in seawater were
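
    Conceptually, the density-to-salinity step is an inversion of the seawater equation of state. Below is a sketch using the TEOS-10 gsw package and a root finder, assuming standard seawater composition, which is precisely the assumption the authors correct for with their ion measurements:

      from scipy.optimize import brentq
      import gsw   # TEOS-10 Gibbs SeaWater toolbox

      def salinity_from_density(rho_measured, t_insitu=25.0, p_dbar=0.0):
          """Invert the equation of state: find the Absolute Salinity (g/kg)
          whose density matches the measured porewater density (kg/m^3)."""
          def mismatch(SA):
              CT = gsw.CT_from_t(SA, t_insitu, p_dbar)
              return gsw.rho(SA, CT, p_dbar) - rho_measured
          return brentq(mismatch, 1.0, 45.0)

      # Example: a density of 1023.0 kg/m^3 at 25 degC and surface pressure
      print(salinity_from_density(1023.0))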

  6. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  7. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    Graph filters, direct analogues of classical filters but intended for signals defined on graphs, are one of the cornerstones of the field of signal processing on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
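
    A one-pole ARMA graph filter can be sketched as a simple recursion on a graph shift operator S; when it converges, it realizes the rational graph frequency response phi / (1 - psi * lambda). The graph and coefficients below are illustrative assumptions, not the paper's designs:

      import numpy as np

      def arma1_graph_filter(S, x, phi, psi, iters=200):
          """One-pole ARMA graph filter: y <- psi * S y + phi * x.
          For |psi| * ||S|| < 1 this converges to phi * (I - psi S)^{-1} x."""
          y = np.zeros_like(x)
          for _ in range(iters):
              y = psi * (S @ y) + phi * x
          return y

      # Tiny example on a 4-node path graph (shift = normalized adjacency, assumed)
      A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
      S = A / np.abs(np.linalg.eigvalsh(A)).max()    # spectral norm 1
      x = np.array([1.0, 0.0, 0.0, 0.0])
      y = arma1_graph_filter(S, x, phi=1.0, psi=0.5)
      print(np.allclose(y, np.linalg.solve(np.eye(4) - 0.5 * S, x)))   # True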

  8. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays

    International Nuclear Information System (INIS)

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contaminations on qualitative or semi-quantitative bases. The aim was to use the observed effects of two ecotoxicological assays for estimating the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests MaxEnt is a valuable method to build a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements, in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand. - Highlights: • Ecotoxicological assays show significant benefits for detecting on-site contaminations. • MaxEnt rebuilds a qualitative link between concentrations and ecotoxicological assays. • MaxEnt shows a similar pattern when compared with the concentration map from groundwater. • MaxEnt is a valuable method especially when a quantitative relation is not at hand. - A Maximum Entropy method to rebuild qualitative relationships between benzene groundwater concentrations and their ecotoxicological effect.

  9. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^−6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^−8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  10. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those obtained from a Bayesian approach. We show that the GME method is efficient and is computationally fast
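
    A much-reduced cousin of the idea, entropy-regularized least squares for non-negative concentrations with the cracking patterns assumed known, can be sketched as follows. This is not the paper's GME formulation, which also treats the cracking and noise probabilities as unknowns; all data here are synthetic:

      import numpy as np
      from scipy.optimize import minimize

      # Recover non-negative concentrations c from a mass spectrum b = A c + noise,
      # where the columns of A are (assumed known) cracking patterns.
      rng = np.random.default_rng(2)
      A = rng.random((12, 3))              # 12 m/z channels, 3 candidate molecules
      c_true = np.array([0.6, 0.3, 0.1])
      b = A @ c_true + rng.normal(0, 0.01, size=12)

      def objective(c, alpha=1e-3):
          p = c / c.sum()
          entropy = -np.sum(p * np.log(p + 1e-12))
          return 0.5 * np.sum((A @ c - b) ** 2) - alpha * entropy

      res = minimize(objective, x0=np.full(3, 1.0 / 3.0),
                     bounds=[(1e-9, None)] * 3, method="L-BFGS-B")
      print(res.x)   # close to c_true; alpha trades data fit against entropy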

  11. Estimation of Freely-Dissolved Concentrations of Polychlorinated Biphenyls, 2,3,7,8-Substituted Congeners and Homologs of Polychlorinated dibenzo-p-dioxins and Dibenzofurans in Water for Development of Total Maximum Daily Loadings for the Bluestone River Watershed, Virginia and West Virginia

    Science.gov (United States)

    Gale, Robert W.

    2007-01-01

    The Commonwealth of Virginia Department of Environmental Quality, working closely with the State of West Virginia Department of Environmental Protection and the U.S. Environmental Protection Agency, is undertaking a polychlorinated biphenyl source assessment study for the Bluestone River watershed. The study area extends from the Bluefield area of Virginia and West Virginia, targets the Bluestone River and tributaries suspected of contributing to polychlorinated biphenyl, polychlorinated dibenzo-p-dioxin and dibenzofuran contamination, and includes sites near confluences of Big Branch, Brush Fork, and Beaver Pond Creek. The objectives of this study were to gather information about the concentrations, patterns, and distribution of these contaminants at specific study sites to expand current knowledge about polychlorinated biphenyl impacts and to identify potential new sources of contamination. Semipermeable membrane devices were used to integratively accumulate the dissolved fraction of the contaminants at each site. Performance reference compounds were added prior to deployment and used to determine site-specific sampling rates, enabling estimations of time-weighted average water concentrations during the deployed period. Minimum estimated concentrations of polychlorinated biphenyl congeners in water were about 1 picogram per liter per congener, and total concentrations at study sites ranged from 130 to 18,000 picograms per liter. The lowest concentration was 130 picograms per liter, about threefold greater than total hypothetical concentrations from background levels in field blanks. Polychlorinated biphenyl concentrations in water fell into three groups of sites: low (130-350 picograms per liter), medium (640-3,500 picograms per liter), and high (11,000-18,000 picograms per liter). Concentrations at the high sites, Beacon Cave and Beaverpond Branch at the Resurgence, were about four- to sixfold higher than concentrations estimated for the medium group of sites

  12. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)

  13. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μsec and a memory capacity of 256 x 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
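
    The quoted 36 dB figure is consistent with averaging the maximum 2^12 = 4096 sweeps, since the amplitude signal-to-noise ratio of a coherent signal in uncorrelated noise grows as the square root of the number of sweeps. A quick check:

      import math

      N = 2 ** 12                                    # maximum number of sweeps
      snr_gain_db = 20 * math.log10(math.sqrt(N))    # = 10 * log10(N)
      print(f"{snr_gain_db:.1f} dB")                 # 36.1 dB, matching the quoted figure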

  14. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
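
    The basic device is easy to illustrate on a toy Robbins-Monro recursion: run the stochastic approximation update and report the average of the trajectory rather than the last iterate. A minimal sketch with an assumed gain sequence, deliberately ignoring the SAMCMC setting's technical conditions:

      import numpy as np

      # Trajectory (Polyak-Ruppert) averaging for a toy Robbins-Monro recursion:
      # estimate the root theta* of h(theta) = E[xi] - theta from noisy draws xi.
      rng = np.random.default_rng(3)
      true_mean = 2.5
      theta, running_sum = 0.0, 0.0
      n_iters = 10_000
      for n in range(1, n_iters + 1):
          xi = rng.normal(true_mean, 1.0)
          gamma = n ** -0.7                 # slowly decaying gain sequence
          theta += gamma * (xi - theta)     # stochastic approximation update
          running_sum += theta

      theta_bar = running_sum / n_iters     # trajectory-averaged estimator
      print(theta, theta_bar)               # averaging reduces the variance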

  15. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one as t → ∞ and depends critically on n: at n < 0.26 the solution for the scale factor lies above the Friedmannian one, and below it at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  16. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  17. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.
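
    Conventional TDA, the baseline that FTDA improves upon, is just segment-and-average when the period is an exact integer number of samples (the case where no period cutting error arises); a minimal sketch:

      import numpy as np

      def time_domain_average(signal, period_samples):
          """Classical TDA: cut the signal into whole periods and average them,
          attenuating components not synchronous with the chosen period."""
          n_periods = len(signal) // period_samples
          segments = signal[:n_periods * period_samples].reshape(n_periods, period_samples)
          return segments.mean(axis=0)

      # Periodic component (period 100 samples) buried in heavy noise
      rng = np.random.default_rng(4)
      t = np.arange(100_000)
      x = np.sin(2 * np.pi * t / 100) + rng.normal(0, 2.0, t.size)
      avg = time_domain_average(x, 100)
      print(np.round(avg.max(), 2))   # close to 1.0: the sine is recovered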

  18. maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings by appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with central reflector, dealing with spatial distributions of fuel elements which would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements. The weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Thus the problem of determining the maximum neutron flux is a variational problem which is beyond the possibilities of classical variational calculation. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself.

  19. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  20. 40 CFR 1045.140 - What is my engine's maximum engine power?

    Science.gov (United States)

    2010-07-01

    ...) Maximum engine power for an engine family is generally the weighted average value of maximum engine power... engine family's maximum engine power apply in the following circumstances: (1) For outboard or personal... value for maximum engine power from all the different configurations within the engine family to...

  1. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  2. Long-term variation of outdoor radon equilibrium equivalent concentration

    Energy Technology Data Exchange (ETDEWEB)

    Hoetzl, H. [GSF-Forschungszentrum fuer Umwelt und Gesundheit, Inst. fuer Strahlenschutz, Oberschleissheim (Germany); Winkler, R. [GSF-Forschungszentrum fuer Umwelt und Gesundheit, Inst. fuer Strahlenschutz, Oberschleissheim (Germany)

    1994-10-01

    Long-term variation of outdoor radon equilibrium equivalent concentration was investigated from 1982 to 1992 at a semi-natural location 10 km north of Munich, southern Germany. For this period the continuous measurement yielded a long-term average of 8.6 Bq·m^-3 (arithmetic mean) and 6.9 Bq·m^-3 (geometric mean), from which an average annual effective dose of 0.14 mSv due to outdoor radon can be derived. A long-term trend of the radon concentration was not detectable over the whole period of observation. However, by time series analysis, a long-term cyclic pattern was identified with two maxima (1984-1986, 1989-1991) and two minima (1982-1983, 1987-1988). The seasonal pattern is characterized by an autumn maximum and an early summer minimum. On average, the seasonal maximum in October was found to be higher by a factor of 2 than the June minimum. The diurnal variation of the radon concentration shows a maximum in the early morning and a minimum in the afternoon. On average, this maximum is a factor of 2 higher than the minimum. In the long term a seasonal pattern was observed for diurnal variation, with an average diurnal maximum to minimum ratio of 1.5 in winter compared with 3.5 in the summer months. The radon concentration is correlated with a meteorological parameter (stagnation index) which takes into account horizontal and vertical exchange processes and the wash-out of aerosols in the lower atmosphere. (orig.)

  3. Long-term variation of outdoor radon equilibrium equivalent concentration

    International Nuclear Information System (INIS)

    Hoetzl, H.; Winkler, R.

    1994-01-01

    Long-term variation of outdoor radon equilibrium equivalent concentration was investigated from 1982 to 1992 at a semi-natural location 10 km north of Munich, southern Germany. For this period the continuous measurement yielded a long-term average of 8.6 Bq·m^-3 (arithmetic mean) and 6.9 Bq·m^-3 (geometric mean), from which an average annual effective dose of 0.14 mSv due to outdoor radon can be derived. A long-term trend of the radon concentration was not detectable over the whole period of observation. However, by time series analysis, a long-term cyclic pattern was identified with two maxima (1984-1986, 1989-1991) and two minima (1982-1983, 1987-1988). The seasonal pattern is characterized by an autumn maximum and an early summer minimum. On average, the seasonal maximum in October was found to be higher by a factor of 2 than the June minimum. The diurnal variation of the radon concentration shows a maximum in the early morning and a minimum in the afternoon. On average, this maximum is a factor of 2 higher than the minimum. In the long term a seasonal pattern was observed for diurnal variation, with an average diurnal maximum to minimum ratio of 1.5 in winter compared with 3.5 in the summer months. The radon concentration is correlated with a meteorological parameter (stagnation index) which takes into account horizontal and vertical exchange processes and the wash-out of aerosols in the lower atmosphere. (orig.)

  4. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Pramana – journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F. W. Giacobbe, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.

  5. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  6. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore

  7. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low thermal inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system.

  8. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system which appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  9. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  10. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determination of the maximum water hammer is considered one of the most important technical and economic items that engineers and designers of pumping stations and conveyance pipelines should take care of. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...

  11. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene generate the “true tree” under all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species in comparison.

  12. Preliminary study of radioactive concentration in treated sewage water

    Energy Technology Data Exchange (ETDEWEB)

    Elassaly, F M; Beal, A D.R. [Ministry of Health P.O. Box 1853 Dubai, (United Arab Emirates)

    1995-10-01

    Water from a sewage treatment plant is used, after processing, for irrigation. Two water samples and one consolidated sludge sample (waste treatment products) were taken each day for a period of months. Medical applications and research are the main sources of radioactivity such as Cr-51, Co-57, Ga-67, Se-75, Tc-99m, In-111, Au-198 and Tl-201. Measurements were carried out using an HPGe spectrometer with a one-liter Marinelli beaker. The maximum detected activity was 5.7 Bq/liter, with a daily average of 2.4 Bq/liter for water. In the second period the maximum activity was found to be 5 Bq/liter, with an average daily activity of 1.8 Bq/liter. The maximum activity recorded in the sludge during this period was 352 Bq/liter, of which 343 Bq/liter was from I-131. The average daily activity was 162 Bq/liter. From these studies the levels of radioactivity concentration were 5 Bq/liter, with an average of 2 Bq/liter, compared with the level of 10 Bq/liter set for drinking water in GCC countries. Although the sludge shows a higher activity of 352 Bq/liter, it is kept for about a year before being disposed of. The maximum level for animal fodder is 300 Bq/kg in GCC countries. These results indicate that the radioactive concentration (2 Bq/liter) in the treated waste water presents no hazard to the public and the environment. 6 figs., 4 tabs.

  13. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
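
    The evaporative-demand step lends itself to a short sketch. The following assumes the common Hargreaves formulation PET = 0.0023 Ra (Tmean + 17.8) sqrt(Tmax - Tmin), with Ra expressed as equivalent evaporation in mm/day; the study's exact calibration may differ:

    ```python
    import math

    def hargreaves_pet(t_max, t_min, ra):
        """Atmospheric evaporative demand (mm/day) via the Hargreaves model.

        t_max, t_min: monthly average max/min air temperature (deg C)
        ra: exoatmospheric clear-sky radiation as equivalent evaporation (mm/day)
        """
        t_mean = (t_max + t_min) / 2.0
        return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(max(t_max - t_min, 0.0))

    def monthly_water_balance(precip_mm, t_max, t_min, ra, days=30):
        """Water balance for one 1 km cell and one month: precipitation minus demand."""
        return precip_mm - hargreaves_pet(t_max, t_min, ra) * days

    # Illustrative inputs for one cell and one month.
    print(f"{monthly_water_balance(80.0, t_max=24.0, t_min=10.0, ra=12.5):.1f} mm")
    ```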

  14. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  15. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
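
    The back-of-envelope calculation can be reproduced with a simplified balance in which absorbed shortwave is offset by net longwave exchange and sensible heat, neglecting ground heat flux. The transfer coefficient h below is an illustrative assumption, not a value from the paper:

    ```python
    def surface_temperature(s_abs=1000.0, t_air=55.0, h=10.0, eps=0.95):
        """Solve eps*sigma*(T^4 - T_air^4) + h*(T - T_air) = S_abs for T by bisection.

        s_abs: absorbed shortwave flux (W m^-2); t_air: screen air temperature (C);
        h: bulk sensible-heat coefficient (W m^-2 K^-1), an assumed value.
        Ground heat flux is neglected, as for a dry soil of very low conductivity;
        incoming longwave is approximated by emission at air temperature.
        """
        sigma = 5.67e-8                       # Stefan-Boltzmann constant, W m^-2 K^-4
        ta = t_air + 273.15
        lo, hi = ta, ta + 200.0               # bracket in kelvin
        for _ in range(60):
            t = 0.5 * (lo + hi)
            flux = eps * sigma * (t**4 - ta**4) + h * (t - ta)
            lo, hi = (t, hi) if flux < s_abs else (lo, t)
        return 0.5 * (lo + hi) - 273.15

    print(f"{surface_temperature():.0f} C")   # ~106 C: the 90-100 C regime is plausible
    ```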

  16. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  17. Maximum permissible concentrations and negligible concentrations for antifouling substances. Irgarol 1051, dichlofluanid, ziram, chlorothalonil and TCMTB

    NARCIS (Netherlands)

    Wezel AP van; Vlaardingen P van; CSR

    2001-01-01

    In this report, maximum permissible concentrations and negligible concentrations are derived for various antifouling agents that are used as replacements for TBT, such as Irgarol 1051, dichlofluanid, ziram, chlorothalonil and TCMTB.

  18. Linear and regressive stochastic models for prediction of daily maximum ozone values at Mexico City atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Bravo, J. L [Instituto de Geofisica, UNAM, Mexico, D.F. (Mexico); Nava, M. M [Instituto Mexicano del Petroleo, Mexico, D.F. (Mexico); Gay, C [Centro de Ciencias de la Atmosfera, UNAM, Mexico, D.F. (Mexico)

    2001-07-01

    We developed a procedure to forecast, 2 or 3 hours in advance, the daily maximum of surface ozone concentrations. It involves fitting Autoregressive Integrated Moving Average (ARIMA) models to daily ozone maximum concentrations at 10 atmospheric monitoring stations in Mexico City over a one-year period. A one-day forecast is made and then adjusted with the meteorological and solar radiation information acquired during the 3 hours preceding the occurrence of the maximum value. The relative importance for forecasting of the history of the process and of meteorological conditions is evaluated. Finally, an estimate of the daily probability of exceeding a given ozone level is made.
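
    As an illustration of the approach (not the authors' exact model), a low-order ARIMA fit and one-day-ahead forecast might look as follows; the data, model order, and the Gaussian-error exceedance estimate are all assumptions:

    ```python
    import numpy as np
    from scipy.stats import norm
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic stand-in for one year of daily ozone maxima at one station (ppb).
    rng = np.random.default_rng(0)
    days = np.arange(365)
    ozone_max = 80 + 20 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 10, 365)

    # Fit a low-order model and issue a one-day-ahead forecast of the daily maximum.
    result = ARIMA(ozone_max, order=(1, 0, 1)).fit()
    forecast = float(result.forecast(steps=1)[0])
    print(f"next-day maximum ozone forecast: {forecast:.1f} ppb")

    # Daily probability of exceeding a given level, assuming Gaussian forecast errors.
    level = 110.0
    p_exceed = 1.0 - norm.cdf(level, loc=forecast, scale=result.resid.std())
    print(f"P(exceed {level:.0f} ppb) = {p_exceed:.3f}")
    ```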

  19. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.
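
    A hedged sketch of the first model's form, a linear fit of maximum cabin temperature on maximum ambient air temperature and average daily solar radiation; the observations and fitted coefficients here are invented, not the paper's:

    ```python
    import numpy as np

    # Hypothetical observations: [max ambient air temp (C), daily avg solar (W/m^2)].
    X_raw = np.array([[24.0, 180.0], [28.0, 260.0], [31.0, 300.0],
                      [33.0, 320.0], [26.0, 120.0], [35.0, 310.0]])
    cabin_max = np.array([52.0, 61.0, 68.0, 71.0, 50.0, 73.0])  # observed cabin max (C)

    # Least-squares fit of cabin_max = b0 + b1*T_air_max + b2*solar.
    X = np.column_stack([np.ones(len(X_raw)), X_raw])
    coef, *_ = np.linalg.lstsq(X, cabin_max, rcond=None)

    def predict_cabin_max(t_air_max, solar_avg):
        return coef @ np.array([1.0, t_air_max, solar_avg])

    print(f"predicted cabin maximum: {predict_cabin_max(30.0, 280.0):.1f} C")
    ```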

  20. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn triggers an output signal on one of a plurality of driver output lines n. The particular output lines selected is dependent on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.
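
    A software analogue of the peak-hold idea described above (rectify, quantize to discrete levels, and retain the highest level reached) might look like this; the number of levels and the full scale are arbitrary choices:

    ```python
    class PeakHold:
        """Software analogue of the peak-memorizing circuit: the hardware blows a
        microfuse on the highest driver line reached; here the 'memory' is simply
        the largest quantized level seen so far."""

        def __init__(self, levels=16, full_scale=10.0):
            self.step = full_scale / levels
            self.max_level = 0

        def sample(self, value):
            level = int(abs(value) / self.step)   # rectify and quantize
            self.max_level = max(self.max_level, level)

        def peak(self):
            return self.max_level * self.step

    ph = PeakHold()
    for v in [1.2, -4.7, 3.3, 8.9, 2.0]:
        ph.sample(v)
    print(ph.peak())  # 8.75, the quantized maximum of the input sequence
    ```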

  1. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c⁵/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.

  2. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  3. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  4. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  5. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log₂ k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kⁿ pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of a decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
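
    The entropy lower bound mentioned above is easy to evaluate; a minimal sketch (illustrative, not code from the chapter):

    ```python
    import math

    def entropy_lower_bound(probs, k=2):
        """Lower bound on the minimum average depth of a decision tree:
        H(p) / log2(k) for a problem over a k-valued information system."""
        h = -sum(p * math.log2(p) for p in probs if p > 0)
        return h / math.log2(k)

    # Uniform distribution over 8 cases with binary attributes: bound = 3 queries,
    # matched by a balanced tree (e.g., an optimal prefix code).
    print(entropy_lower_bound([1 / 8] * 8, k=2))  # 3.0
    ```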

  6. Maximum time-dependent space-charge limited diode currents

    Energy Technology Data Exchange (ETDEWEB)

    Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)

    2016-01-15

    Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
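
    For reference, the stationary Child-Langmuir limit against which the time-averaged current is compared can be computed directly (a textbook formula, not code from the paper):

    ```python
    import math

    def child_langmuir_current_density(voltage, gap):
        """Stationary Child-Langmuir limit for a planar 1D electron diode:
        J = (4*eps0/9) * sqrt(2e/m_e) * V^(3/2) / d^2 (SI units)."""
        eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
        e = 1.602176634e-19       # elementary charge, C
        m_e = 9.1093837015e-31    # electron mass, kg
        return (4.0 * eps0 / 9.0) * math.sqrt(2.0 * e / m_e) * voltage**1.5 / gap**2

    # A 10 kV, 1 cm planar diode, evaluated at the applied voltage.
    print(f"{child_langmuir_current_density(1e4, 1e-2):.0f} A/m^2")  # ~23300 A/m^2
    ```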

  7. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  8. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  9. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.

  10. Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific

    OpenAIRE

    Choi, Jae-Won; Cha, Yumi; Kim, Hae-Dong; Kang, Sung-Dae

    2016-01-01

    This study obtained the latitude where tropical cyclones (TCs) show maximum intensity and applied statistical change-point analysis to the time series of the annual average values. The analysis found that the latitude of TC maximum intensity has increased since 1999. To investigate the reason behind this phenomenon, the difference between the average latitude over 1999-2013 and the average over 1977-1998 was analyzed. In a difference of 500 hPa streamline between the two ...

  11. An investigation of factors influencing indoor radon concentrations

    International Nuclear Information System (INIS)

    Majborn, B.; Soerensen, A.; Nielsen, S.P.; Boetter-Jensen, L.

    1988-05-01

    Variations in indoor radon concentrations and some influencing factors have been studied during a two-year period (1986-1987) in 16 almost identical single-family houses. The annual average radon concentration in the houses varied from about 50 to about 400 Bq/m³. Variations in soil characteristics and radon concentration in soil gas could not be directly related to the variations of the average indoor radon concentrations. Most of the houses showed a "normal" seasonal variation of the radon concentration, with a maximum in the winter and a minimum in the summer. A deviating seasonal variation was found in three of the houses. Hourly data obtained in one unoccupied house during a period of 2-1/2 months showed no or only weak correlations between the indoor radon concentration and meteorological factors. However, for most of the houses, the seasonal variation of the indoor radon concentration was well correlated with the average indoor-outdoor temperature difference on a 2-month basis. It was demonstrated that the radon concentration can be strongly reduced in the Risoe houses if a district-heating duct, which is connected to all the houses, is ventilated, so that a slightly lowered pressure is maintained in the duct. 5 tabs., 24 ill. (author)

  12. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    Doczi, R.; Csikai, J.; Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.

    2004-01-01

    A method previously used for determining the average neutron flux within bulky samples has been applied to measure the hydrogen content of different samples. An analytical function is given to describe the correlation between the activity of Dy foils and the hydrogen concentration. Results obtained by the activation and thermal neutron reflection methods are compared.

  13. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
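
    For the tree case that the network algorithms extend, Fitch's algorithm admits a compact sketch; the handling of conflicting assignments at reticulate vertices described above is not included:

    ```python
    def fitch_parsimony(tree, states):
        """Fitch's small-parsimony algorithm on a rooted binary tree.

        tree: nested tuples, e.g. (("A", "B"), ("C", "D")); leaves are names.
        states: dict mapping leaf name -> observed character state.
        Returns (candidate state set at the root, parsimony score).
        """
        if isinstance(tree, str):                 # leaf
            return {states[tree]}, 0
        left, right = tree
        ls, lc = fitch_parsimony(left, states)
        rs, rc = fitch_parsimony(right, states)
        inter = ls & rs
        if inter:                                 # agreement: no extra substitution
            return inter, lc + rc
        return ls | rs, lc + rc + 1               # conflict: one substitution

    tree = (("A", "B"), ("C", "D"))
    states = {"A": "G", "B": "G", "C": "T", "D": "G"}
    root_set, score = fitch_parsimony(tree, states)
    print(root_set, score)  # {'G'} 1
    ```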

  14. Estimating total maximum daily loads with the Stochastic Empirical Loading and Dilution Model

    Science.gov (United States)

    Granato, Gregory; Jones, Susan Cheung

    2017-01-01

    The Massachusetts Department of Transportation (DOT) and the Rhode Island DOT are assessing and addressing roadway contributions to total maximum daily loads (TMDLs). Example analyses for total nitrogen, total phosphorus, suspended sediment, and total zinc in highway runoff were done by the U.S. Geological Survey in cooperation with FHWA to simulate long-term annual loads for TMDL analyses with the stochastic empirical loading and dilution model known as SELDM. Concentration statistics from 19 highway runoff monitoring sites in Massachusetts were used with precipitation statistics from 11 long-term monitoring sites to simulate long-term pavement yields (loads per unit area). Highway sites were stratified by traffic volume or surrounding land use to calculate concentration statistics for rural roads, low-volume highways, high-volume highways, and ultraurban highways. The median of the event mean concentration statistics in each traffic volume category was used to simulate annual yields from pavement for a 29- or 30-year period. Long-term average yields for total nitrogen, phosphorus, and zinc from rural roads are lower than yields from the other categories, but yields of sediment are higher than for the low-volume highways. The average yields of the selected water quality constituents from high-volume highways are 1.35 to 2.52 times the associated yields from low-volume highways. The average yields of the selected constituents from ultraurban highways are 1.52 to 3.46 times the associated yields from high-volume highways. Example simulations indicate that both concentration reduction and flow reduction by structural best management practices are crucial for reducing runoff yields.

  15. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  16. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  17. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  18. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  19. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  20. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate among various processes in studies of plume dispersion
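
    The Turner-style power-law rescaling discussed above has the general form C(t) = C_15min (t_15/t)^p; a one-line sketch with a commonly quoted exponent (the exponent is not taken from this paper):

    ```python
    def scale_concentration(c_15min, averaging_time_s, p=0.2):
        """Power-law rescaling of a 15-minute average concentration to another
        averaging time: C(t) = C_15min * (900 s / t)^p. The exponent p ~ 0.2 is
        a commonly quoted illustrative value."""
        return c_15min * (900.0 / averaging_time_s) ** p

    # A 1 h average is lower than the 15 min average of the same plume:
    print(f"{scale_concentration(100.0, 3600.0):.1f}")  # ~75.8
    ```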

  1. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which

  2. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P log P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  3. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment: ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM, § 76.11 Emissions averaging. (a) General...

  4. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade point averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  5. Pushing concentration of stationary solar concentrators to the limit.

    Science.gov (United States)

    Winston, Roland; Zhang, Weiya

    2010-04-26

    We give the theoretical limit of concentration allowed by nonimaging optics for stationary solar concentrators, after reviewing sun-earth geometry in direction cosine space. We then discuss the design principles that we follow to approach the maximum concentration, along with examples including a hollow CPC trough, a dielectric CPC trough, and a 3D dielectric stationary solar concentrator which concentrates sunlight four times (4×), eight hours per day, year-round.
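
    For reference, the standard nonimaging limits in direction-cosine space take the following form; this is a textbook result quoted here for orientation, since the record does not state it explicitly:

    \[
      C_{\max}^{\mathrm{2D}} \le \frac{n}{\sin\theta_a},
      \qquad
      C_{\max}^{\mathrm{3D}} \le \left(\frac{n}{\sin\theta_a}\right)^{2},
    \]

    where θ_a is the design acceptance half-angle and n the refractive index of the dielectric. For a stationary (non-tracking) concentrator, θ_a must cover the sun's roughly ±23.45° seasonal excursion plus the solar disc, which is what caps the achievable stationary concentration at a few suns.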

  6. Role of spatial averaging in multicellular gradient sensing.

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
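
    The covariance argument can be made explicit. Gradient sensing rests on the difference of two measured concentrations, whose variance follows the standard identity:

    \[
      \operatorname{Var}(c_1 - c_2)
      = \operatorname{Var}(c_1) + \operatorname{Var}(c_2)
      - 2\,\operatorname{Cov}(c_1, c_2).
    \]

    Transverse averaging shrinks the first two terms, but in the local excitation-global inhibition model it shrinks the covariance term as well, so the variance of the difference, and hence the gradient estimate, can get worse; the REGI mechanism described above restores the benefit of averaging.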

  7. 12 CFR 702.105 - Weighted-average life of investments.

    Science.gov (United States)

    2010-01-01

    ... investment funds. (1) For investments in registered investment companies (e.g., mutual funds) and collective investment funds, the weighted-average life is defined as the maximum weighted-average life disclosed, directly or indirectly, in the prospectus or trust instrument; (2) For investments in money market funds...

  8. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate receiver function, with the maximum entropy as the rule to determine auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of error-predicting filter, and receiver function is then estimated. During extrapolation, reflective coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside window increases the resolution of receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver function in time-domain.

  9. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  10. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distribution is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.

  11. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (voltage at maximum power, current at maximum power, and maximum power) is plotted as a function of the time of day.
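
    A numerical version of the same maximization, using an illustrative single-diode I-V model (the parameters are invented) and locating the zero of dP/dV by a dense sweep rather than symbolic differentiation:

    ```python
    import numpy as np

    def pv_current(v, i_sc=5.0, i_0=1e-9, n=1.5, v_t=0.026):
        """Single-diode cell model I(V); parameter values are illustrative only."""
        return i_sc - i_0 * (np.exp(v / (n * v_t)) - 1.0)

    v = np.linspace(0.0, 0.9, 5000)     # voltage sweep for one cell
    p = v * pv_current(v)               # P = V * I
    k = int(np.argmax(p))               # the zero of dP/dV, located numerically
    print(f"V_mp={v[k]:.3f} V  I_mp={pv_current(v[k]):.3f} A  P_max={p[k]:.3f} W")
    ```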

  12. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  13. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their "height : radius" ratio, as well as the average productivity, degree, and filling time of a horizontally ribbed tank with volume 6×10⁻² m³ as the central hole diameter of the ribs is changed. It has been shown that growth of the "height/radius" ratio in tanks with smooth inner walls up to the limiting values allows significantly increasing the tank average productivity and reducing its filling time. Growth of the H/R ratio of a tank with volume 1.0 m³ to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and minimum filling time are reached for the tank with volume 6×10⁻² m³ having a central hole diameter of the horizontal ribs of 6.4×10⁻² m.

  14. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  15. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  16. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic

  17. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, no. 304 (2006), pp. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords: tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  18. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  19. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  20. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  1. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We verify the model outcome with examples and simulation results using the NS2 simulator.
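
    In the spirit of the described method (the paper's exact iteration is not reproduced), a weighted fair share with capping at each flow's average input rate and redistribution of the unused capacity can be sketched as:

    ```python
    def wfq_average_bandwidth(link_rate, weights, input_rates, tol=1e-9):
        """Iteratively split link capacity among flows in proportion to WFQ weights,
        capping each flow at its average input rate and redistributing the excess."""
        n = len(weights)
        alloc = [0.0] * n
        active = set(range(n))
        capacity = link_rate
        while active and capacity > tol:
            w_sum = sum(weights[i] for i in active)
            next_active = set()
            used = 0.0
            for i in active:
                share = capacity * weights[i] / w_sum
                demand = input_rates[i] - alloc[i]
                if share >= demand - tol:        # flow satisfied; free the rest
                    alloc[i] = input_rates[i]
                    used += demand
                else:
                    alloc[i] += share
                    used += share
                    next_active.add(i)
            if not (active - next_active):       # no flow saturated: shares final
                break
            active = next_active
            capacity -= used                     # redistribute leftover capacity

        return alloc

    # Flow 0 is capped at 10; the rest is split 1:1 between flows 1 and 2.
    print(wfq_average_bandwidth(100.0, [1, 2, 2], [10.0, 60.0, 80.0]))  # [10, 45, 45]
    ```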

  2. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  3. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  4. Efficiency of autonomous soft nanomachines at maximum power.

    Science.gov (United States)

    Seifert, Udo

    2011-01-14

    We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.

  5. Radon concentration in The Netherlands

    International Nuclear Information System (INIS)

    Meijer, R.J. de; Put, L.W.; Veldhuizen, A.

    1986-02-01

    In 1000 dwellings, which can be assumed to be a reasonable representation of the average Dutch dwelling, time-averaged radon concentrations, radon daughter concentrations and gamma-exposure rates were determined over a year with passive dosemeters. They were also determined outdoors at about 200 measurement points. (Auth.)

  6. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
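
    A minimal sketch of the pixelwise robust-averaging core, assuming voids are marked NaN and using a simple k-sigma prune; the whole-map rejection and alignment-drift removal from the full algorithm are omitted:

    ```python
    import numpy as np

    def robust_phase_average(maps, k=3.0):
        """Pixelwise average of repeated phase maps with simple defect handling.

        maps: array (n, H, W) with np.nan marking voids. Pixels deviating by more
        than k sigma from the pixelwise median are pruned before averaging.
        """
        maps = np.asarray(maps, dtype=float)
        med = np.nanmedian(maps, axis=0)
        sig = np.nanstd(maps, axis=0)
        outlier = np.abs(maps - med) > k * np.maximum(sig, 1e-12)
        clean = np.where(outlier, np.nan, maps)
        return np.nanmean(clean, axis=0), np.nanstd(clean, axis=0)

    stack = np.random.default_rng(1).normal(0.0, 0.01, (16, 4, 4))
    stack[3, 1, 1] = 5.0            # a small-area unwrapping spike
    stack[7, 2, 2] = np.nan         # a void
    avg, std = robust_phase_average(stack)
    print(avg.shape, std.max() < 0.1)   # (4, 4) True: the spike was pruned
    ```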

  7. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak (Ū_P), the average (Ū), the effective (U_eff) or the maximum peak (U_P) tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average (Ū) or the average peak (Ū_P) voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak (k_PPV,kVp) and the average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.

  8. Xanthium strumarium L. pollen concentration in aeroplankton of Lublin in the years 2003-2005

    Directory of Open Access Journals (Sweden)

    Elżbieta Weryszko-Chmielewska

    2012-12-01

    Full Text Available Xanthium strumarium (common cocklebur) pollen grains are considered an allergenic type. During a three-year study (2003-2005) conducted using the gravimetric method at two trap sites in Lublin, daily concentrations, maximum concentrations and annual sums of pollen grains, as well as the length of the pollen seasons of this species, were compared. The pollen season of common cocklebur starts in the first or second decade of July and lasts until the third decade of September; its length is 70-80 days. The highest cocklebur pollen concentrations, amounting to 40-59 grains·cm⁻², occurred between 8 and 18 August. The maximum cocklebur pollen concentrations differed only slightly between the trap sites over the three years of study. A statistically significant correlation between the Xanthium strumarium pollen concentration and average temperature was demonstrated in only one year of the study (2004).

  9. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between the maximum permissible voltage and resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for the samples, the whole length of the CCs used in the design of an SFCL can be determined.

  10. Experimental characterization of a concentrating photovoltaic system varying the light concentration

    International Nuclear Information System (INIS)

    Renno, C.; Petito, F.; Landi, G.; Neitzert, H.C.

    2017-01-01

    Highlights: • Experimental characterization of a concentrating photovoltaic system. • Analysis of the point-focus concentrating system performances. • Photovoltaic system parameters as a function of the concentration factor. - Abstract: The concentrating photovoltaic (CPV) system represents one of the most promising solar technologies because it allows a more efficient energy conversion. When a CPV system is designed, the main parameter which has to be considered is the concentration factor, which affects both the system energy performances and its configuration. An experimental characterization of a CPV system previously realized at the University of Salerno is presented in this paper, considering several aspects related to the optical configuration, the concentration factor and the solar cell used. In particular, the parameters of an Indium Gallium Phosphide/Gallium Arsenide/Germanium triple-junction solar cell are investigated as a function of the concentration factor, determined by means of an experimental procedure that uses different optical configurations. The maximum concentration factor reached by the CPV system is 310 suns. The dependence of the cell parameters on the concentration is reported, together with an electroluminescence analysis of the Indium Gallium Phosphide/Gallium Arsenide/Germanium cell. A monitoring of the electrical power provided by the system during its operation is also presented, corresponding to different direct irradiance values. A mean power of 2.95 W with an average efficiency of 32.8% is obtained for a mean irradiance of 930 W/m²; lower values are obtained when the irradiance is highly fluctuating. The concentrating photovoltaic system electric energy output is estimated considering different concentration levels; the maximum obtained value is 23.5 W h on a sunny day at 310×. Finally, the temperature of the triple-junction solar cell is evaluated for different months in order to evaluate the potential annual thermal energy production

  11. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
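
    For reference, the classical pairwise gossip step that the paper departs from can be sketched in a few lines; the graph and values are invented. Because each pairwise update preserves the sum, every node converges to the true average on a connected graph, which is exactly the property that naive asynchronous variants may lose.

        import random

        def gossip_average(values, edges, steps=20_000, seed=0):
            rng = random.Random(seed)
            x = list(values)
            for _ in range(steps):
                i, j = rng.choice(edges)           # activate one link at random
                x[i] = x[j] = 0.5 * (x[i] + x[j])  # pairwise averaging step
            return x

        # ring of four nodes; every entry converges to the average 4.0
        print(gossip_average([1.0, 5.0, 2.0, 8.0],
                             [(0, 1), (1, 2), (2, 3), (3, 0)]))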

  12. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged description of atomic systems [1]. In a recent experiment on Fe[2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of details/averaging. We will take advantage of this feature to check the effect of averaging with comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3]. M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  13. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
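
    A single-centre simplification of this equiangular-ray averaging can be sketched as follows (the paper itself uses two ray centres and overlapping arcs, which this sketch omits); outlines are assumed to be (N, 2) arrays of digitized points.

        import numpy as np

        def radial_profile(xy, centre, n_rays=180):
            """Outline radius at n_rays equiangular directions about centre."""
            d = xy - centre
            theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
            r = np.hypot(d[:, 0], d[:, 1])
            order = np.argsort(theta)
            grid = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
            return np.interp(grid, theta[order], r[order], period=2 * np.pi)

        def average_outline(outlines, centre, n_rays=180):
            """Mean radius per ray over feet of similar length."""
            return np.mean([radial_profile(o, centre, n_rays) for o in outlines],
                           axis=0)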

  14. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  16. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaître–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity-induced illusion.

  17. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and do not permit analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  18. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor in strong oceanic turbulence is also presented.

  19. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain $\bar{B}$ obey $\bar{\Omega}^{\bar{B}}_{m} + \bar{\Omega}^{\bar{B}}_{R} + \bar{\Omega}^{\bar{B}}_{\Lambda} + \bar{\Omega}^{\bar{B}}_{Q} = 1$, where $\bar{\Omega}^{\bar{B}}_{m}$, $\bar{\Omega}^{\bar{B}}_{R}$ and $\bar{\Omega}^{\bar{B}}_{\Lambda}$ correspond to the standard Friedmannian parameters, while $\bar{\Omega}^{\bar{B}}_{Q}$ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  20. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  1. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  2. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  3. Spatio-temporal observations of the tertiary ozone maximum

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2009-07-01

    We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed us, for the first time, to obtain observational spatial and temporal distributions of night-time ozone mixing ratio in the mesosphere.

    The distributions obtained from GOMOS data have specific features, which vary from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can also be observed at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.

  4. Ten years of continual monitoring of 222Rn concentration in Bratislava atmosphere

    International Nuclear Information System (INIS)

    Holy, K.; Bosa, I.; Polaskova, A.; Boehm, R.; Ondo-Estok, D.; Bulko, M.; Hola, O.

    2003-01-01

    By continual monitoring we obtained an extensive set of radon data in the Bratislava atmosphere covering the period 1991–2000. The average annual radon activity concentrations varied from 4.1 to 7.2 Bq/m³. In the years 1996–1999 a decrease of the average annual radon concentration was observed. The average daily courses of the radon activity concentration for individual months, calculated on the basis of all data from 1991–2000, have the form of waves with a maximum in the morning hours and a minimum in the afternoon. The maximal amplitude of the daily wave was found in August (2.9 Bq/m³) and the minimal in December (0.5 Bq/m³). The average daily wave obtained as the mean of all data from the years 1991–2000 reaches its maximum between 4 and 6 a.m. and its minimum between 2 and 4 p.m. The ²²²Rn activity concentration reaches its average daily value of 5.6 Bq/m³ at about 10 a.m. and at 9 p.m. The amplitude of the average daily wave is 1.5 Bq/m³. The average annual radon course calculated on the basis of all the measured data reaches its minimum in April and its maximum in October, with a seasonal variation from 3.9 to 6.9 Bq/m³. The annual radon courses differ from each other for various periods of the day. (authors)
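
    The construction of such average daily and annual courses from a long monitoring record reduces to a grouped mean over hours and months. A minimal pandas sketch follows, using a synthetic stand-in for the measured series; the 5.6 Bq/m³ mean and the 4 a.m. maximum are taken from the abstract, everything else is invented.

        import numpy as np
        import pandas as pd

        idx = pd.date_range("1991-01-01", "2000-12-31 23:00", freq="h")
        # synthetic stand-in: mean 5.6 Bq/m3 with a maximum near 4 a.m.
        rn = pd.Series(5.6 + 1.5 * np.sin(2 * np.pi * (idx.hour.values - 22) / 24),
                       index=idx)

        daily_course = rn.groupby([rn.index.month, rn.index.hour]).mean()
        annual_course = rn.groupby(rn.index.month).mean()
        print(daily_course.loc[8])  # average daily wave for August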

  5. Forecasts of methane concentration at the outlet of the longwall with caving area - case study

    Science.gov (United States)

    Badura, Henryk; Bańka, Piotr; Musioł, Dariusz; Wesołowski, Marek

    2017-11-01

    This paper presents the characteristics of the methane hazard and the prevention measures undertaken in the N-6 longwall of seam 330/2 in the “Krupiński” coal mine. On the basis of methane concentration measurements conducted with a telemetric system, time series of the average and maximum methane concentration at the outlet of the longwall area were generated. It was ascertained that they exhibit a strong autocorrelation. Based on the series of average methane concentration, a time series of ventilation methane content was created, and the total methane content was calculated with the use of methane flow rate measurements in the demethanization system. It was ascertained that the dependence between methane concentration and output on the examined day and on the previous day is weak, and that the dependence between methane concentration and air flow rate is very weak. Dependencies between ventilation methane content, total methane content and demethanization efficiency were also investigated. Based on forecasting models [1] developed earlier by H. Badura, forecasts of the average and maximum methane concentrations have been made. The measured values of methane concentration show a high level of agreement with the forecasted ones.
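
    Since the strong autocorrelation of the series is what makes such forecasts work, even a plain AR(1) predictor illustrates the mechanism. The sketch below is not the model of [1], and the concentration values are invented.

        import numpy as np

        def ar1_forecast(series):
            """One-step-ahead forecast from the lag-1 autocorrelation."""
            x = np.asarray(series, dtype=float)
            xc = x - x.mean()
            phi = (xc[:-1] * xc[1:]).sum() / (xc[:-1] ** 2).sum()
            return x.mean() + phi * (x[-1] - x.mean())

        ch4 = [0.62, 0.66, 0.71, 0.69, 0.74, 0.73, 0.78]  # daily mean CH4, %
        print(f"next-day forecast: {ar1_forecast(ch4):.2f} %")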

  6. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  7. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  8. Concentrated Ownership

    DEFF Research Database (Denmark)

    Rose, Caspar

    2014-01-01

    This entry summarizes the main theoretical contributions and empirical findings in relation to concentrated ownership from a law and economics perspective. The various forms of concentrated ownership are described as well as analyzed from the perspective of the legal protection of investors......, especially minority shareholders. Concentrated ownership is associated with benefits and costs. Concentrated ownership may reduce agency costs by increased monitoring of top management. However, concentrated ownership may also provide dominating owners with private benefits of control....

  9. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed, particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
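
    The core computation of Bayesian model averaging is small enough to state directly: posterior model weights are proportional to each model's evidence (assuming a uniform prior over models), and predictions are averaged under those weights. The evidences and predictions below are illustrative numbers only, not taken from the paper.

        import numpy as np

        log_evidence = np.array([-10.2, -11.0, -14.5])  # one entry per model
        predictions = np.array([0.80, 0.55, 0.10])      # each model's prediction

        w = np.exp(log_evidence - log_evidence.max())   # stable exponentiation
        w /= w.sum()                                    # posterior model weights
        print(w.round(3), (w * predictions).sum())      # weights and BMA forecast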

  10. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  11. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)

  12. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  13. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  14. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  15. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  16. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies

  17. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of Physics, July 2007, pp. 31–47. [Only fragments of the abstract survive in this record: the paper presents a result which confirms – at least partially – a singularity theorem based on spatial averages; financial support under grant FIS2004-01626 is acknowledged.]

  18. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  19. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  20. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  1. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  3. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  4. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  5. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  6. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criterion for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, is stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  7. Thin-source concentration dependent diffusion

    International Nuclear Information System (INIS)

    Eng, G.

    1978-01-01

    The diffusion of Ca⁺⁺ in NaCl has been measured for various diffusion times and for the temperature range 575° to 775°C, using a thin source of ⁴⁵Ca tracer, rectangular geometry, and serial sectioning. The pre-diffusion surface concentration was approximately 3 × 10¹⁶ Ca atoms/cm², which, for an average penetration depth of 100 to 300 μm, produces a maximum (post-diffusion) impurity concentration comparable to or greater than the intrinsic cation vacancy concentration. The high-temperature function closely matches the D₀(T) function obtained from low impurity concentration experiments. The lower-temperature function, combined with the sudden failure of the D(C) = D₀(1 + [C] + 0.5[C]²) function at these lower temperatures, indicates the onset of a second diffusion process, one which would operate only at extremely high impurity concentrations. This low-temperature behavior is shown to be consistent with a breakdown of the conditions assumed for vacancy equilibrium

  8. Maximum power analysis of photovoltaic module in Ramadi city

    Energy Technology Data Exchange (ETDEWEB)

    Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)

    2013-07-01

    The performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power which can be obtained in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is very essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data were measured at the Earth's surface in the campus area of Anbar University. Actual average data readings were taken from the data logger of the sun tracker system, which was set to save average readings every two minutes, each based on one-second samples. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the PV system sizing can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.

  9. Elliptical concentrators.

    Science.gov (United States)

    Garcia-Botella, Angel; Fernandez-Balbuena, Antonio Alvarez; Bernabeu, Eusebio

    2006-10-10

    Nonimaging optics is a field devoted to the design of optical components for applications such as solar concentration or illumination. In this field, many different techniques have been used to produce optical devices, including the use of reflective and refractive components or inverse engineering techniques. However, many of these optical components are based on translational symmetries, rotational symmetries, or free-form surfaces. We study a new family of nonimaging concentrators called elliptical concentrators. This new family of concentrators provides new capabilities and can have different configurations, either homofocal or nonhomofocal. Translational and rotational concentrators can be considered as particular cases of elliptical concentrators.

  10. Radon and radon-daughter concentrations in air in the vicinity of the Anaconda Uranium Mill

    Energy Technology Data Exchange (ETDEWEB)

    Momeni, M H; Lindstrom, J B; Dungey, C E; Kisieleski, W E

    1979-11-01

    Radon concentration, working level, and meteorological variables were measured continuously from June 1977 through June 1978 at three stations in the vicinity of the Anaconda Uranium Mill, with measurements integrated to hourly intervals. Both radon and daughters show strong variations associated with low wind velocities and stable atmospheric conditions, and diurnal variations associated with thermal inversions. Average radon concentration shows seasonal dependence, with the highest concentrations observed during fall and winter. Comparison of radon concentrations and working levels between the three stations shows strong dependence on wind direction and velocity. Radon concentrations and working-level distributions for each month and each station were analyzed. The average maximum, minimum, and modal concentrations and working levels were estimated with observed frequencies. The highest concentration is 11,000 pCi/m³ on the tailings. Working-level variations parallel radon variations but lag by less than one hour. The highest working levels were observed at night, when conditions of higher secular radioactive equilibrium for radon daughters exist. Background radon concentration was measured at two stations, each located about 25 km from the mill; the average is 408 pCi/m³. The average working-level background is 3.6 × 10⁻³.

  12. Effects of bruxism on the maximum bite force

    Directory of Open Access Journals (Sweden)

    Todić Jelena T.

    2017-01-01

    Background/Aim. Bruxism is a parafunctional activity of the masticatory system, characterized by clenching or grinding of the teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and a physical examination. The subjects from both groups underwent the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying the maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism increases the maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.

  13. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
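
    The averaging scheme under study is the one-line recursion S_k = (1 - alpha) S_{k-1} + alpha P_k over successive periodograms. A minimal numpy sketch for a white-noise test signal follows; the segment length and alpha are arbitrary illustrative choices.

        import numpy as np

        def exp_avg_psd(x, nseg=256, alpha=0.1):
            """Exponentially averaged periodogram PSD estimate."""
            s = None
            for k in range(len(x) // nseg):
                seg = x[k * nseg:(k + 1) * nseg]
                p = np.abs(np.fft.rfft(seg)) ** 2 / nseg  # raw periodogram
                s = p if s is None else (1 - alpha) * s + alpha * p
            return s

        x = np.random.default_rng(1).standard_normal(256 * 200)
        print(exp_avg_psd(x)[:4])  # roughly flat, as expected for white noise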

  15. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and the interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis takes into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The breakdown of the average work productivity by the factors affecting it is conducted by means of the u-substitution method.

  16. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
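
    In modern terms the identifier's job is to minimize a negative log-likelihood over the unknown parameters. The sketch below does this for an exponential-decay model with Gaussian measurement noise; the model, data and optimizer are illustrative stand-ins, not MXLKID's LRLTRAN implementation.

        import numpy as np
        from scipy.optimize import minimize

        t = np.linspace(0.0, 5.0, 50)
        rng = np.random.default_rng(0)
        y = np.exp(-0.8 * t) + 0.05 * rng.standard_normal(t.size)  # noisy data

        def neg_log_likelihood(params):
            k, log_s = params
            r = y - np.exp(-k * t)  # residuals of the assumed decay model
            return t.size * log_s + 0.5 * np.sum(r ** 2) * np.exp(-2 * log_s)

        fit = minimize(neg_log_likelihood, x0=[0.5, np.log(0.1)])
        print(fit.x[0], np.exp(fit.x[1]))  # estimated rate and noise level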

  17. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  18. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  19. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
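
    For concreteness, the TAMSD of a discrete trajectory is the lag-dependent average sketched below; for Brownian motion its expectation grows linearly with the lag, which the synthetic path checks. The trajectory and seed are invented.

        import numpy as np

        def tamsd(x, lag):
            """Time-averaged MSD at a given lag along one trajectory."""
            d = x[lag:] - x[:-lag]
            return np.mean(d * d)

        x = np.cumsum(np.random.default_rng(42).standard_normal(100_000))
        print([round(tamsd(x, lag), 1) for lag in (1, 10, 100)])  # ~lag each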

  20. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  1. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer's reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers' role in governing different users' interests.

  2. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality, which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  3. Jarzynski equality in the context of maximum path entropy

    Science.gov (United States)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy - also known as the Maximum Caliber principle - this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy difference between two equilibrium thermodynamic states with the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social, financial and ecological systems.
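
    The equality itself, ⟨exp(-βW)⟩ = exp(-βΔF), is easy to check numerically in the Gaussian special case, where ΔF = ⟨W⟩ - β var(W)/2 holds exactly. The parameters below are arbitrary illustrations.

        import numpy as np

        beta, mean_w, var_w = 1.0, 2.0, 1.5
        w = np.random.default_rng(7).normal(mean_w, np.sqrt(var_w), 1_000_000)

        dF_jarzynski = -np.log(np.mean(np.exp(-beta * w))) / beta
        dF_gaussian = mean_w - beta * var_w / 2.0
        print(dF_jarzynski, dF_gaussian)  # both close to 1.25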

  4. 222Rn concentration in the outdoor atmosphere and its relation to the atmospheric stability

    International Nuclear Information System (INIS)

    Holy, K.; Boehm, R.; Bosa, I.; Polaskova, A.; Hola, O.

    1998-01-01

    Radon in the outdoor atmosphere has been monitored continuously since 1991. On the basis of the measured data, mainly the average daily and the average annual courses of the ²²²Rn concentration have been studied. The annual courses of the ²²²Rn concentration are similar for all years and show the annual variations. The average course of the ²²²Rn concentration calculated on the basis of all continual measurements in the years 1991-1997 reaches its maximum value in October and its minimum value in April. The average daily courses of the ²²²Rn concentration were determined for the individual months of the year. The average daily courses have the form of waves with a maximum in the morning hours and a minimum in the afternoon. The maximal amplitudes of the daily waves are reached in the summer months, from June till August. The amplitudes of the daily waves are very small at the end of autumn and during the winter months. The analysis of the daily waves and annual courses of ²²²Rn showed that the amplitudes of the daily waves are in proportion to the global solar radiation irradiating the Earth's surface. The day duration influences the phase of the daily wave, and the wind velocity influences mainly the level of the radon concentration. To study the relation of the radon concentration in the outdoor atmosphere to the atmospheric stability, stability data were obtained and correlated with the radon concentration. The results indicate that the ²²²Rn concentrations in the outdoor atmosphere could be used for determination of the vertical atmospheric stability, and could reflect the atmospheric stability more completely than the various classifications based on meteorological parameters. (authors)

  5. Concentration risk

    Directory of Open Access Journals (Sweden)

    Matić Vesna

    2016-01-01

    Concentration risk has been gaining a special dimension in the contemporary financial and economic environment. Financial institutions are exposed to this risk mainly in the field of lending, mostly through their credit activities and the concentration of credit portfolios. This refers to the concentration of different exposures within a single risk category (credit risk, market risk, operational risk, liquidity risk).

  6. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations and the additional constraints applied to resolving the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions used to resolve the motion redundancy

  7. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    Directory of Open Access Journals (Sweden)

    LiMin Wang

    2014-01-01

    Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of seminaive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
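
    The superparent-selection step built on a maximum weighted spanning tree can be sketched with SciPy by negating the edge weights. The mutual-information matrix below is an invented stand-in for whatever attribute-dependence weights the paper derives from data.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree

        mi = np.array([[0.0, 0.9, 0.1, 0.3],   # pairwise attribute weights,
                       [0.9, 0.0, 0.7, 0.2],   # e.g. mutual information
                       [0.1, 0.7, 0.0, 0.6],
                       [0.3, 0.2, 0.6, 0.0]])

        mst = minimum_spanning_tree(-mi)   # max-weight tree via negated weights
        print(list(zip(*mst.nonzero())))   # edges of the maximum spanning tree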

  8. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  9. Entropy concentration and the empirical coding game

    NARCIS (Netherlands)

    Grünwald, P.D.

    2008-01-01

    We give a characterization of maximum entropy/minimum relative entropy inference by providing two 'strong entropy concentration' theorems. These theorems unify and generalize Jaynes' 'concentration phenomenon' and Van Campenhout and Cover's 'conditional limit theorem'. The theorems characterize

  10. Concentrator Photovoltaics

    CERN Document Server

    Luque, Antonio L

    2007-01-01

    Photovoltaic solar-energy conversion is one of the most promising technologies for generating renewable energy, and conversion of concentrated sunlight can lead to reduced cost for solar electricity. In fact, photovoltaic conversion of concentrated sunlight ensures an efficient and cost-effective sustainable power resource. This book gives an overview of all components, e.g. cells, concentrators, modules and systems, for systems of concentrator photovoltaics. The authors report on significant results related to design, technology, and applications, and also cover the fundamental physics and market considerations. Specific contributions include: theory and practice of sunlight concentrators; an overview of concentrator PV activities; a description of concentrator solar cells; design and technology of modules and systems; manufacturing aspects; and a market study.

  11. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  12. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  13. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  14. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
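
    As an illustrative heuristic (mine, not the paper's exact criterion), the number of time samples averaged per baseline can scale inversely with baseline length, since short baselines decorrelate more slowly:

        def averaging_factor(baseline_m, longest_baseline_m, max_factor=32):
            # time samples merged for this baseline, capped so even the
            # shortest baselines stay within a fixed decorrelation budget
            return max(1, min(max_factor, int(longest_baseline_m / baseline_m)))

        for b in (100.0, 1000.0, 10000.0, 65000.0):
            print(b, averaging_factor(b, 65000.0))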

  15. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time-averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter. The main advantages of such a filter over a serial one are much smaller electronic gate jitter and time delay for the same amount of total time-uncertainty reduction. (orig.)

  16. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
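
    As a quick illustration (mine, not the paper's derivation), the TAMSD of a simulated Brownian path fluctuates around the diffusion coefficient when normalized by 2*lag*dt:

        import numpy as np

        def tamsd(x, lag):
            # time-averaged mean-square displacement at a given lag
            return np.mean((x[lag:] - x[:-lag]) ** 2)

        rng = np.random.default_rng(0)
        dt, D, n = 0.01, 1.0, 100000
        x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), n))  # 1-D Brownian path
        lag = 10
        print(tamsd(x, lag) / (2 * lag * dt))  # estimate of D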

  17. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  18. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  19. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated with a statistical model, in systems with equal and different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good adjustment of the beta stability lines and mass excess, the surface symmetry energy was established. (M.C.K.) [pt

  20. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  1. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  2. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  3. Analysis of the average daily radon variations in the soil air

    International Nuclear Information System (INIS)

    Holy, K.; Matos, M.; Boehm, R.; Stanys, T.; Polaskova, A.; Hola, O.

    1998-01-01

    In this contribution, the search for a relation between the daily variations of the radon concentration and the regular daily oscillations of the atmospheric pressure is presented. The deviation of the radon activity concentration in the soil air from the average daily value reaches only a few percent. For the dry summer months, the average daily course of the radon activity concentration can be described by the obtained equation. The analysis of the average daily courses could give information concerning the depth of the gas-permeable soil layer, a soil parameter that is difficult to determine by other methods

  4. Personal carbon monoxide exposures of preschool children in Helsinki, Finland - comparison to ambient air concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Alm, S.; Mukala, K.; Tittanen, P.; Jantunen, M.J. [KTL National Public Health Institute, Kuopio (Finland). Dept. of Environmental Health

    2001-07-01

    The associations of personal carbon monoxide (CO) exposures with ambient air CO concentrations measured at fixed monitoring sites were studied among 194 children aged 3-6 yr in four downtown and four suburban day-care centers in Helsinki, Finland. Each child carried a personal CO exposure monitor between 1 and 4 times, for periods of between 20 and 24 h. CO concentrations at two fixed monitoring sites were measured simultaneously. The CO concentrations measured at the fixed monitoring sites were usually lower (mean maximum 8-h concentrations: 0.9 and 2.6 mg/m3) than the personal CO exposure concentrations (mean maximum 8-h concentration: 3.3 mg/m3). The fixed-site CO concentrations were poor predictors of the personal CO exposure concentrations. However, the correlations between the personal CO exposures and the fixed monitoring site CO concentrations increased (from between -0.03 and -0.12 to between 0.13 and 0.16) with increasing averaging times from 1 to 8 h. Also, the fixed monitoring site CO concentrations explained the mean daily or weekly personal CO exposures of a group of simultaneously measured children better than individual CO exposure concentrations. This study suggests that the short-term personal CO exposure of children cannot be meaningfully assessed using fixed monitoring sites. (author)

  5. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    International Nuclear Information System (INIS)

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

    Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power scaling on the backing pressure ranging from 16 to 50 bar, and the power is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position of 5 mm has the maximum signal intensity. The estimation of the average cluster size as a function of axial position Z indicates that the cluster growth process continues until the maximum average cluster size is reached at Z = 9 mm, and that the average cluster size decreases gradually for Z > 9 mm

  6. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .

  7. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  8. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by 'exact' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.

  9. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
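
    In the abstract's terms, the identity being characterized can be written (a reconstruction, hedged, not quoted from the paper) as

        \lim_{t \to \infty} \frac{1}{t} \int_0^t f\bigl(X(s)\bigr)\, ds
          = \int_S f(x)\, dF(x),

    where F is the long-run frequency distribution of {X(t)} and f is a measurable function; the paper gives necessary and sufficient sample-path conditions for this equality.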

  10. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
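
    A minimal sketch of such a procedure (my illustration, not the paper's prescription), assuming each experiment's error is proportional to the central value, so the weights can be re-derived from the current average:

        import numpy as np

        def sliding_error_average(x, rel_err, iters=20):
            # x: reported measurements; rel_err: per-experiment relative errors,
            # assuming sigma_i = rel_err_i * (true value). Errors are evaluated
            # at the running average instead of each experiment's own value.
            mu = np.mean(x)
            for _ in range(iters):
                sigma = rel_err * mu
                w = 1.0 / sigma**2
                mu = np.sum(w * x) / np.sum(w)
            return mu

        x = np.array([9.5, 10.2, 11.0])
        rel_err = np.array([0.05, 0.02, 0.10])
        print(sliding_error_average(x, rel_err))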

  11. Flow-covariate prediction of stream pesticide concentrations.

    Science.gov (United States)

    Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin

    2018-01-01

    Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273. © 2017 SETAC.
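
    For illustration (my sketch, not the study's code), the target quantities, i.e. maximum m-day rolling averages of a daily concentration series, can be computed as:

        import numpy as np

        def max_rolling_average(daily, m):
            # maximum m-day rolling average of a daily concentration series
            kernel = np.ones(m) / m
            return np.convolve(daily, kernel, mode="valid").max()

        rng = np.random.default_rng(0)
        daily = rng.lognormal(mean=0.0, sigma=1.0, size=365)  # synthetic year
        for m in (1, 7, 14, 30, 60):
            print(m, round(max_rolling_average(daily, m), 3))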

  12. Entanglement in random pure states: spectral density and average von Neumann entropy

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Santosh; Pandey, Akhilesh, E-mail: skumar.physics@gmail.com, E-mail: ap0700@mail.jnu.ac.in [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110 067 (India)

    2011-11-04

    Quantum entanglement plays a crucial role in quantum information, quantum teleportation and quantum computation. The information about the entanglement content between subsystems of the composite system is encoded in the Schmidt eigenvalues. We derive here closed expressions for the spectral density of Schmidt eigenvalues for all three invariant classes of random matrix ensembles. We also obtain exact results for average von Neumann entropy. We find that maximum average entanglement is achieved if the system belongs to the symplectic invariant class. (paper)

  13. Maximum Mass of Hybrid Stars in the Quark Bag Model

    Science.gov (United States)

    Alaverdyan, G. B.; Vartanyan, Yu. L.

    2017-12-01

    The effect of the model parameters in the equation of state for quark matter on the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account, based on the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first-order phase transition, and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. For a fixed αs, the maximum mass of a hybrid star increases as B decreases; for a given B, the maximum mass rises as αs increases. Configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.

  14. STUDY ON MAXIMUM SPECIFIC SLUDGE ACIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The maximum specific sludge activities of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors were investigated by batch tests. The factors limiting the maximum specific sludge activity (diffusion, substrate type, substrate concentration and granule size) were studied. A general principle and procedure for the precise measurement of the maximum specific sludge activity are suggested. The potential loading-rate capacities of the IC and Biobed anaerobic reactors were analyzed and compared using the batch test results.

  15. Synoptic and meteorological drivers of extreme ozone concentrations over Europe

    Science.gov (United States)

    Otero, Noelia Felipe; Sillmann, Jana; Schnell, Jordan L.; Rust, Henning W.; Butler, Tim

    2016-04-01

    The present work assesses the relationship between local and synoptic meteorological conditions and surface ozone concentration over Europe in spring and summer months during the period 1998-2012, using a new interpolated data set of observed surface ozone concentrations over the European domain. Along with local meteorological conditions, the influence of large-scale atmospheric circulation on surface ozone is addressed through a set of airflow indices computed with a novel implementation of a grid-by-grid weather type classification across Europe. Drivers of surface ozone over the full distribution of maximum daily 8-hour average values are investigated, along with drivers of the extreme high percentiles and exceedances of air quality guideline thresholds. Three different regression techniques are applied: multiple linear regression to assess the drivers of maximum daily ozone, logistic regression to assess the probability of threshold exceedances, and quantile regression to estimate the meteorological influence on extreme values, as represented by the 95th percentile. The relative importance of the input parameters (predictors) is assessed by a backward stepwise regression procedure that allows the identification of the most important predictors in each model. Spatial patterns of model performance exhibit distinct variations between regions. The inclusion of ozone persistence is particularly relevant over Southern Europe. In general, the best model performance is found over Central Europe, where the maximum temperature plays an important role as a driver of maximum daily ozone as well as of its extreme values, especially during the warmer months.
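
    A compact sketch of the three regression techniques on synthetic data (the toy temperature-driven model and all names are mine, not the study's):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        tmax = rng.normal(25.0, 5.0, 500)                       # daily max temperature
        mda8 = 30.0 + 1.5 * tmax + rng.normal(0.0, 10.0, 500)   # max daily 8-h ozone
        df = pd.DataFrame({"tmax": tmax, "mda8": mda8,
                           "exceed": (mda8 > 80.0).astype(int)})

        ols = smf.ols("mda8 ~ tmax", df).fit()               # drivers of daily maxima
        logit = smf.logit("exceed ~ tmax", df).fit(disp=0)   # threshold exceedances
        q95 = smf.quantreg("mda8 ~ tmax", df).fit(q=0.95)    # extremes (95th percentile)
        print(ols.params["tmax"], logit.params["tmax"], q95.params["tmax"])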

  16. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  17. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest. Accelerating the particles to full energy to result in distinct and independently controlled, by the choice of phase offset, phase-energy correlations or chirps on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL with higher order terms managed.

  18. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  19. [Quetelet, the average man and medical knowledge].

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  20. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
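
    As a generic sketch of the underlying operation (the paper's exact recursion over the whole graph is not reproduced here), a weighted harmonic average lets one strong, close connection dominate many weak, distant ones:

        def weighted_harmonic_average(values, weights):
            # harmonic averages are dominated by the smallest values, so a
            # single close neighbor outweighs many distant ones
            return sum(weights) / sum(w / v for w, v in zip(weights, values))

        # hypothetical distances to a node via three neighbors, with edge weights
        print(weighted_harmonic_average([1.0, 4.0, 8.0], [2.0, 1.0, 1.0]))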

  1. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m_0c^2; α_s = scattered photon energy in units of m_0c^2; β = initial electron velocity in units of c; phi = angle between photon direction and electron direction in the laboratory frame (LF); theta = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and tau = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over phi, theta, and tau. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV

  2. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method called the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI better represents the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
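
    A minimal NumPy sketch of the accumulation step as described (array shapes and names are my assumptions):

        import numpy as np

        def agdi(silhouettes):
            # silhouettes: (T, H, W) array of aligned binary gait silhouettes;
            # average the absolute differences between adjacent frames
            diffs = np.abs(np.diff(silhouettes.astype(float), axis=0))
            return diffs.mean(axis=0)

        seq = np.random.default_rng(0).integers(0, 2, size=(30, 64, 44))
        feature = agdi(seq)  # this image is then fed to 2DPCA for features
        print(feature.shape)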

  3. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation.

  4. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m_0c^2; α_s = scattered photon energy in units of m_0c^2; β = initial electron velocity in units of c; phi = angle between photon direction and electron direction in the laboratory frame (LF); theta = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and tau = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over phi, theta, and tau. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  5. Kinetic regularities of change in the concentration of radionuclides in the Georgian tea content

    International Nuclear Information System (INIS)

    Mosulishvili, L.M.; Katamadze, N.M.; Shoniya, N.I.; Ginturi, Eh.N.

    1990-01-01

    The paper is concerned with the results of a study of the behavior of artificial radionuclides in Georgian tea technological products after the accident at the Chernobyl Nuclear Station. The partial contributions of the activities of the radionuclides 141 Ce, 140 La, 103 Ru, 106 Ru, 140 Ba, 137 Cs, 95 Nb, 95 Zr, 134 Cs and 90 Sr to the total activity of Georgian tea samples were determined. Maximum tolerated concentrations of radionuclides were assessed assuming an average annual tea consumption of 1 kg per capita. Solubility in the water phase is highest for the Cs radionuclides. The regularities of migration of long-lived radionuclides 3 years after the Chernobyl accident were established

  6. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ2, ζ3, and ζ4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s^-1 for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s^-1 for carbon stars (the neutronization limit) and to 893 km s^-1 for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores

  7. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
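
    A hedged sketch of the comparison the estimator performs (function and variable names are mine), between equal fractions of the longest-surviving patients in each arm:

        import numpy as np

        def balanced_sace(y_treat, surv_treat, y_ctrl, surv_ctrl, frac=0.5):
            # compare mean longitudinal outcomes between the top `frac`
            # longest-surviving patients of the treatment and control groups
            k_t = max(1, int(frac * len(surv_treat)))
            k_c = max(1, int(frac * len(surv_ctrl)))
            top_t = np.argsort(surv_treat)[-k_t:]
            top_c = np.argsort(surv_ctrl)[-k_c:]
            return y_treat[top_t].mean() - y_ctrl[top_c].mean()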

  8. Line-averaging measurement methods to estimate the gap in the CO2 balance closure – possibilities, challenges, and uncertainties

    Directory of Open Access Journals (Sweden)

    A. Ziemann

    2017-11-01

    Full Text Available An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is a crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper will give an overview of the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m−2 s−1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for the CO2 concentration, due to environmental parameters, instrumental characteristics, and the retrieval procedure, was estimated at a total of approximately 30 %

  9. Line-averaging measurement methods to estimate the gap in the CO2 balance closure - possibilities, challenges, and uncertainties

    Science.gov (United States)

    Ziemann, Astrid; Starke, Manuela; Schütze, Claudia

    2017-11-01

    An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is the crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper will give an overview about the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m-2 s-1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration was estimated due to environmental parameters, instrumental characteristics, and retrieval procedure with a total amount of approximately 30 % for a single

  10. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology to the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  11. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present in not only the averaged pollution levels, but also the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  12. A simple maximum power point tracker for thermoelectric generators

    International Nuclear Information System (INIS)

    Paraskevas, Alexandros; Koutroulis, Eftichios

    2016-01-01

    Highlights: • A Maximum Power Point Tracking (MPPT) method for thermoelectric generators is proposed. • A power converter is controlled to operate on a pre-programmed locus. • The proposed MPPT technique has the advantage of operational and design simplicity. • The experimental average deviation from the MPP power of the TEG source is 1.87%. - Abstract: ThermoElectric Generators (TEGs) are capable of harvesting ambient thermal energy for power-supplying sensors, actuators, biomedical devices, etc., in the range from μW up to several hundred watts. In this paper, a Maximum Power Point Tracking (MPPT) method for TEG elements is proposed, which is based on controlling a power converter such that it operates on a pre-programmed locus of operating points close to the MPPs of the power–voltage curves of the TEG power source. Compared to previously proposed MPPT methods for TEGs, the technique presented in this paper has the advantage of operational and design simplicity. Thus, its implementation using off-the-shelf microelectronic components with low power consumption is enabled, without requiring specialized integrated circuits or signal-processing units of high development cost. Experimental results are presented, which demonstrate that for MPP power levels of the TEG source in the range of 1–17 mW, the average deviation of the power produced by the proposed system from the MPP power of the TEG source is 1.87%.
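
    To illustrate why a pre-programmed locus can work, a sketch under the common Thevenin model of a TEG (not the paper's circuit):

        def teg_power(v, v_oc, r_int):
            # TEG modeled as open-circuit voltage v_oc behind an internal
            # resistance r_int; power delivered at terminal voltage v
            return v * (v_oc - v) / r_int

        # Under this model the MPP sits at v = v_oc / 2 for any temperature
        # gradient, so the converter can follow the fixed locus v = 0.5 * v_oc
        # instead of searching for the MPP.
        v_oc, r_int = 4.0, 2.0
        best = max((teg_power(0.01 * k, v_oc, r_int), 0.01 * k)
                   for k in range(1, 400))
        print(best)  # (2.0 W, at v = 2.0 V = v_oc / 2)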

  13. Optimal Control of Polymer Flooding Based on Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

    Full Text Available Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, which involves the performance index as the maximum of the profit, the governing equations as the fluid flow equations of polymer flooding, and the inequality constraint as the polymer concentration limitation. To cope with the optimal control problem (OCP) of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin’s weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.
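
    A schematic sketch of such a gradient iteration (mine; the paper's gradient comes from the adjoint system of the flow equations, which is not reproduced here). The polymer concentration limit enters as a projection:

        import numpy as np

        def projected_gradient_ascent(u0, grad_J, u_max, step=0.1, iters=200):
            # u: discretized injection-concentration profile over time; ascend
            # the profit gradient, then project onto 0 <= u <= u_max
            u = u0.copy()
            for _ in range(iters):
                u = np.clip(u + step * grad_J(u), 0.0, u_max)
            return u

        grad_J = lambda u: 1.0 - 2.0 * u   # toy stand-in for the adjoint gradient
        u_star = projected_gradient_ascent(np.zeros(10), grad_J, u_max=0.4)
        print(u_star.round(3))  # saturates at the concentration bound 0.4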

  14. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  15. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze two algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...

  16. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap electromagnetic (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs

  17. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  18. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  19. New Nordic diet versus average Danish diet

    DEFF Research Database (Denmark)

    Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco

    2016-01-01

    Metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis, formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects, higher levels of vaccenic acid and 3-hydroxybutanoic acid were related to a higher weight loss, while higher concentrations of salicylic, lactic and N-aspartic acids, and 1,5-anhydro-D-sorbitol were related to a lower weight loss. Specific gender- and seasonal differences were also observed. The study strongly indicates that healthy ...

  20. 77 FR 34411 - Branch Technical Position on Concentration Averaging and Encapsulation

    Science.gov (United States)

    2012-06-11

    ..., ``Licensing Requirements for Land Disposal of Radioactive Waste,'' establishes a waste classification system... Commission paper, SECY-07-0180, ``Strategic Assessment of Low- Level Radioactive Waste Regulatory Program... Requirements Memorandum for SECY-10-0043, ``Blending of Low-Level Radioactive Waste,'' (ADAMS Accession No...

  1. Average pollutant concentration in soil profile simulated with Convective-Dispersive Equation. Model and Manual

    Science.gov (United States)

    Different parts of soil solution move with different velocities, and therefore chemicals are leached gradually from soil with infiltrating water. Solute dispersivity is the soil parameter characterizing this phenomenon. To characterize the dispersivity of soil profile at field scale, it is desirable...

  2. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis entropy, in deriving power laws.

  3. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  4. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  5. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.

  6. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    In the correlation, one symbol represents maximum dry density, one signifies plastic limit, and one is liquid limit. Researchers [6, 7] estimate compaction parameters from such correlations. Aside from the correlation existing between compaction parameters and other physical quantities, some other correlations have been investigated by other researchers. The well-known ...

  7. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage, $S_{rad}$, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\,\mathrm{GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2/(M_{pl}\, y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.

  8. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others:DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : maximum-entropy method, * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.558, year: 2003

  9. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  10. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  11. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson's thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density which is reachable in a finite time.

  12. Generation and Applications of High Average Power Mid-IR Supercontinuum in Chalcogenide Fibers

    OpenAIRE

    Petersen, Christian Rosenberg

    2016-01-01

    Mid-infrared supercontinuum with up to 54.8 mW average power, and maximum bandwidth of 1.77-8.66 μm is demonstrated as a result of pumping tapered chalcogenide photonic crystal fibers with a MHz parametric source at 4 μm

  13. Daily variation of the radon concentration indoors and outdoors and the influence of meteorological parameters

    International Nuclear Information System (INIS)

    Porstendoerfer, J.; Butterweck, G.; Reineking, A.

    1994-01-01

    Series of continuous radon measurements in the open atmosphere and in a dwelling, including the parallel measurement of meteorological parameters, were performed over a period of several weeks. The radon concentration in indoor and outdoor air depends on meteorological conditions. In the open atmosphere the radon concentration varies between 1 and 100 Bq m⁻³, depending on weather conditions and time of day. During time periods of low turbulent air exchange (high-pressure weather with clear night sky), especially in the night and early morning hours (night inversion layer), the diurnal variation of the radon concentration showed a pronounced maximum. Cloudy and windy weather conditions yield a small diurnal variation of the radon concentration. Indoors, the average level and the diurnal variation of the radon concentration are also influenced by meteorological conditions. The measurements are consistent with a dependence of indoor radon concentrations on indoor-outdoor pressure differences. 11 refs., 4 figs

  14. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and is therefore not yet a commercial process, although it has been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulsewidth ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  15. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
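
    The core idea of the unconstrained variant — bin the instantaneous force along the coordinate of interest, average it, and integrate the negative mean force to obtain the free-energy profile — can be sketched in a few lines. The sampling model below is synthetic (a harmonic profile A(x) = x² with noisy forces), not the authors' molecular-dynamics setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic samples of a coordinate x and the (noisy) instantaneous
# force acting on it, generated from the profile A(x) = x^2.
x = rng.uniform(-2.0, 2.0, size=20_000)
force = -2.0 * x + rng.normal(0.0, 1.0, size=x.size)   # F = -dA/dx + noise

# Bin the samples and average the force in each bin.
edges = np.linspace(-2.0, 2.0, 41)
centers = 0.5 * (edges[:-1] + edges[1:])
idx = np.digitize(x, edges) - 1
mean_force = np.array([force[idx == i].mean() for i in range(len(centers))])

# Free energy by cumulative (trapezoidal) integration of -<F>.
integrand = -mean_force
A = np.concatenate(([0.0],
                    np.cumsum(0.5 * (integrand[1:] + integrand[:-1])
                              * np.diff(centers))))
A -= A.min()
print(A.round(2))   # should approximate a parabola
```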

  16. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
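
    For contrast with the geographic scheme, the following sketch simulates standard pairwise gossip averaging on a ring — the slow-mixing baseline the abstract improves upon. Geographic routing to random distant nodes is not implemented here.

```python
import random

def gossip_ring(values, rounds=20_000, seed=1):
    """Standard neighbor gossip on a ring: repeatedly pick a node and
    average its value with its clockwise neighbor."""
    random.seed(seed)
    v = list(values)
    n = len(v)
    for _ in range(rounds):
        i = random.randrange(n)
        j = (i + 1) % n
        v[i] = v[j] = 0.5 * (v[i] + v[j])
    return v

vals = [float(i) for i in range(20)]
out = gossip_ring(vals)
print(sum(vals) / len(vals))   # true average: 9.5
print(min(out), max(out))      # all values converge toward 9.5
```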

  17. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was launched, aimed at developing the technologies necessary to make possible the use of solid state lasers that are capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials areas, efforts were directed towards producing strong, low-loss laser glasses and large, high quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  18. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    A concept for determining average LET (linear energy transfer) values, i.e. ordinary moments of LET in the absorbed dose distribution versus LET, for ionizing radiation of any kind and any spectrum (even unknown ones), is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on the LET of radiation it is not necessary to know the full dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)

  19. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum for the configuration space traces is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  20. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine

    2012-11-01

    The present study developed a maximum-power point tracking control (MPPT) technology for solar heating systems to minimize the pumping power consumption at an optimal heat collection. The net solar energy gain Q_net (= Q_s − W_p/η_e) was experimentally found to be the cost function for MPPT with a maximum point. A feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter, derived from the thermal analytical model of the solar heating system, was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. The PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller, and the test results show good tracking performance with small tracking errors. It is seen that the average mass flow rate for the specific test periods in five different days is between 18.1 and 22.9 kg/min with average pumping power between 77 and 140 W, which is greatly reduced as compared to the standard flow rate of 31 kg/min and pumping power of 450 W, based on the flow rate of 0.02 kg/s·m² defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW depending on weather conditions. The MPPT control of the solar heating system has been verified to be able to minimize the pumping energy consumption with optimal solar heat collection. © 2012 Elsevier Ltd.
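
    The abstract's controller tracks a model-derived target with a PI loop; as a rough illustration of the same objective, the sketch below runs a generic perturb-and-observe loop that maximizes Q_net = Q_s − W_p/η_e over pump flow rate. The collector and pump models here are hypothetical placeholders, not the paper's identified plant.

```python
ETA_E = 0.5   # assumed weighting of pump electricity against heat

def q_solar(flow):   # hypothetical collector heat gain vs. flow (kW)
    return 12.0 * flow / (flow + 8.0)

def w_pump(flow):    # hypothetical pumping power vs. flow (kW)
    return 0.0005 * flow ** 3

def q_net(flow):     # the cost function named in the abstract
    return q_solar(flow) - w_pump(flow) / ETA_E

flow, step = 4.0, 1.0            # kg/min
prev = q_net(flow)
for _ in range(60):              # perturb and observe
    flow += step
    cur = q_net(flow)
    if cur < prev:               # got worse: reverse and shrink the step
        step *= -0.5
    prev = cur

print(f"flow ~ {flow:.1f} kg/min, Q_net ~ {prev:.2f} kW")
```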

  1. Gentile statistics with a large maximum occupation number

    International Nuclear Information System (INIS)

    Dai Wusheng; Xie Mi

    2004-01-01

    In Gentile statistics the maximum occupation number n can take on unrestricted integers, 1 < n < ∞; for finite n > 1, the Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state, which was ignored in related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particles with dispersion E = p^s/2m, where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics.

  2. Concentrating Radioactivity

    Science.gov (United States)

    Herrmann, Richard A.

    1974-01-01

    By concentrating radioactivity contained on luminous dials, a teacher can make a high reading source for classroom experiments on radiation. The preparation of the source and its uses are described. (DT)

  3. Determination of electrolytes, blood gases, osmolality, hematocrit, hemoglobin, base concentration, and anion gap in detrained equines submitted to maximum and submaximum exercise on a treadmill

    Directory of Open Access Journals (Sweden)

    M.A.G. Silva

    2009-10-01

    Full Text Available Changes in electrolytes, blood gases, osmolality, hematocrit, hemoglobin, titratable base, and anion gap in the venous blood of 11 detrained Arabian horses submitted to maximal and submaximal exercise on a treadmill were investigated. After a three-day period of adaptation to the treadmill, the animals performed two test exercises: one of short duration (maximal) and one of long duration (submaximal). Venous blood samples were collected before, immediately after, and 30 minutes after the end of each exercise. After maximal exercise, a significant decrease in pHv, PvCO2, HCO3, and cBase was observed, along with an increase in AG, as well as hyperkalemia and increases in Ht and Hb. At the end of the submaximal exercise, only a significant increase in pHv, cBase, SatvO2, and PvO2 was observed. It is concluded that horses submitted to maximal exercise developed metabolic acidosis with compensatory respiratory alkalosis, hyperkalemia, and increased hematocrit and hemoglobin values, whereas after submaximal exercise the animals presented hypochloremic metabolic alkalosis and no changes in hydroelectrolytic balance.

  4. Applications of the maximum entropy principle in nuclear physics

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1990-01-01

    Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well-known result with accurate data on ²⁵²Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the ²³⁸U neutron cross sections in the unresolved resonance region. (orig.)

  5. RX: a nonimaging concentrator.

    Science.gov (United States)

    Miñano, J C; Benítez, P; González, J C

    1995-05-01

    A detailed description of the design procedure for a new concentrator, RX, and some examples of its use are given. The method of design is basically the same as that used in the design of two other concentrators: the RR and the XR [Appl. Opt. 31, 3051 (1992)]. The RX is ideal in two-dimensional geometry. The performance of the rotational RX is good when the average angular spread of the input bundle is small: up to 95% of the power of the input bundle can be transferred to the output bundle (with the assumption of a constant radiance for the rays of the input bundle).

  6. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  7. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
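
    Among the processes the abstract lists, the Ornstein-Uhlenbeck model is the simplest to simulate; the Euler-Maruyama sketch below (with illustrative parameters, not the paper's) shows one coordinate relaxing toward a home range with stationary variance σ²τ/2.

```python
import numpy as np

rng = np.random.default_rng(42)
tau, sigma = 5.0, 1.0          # relaxation time and noise intensity
dt, n = 0.01, 50_000

x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    # dx = -(x / tau) dt + sigma dW  (Euler-Maruyama step)
    x[t] = x[t-1] - (x[t-1] / tau) * dt + sigma * np.sqrt(dt) * rng.normal()

print(x[10_000:].var())        # empirical stationary variance
print(sigma**2 * tau / 2)      # theoretical value: 2.5
```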

  8. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
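
    The authors' test is entropy-based; as a simpler point of comparison, the sketch below applies the classical Hill (maximum likelihood) estimator of the Pareto tail index above a chosen threshold, on synthetic data with a lognormal body and a Pareto tail.

```python
import numpy as np

rng = np.random.default_rng(3)
body = rng.lognormal(mean=0.0, sigma=1.0, size=9_000)
tail = (rng.pareto(2.5, size=1_000) + 1.0) * 5.0  # Pareto(alpha=2.5) above 5
data = np.concatenate([body, tail])

x_min = np.quantile(data, 0.95)        # tail threshold (a modeling choice)
excess = data[data > x_min]

# Hill estimator: the MLE of alpha for a Pareto tail above x_min.
alpha_hat = excess.size / np.log(excess / x_min).sum()
print(f"estimated tail index: {alpha_hat:.2f}")
```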

  9. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.

  10. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  11. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.

  12. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  13. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  14. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  15. Linear stochastic models for forecasting daily maxima and hourly concentrations of air pollutants

    Energy Technology Data Exchange (ETDEWEB)

    McCollister, G M; Wilson, K R

    1975-04-01

    Two related time series models were developed to forecast concentrations of various air pollutants and tested on carbon monoxide and oxidant data for the Los Angeles basin. One model forecasts daily maximum concentrations of a particular pollutant using only past daily maximum values of that pollutant as input. The other model forecasts 1 hr average concentrations using only the past hourly average values. Both are significantly more accurate than persistence, i.e., forecasting for tomorrow what occurred today (or yesterday). Model forecasts for 1972 of the daily instantaneous maxima for total oxidant made using only past pollutant concentration data are more accurate than those made by the Los Angeles APCD using meteorological input as well as pollutant concentrations. Although none of these models forecast as accurately as might be desired for a health warning system, the relative success of simple time series models, even though based solely on pollutant concentration, suggests that models incorporating meteorological data and using either multi-dimensional times series or pattern recognition techniques should be tested.
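
    A minimal version of the paper's idea — predict tomorrow's daily maximum from past daily maxima alone and benchmark against persistence — can be sketched with an AR(1) fit by least squares. The series below is synthetic; the original work used Los Angeles carbon monoxide and oxidant data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
series = np.empty(n)
series[0] = 10.0
for t in range(1, n):   # synthetic autocorrelated "daily maximum" series
    series[t] = 2.0 + 0.8 * series[t-1] + rng.normal(0.0, 1.5)

train, test = series[:300], series[300:]

# Fit x_t = a + b * x_{t-1} on the training segment.
X = np.column_stack([np.ones(train.size - 1), train[:-1]])
a, b = np.linalg.lstsq(X, train[1:], rcond=None)[0]

ar_forecast = a + b * test[:-1]
persistence = test[:-1]                  # "tomorrow = today"
rmse = lambda e: np.sqrt(np.mean(e ** 2))
print("AR(1) RMSE:       ", rmse(test[1:] - ar_forecast))
print("persistence RMSE: ", rmse(test[1:] - persistence))
```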

  16. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of adaptive transmission schemes: (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR) on the average spectral efficiency (ASE) are explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further to enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regime. The coherent OWC system with ORA excels the other modulation schemes and could achieve ASE performance of 49.8 bits/s/Hz at the average transmitted optical power of 6 dBm under strong turbulence. By adding aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation as a favorable candidate for improving the ASE of the FSO communication system.

  17. Radionuclide concentrations and dose assessment of cistern water and groundwater at the Marshall Islands

    International Nuclear Information System (INIS)

    Noshkin, V.E.; Eagle, R.J.; Wong, K.M.; Jokela, T.A.; Robison, W.L.

    1981-01-01

    A radiological survey was conducted from September through November of 1978 to determine the concentrations of radionuclides in the terrestrial and marine environments of 11 atolls and 2 islands in the Northern Marshall Islands. More than 70 cistern and groundwater samples were collected at the atolls; the volume of each sample was between 55 and 100 l. The concentration of ⁹⁰Sr in cistern water at most atolls is that expected from world-wide fallout in wet deposition. Except for Bikini and Rongelap, ¹³⁷Cs concentrations in cistern water are in agreement with the average predicted concentrations from wet deposition. The ²³⁹⁺²⁴⁰Pu concentrations are everywhere less than the predicted fallout concentrations except at Rongelap, Ailinginae, and Bikini, where the measured and predicted concentrations are in general agreement. During the period sampled, most groundwater concentrations of ⁹⁰Sr and ¹³⁷Cs were higher than the concentrations in cistern water. Concentrations of the transuranics in filtered groundwater solution were everywhere comparable to or less than the concentrations in cistern water. It is concluded that the concentrations of radionuclides detected during any single period may not necessarily reflect the long-term average concentrations or the concentrations that might be observed if a lined well were extended above the surface. In any case, at all atolls the ⁹⁰Sr and ¹³⁷Cs concentrations in groundwater are below the concentration guidelines for drinking water recommended by the Environmental Protection Agency. The maximum annual dose rates and the 30- and 50-y integral doses are calculated for the intake of both cistern water and groundwater for each of the atolls.

  18. Maximum total organic carbon limit for DWPF melter feed

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF-T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed, which can be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit thus determined in this study was about 24,000 ppm on an aqueous slurry basis. At TOC levels below this, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650 degrees C and 220 lb/hr, respectively. Appropriate interlocks should discontinue the feeding when one or both of these conditions are not met. Both the magnitude and duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the necessary calculations to determine the TOC limit.

  19. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
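
    The diagonal-averaging step is easy to show in code: average the sample covariance along each subdiagonal to impose the Toeplitz structure expected for far-field sources on a uniform line array, then build a signal-subspace projector from the leading eigenvectors. The maximum entropy extrapolation step of the paper is not reproduced in this sketch, and the scenario (one source, few snapshots) is synthetic.

```python
import numpy as np

def toeplitz_average(R):
    """Average a Hermitian sample covariance along its subdiagonals."""
    n = R.shape[0]
    col = np.array([np.diagonal(R, offset=-k).mean() for k in range(n)])
    T = np.empty_like(R)
    for i in range(n):
        for j in range(n):
            T[i, j] = col[i - j] if i >= j else np.conj(col[j - i])
    return T

rng = np.random.default_rng(0)
n, snapshots = 16, 8                       # few snapshots: R is rank-deficient
steer = np.exp(1j * np.pi * np.sin(0.3) * np.arange(n)).reshape(n, 1)
noise = 0.1 * (rng.normal(size=(n, snapshots))
               + 1j * rng.normal(size=(n, snapshots)))
data = steer * rng.normal(size=(1, snapshots)) + noise
R = data @ data.conj().T / snapshots       # sample covariance

T = toeplitz_average(R)
w, v = np.linalg.eigh(T)
P = v[:, -1:] @ v[:, -1:].conj().T         # projector onto leading eigenvector
print(np.linalg.norm(P @ steer) / np.linalg.norm(steer))  # close to 1
```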

  20. Spatial variability in airborne pollen concentrations.

    Science.gov (United States)

    Raynor, G S; Ogden, E C; Hayes, J V

    1975-03-01

    Tests were conducted to determine the relationship between airborne pollen concentrations and distance. Simultaneous samples were taken in 171 tests with sets of eight rotoslide samplers spaced from 1 to 486 m apart in straight lines. Use of all possible pairs gave 28 separation distances. Tests were conducted over a 2-year period in urban and rural locations distant from major pollen sources, during both tree and ragweed pollen seasons. Samples were taken at a height of 1.5 m during 5- to 20-minute periods. Tests were grouped by pollen type, location, year, and direction of the wind relative to the line. Data were analyzed to evaluate variability without regard to sampler spacing and variability as a function of separation distance. The mean, standard deviation, coefficient of variation, ratio of maximum to the mean, and ratio of minimum to the mean were calculated for each test, each group of tests, and all cases. The average coefficient of variation is 0.21, the maximum over the mean 1.39, and the minimum over the mean 0.69. No relationship was found with experimental conditions. Samples taken at the minimum separation distance had a mean difference of 18 per cent. Differences between pairs of samples increased with distance in 10 of 13 groups. These results suggest that airborne pollens are not always well mixed in the lower atmosphere and that a sample becomes less representative with increasing distance from the sampling location.

  1. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Bilsky, A V; Lozhkin, V A; Markovich, D M; Tokarev, M P

    2013-01-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART. (paper)
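
    MENT itself is not reproduced here, but the family it belongs to is easy to demonstrate: the sketch below implements a generic multiplicative ART (MART-style) update for a tiny non-negative reconstruction problem g = A f. It shows the shape of the iteration that MENT and SMART build upon, not the authors' implementation.

```python
import numpy as np

def mart(A, g, n_iter=200, relax=1.0):
    """Multiplicative ART: solve g ~ A f with f >= 0."""
    f = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):        # loop over projections (rays)
            proj = A[i] @ f
            if proj > 0 and g[i] > 0:
                # multiplicative correction, applied elementwise per pixel
                f *= (g[i] / proj) ** (relax * A[i] / A[i].max())
    return f

# Toy problem: 3 "rays" through 4 unknown intensities.
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0]])
f_true = np.array([2.0, 1.0, 3.0, 0.5])
g = A @ f_true

f_hat = mart(A, g)
print(f_hat.round(2), (A @ f_hat).round(2))  # projections match g
```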

  2. Determination of concentration factors for Cs-137 and Ra-226 in the mullet species Chelon labrosus (Mugilidae) from the South Adriatic Sea.

    Science.gov (United States)

    Antovic, Ivanka; Antovic, Nevenka M

    2011-07-01

    Concentration factors for Cs-137 and Ra-226 transfer from seawater, and from dried sediment or mud with detritus, have been determined for whole, fresh-weight Chelon labrosus individuals and selected organs. Cesium was detected in 5 of 22 fish individuals, with activity ranging from 1.0 to 1.6 Bq kg⁻¹. Radium was detected in all fish, ranging from 0.4 to 2.1 Bq kg⁻¹, with an arithmetic mean of 1.0 Bq kg⁻¹. Among fish organs, cesium activity concentration was highest in muscle (maximum 3.7 Bq kg⁻¹), while radium was highest in the skeleton (maximum 25 Bq kg⁻¹). Among cesium concentration factors, those for muscle were the highest (from seawater, an average of 47; from sediment, an average of 3.3; from mud with detritus, an average of 0.8). Radium concentration factors were highest for the skeleton (from seawater, an average of 130; from sediment, an average of 1.8; from mud with detritus, an average of 1.5). Additionally, the annual intake of cesium and radium by human adults consuming muscle of this fish species has been estimated to provide, in aggregate, an effective dose of about 4.1 μSv y⁻¹.

  3. The Maximum Free Magnetic Energy Allowed in a Solar Active Region

    Science.gov (United States)

    Moore, Ronald L.; Falconer, David A.

    2009-01-01

    Two whole-active-region magnetic quantities that can be measured from a line-of-sight magnetogram are $^LWL_{SG}$, a gauge of the total free energy in an active region's magnetic field, and $^L\theta$, a measure of the active region's total magnetic flux. From these two quantities measured from 1865 SOHO/MDI magnetograms that tracked 44 sunspot active regions across the $0.5\,R_{Sun}$ central disk, together with each active region's observed production of CMEs, X flares, and M flares, Falconer et al (2009, ApJ, submitted) found that (1) active regions have a maximum attainable free magnetic energy that increases with the magnetic size $^L\theta$ of the active region, (2) in $(\log {}^LWL_{SG}, \log {}^L\theta)$ space, CME/flare-productive active regions are concentrated in a straight-line main sequence along which the free magnetic energy is near its upper limit, and (3) X and M flares are restricted to large active regions. Here, from (a) these results, (b) the observation that even the greatest X flares produce at most only subtle changes in active region magnetograms, and (c) measurements from MSFC vector magnetograms and from MDI line-of-sight magnetograms showing that practically all sunspot active regions have nearly the same area-averaged magnetic field strength, $\bar{B} = \theta/A \approx 300$ G, where $\theta$ is the active region's total photospheric flux of field stronger than 100 G and $A$ is the area of that flux, we infer that (1) the maximum allowed ratio of an active region's free magnetic energy to its potential-field energy is 1, and (2) any one CME/flare eruption releases no more than a small fraction (less than 10%) of the active region's free magnetic energy. This work was funded by NASA's Heliophysics Division and NSF's Division of Atmospheric Sciences.

  4. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  5. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of simple iterative algorithm suggested originally by Collins. A number of distributions has been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  6. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outlier with respect to the bulk of drawdown price movement distribution. This paper goes on deeper in the analysis providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movement of indices prices. The analysis of drawdown duration is also performed and it is the core of the risk measure estimated here.
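
    Maximum drawdown itself is a one-pass computation: track the running peak of the price series and take the largest relative drop below it. A minimal sketch:

```python
import numpy as np

def max_drawdown(prices):
    """Largest peak-to-trough relative decline of a price series."""
    p = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(p)
    return ((running_peak - p) / running_peak).max()

prices = [100, 120, 115, 130, 90, 95, 140, 100]
print(f"{max_drawdown(prices):.1%}")   # (130 - 90) / 130 ~ 30.8%
```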

  7. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
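
    The exact estimator is not reproduced here, but the shared-fundamental idea can be illustrated by a crude multi-channel harmonic-summation search: each channel keeps its own spectrum (its own amplitudes, phases, and noise), while one common f0 is scored across all channels. The signal parameters below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n, f0_true = 8000.0, 2048, 220.0
t = np.arange(n) / fs

# Two channels sharing f0 but with different amplitudes, phases, noise.
channels = []
for amp, phase, noise in [(1.0, 0.0, 0.05), (0.5, 1.3, 0.2)]:
    sig = sum(amp / h * np.sin(2 * np.pi * h * f0_true * t + phase)
              for h in (1, 2, 3))
    channels.append(sig + noise * rng.normal(size=n))

spectra = [np.abs(np.fft.rfft(c)) ** 2 for c in channels]
freqs = np.fft.rfftfreq(n, 1.0 / fs)

def score(f0):
    """Sum spectral power at the first three harmonics over all channels."""
    bins = [np.argmin(np.abs(freqs - h * f0)) for h in (1, 2, 3)]
    return sum(s[b] for s in spectra for b in bins)

candidates = np.arange(80.0, 400.0, 1.0)
est = candidates[np.argmax([score(f) for f in candidates])]
print("estimated f0:", est)   # near 220 Hz (limited by FFT bin spacing)
```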

  8. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  9. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  10. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. Analogue parts of the circuit were designed using a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  11. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  12. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  13. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  14. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  15. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  16. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'.

  17. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions of the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency. A rough numerical illustration of the small-world claim is sketched below.
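    The sketch computes the average path length of balanced trees, a crude dendrimer-like proxy (not the generalized dual dendrimer of the paper), and compares it with ln N to show the logarithmic scaling:

    import math
    import networkx as nx

    # Balanced trees as a stand-in for a dendrimer-like topology.
    for h in range(2, 6):
        G = nx.balanced_tree(r=3, h=h)   # branching factor 3, height h
        n = G.number_of_nodes()
        apl = nx.average_shortest_path_length(G)
        # The ratio APL/ln(N) stays roughly bounded as N grows,
        # consistent with logarithmic (small-world-like) scaling.
        print(f"N={n:5d}  APL={apl:7.3f}  APL/ln(N)={apl / math.log(n):5.3f}")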

  18. Metals in the Scheldt estuary: From environmental concentrations to bioaccumulation.

    Science.gov (United States)

    Van Ael, Evy; Blust, Ronny; Bervoets, Lieven

    2017-09-01

    To investigate the relationship between metal concentrations in abiotic compartments and in aquatic species, sediment, suspended matter and several aquatic species (Polychaeta, Oligochaeta, four crustacean species, three mollusc species and eight fish species) were collected during three seasons at six locations along the Scheldt estuary (the Netherlands-Belgium) and analysed for their metal content (Ag, Cd, Co, Cr, Cu, Ni, Pb, Zn and the metalloid As). Sediment and biota tissue concentrations were significantly influenced by sampling location, but not by season. Measurements of Acid Volatile Sulphides (AVS) in relation to Simultaneously Extracted Metals (SEM) in the sediment suggested that not all metals in the sediment will be bound to sulphides and some metals might be bioavailable. For all metals but zinc, the highest concentrations were measured in invertebrate species: Ag and Ni in periwinkle, Cr, Co and Pb in Oligochaete worms, and As, Cd and Cu in crabs and shrimp. In fish, the concentrations of most metals were highest in liver or kidney and lowest in muscle; for Zn, the highest concentrations were measured in the kidney of European smelt. For less than half of the metals, significant correlations between sediment metal concentrations and bioaccumulated concentrations were found (liver/hepatopancreas or whole organism). To calculate the possible human health risk from consumption, average and maximum metal concentrations in the muscle tissues were compared to the minimum risk levels (MRLs). Concentrations of As led to the highest risk potential for all consumable species. Cadmium and Cu posed a risk only when consuming the most highly contaminated shrimp and shore crabs. Consuming blue mussel could result in a risk for As, Cd and Cr.

  19. Trends of atmospheric black carbon concentration over the United Kingdom

    Science.gov (United States)

    Singh, Vikas; Ravindra, Khaiwal; Sahu, Lokesh; Sokhi, Ranjeet

    2018-04-01

    Continuous observations over a period of 7 years (2009-2016) at 7 locations show a declining trend of atmospheric BC in the UK. Among all locations, the largest decrease, 8 ± 3 percent per year, was observed at Marylebone Road in London. A detailed analysis performed at 21 locations during 2009-2011 shows that the average annual mean atmospheric BC concentrations were 0.45 ± 0.10, 1.47 ± 0.58, 1.34 ± 0.31, 1.83 ± 0.46 and 9.72 ± 0.78 μg m-3 at rural, suburban, urban background, urban centre and kerbside sites, respectively. Around 1 μg m-3 of atmospheric BC could be attributed to urban emission, whereas traffic contributed up to 8 μg m-3 of atmospheric BC near busy roads. A seasonal pattern was observed at all locations except the rural and kerbside ones, with maximum concentrations (1.2-4 μg m-3) in winter, minimum concentrations (0.3-1.2 μg m-3) in summer, and similar concentrations in spring and fall. At suburban and urban background locations, similar diurnal patterns were observed, with atmospheric BC concentration peaks (≈1.8 μg m-3) in the morning (around 9 a.m.) and evening (7-9 p.m.) rush hours, and minimum concentrations during the late night (trough around 5 a.m.) and afternoon (around 2 p.m.) hours. The urban centre shows a similar morning pattern (peak at 9 a.m.; concentration ≈2.5 μg m-3) relative to background locations but only a slight decrease in concentration in the afternoon, which remained above 2 μg m-3 until midnight. It is concluded that the higher flow of traffic at urban centre locations results in higher atmospheric BC concentrations throughout the day. Comparison of weekday and weekend daily averaged atmospheric BC showed maximum concentrations on Friday and minimum levels on Sunday. This study will help to refine atmospheric BC emission inventories and provide data for the evaluation of air pollution and climate change models, which are used to formulate air pollution

  20. Maximum entropy networks are more controllable than preferential attachment networks

    International Nuclear Information System (INIS)

    Hou, Lvlin; Small, Michael; Lao, Songyang

    2014-01-01

    A maximum entropy (ME) method to generate typical scale-free networks has recently been introduced. We investigate the controllability of ME networks and Barabási–Albert preferential attachment networks. Our experimental results show that ME networks are significantly more easily controlled than BA networks of the same size and the same degree distribution. Moreover, the control profiles are used to provide insight into the control properties of both classes of network. We identify and classify the driver nodes and analyze the connectivity of their neighbors. We find that driver nodes in ME networks have fewer mutual neighbors and that their neighbors have lower average degree. We conclude that the properties of the neighbors of driver nodes sensitively affect the network controllability. Hence, subtle and important structural differences exist between BA networks and typical scale-free networks of the same degree distribution. - Highlights: • The controllability of maximum entropy (ME) and Barabási–Albert (BA) networks is investigated. • ME networks are significantly more easily controlled than BA networks of the same degree distribution. • The properties of the neighbors of driver nodes sensitively affect the network controllability. • Subtle and important structural differences exist between BA networks and typical scale-free networks

  1. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.

  2. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be the probability function, among all those calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
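    The equivocation norm lends itself to a small numerical illustration. Below is a minimal sketch (our construction, not taken from the paper) of the classic Brandeis dice problem: among all distributions over six faces calibrated to a mean of 4.5, find the one of maximum entropy.

    import numpy as np
    from scipy.optimize import minimize

    faces = np.arange(1, 7)
    target_mean = 4.5          # the "evidence" the distribution is calibrated to

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)      # guard against log(0)
        return np.sum(p * np.log(p))    # minimising this maximises entropy

    constraints = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},            # normalisation
        {"type": "eq", "fun": lambda p: p @ faces - target_mean},  # calibration
    ]
    p0 = np.full(6, 1 / 6)              # start from the fully equivocal distribution
    res = minimize(neg_entropy, p0, bounds=[(0, 1)] * 6, constraints=constraints)
    print(res.x.round(4))  # exponential-family tilt toward the high faces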

  3. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  4. Color corrected Fresnel lens for solar concentration

    International Nuclear Information System (INIS)

    Kritchman, E.M.

    1979-01-01

    A new linear convex Fresnel lens with its groove side down is described. The design philosophy is similar to that of the highly concentrating two-focal Fresnel lens, but includes a correction for chromatic aberration. A solar concentration ratio as high as 80 is achieved. For wide acceptance angles the concentration nears the theoretical maximum. (author)

  5. Soil nematodes show a mid-elevation diversity maximum and elevational zonation on Mt. Norikura, Japan.

    Science.gov (United States)

    Dong, Ke; Moroenyane, Itumeleng; Tripathi, Binu; Kerfahi, Dorsaf; Takahashi, Koichi; Yamamoto, Naomichi; An, Choa; Cho, Hyunjun; Adams, Jonathan

    2017-06-08

    Little is known about how nematode ecology differs across elevational gradients. We investigated the soil nematode community along a ~2,200 m elevational range on Mt. Norikura, Japan, by sequencing the 18S rRNA gene. As with many other groups of organisms, nematode diversity was strongly correlated with elevation, with a maximum at mid-elevations. While elevation itself, in the context of the mid-domain effect, could predict the observed unimodal pattern of soil nematode communities along the elevational gradient, mean annual temperature and soil total nitrogen concentration were the best predictors of diversity. Nematode community composition also showed strong elevational zonation, indicating a high degree of ecological specialization in relation to elevation-related environmental gradients. Certain nematode OTUs had ranges extending across all elevations, and these generalized OTUs made up a greater proportion of the community at high elevations, such that high-elevation nematode OTUs had broader elevational ranges on average, an example consistent with Rapoport's elevational hypothesis. This study reveals the potential of sequencing methods for investigating elevational gradients of small soil organisms, providing a way to rapidly investigate patterns without specialized taxonomic knowledge.

  6. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). In contrast to previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water

  7. Evaluating the maximum patient radiation dose in cardiac interventional procedures

    International Nuclear Information System (INIS)

    Kato, M.; Chida, K.; Sato, T.; Oosaka, H.; Tosa, T.; Kadowaki, K.

    2011-01-01

    Many of the X-ray systems that are used for cardiac interventional radiology provide no way to evaluate the patient maximum skin dose (MSD). The authors report a new method for evaluating the MSD by using the cumulative patient entrance skin dose (ESD), which includes a back-scatter factor and the number of cine-angiography frames during percutaneous coronary intervention (PCI). Four hundred consecutive PCI patients (315 men and 85 women) were studied. The correlation between the cumulative ESD and number of cine-angiography frames was investigated. The irradiation and overlapping fields were verified using dose-mapping software. A good correlation was found between the cumulative ESD and the number of cine-angiography frames. The MSD could be estimated using the proportion of cine-angiography frames used for the main angle of view relative to the total number of cine-angiography frames and multiplying this by the cumulative ESD. The average MSD (3.0±1.9 Gy) was lower than the average cumulative ESD (4.6±2.6 Gy). This method is an easy way to estimate the MSD during PCI. (authors)
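    A hedged illustration of the estimate described above; the function and variable names are hypothetical, and the back-scatter factor is assumed to be already folded into the cumulative ESD, as the abstract implies:

    def estimate_msd(cumulative_esd_gy: float,
                     frames_main_view: int,
                     frames_total: int) -> float:
        """Estimate maximum skin dose (MSD) during PCI.

        Approximates the MSD as the fraction of cine-angiography frames
        acquired at the main angle of view times the cumulative entrance
        skin dose (ESD).
        """
        if frames_total <= 0:
            raise ValueError("frames_total must be positive")
        return cumulative_esd_gy * (frames_main_view / frames_total)

    # With the paper's average cumulative ESD of 4.6 Gy, a main view that
    # accounts for ~65% of the frames (an illustrative figure) reproduces
    # roughly the reported average MSD of 3.0 Gy.
    print(estimate_msd(4.6, 650, 1000))  # -> 2.99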

  8. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum is taken not over all matrices, but only over matrices of rank one. Examples are given.

  9. Planar waveguide concentrator used with a seasonal tracker.

    Science.gov (United States)

    Bouchard, Sébastien; Thibault, Simon

    2012-10-01

    Solar concentrators offer good promise for reducing the cost of solar power. Planar waveguides equipped with a microlens slab have already been proposed as an excellent approach to produce medium to high concentration levels. Instead, we suggest the use of a cylindrical microlens array to get useful concentration without tracking during the day. To use only a seasonal tracking system and get the highest possible concentration, cylindrical microlenses are placed in the east-west orientation. Our new design has an acceptance angle in the north-south direction of ±9° and ±54° in the east-west axis. Simulation of our optimized system achieves a 4.6× average concentration level from 8:30 to 16:30 with a maximum of 8.1× and 80% optical efficiency. The low-cost advantage of waveguide-based solar concentrators could support their use in roof-mounted solar panels and eliminate the need for an expensive and heavy active tracker.

  10. Comparison of predicted and measured variations of indoor radon concentration

    International Nuclear Information System (INIS)

    Arvela, H.; Voutilainen, A.; Maekelaeinen, I.; Castren, O.; Winqvist, K.

    1988-01-01

    Predictions of the variations of indoor radon concentration were calculated using a model relating indoor radon concentration to radon entry rate, air infiltration and meteorological factors. These calculated variations have been compared with the seasonal variations of 33 houses over 1-4 years, with the winter-summer concentration ratios of 300 houses, and with the measured diurnal variation. In houses with a slab in ground contact, the measured seasonal variations are quite often in agreement with the variations predicted for nearly pure pressure-difference-driven flow. The contribution of a diffusion source is significant in houses with large porous concrete walls against the ground. Air flow due to seasonally variable thermal convection within eskers strongly affects the seasonal variations within houses located thereon. Measured and predicted winter-summer concentration ratios demonstrate that, on average, the ratio is a function of radon concentration: the ratio increases with increasing winter concentration. According to the model, the diurnal maximum caused by a pressure-difference-driven flow occurs in the morning, a finding which is in agreement with the measurements. The model presented can be used for differentiating between factors affecting radon entry into houses. (author)

  11. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  12. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state (market equilibrium) is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.

  13. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  14. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear-powered electrical generating plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability, that is, the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair, respectively. Once those statistical models are specified, the availability A(t) can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter λ and the time-to-repair model for Y is an exponential density with parameter θ. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = λ/(λ+θ) + [θ/(λ+θ)] exp{−[(1/λ) + (1/θ)]t} for t > 0, and the steady-state availability is A(∞) = λ/(λ+θ). We use the observations from n failure-repair cycles of the power plant, say X₁, X₂, ..., Xₙ, Y₁, Y₂, ..., Yₙ, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
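    A minimal sketch of the estimator under the abstract's parameterization, where λ and θ play the role of exponential means (so their ML estimates are the sample means); the numbers are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical observations from n failure-repair cycles (hours).
    true_lambda, true_theta, n = 500.0, 20.0, 40
    X = rng.exponential(true_lambda, n)   # times to failure
    Y = rng.exponential(true_theta, n)    # times to repair

    lam_hat = X.mean()       # ML estimate of the exponential mean time-to-failure
    theta_hat = Y.mean()     # ML estimate of the exponential mean time-to-repair

    def availability(t, lam, theta):
        """Instantaneous availability A(t) from the abstract's formula."""
        ss = lam / (lam + theta)                               # steady-state part
        return ss + (theta / (lam + theta)) * np.exp(-(1/lam + 1/theta) * t)

    print("A(infinity) estimate:", lam_hat / (lam_hat + theta_hat))
    print("A(10 h) estimate:   ", availability(10.0, lam_hat, theta_hat))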

  15. Maximum neutron flux in thermal reactors; Maksimum neutronskog fluksa kod termalnih reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Strugar, P V [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1968-07-01

    A direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e., by using the maximum principle of Pontryagin, which is well suited to this application. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum-principle point of view. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  16. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  17. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs.

  18. Analysis of photosynthate translocation velocity and measurement of weighted average velocity in transporting pathway of crops

    International Nuclear Information System (INIS)

    Ge Cailin; Luo Shishi; Gong Jian; Zhang Hao; Ma Fei

    1996-08-01

    The translocation profile pattern of ¹⁴C-photosynthate along the transporting pathway in crops was monitored by pulse-labelling a mature leaf with ¹⁴CO₂. The progressive spreading of the translocation profile pattern along the sheath or stem indicates that the translocation of photosynthate along the sheath or stem proceeds with a range of velocities rather than with just a single velocity. A method for measuring the weighted average velocity of photosynthate translocation along the sheath or stem was established in living crops. The weighted average velocity and the maximum velocity of photosynthate translocation along the sheath in rice and maize were measured. (4 figs., 3 tabs.)

  19. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on a maximum-current search method, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
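    A minimal sketch of the maximum-current search idea as a perturb-and-observe hill climb; the single-peak current curve below is a toy stand-in for the real PV/SPE characteristic, and all names are our own, not taken from the paper:

    import math

    def converter_current(duty: float) -> float:
        """Toy stand-in for the measured DC-DC output current vs. duty factor."""
        return math.exp(-((duty - 0.62) ** 2) / 0.02)

    def track_maximum_current(duty=0.3, step=0.01, iters=100):
        """Perturb-and-observe search for the duty factor of maximum current."""
        current = converter_current(duty)
        for _ in range(iters):
            candidate = duty + step
            new_current = converter_current(candidate)
            if new_current < current:   # overshot the peak: reverse direction
                step = -step
                candidate = duty + step
                new_current = converter_current(candidate)
            duty, current = candidate, new_current
        return duty, current

    duty, current = track_maximum_current()
    print(f"duty ≈ {duty:.3f}, normalised current ≈ {current:.3f}")

    In a real controller the perturbation would act on the PWM duty factor and the measured converter current would replace the toy curve; the loop settles into a small oscillation around the maximum-current point.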

  20. Averaged emission factors for the Hungarian car fleet

    Energy Technology Data Exchange (ETDEWEB)

    Haszpra, L. [Inst. for Atmospheric Physics, Budapest (Hungary); Szilagyi, I. [Central Research Inst. for Chemistry, Budapest (Hungary)

    1995-12-31

    The vehicular emission of non-methane hydrocarbon (NMHC) is one of the largest anthropogenic sources of NMHC in Hungary and in most industrialized countries. Non-methane hydrocarbon plays a key role in the formation of photochemical air pollution, usually characterized by the ozone concentration, which seriously endangers the environment and human health. The ozone-forming potentials of the different NMHCs differ significantly from each other, while the NMHC composition of car exhaust is influenced by the fuel and engine type, the technical condition of the vehicle, the vehicle speed and several other factors. In Hungary the majority of the cars are still of Eastern European origin. They represent the technological standard of the 1970s, although there have been changes recently. Due to the long-term economic decline in Hungary, the average age of the cars was about 9 years in 1990 and reached 10 years by 1993. The condition of the majority of the cars is poor. In addition, almost one third (31.2%) of the cars are equipped with two-stroke engines, which emit less NOx but much more hydrocarbon. The number of cars equipped with catalytic converters was negligible in 1990 and has been increasing only slowly in recent years. As a consequence of these facts, the traffic emission in Hungary may differ from that measured in or estimated for Western European countries, and the differences should be taken into account in air pollution models. For the estimation of the average emission of the Hungarian car fleet, a one-day roadway tunnel experiment was performed in downtown Budapest in summer 1991. (orig.)

  1. Maximum permissible continuous release rates of phosphorus-32 and sulphur-35 to atmosphere in a milk producing area

    Energy Technology Data Exchange (ETDEWEB)

    Bryant, P M

    1963-01-01

    A method is given for calculating, for design purposes, the maximum permissible continuous release rates of phosphorus-32 and sulphur-35 to atmosphere with respect to milk contamination. In the absence of authoritative advice from the Medical Research Council, provisional working levels for the concentration of phosphorus-32 and sulphur-35 in milk are derived, and details are given of the agricultural assumptions involved in calculating the relationship between the amount of the nuclide deposited on grassland and that to be found in milk. The agricultural and meteorological conditions assumed are applicable as an annual average to England and Wales. The results (in mc/day) for phosphorus-32 and sulphur-35 for a number of stack heights and distances are shown graphically; typical values, quoted in a table, include 20 mc/day of phosphorus-32 and 30 mc/day of sulphur-35 as the maximum permissible continuous release rates with respect to ground-level releases at a distance of 200 metres from pastureland.

  2. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; the real-time model was also used at 5 min intervals. The results showed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency.
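    For orientation, a plain (non-Bayesian) Poisson regression stands in below for the paper's Bayesian Poisson-lognormal model; the data and column names are entirely hypothetical, chosen to mirror the four significant variables reported above:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    df = pd.DataFrame({
        "log_volume": np.log(rng.uniform(500, 5000, n)),
        "speed_std":  rng.uniform(2, 15, n),
        "log_length": np.log(rng.uniform(0.2, 3.0, n)),
        "diverge":    rng.integers(0, 2, n),
    })
    # Simulate crash counts from a known Poisson model, then recover it.
    lin = (-6 + 0.8 * df.log_volume + 0.05 * df.speed_std
           + 0.5 * df.log_length + 0.3 * df.diverge)
    df["crashes"] = rng.poisson(np.exp(lin))

    X = sm.add_constant(df[["log_volume", "speed_std", "log_length", "diverge"]])
    fit = sm.GLM(df["crashes"], X, family=sm.families.Poisson()).fit()
    print(fit.summary())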

  3. Boosting biogas yield of anaerobic digesters by utilizing concentrated molasses from 2nd generation bioethanol plant

    Energy Technology Data Exchange (ETDEWEB)

    Sarker, Shiplu [Department of Renewable Energy, Faculty of Engineering and Science, University of Agder, Grimstad-4879 (Norway); Moeller, Henrik Bjarne [Department of Biosystems Engineering, Faculty of Science and Technology, Aarhus University, Research center Foulum, Blichers Alle, Post Box 50, Tjele-8830 (Denmark)

    2013-07-01

    Concentrated molasses (C5 molasses) from a 2nd generation bioethanol plant has been investigated for enhancing the productivity of manure-based digesters. A batch study at mesophilic conditions (35 ± 1 °C) showed a maximum methane yield from molasses of 286 L CH4/kg VS, which was approximately 63% of the calculated theoretical yield. In addition to the batch study, co-digestion of molasses with cattle manure in a semi-continuously stirred reactor at thermophilic temperature (50 ± 1 °C) was also performed with a stepwise increase in molasses concentration. The results from this experiment revealed a maximum average biogas yield of 1.89 L/L/day when 23% VS from molasses was co-digested with cattle manure. However, digesters fed with more than 32% VS from molasses and with a short adaptation period showed VFA accumulation and reduced methane productivity, indicating that this level should not be exceeded when using molasses as a biogas booster.

  4. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

    We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10¹³ G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10¹³ G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)

  5. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  6. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for the classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as a semi-definite program (SDP), which is computationally very expensive and can only handle small data sets. Moreover, most of these algorithms support only two-class classification, which cannot be used for the classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. The algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm yields acceptable results for hyperspectral data clustering.

  7. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  8. Maximum power flux of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Benson, R.F.; Fainberg, J.

    1991-01-01

    The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 × 10⁻¹³ W m⁻² Hz⁻¹ at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r⁻². A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3.

  9. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. The method has been validated in experiments, providing much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to significant frequencies.
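    A minimal sketch of time delay estimation by cross-correlation, the core of the time-arrival-difference method; the maximum likelihood frequency weighting the paper adds on top is omitted, and all signal parameters are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 10_000                  # sampling rate, Hz (assumed)
    true_delay = 0.0123          # seconds

    t = np.arange(0, 1.0, 1 / fs)
    leak = rng.normal(size=t.size)            # broadband leak-like noise
    shift = int(round(true_delay * fs))
    s1 = leak + 0.2 * rng.normal(size=t.size)             # sensor 1
    s2 = np.roll(leak, shift) + 0.2 * rng.normal(size=t.size)  # sensor 2, delayed

    xcorr = np.correlate(s2, s1, mode="full")
    lag = np.argmax(xcorr) - (t.size - 1)     # lag of the correlation peak
    print(f"estimated delay: {lag / fs:.4f} s (true {true_delay:.4f} s)")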

  10. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.

  11. A METHOD FOR DETERMINING THE RADIALLY-AVERAGED EFFECTIVE IMPACT AREA FOR AN AIRCRAFT CRASH INTO A STRUCTURE

    Energy Technology Data Exchange (ETDEWEB)

    Walker, William C. [ORNL

    2018-02-01

    This report presents a methodology for deriving the equations which can be used for calculating the radially-averaged effective impact area for a theoretical aircraft crash into a structure. Conventionally, a maximum effective impact area has been used in calculating the probability of an aircraft crash into a structure. Whereas the maximum effective impact area is specific to a single direction of flight, the radially-averaged effective impact area takes into consideration the real-life random nature of the direction of flight with respect to a structure. Since the radially-averaged effective impact area is less than the maximum effective impact area, the resulting calculated probability of an aircraft crash into a structure is reduced.
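    The averaging step itself can be written compactly; the following is a hedged sketch in our own notation, not an equation quoted from the report. If A_eff(θ) denotes the effective impact area for a flight heading θ measured relative to the structure, the radially-averaged effective impact area is

    \bar{A}_{\mathrm{eff}} = \frac{1}{2\pi} \int_0^{2\pi} A_{\mathrm{eff}}(\theta)\, d\theta \le \max_\theta A_{\mathrm{eff}}(\theta),

    which makes the report's closing observation explicit: the mean over headings can never exceed the single-direction maximum.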

  12. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  13. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  14. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  15. Reconstructing Historical VOC Concentrations in Drinking Water for Epidemiological Studies at a U.S. Military Base: Summary of Results

    Directory of Open Access Journals (Sweden)

    Morris L. Maslia

    2016-10-01

    Full Text Available A U.S. government health agency conducted epidemiological studies to evaluate whether exposures to drinking water contaminated with volatile organic compounds (VOC) at U.S. Marine Corps Base Camp Lejeune, North Carolina, were associated with increased health risks to children and adults. These health studies required knowledge of contaminant concentrations in drinking water, at monthly intervals, delivered to family housing, barracks, and other facilities within the study area. Because concentration data were limited or unavailable during much of the period of contamination (1950s–1985), the historical reconstruction process was used to quantify estimates of monthly mean contaminant-specific concentrations. This paper integrates many efforts, reports, and papers into a synthesis of the overall approach to, and results from, a drinking-water historical reconstruction study. Results show that at the Tarawa Terrace water treatment plant (WTP), reconstructed (simulated) tetrachloroethylene (PCE) concentrations reached a maximum monthly average value of 183 micrograms per liter (μg/L), compared to a one-time maximum measured value of 215 μg/L, and exceeded the U.S. Environmental Protection Agency's current maximum contaminant level (MCL) of 5 μg/L during the period November 1957–February 1987. At the Hadnot Point WTP, reconstructed trichloroethylene (TCE) concentrations reached a maximum monthly average value of 783 μg/L, compared to a one-time maximum measured value of 1400 μg/L, during the period August 1953–December 1984. The Hadnot Point WTP also provided contaminated drinking water to the Holcomb Boulevard housing area continuously prior to June 1972, when the Holcomb Boulevard WTP came on line (maximum reconstructed TCE concentration of 32 μg/L), and intermittently during the period June 1972–February 1985 (maximum reconstructed TCE concentration of 66 μg/L). Applying the historical reconstruction process to quantify contaminant

  16. Optimal operating conditions for maximum biogas production in anaerobic bioreactors

    International Nuclear Information System (INIS)

    Balmant, W.; Oliveira, B.H.; Mitchell, D.A.; Vargas, J.V.C.; Ordonez, J.C.

    2014-01-01

    The objective of this paper is to demonstrate the existence of an optimal residence time and substrate inlet mass flow rate for maximum methane production, through numerical simulations performed with a general transient mathematical model of an anaerobic biodigester introduced in this study. A simplified model is suggested herein, with only the most important reaction steps, which are carried out by a single type of microorganism following Monod kinetics. The mathematical model was developed for a well-mixed reactor (CSTR, Continuous Stirred-Tank Reactor), considering three main reaction steps: acidogenesis, with a μ_max of 8.64 day⁻¹ and a K_S of 250 mg/L; acetogenesis, with a μ_max of 2.64 day⁻¹ and a K_S of 32 mg/L; and methanogenesis, with a μ_max of 1.392 day⁻¹ and a K_S of 100 mg/L. The yield coefficients were 0.1 g dry cells/g polymeric compound for acidogenesis, 0.1 g dry cells/g propionic acid and 0.1 g dry cells/g butyric acid for acetogenesis, and 0.1 g dry cells/g acetic acid for methanogenesis. The model describes both the transient and the steady-state regime for several different biodigester designs and operating conditions. After experimental validation of the model, a parametric analysis was performed. It was found that biogas production is strongly dependent on the input polymeric substrate and fermentable monomer concentrations, but fairly independent of the input propionic, acetic and butyric acid concentrations. An optimisation study was then conducted, and an optimal residence time and substrate inlet mass flow rate were found for maximum methane production. The optima found were very sharp, showing a sudden drop of methane mass flow rate from the observed maximum to zero within a 20% range around the optimal operating parameters, which stresses the importance of their identification, no matter how complex the actual bioreactor design may be. The model is therefore expected to be a useful tool for simulation, design, control and
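    To see why an interior optimum residence time exists, a single Monod step in a chemostat already suffices. The sketch below uses the methanogenesis parameters quoted in the abstract but an assumed inlet concentration, and it collapses the three-step chain into one step, so it illustrates the mechanism rather than reproduces the paper's model:

    import numpy as np

    mu_max, K_s = 1.392, 100.0    # methanogenesis values from the abstract
    S_in = 5000.0                 # inlet substrate concentration, mg/L (assumed)

    best = (None, -1.0)
    for tau in np.linspace(0.75, 20.0, 200):   # residence time, days
        D = 1.0 / tau                          # dilution rate, 1/day
        if D >= mu_max:
            continue                           # washout: no non-trivial steady state
        S = K_s * D / (mu_max - D)             # steady-state substrate (Monod + CSTR)
        if S >= S_in:
            continue
        rate = (S_in - S) * D                  # substrate converted per litre per day,
                                               # a proxy for methane production rate
        if rate > best[1]:
            best = (tau, rate)

    print(f"optimal residence time ≈ {best[0]:.2f} d, "
          f"conversion rate ≈ {best[1]:.0f} mg/L/day")

    The conversion rate vanishes both for very long residence times (low throughput) and near washout, so the maximum lies at an interior residence time, mirroring the sharp optima reported above.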

  17. A conductance maximum observed in an inward-rectifier potassium channel

    OpenAIRE

    1994-01-01

    One prediction of a multi-ion pore is that its conductance should reach a maximum and then begin to decrease as the concentration of permeant ion is raised equally on both sides of the membrane. A conductance maximum has been observed at the single-channel level in gramicidin and in a Ca(2+)-activated K+ channel at extremely high ion concentration (> 1,000 mM) (Hladky, S. B., and D. A. Haydon. 1972. Biochimica et Biophysica Acta. 274:294-312; Eisenmam, G., J. Sandblom, and E. Neher. 1977. In ...

  18. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  19. A Kinetic Model to Explain the Maximum in alpha-Amylase Activity Measurements in the Presence of Small Carbohydrates

    NARCIS (Netherlands)

    Baks, T.; Janssen, A.E.M.; Boom, R.M.

    2006-01-01

    The effect of the presence of several small carbohydrates on the measurement of the α-amylase activity was determined over a broad concentration range. At low carbohydrate concentrations, a distinct maximum in the α-amylase activity versus concentration curves was observed in several cases. At higher

  20. STUDIES OF CHOSEN TOXIC ELEMENTS CONCENTRATION IN MULTIFLOWER BEE HONEY

    Directory of Open Access Journals (Sweden)

    Ewa Popiela

    2011-04-01

    Full Text Available The aim of the study was to determine the bioaccumulation level of chosen toxic elements (Zn, Cu, Pb, As and Cd) in multiflower honey collected from the Brzeg area. Biological material (honey) was mineralized using the microwave technique at elevated pressure in the microprocessor pressure station type Mars 5. Quantitative analysis of elements (As, Cd, Cu, Pb and Zn) was performed by the plasma spectrometry method using a Varian ICP-AES apparatus. The presence of toxic elements was determined in the examined biological materials. The elements followed the following decreasing order with respect to their content in honey: Zn>Cu>Pb>As>Cd. The average concentrations of the studied elements observed in multiflower honey were as follows: 6.24 mg·kg⁻¹ of zinc, 2.75 mg·kg⁻¹ of copper, and 0.53, 0.071 and 0.042 mg·kg⁻¹ of lead, arsenic and cadmium, respectively. Lead was the most problematic in bee honey because its average content exceeded the maximum acceptable concentration: the concentration of this metal in the studied samples was 60% higher than the allowable standard for lead content. doi:10.5219/134

  1. Nonimaging concentrators for diode-pumped slab lasers

    Science.gov (United States)

    Lacovara, Philip; Gleckman, Philip L.; Holman, Robert L.; Winston, Roland

    1991-10-01

    Diode-pumped slab lasers require concentrators for high-average power operation. We detail the properties of diode lasers and slab lasers which set the concentration requirements and the concentrator design methodologies that are used, and describe some concentrator designs used in high-average power slab lasers at Lincoln Laboratory.

  2. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

    When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had 0 probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R-code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
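    The targeting recipe is concise enough to sketch end to end. Below is a minimal, illustrative TMLE for the average treatment effect on simulated data; the logistic working models, the data-generating process, and all variable names are assumptions for this sketch, not the authors' code (their R and Stata implementations are linked above):

```python
# Illustrative TMLE for the ATE with binary treatment A and binary outcome Y.
import numpy as np
import statsmodels.api as sm
from scipy.special import logit, expit

rng = np.random.default_rng(0)
n = 5000
W = rng.normal(size=(n, 2))                        # baseline confounders
A = rng.binomial(1, expit(0.4 * W[:, 0] - 0.3 * W[:, 1]))
Y = rng.binomial(1, expit(-0.5 + A + 0.6 * W[:, 0] + 0.4 * W[:, 1]))

# Step 1: initial outcome model Q(A, W), here a simple logistic regression
X = np.column_stack([np.ones(n), A, W])
q_fit = sm.GLM(Y, X, family=sm.families.Binomial()).fit()
Q_AW = q_fit.predict(X)
Q_1W = q_fit.predict(np.column_stack([np.ones(n), np.ones(n), W]))
Q_0W = q_fit.predict(np.column_stack([np.ones(n), np.zeros(n), W]))

# Step 2: propensity score g(W) and the "clever covariate" H(A, W)
g_fit = sm.GLM(A, np.column_stack([np.ones(n), W]),
               family=sm.families.Binomial()).fit()
g = np.clip(g_fit.predict(), 0.01, 0.99)           # bound: protect positivity
H = A / g - (1 - A) / (1 - g)

# Step 3: targeting -- fluctuate Q along H via logistic regression with
# the initial logit(Q) as offset, then plug the updated Q* into the ATE
eps = sm.GLM(Y, H[:, None], family=sm.families.Binomial(),
             offset=logit(Q_AW)).fit().params[0]
Q1_star = expit(logit(Q_1W) + eps / g)
Q0_star = expit(logit(Q_0W) - eps / (1 - g))

print(f"TMLE ATE estimate: {np.mean(Q1_star - Q0_star):.3f}")
```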

  3. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  4. Determination of concentration factors for Cs-137 and Ra-226 in the mullet species Chelon labrosus (Mugilidae) from the South Adriatic Sea

    Energy Technology Data Exchange (ETDEWEB)

    Antovic, Ivanka [Department for Biochemical and Medical Sciences, State University of Novi Pazar, Vuka Karadzica bb, 36 300 Novi Pazar (Serbia); Antovic, Nevenka M., E-mail: nenaa@rc.pmf.ac.me [Faculty of Natural Sciences and Mathematics, University of Montenegro, Dzordza Vasingtona bb, 20 000 Podgorica (Montenegro)

    2011-07-15

    Concentration factors for Cs-137 and Ra-226 transfer from seawater, and dried sediment or mud with detritus, have been determined for whole, fresh weight, Chelon labrosus individuals and selected organs. Cesium was detected in 5 of 22 fish individuals, and its activity ranged from 1.0 to 1.6 Bq kg⁻¹. Radium was detected in all fish, and ranged from 0.4 to 2.1 Bq kg⁻¹, with an arithmetic mean of 1.0 Bq kg⁻¹. In regards to fish organs, cesium activity concentration was highest in muscles (maximum - 3.7 Bq kg⁻¹), while radium was highest in skeletons (maximum - 25 Bq kg⁻¹). Among cesium concentration factors, those for muscles were the highest (from seawater - an average of 47, from sediment - an average of 3.3, from mud with detritus - an average of 0.8). Radium concentration factors were the highest for skeleton (from seawater - an average of 130, from sediment - an average of 1.8, from mud with detritus - an average of 1.5). Additionally, annual intake of cesium and radium by human adults consuming muscles of this fish species has been estimated to provide, in aggregate, an effective dose of about 4.1 μSv y⁻¹. - Highlights: > Radionuclide transfer from seawater, sediment and mud with detritus. > Concentration factors for Cs-137 and Ra-226 in C. labrosus whole fish and organs. > Cs-137 concentration factors are highest for C. labrosus muscles. > Ra-226 concentration factors are highest for C. labrosus skeleton.
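    The concentration factor itself is a simple ratio, and the arithmetic is worth making explicit. A small sketch; the seawater value below is hypothetical, chosen only so the result lands near the order of magnitude reported for muscle:

```python
# The concentration factor (CF) used above is the ratio of activity
# concentration in the organism (or organ) to that in the medium.

def concentration_factor(c_biota: float, c_medium: float) -> float:
    """CF = activity concentration in biota / concentration in medium
    (consistent units, e.g. Bq/kg fresh weight vs Bq/L)."""
    return c_biota / c_medium

# hypothetical: muscle at 3.7 Bq/kg, seawater at an assumed 0.08 Bq/L
print(round(concentration_factor(3.7, 0.08)))   # ~46 with these inputs
```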

  5. Determination of concentration factors for Cs-137 and Ra-226 in the mullet species Chelon labrosus (Mugilidae) from the South Adriatic Sea

    International Nuclear Information System (INIS)

    Antovic, Ivanka; Antovic, Nevenka M.

    2011-01-01

    Concentration factors for Cs-137 and Ra-226 transfer from seawater, and dried sediment or mud with detritus, have been determined for whole, fresh weight, Chelon labrosus individuals and selected organs. Cesium was detected in 5 of 22 fish individuals, and its activity ranged from 1.0 to 1.6 Bq kg⁻¹. Radium was detected in all fish, and ranged from 0.4 to 2.1 Bq kg⁻¹, with an arithmetic mean of 1.0 Bq kg⁻¹. In regards to fish organs, cesium activity concentration was highest in muscles (maximum - 3.7 Bq kg⁻¹), while radium was highest in skeletons (maximum - 25 Bq kg⁻¹). Among cesium concentration factors, those for muscles were the highest (from seawater - an average of 47, from sediment - an average of 3.3, from mud with detritus - an average of 0.8). Radium concentration factors were the highest for skeleton (from seawater - an average of 130, from sediment - an average of 1.8, from mud with detritus - an average of 1.5). Additionally, annual intake of cesium and radium by human adults consuming muscles of this fish species has been estimated to provide, in aggregate, an effective dose of about 4.1 μSv y⁻¹. - Highlights: → Radionuclide transfer from seawater, sediment and mud with detritus. → Concentration factors for Cs-137 and Ra-226 in C. labrosus whole fish and organs. → Cs-137 concentration factors are highest for C. labrosus muscles. → Ra-226 concentration factors are highest for C. labrosus skeleton.

  6. A reconnaissance study of radon concentrations in Hamadan city, Iran

    Directory of Open Access Journals (Sweden)

    G. K. Gillmore

    2010-04-01

    Full Text Available This paper presents results of a reconnaissance study that used CR-39 alpha track-etch detectors to measure radon concentrations in dwellings in Hamadan, western Iran, a city which, significantly, is built on permeable alluvial fan deposits. The indoor radon levels recorded varied from 4 (i.e. below the lower limit of detection for the method) to 364 Bq/m3, with a mean value of 108 Bq/m3, which is 2.5 times the average global population-weighted indoor radon concentration – these data augment the very few published studies on indoor radon levels in Iran. The maximum radon concentration in Hamadan occurs during the winter period (January to March), with lower concentrations during the autumn. The effective dose equivalent to the population in Hamadan is estimated from this study to be in the region of 2.7 mSv/y, which is above the guideline dose to a member of the public of 1 mSv/y suggested by the International Commission on Radiological Protection (ICRP) in 1993. This study supports other work in a number of countries that indicates such permeable "surficial" deposits as being of intermediate to high radon potential. In western Iran, the presence of hammered clay floors, the widespread presence of excavated qanats, the textural properties of surficial deposits and human behaviour intended to cope with winds are likely to be important factors influencing radon concentrations in older buildings.
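    The quoted dose is reproducible from the mean concentration with the widely used UNSCEAR-style conversion. A sketch, assuming the conventional equilibrium factor F = 0.4, 7000 h/y indoor occupancy and a dose coefficient of about 9 nSv per Bq h m⁻³ (the paper's exact parameters are not stated in the abstract):

```python
# Hedged reconstruction of the dose arithmetic: with the assumed UNSCEAR-style
# parameters, the 108 Bq/m3 mean reproduces roughly 2.7 mSv/y.

def radon_effective_dose_mSv_per_y(c_bq_m3: float, F: float = 0.4,
                                   hours: float = 7000.0,
                                   dcf_nsv: float = 9.0) -> float:
    return c_bq_m3 * F * hours * dcf_nsv * 1e-6   # nSv -> mSv

print(round(radon_effective_dose_mSv_per_y(108.0), 1))   # 2.7
```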

  7. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  8. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  9. [Ozone concentration distribution of urban].

    Science.gov (United States)

    Yin, Yong-quan; Li, Chang-mei; Ma, Gui-xia; Cui, Zhao-jie

    2004-11-01

    The increase of ozone concentration in urban areas is one of the most important research topics in environmental science. With the increase of nitrogen oxides and hydrocarbon compounds exhausted from cars, the ozone concentration in urban areas increases markedly under sunlight, and photochemical smog becomes a possible threat. Therefore, it is very important to monitor and study the distribution of ozone concentration in urban areas. The frequency distribution, diurnal variation and monthly variation of ozone concentration were studied on the campus of Shandong University during six months of monitoring. The influence of solar radiation and weather conditions on ozone concentration was discussed. The frequency of ozone concentrations less than 200 microg/m3 is 96.88%. The ozone concentration has an obvious diurnal variation: the concentration in the afternoon is higher than in the morning and in the evening. The maximum appears in June, when solar radiation is strong and air temperature is high. Weather conditions also influence the ozone concentration: the concentration on clear days is higher than on rainy and cloudy days.

  10. Study of temporal variation of radon concentrations in public drinking water supplies

    International Nuclear Information System (INIS)

    York, E.L.

    1995-01-01

    The Environmental Protection Agency (EPA) has proposed a Maximum Contaminant Level (MCL) for radon-222 in public drinking water supplies of 300 pCi/L. Proposed monitoring requirements include collecting quarterly grab samples for the first year, then annual samples for the remainder of the compliance cycle provided first year quarterly samples average below the MCL. The focus of this research was to study the temporal variation of groundwater radon concentrations to investigate how reliably one can predict an annual average radon concentration based on the results of grab samples. Using a "slow-flow" collection method and liquid scintillation analysis, biweekly water samples were taken from ten public water supply wells in North Carolina (6 month - 11 month sampling periods). Based on study results, temporal variations exist in groundwater radon concentrations. Statistical analysis performed on the data indicates that grab samples taken from each of the ten wells during the study period would exhibit groundwater radon concentrations within 30% of their average radon concentration
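    The question of how well sparse grab samples track an annual mean lends itself to a quick simulation. A sketch with an assumed seasonal model and noise level (not the study's data):

```python
# Quick simulation: how well do four quarterly grab samples estimate the
# annual mean when groundwater radon varies seasonally?
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(365)
annual_mean = 400.0                                       # pCi/L, hypothetical
radon = (annual_mean * (1.0 + 0.2 * np.sin(2 * np.pi * t / 365.0))
         + rng.normal(0.0, 30.0, t.size))                 # seasonality + noise

quarterly = radon[[0, 91, 182, 273]]                      # four grab samples
err = abs(quarterly.mean() - radon.mean()) / radon.mean()
print(f"quarterly-average error vs annual mean: {100 * err:.1f}%")
```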

  11. Maximum values and classifications of radionuclides

    International Nuclear Information System (INIS)

    1993-01-01

    The primary means of controlling the use of radiation are safety license procedure and the monitoring of radiation exposure and working conditions at places of radiation use. In Section 17 of the Finnish Radiation Act (592/91) certain operations are exempted from the safety license. The exemption limits for the licensing of radioactive materials, the radiotoxicity classification of radionuclides related to such exemption limits, the annual limits on intake of radionuclides to be followed when monitoring internal radiation dose, as well as concentration limits in the breathing air are specified in the guide. Also the surface contamination limits which must be followed when monitoring working conditions at places of radiation use are presented. (4 refs., 6 tabs.)

  12. Parabolic solar concentrator

    Science.gov (United States)

    Tecpoyotl-Torres, M.; Campos-Alvarez, J.; Tellez-Alanis, F.; Sánchez-Mondragón, J.

    2006-08-01

    In this work we present the basis of the design of the solar concentrator located at Temixco, Morelos, Mexico. This place is ideal for the purpose due to its geographic and climatic conditions and, in addition, because it has the most constant illumination in Mexico. For the construction of the concentrator we used a recycled parabolic plate from a telecommunications satellite dish (NEC). This plate was totally covered with aluminum. The opening diameter is 332 cm, the focal length is 83 cm and the opening angle is 90°. The geometry of the plate guarantees that the incident beams will be collected at the focus. The mechanical treatment of the plate produces an average reflectance of 75% in the visible region of the solar spectrum, and of 92% for wavelengths up to 3 μm in the infrared region. We obtain concentration temperatures of up to 2000°C with this setup. The reflectance could be greatly improved, but we did not consider that necessary for typical practical use. The energy obtained can be applied to processes that require such high calorific energies. In order to optimize the operation of the concentrator, we use a control circuit designed to track the apparent position of the sun.
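    The quoted geometry is internally consistent: for a paraboloid the rim (opening) angle ψ satisfies tan(ψ/2) = D/(4f), and the stated diameter and focal length give exactly 90°. A quick check:

```python
# Consistency check of the dish geometry quoted above: D = 332 cm, f = 83 cm
# gives D / (4 f) = 1, hence a rim angle of exactly 90 degrees.
import math

D, f = 332.0, 83.0                      # aperture diameter, focal length (cm)
psi = 2.0 * math.degrees(math.atan(D / (4.0 * f)))
print(f"rim angle: {psi:.0f} deg")      # 90 deg
```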

  13. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  14. Chaotic Universe, Friedmannian on the average. Pt. 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [Rostovskij-na-Donu Gosudarstvennyj Univ. (USSR). Astrofizicheskoe Otdelenie

    1980-05-01

    On the basis of the solutions obtained in the previous paper, the changes in the scenario of the standard Big Bang model are found. The degree of chaos (constraints on fluctuation spectra) that could still be preserved by an initially completely chaotic Universe at the time of light-element nucleosynthesis t_es is obtained. The time boundaries of the hadron and lepton eras, and the time at which electron neutrinos and neutrons become 'frozen' in reactions of weak interaction, may be shifted by up to a factor of 1.4. The corresponding temperatures may shift from the standard ones by a factor of 0.88 if the mean-square level of fluctuations is close to unity. If the energy density of fluctuations concentrated in the short-wave region of the spectrum is less than 1.5ε, the nucleosynthesis leads to a helium abundance coinciding with the observed one. If at the time t_es the maximum of the spectral energy density is in the long-wave region, that is λ_max ≫ ct_es, the level of chaos during the period of nucleosynthesis is restricted to ν ≲ 1.76 (where ν ≈ ∫|C_K|² d³K and C_K is the Fourier component of the amplitude of metric fluctuations). In particular, protogalactic vortical disturbances with a wide spectrum δ ≳ 4 × 10³ Ω⁻¹ (δ = δK/K, Ω = ρ/ρ_crit) are compatible with the observed helium abundance.

  15. Savannah River Site radioiodine atmospheric releases and offsite maximum doses

    International Nuclear Information System (INIS)

    Marter, W.L.

    1990-01-01

    Radioisotopes of iodine have been released to the atmosphere from the Savannah River Site since 1955. The releases, mostly from the 200-F and 200-H Chemical Separations areas, consist of the isotopes I-129 and I-131. Small amounts of I-131 and I-133 have also been released from reactor facilities and the Savannah River Laboratory. This reference memorandum was issued to summarize our current knowledge of releases of radioiodines and resultant maximum offsite doses. This memorandum supplements the reference memorandum by providing more detailed supporting technical information. Doses reported in this memorandum from consumption of the milk containing the highest I-131 concentration following the 1961 I-131 release incident are about 1% higher than reported in the reference memorandum. This is the result of using unrounded concentrations of I-131 in milk in this memo. It is emphasized here that this technical report does not constitute a dose reconstruction in the same sense as the dose reconstruction effort currently underway at Hanford. This report uses existing published data for radioiodine releases and existing transport and dosimetry models

  16. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  17. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  18. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  19. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  20. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  1. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  2. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent from the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using the semi-automated methods, such as Euler deconvolution or depth-from-extreme-points method (DEXP). The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  3. Radiocesium concentrations of snakes from contaminated and non-contaminated habitats of the AEC Savannah River Plant

    International Nuclear Information System (INIS)

    Brisbin, I.L. Jr.; Staton, M.A.; Pinder, J.E. III.; Geiger, R.A.

    1974-01-01

    Concentration levels of ¹³⁴Cs and ¹³⁷Cs were determined for 117 snakes of 19 species collected on the AEC Savannah River Plant near Aiken, South Carolina. Snakes collected from the vicinity of a reactor effluent stream averaged 131.5 pCi radiocesium/g live weight, with a maximum of 1032.6 pCi/g, and represented the highest level of radiocesium concentration reported in the literature for any naturally-occurring wild population of vertebrate predators. These snakes had significantly higher concentrations of radiocesium than those collected in the vicinity of a reactor cooling reservoir, which averaged 27.7 pCi/g live weight, with a maximum of 139.3 pCi/g. The radiocesium contents of snakes collected from uncontaminated habitats averaged 2.6 and 2.4 pCi/g live weight, respectively, and did not differ significantly from background radiation levels. Radiocesium concentrations approximated a log-normal frequency distribution, and no significant differences in frequency-distribution patterns could be demonstrated between collection areas. (U.S.)

  4. The Assessment of Air Pollutant Concentrations and Air Quality Index in Shiraz during 2011-2013

    Directory of Open Access Journals (Sweden)

    Monireh Majlesi Nasr

    2016-06-01

    Full Text Available Background: Exposure to air pollutants can cause many problems, including health effects in humans and animals. The aim of this study was to assess the air quality in the city of Shiraz during 2011-2013. Methods: In this descriptive-analytical study, air pollutant data for the study period were obtained from the Air Quality Co. for two main stations, i.e. Darvazeh Kazeroun and Imam Hossein, and then were analysed to determine the air quality index. Results: The maximum (0.018 ppm) and minimum (0.015 ppm) annual concentrations of SO2 were determined in 2011 and 2013, respectively. The maximum NO2 concentration was measured in summer 2011, with a value of 0.025 ppm. Regarding ozone, the highest average concentration was measured in the summer season of 2013, with a concentration of 0.068 ppm. In terms of air quality, the worst situation was experienced in 2011, in which about 31 percent of the days were marked as unhealthy, but during the last years of the study the air quality improved. Conclusion: In general, the results of the study showed that the SO2 concentration decreased during recent years due to the strengthening of air pollution regulation, but the NO2 concentration increased because the number of gas-fuelled automobiles also increased. With regard to air quality, there was an improving trend during the study period.
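    Air quality indices of this kind are computed by piecewise-linear interpolation between pollutant breakpoints, I = (I_hi − I_lo)/(C_hi − C_lo) · (C − C_lo) + I_lo. A sketch using the U.S. EPA 8-h ozone breakpoints as an assumed example (the study's own breakpoint table is not given in the abstract):

```python
# Sketch of the standard AQI piecewise-linear interpolation, here with the
# US EPA ozone (8-h average, ppm) breakpoints as an assumed example.
O3_8H_BREAKPOINTS = [   # (C_lo, C_hi, I_lo, I_hi)
    (0.000, 0.054, 0, 50),      # "good"
    (0.055, 0.070, 51, 100),    # "moderate"
    (0.071, 0.085, 101, 150),   # "unhealthy for sensitive groups"
    (0.086, 0.105, 151, 200),   # "unhealthy"
]

def aqi(conc: float, table=O3_8H_BREAKPOINTS) -> int:
    for c_lo, c_hi, i_lo, i_hi in table:
        if c_lo <= conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    raise ValueError("concentration outside tabulated range")

print(aqi(0.068))   # the quoted 2013 summer ozone average -> "moderate"
```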

  5. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  6. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  7. Feedback Limits to Maximum Seed Masses of Black Holes

    International Nuclear Information System (INIS)

    Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea

    2017-01-01

    The most massive black holes observed in the universe weigh up to ∼10¹⁰ M⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M• ≳ 10⁴ M⊙) hosted in small isolated halos (Mh ≲ 10⁹ M⊙) accreting with relatively small radiative efficiencies (ϵ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10⁴⁻⁶ M⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.

  8. Seasonal variation in heavy metal concentration in mangrove foliage

    Digital Repository Service at National Institute of Oceanography (India)

    Untawale, A.G.; Wafar, S.; Bhosle, N.B.

    Seasonal variation in the concentration of some heavy metals in the leaves of seven species of mangrove vegetation from Goa, revealed that maximum concentration of iron and manganese occurs during the monsoon season without any significant toxic...

  9. Metals in the Scheldt estuary: From environmental concentrations to bioaccumulation

    International Nuclear Information System (INIS)

    Van Ael, Evy; Blust, Ronny; Bervoets, Lieven

    2017-01-01

    To investigate the relationship between metal concentrations in abiotic compartments and in aquatic species, sediment, suspended matter and several aquatic species (Polychaeta, Oligochaeta, four crustacean species, three mollusc species and eight fish species) were collected during three seasons at six locations along the Scheldt estuary (the Netherlands-Belgium) and analysed for their metal content (Ag, Cd, Co, Cr, Cu, Ni, Pb, Zn and the metalloid As). Sediment and biota tissue concentrations were significantly influenced by sampling location, but not by season. Measurements of Acid Volatile Sulphides (AVS) concentrations in relation to Simultaneously Extracted Metals (SEM) in the sediment suggested that not all metals in the sediment will be bound to sulphides and some metals might be bioavailable. For all metals but zinc, the highest concentrations were measured in invertebrate species: Ag and Ni in periwinkle; Cr, Co and Pb in oligochaete worms; and As, Cd and Cu in crabs and shrimp. In fish, for most of the metals, the concentrations were highest in liver or kidney and lowest in muscle; for Zn, however, the highest concentrations were measured in the kidney of European smelt. To calculate the possible human health risk from consumption, average and maximum metal concentrations in the muscle tissues were compared to the minimum risk levels (MRLs). Concentrations of As led to the highest risk potential for all consumable species. Cadmium and Cu posed a risk only when consuming the most contaminated shrimp and shore crabs. Consuming blue mussel could result in a risk for the metals As, Cd and Cr. - Highlights: • This is the first study investigating metal distribution along the aquatic ecosystem of the Scheldt

  10. Study on indoor radon concentration and gamma radiation dose rate in different rooms in some dwellings around Bharath Gold Mines Limited, Karnataka State, India

    International Nuclear Information System (INIS)

    Umesha Reddy, K.; Jayasheelan, A.; Sannappa, J.

    2012-01-01

    Indoor radon contributes significantly to the total radiation exposure of human beings. The indoor concentrations of radon in different rooms in the same type of dwellings around Bharath Gold Mines Limited (BGML), Karnataka State (12°57′ N, 78°16′ E), were measured by using LR-115 (type-II) Solid State Nuclear Track Detectors (SSNTDs). The maximum indoor radon concentration is observed in the bathroom and the minimum in the hall. The maximum average indoor radon concentration is observed in Champion and the minimum in BEML Nagar. The indoor gamma radiation dose rate was also measured at these locations using a scintillometer. The geology of this area consists predominantly of hornblende schist, granite gneiss, Champion gneiss, quartzite, etc. The indoor radon concentration shows good correlation with the indoor gamma radiation dose. (author)

  11. Comparison of helical, maximum intensity projection (MIP), and averaged intensity (AI) 4D CT imaging for stereotactic body radiation therapy (SBRT) planning in lung cancer

    International Nuclear Information System (INIS)

    Bradley, Jeffrey D.; Nofal, Ahmed N.; El Naqa, Issam M.; Lu, Wei; Liu, Jubei; Hubenschmidt, James; Low, Daniel A.; Drzymala, Robert E.; Khullar, Divya

    2006-01-01

    Background and Purpose: To compare helical, MIP and AI 4D CT imaging, for the purpose of determining the best CT-based volume definition method for encompassing the mobile gross tumor volume (mGTV) within the planning target volume (PTV) for stereotactic body radiation therapy (SBRT) in stage I lung cancer. Materials and methods: Twenty patients with medically inoperable peripheral stage I lung cancer were planned for SBRT. Free-breathing helical and 4D image datasets were obtained for each patient. Two composite images, the MIP and AI, were automatically generated from the 4D image datasets. The mGTV contours were delineated for the MIP, AI and helical image datasets for each patient. The volume for each was calculated and compared using analysis of variance and the Wilcoxon rank test. A spatial analysis for comparing center of mass (COM) (i.e. isocenter) coordinates for each imaging method was also performed using multivariate analysis of variance. Results: The MIP-defined mGTVs were significantly larger than both the helical- (p < 0.001) and AI-defined mGTVs (p = 0.012). A comparison of COM coordinates demonstrated no significant spatial difference in the x-, y-, and z-coordinates for each tumor as determined by helical, MIP, or AI imaging methods. Conclusions: In order to incorporate the extent of tumor motion from breathing during SBRT, MIP is superior to either helical or AI images for defining the mGTV. The spatial isocenter coordinates for each tumor were not altered significantly by the imaging methods

  12. Surface temperature evolution and the location of maximum and average surface temperature of a lithium-ion pouch cell under variable load profiles

    DEFF Research Database (Denmark)

    Goutam, Shovon; Timmermans, Jean-Marc; Omar, Noshin

    2014-01-01

    This experimental work attempts to determine the surface temperature evolution of large (20 Ah rated capacity) commercial lithium-ion pouch cells for the application of rechargeable energy storage in plug-in hybrid electric vehicles and electric vehicles. The cathode of the cells is nickel

  13. Future changes over the Himalayas: Maximum and minimum temperature

    Science.gov (United States)

    Dimri, A. P.; Kumar, D.; Choudhary, A.; Maharana, P.

    2018-03-01

    An assessment of the projections of minimum and maximum air temperature over the Indian Himalayan region (IHR) from the COordinated Regional Climate Downscaling EXperiment-South Asia (hereafter, CORDEX-SA) regional climate model (RCM) experiments has been carried out under two different Representative Concentration Pathway (RCP) scenarios. The major aim of this study is to assess the probable future changes in the minimum and maximum temperature climatology and their long-term trends under different RCPs, along with the elevation-dependent warming over the IHR. A number of statistical analyses, such as changes in mean climatology, long-term spatial trends and probability distribution functions, are carried out to detect the signals of climate change. The study also tries to quantify the uncertainties associated with different model experiments and their ensemble in space and time and for different seasons. The model experiments and their ensemble show a prominent cold bias over the Himalayas for the present climate. However, a statistically significant higher warming rate (0.23-0.52 °C/decade) for both minimum and maximum air temperature (Tmin and Tmax) is observed for all seasons under both RCPs. The rate of warming intensifies with the increase in radiative forcing under a range of greenhouse gas scenarios from RCP4.5 to RCP8.5. In addition, a wide range of spatial variability and disagreements in the magnitude of trend between different models describe the uncertainty associated with the model projections and scenarios. The projected rate of increase of Tmin may destabilize snow formation at the higher altitudes in the northern and western parts of the Himalayan region, while the rising trend of Tmax over the southern flank may effectively melt more snow cover. Such a combined effect of rising Tmin and Tmax may pose a potential threat to the glacial deposits. The overall trend of the diurnal temperature range (DTR) portrays an increasing trend across the entire area with

  14. Last Glacial Maximum CO2 and d13C successfully reconciled

    NARCIS (Netherlands)

    Bouttes, N.; Paillard, D.; Roche, D.M.V.A.P.; Brovkin, V.; Bopp, L.

    2011-01-01

    During the Last Glacial Maximum (LGM, ∼21,000 years ago) the cold climate was strongly tied to low atmospheric CO2 concentration (∼190 ppm). Although it is generally assumed that this low CO2 was due to an expansion of the oceanic carbon reservoir, simulating the glacial level

  15. Determination of the maximum MGS mounting height : phase II detailed analysis with LS-DYNA.

    Science.gov (United States)

    2012-12-01

    Determination of the maximum Midwest Guardrail System (MGS) mounting height was performed in two phases. Phase I concentrated on crash testing: two full-scale crash tests were performed on the MGS with top-rail mounting heights of 34 in. (864 mm)...

  16. The maximum theoretical performance of unconcentrated solar photovoltaic and thermoelectric generator systems

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Nielsen, Kaspar Kirstein

    2017-01-01

    The maximum efficiency for photovoltaic (PV) and thermoelectric generator (TEG) systems without concentration is investigated. Both a combined system where the TEG is mounted directly on the back of the PV and a tandem system where the incoming sunlight is split, and the short wavelength radiation...
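    For context, analyses of this kind typically bound the TEG side with the standard maximum-efficiency expression for a thermoelectric couple; a sketch, with assumed, illustrative temperatures and figure of merit (the paper's detailed PV and spectrum-splitting model is not reproduced here):

```python
# Standard maximum TEG efficiency: the Carnot factor times a ZT-dependent
# device factor. Temperatures and ZT below are assumed, illustrative values.
import math

def teg_max_efficiency(T_h: float, T_c: float, ZT: float) -> float:
    """Max thermoelectric generator efficiency for hot/cold junction
    temperatures T_h, T_c (kelvin) and average figure of merit ZT."""
    carnot = 1.0 - T_c / T_h
    m = math.sqrt(1.0 + ZT)
    return carnot * (m - 1.0) / (m + T_c / T_h)

print(f"{teg_max_efficiency(500.0, 300.0, ZT=1.0):.1%}")  # about 8% here
```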

  17. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
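    The difference between the two standard hourly products discussed above is easy to reproduce on synthetic data. A toy sketch (assumed signal model, not observatory data):

```python
# Toy comparison of hourly products built from synthetic 1-min data: "spot"
# values sample the signal once per hour, "boxcar" values average 60 minutes.
import numpy as np

rng = np.random.default_rng(2)
minutes = np.arange(24 * 60)
field = (30.0 * np.sin(2 * np.pi * minutes / (24 * 60))   # slow daily cycle
         + rng.normal(0.0, 5.0, minutes.size))            # fast fluctuations

spot = field[::60]                              # instantaneous hourly samples
boxcar = field.reshape(24, 60).mean(axis=1)     # simple 1-h averages

print("rms spot-boxcar difference:", np.std(spot - boxcar).round(2), "nT")
```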

  18. Sharp Reduction in Maximum LEU Fuel Temperatures during Loss of Coolant Accidents in a PBMR DPP-400 core by means of Optimised Placement of Neutron Poisons: Implications for Pu fuel-cycles

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.

    2013-01-01

    The optimisation of the power profiles by means of placing an optimised distribution of neutron poison concentrations in the central reflector resulted in a large reduction in the maximum DLOFC temperature, which may produce far reaching safety and licensing benefits. Unfortunately this came at the expense of losing the ability to execute effective load following. The neutron poisons also caused a large reduction of 22% in the average burn-up of the fuel. Further optimisation is required to counter this reduction in burn-up

  19. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ... adaptive artificial neural network: Proposition for a new sizing procedure.
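    Whatever the detection method, the MPP itself is simply the point maximizing P = V·I on the I-V curve. A minimal sketch with an assumed curve shape (not taken from the paper):

```python
# Minimal MPP detection on a sampled I-V curve: pick the sample maximizing
# P = V * I. The exponential I-V shape below is assumed for illustration.
import numpy as np

V = np.linspace(0.0, 40.0, 400)                  # module voltage sweep, V
I = np.clip(8.0 * (1.0 - np.exp((V - 40.0) / 3.0)), 0.0, None)  # current, A
P = V * I

k = int(np.argmax(P))
print(f"MPP: V = {V[k]:.1f} V, I = {I[k]:.2f} A, P = {P[k]:.1f} W")
```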

  20. Maximum power per VA control of vector controlled interior ...

    Indian Academy of Sciences (India)

    Thakur Sumeet Singh

    2018-04-11

    Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum-utilization of the drive-system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...