Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and the annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, the performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentrations in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize…
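WARP-style statistics are fitted in log space against watershed explanatory variables; a minimal sketch of that style of model, assuming a matrix of (transformed) watershed predictors and observed annual maxima (all variable names, values, and the synthetic data are illustrative, not the published WARP coefficients):

```python
import numpy as np

def fit_log_regression(X, conc_max):
    """Least-squares fit of log10(annual max concentration) on watershed variables."""
    A = np.column_stack([np.ones(len(X)), X])      # intercept + predictors
    coef, *_ = np.linalg.lstsq(A, np.log10(conc_max), rcond=None)
    return coef

def predict(coef, X):
    """Back-transformed concentration predictions (same units as the fit data)."""
    A = np.column_stack([np.ones(len(X)), X])
    return 10.0 ** (A @ coef)

# Illustrative data: two hypothetical predictors at 112 development sites.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(112, 2))
obs = 10 ** (0.5 + 1.2 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(0, 0.3, 112))
coef = fit_log_regression(X, obs)
print(np.round(coef, 2))             # recovers ~[0.5, 1.2, 0.6]
print(predict(coef, X[:3]).round(1)) # predictions for the first three sites
```

A factor-of-10 envelope around predictions, as in the abstract, corresponds to ±1 unit of log10 residual in this kind of model.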
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions…
Maximum Likelihood Estimation of Multivariate Autoregressive-Moving Average Models.
1977-02-01
Methods for maximizing the likelihood have been proposed (i) in the time domain by Box and Jenkins [4], Astrom [3], Wilson [23], and Phadke [16], and (ii) in the frequency domain by…
MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
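For intuition, a sketch of the seasonal Yule-Walker moment estimation that the paper uses as a benchmark, reduced to the periodic AR(1) case (the paper treats full PARMA(1, 1) models via an approximate Gaussian likelihood; this simplified estimator and the simulation are illustrative only):

```python
import numpy as np

def periodic_ar1_yule_walker(x, S):
    """Season-wise lag-1 coefficients for a periodic AR(1); x assumed demeaned.

    phi_s = E[x_t * x_{t-1}] / E[x_{t-1}^2], over all t falling in season s.
    """
    phi = np.zeros(S)
    t = np.arange(len(x))
    for s in range(S):
        mask = (t % S == s) & (t > 0)
        idx = t[mask]
        phi[s] = np.mean(x[idx] * x[idx - 1]) / np.mean(x[idx - 1] ** 2)
    return phi

# Simulate a periodic AR(1) with S = 4 seasons and recover the coefficients.
rng = np.random.default_rng(4)
S, n = 4, 40_000
phi_true = np.array([0.2, 0.7, -0.3, 0.5])
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true[t % S] * x[t - 1] + rng.normal()
print(periodic_ar1_yule_walker(x, S).round(2))   # ~ [0.2, 0.7, -0.3, 0.5]
```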
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
§ 57.5039 Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings.
A new solar signal: Average maximum sunspot magnetic fields independent of activity cycle
Livingston, William
2016-01-01
Over the past five years, 2010-2015, we have observed, in the near infrared (IR), the maximum magnetic field strengths for 4145 sunspot umbrae. Herein we distinguish field strength from field flux (most solar magnetographs measure flux). The maximum field strength in an umbra is co-spatial with the position of minimum umbral brightness (Norton and Gilman, 2004). We measure field strength by the Zeeman splitting of the Fe I 15648.5 Å spectral line. We show that in the IR the average maximum field strength (2050 ± 20 G) exhibits no cycle dependence. A similar analysis of 17,450 spots observed by the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory reveals the same cycle independence to within ±0.18 G, a variance of 0.01%. This does not change over the ongoing 2010-2015 minimum-to-maximum cycle. We conclude that the average maximum umbral field on the Sun is constant with time.
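The arithmetic behind the measurement is the standard Zeeman splitting relation; a small sketch, assuming the usual form with wavelengths in ångströms and the field in gauss (the example splitting value is illustrative):

```python
# Zeeman splitting of Fe I 15648.5 A: delta_lambda = 4.67e-13 * g_eff * lambda0^2 * B.
LAMBDA0 = 15648.5   # line center, Angstrom
G_EFF = 3.0         # effective Lande factor of this line

def field_strength_from_splitting(delta_lambda_A: float) -> float:
    """Return B in gauss from the measured sigma-component splitting (Angstrom)."""
    return delta_lambda_A / (4.67e-13 * G_EFF * LAMBDA0**2)

# An umbral splitting of ~0.70 A corresponds roughly to the 2050 G
# average maximum field strength reported in the abstract.
print(round(field_strength_from_splitting(0.70)))   # ~2040 G
```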
Variability of maximum and mean average temperature across Libya (1945-2009)
Ageena, I.; Macdonald, N.; Morse, A. P.
2014-08-01
Spatial and temporal variability in daily maximum and mean average daily temperature, monthly maximum and mean average monthly temperature for nine coastal stations during the period 1956-2009 (54 years), and annual maximum and mean average temperature for coastal and inland stations for the period 1945-2009 (65 years) across Libya are analysed. During the period 1945-2009, significant increases in maximum temperature (0.017 °C/year) and mean average temperature (0.021 °C/year) are identified at most stations. Notably, significant warming in annual maximum temperature (0.038 °C/year) and mean average annual temperature (0.049 °C/year) is observed at almost all study stations during the last 32 years (1978-2009). The results show that Libya has experienced significant warming since the middle of the twentieth century, which will have a considerable impact on societies and the ecology of the North Africa region if increases continue at current rates.
Computational complexity of some maximum average weight problems with precedence constraints
Faigle, Ulrich; Kern, Walter
1994-01-01
Maximum average weight ideal problems in ordered sets arise from modeling variants of the investment problem and, in particular, learning problems in the context of concepts with tree-structured attributes in artificial intelligence. Similarly, trying to construct tests with high reliability leads to…
Modified Weighting for Calculating the Average Concentration of Non-Point Source Pollutant
牟瑞芳
2004-01-01
The pollutant concentration in runoff depends on that of soil loss, and the latter is assumed to be linear in the value of EI, the product of the total storm energy E and the maximum 30-min intensity I30 for a given rainstorm. Usually, the storm with the maximum accumulated rainfall produces the maximum runoff, but this does not correspond to the maximum erosion and does not always lead to the maximum concentration. Thus, the average concentration weighted by runoff volume is somewhat unreasonable. An improvement to the calculation method for non-point source pollution load put forward by Professor Li Huaien is proposed: the EI value of a single rainstorm is introduced as a new weight in place of the runoff weight. An example of the Fujing River watershed shows that its application is effective.
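A minimal sketch of the proposed reweighting, assuming per-storm arrays of event concentrations, runoff volumes, and EI values (all names and numbers are illustrative):

```python
import numpy as np

def weighted_average_concentration(conc, weights):
    """Weighted average of per-storm concentrations."""
    conc, weights = np.asarray(conc), np.asarray(weights)
    return float(np.sum(conc * weights) / np.sum(weights))

# Illustrative per-storm data (hypothetical values).
conc = [120.0, 340.0, 80.0]      # event mean concentrations, mg/L
runoff = [1.5e4, 0.8e4, 2.2e4]   # runoff volumes, m^3
EI = [95.0, 410.0, 60.0]         # storm erosivity, E * I30

print(weighted_average_concentration(conc, runoff))  # conventional runoff weighting
print(weighted_average_concentration(conc, EI))      # proposed EI weighting
```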
Analytical expressions for maximum wind turbine average power in a Rayleigh wind regime
Carlin, P.W.
1996-12-01
Average or expectation values for annual power of a wind turbine in a Rayleigh wind regime are calculated and plotted as a function of cut-out wind speed. This wind speed is expressed in multiples of the annual average wind speed at the turbine installation site. To provide a common basis for comparison of all real and imagined turbines, the Rayleigh-Betz wind machine is postulated. This machine is an ideal wind machine operating with the ideal Betz power coefficient of 0.593 in a Rayleigh probability wind regime. All other average annual powers are expressed in fractions of that power. Cases considered include: (1) an ideal machine with finite power and finite cutout speed, (2) real machines operating in variable speed mode at their maximum power coefficient, and (3) real machines operating at constant speed.
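A numerical sketch of the quantity being tabulated: the expected power of an ideal Betz machine with a cutout speed, as a fraction of the Rayleigh-Betz value (the same machine with no cutout), integrating P(v)·f(v) over a Rayleigh distribution with a given annual mean speed. This is a simplified reading of the setup, not the paper's closed-form expressions:

```python
import numpy as np

def rayleigh_pdf(v, v_mean):
    """Rayleigh wind-speed pdf parameterized by its mean."""
    return (np.pi * v / (2.0 * v_mean**2)) * np.exp(-np.pi * v**2 / (4.0 * v_mean**2))

def average_power_fraction(cutout_multiple, v_mean=7.0, n=20_000):
    """<P> of an ideal Betz machine that shuts down above v_cut, divided by
    the Rayleigh-Betz average (same machine, no cutout)."""
    v = np.linspace(0.0, 10.0 * v_mean, n)
    f = rayleigh_pdf(v, v_mean)
    p = v**3                     # P ~ Cp * 0.5 * rho * A * v^3; constants cancel in the ratio
    dv = v[1] - v[0]
    full = np.sum(p * f) * dv
    capped = np.sum(np.where(v <= cutout_multiple * v_mean, p, 0.0) * f) * dv
    return capped / full

for m in (1.5, 2.0, 2.5, 3.0):   # cutout speed in multiples of the annual mean
    print(m, round(average_power_fraction(m), 3))
```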
Scientific substantination of maximum allowable concentration of fluopicolide in water
Pelo I.M.
2014-03-01
Research was carried out to substantiate the maximum allowable concentration of fluopicolide in the water of water reservoirs. Methods of study: laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The effects of fluopicolide on the organoleptic properties of water and on the sanitary regimen of reservoirs for household purposes were determined, and its subthreshold concentration in water by the sanitary-toxicological hazard index was calculated. The threshold concentration of the substance by the main hazard criteria was established, and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the fluopicolide threshold concentration in water by the organoleptic hazard index (limiting criterion: odour) is 0.15 mg/dm3; by the general sanitary hazard index (limiting criteria: impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification), 0.015 mg/dm3; the maximum non-effective concentration is 0.14 mg/dm3; and the maximum allowable concentration is 0.015 mg/dm3.
Concentration fluctuations and averaging time in vapor clouds
Wilson, David J
2010-01-01
This book contributes to more reliable and realistic predictions by focusing on sampling times from a few seconds to a few hours. Its objectives include developing clear definitions of statistical terms, such as plume sampling time, concentration averaging time, receptor exposure time, and other terms often confused with each other or incorrectly specified in hazard assessments; and identifying and quantifying situations for which there is no adequate knowledge to predict concentration fluctuations in the near-field, close to sources, and far downwind, where dispersion is dominated by atmospheric turbulence…
Sadjadi, Firooz A; Mahalanobis, Abhijit
2006-05-01
We report the development of a technique for adaptive selection of polarization ellipse tilt and ellipticity angles such that the target separation from clutter is maximized. From the radar scattering matrix [S] and its complex components, in phase and quadrature phase, the elements of the Mueller matrix are obtained. Then, by means of polarization synthesis, the radar cross sections of the radar scatterers are obtained at different transmitting and receiving polarization states. By designing a maximum average correlation height (MACH) filter, we derive a target-versus-clutter distance measure as a function of four transmit and receive polarization state angles. The results of applying this method to real synthetic aperture radar imagery indicate a set of four transmit and receive angles that leads to maximum target-versus-clutter discrimination. These optimum angles are different for different targets. Hence, by adaptive control of the state of polarization of a polarimetric radar, one can noticeably improve the discrimination of targets from clutter.
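A minimal numpy sketch of a MACH-style correlation filter in the frequency domain, assuming a small stack of registered training chips. The construction below is the textbook form h = m / D; regularization and the exact MACH variants differ across the literature, and all data here are random stand-ins:

```python
import numpy as np

def mach_filter(train_chips, eps=1e-6):
    """Frequency-domain MACH-style filter from a stack of training images.

    h = m / D, where m is the mean training spectrum and D the average
    power spectral density; eps regularizes near-zero denominators.
    """
    X = np.fft.fft2(np.asarray(train_chips, dtype=float), axes=(-2, -1))
    m = X.mean(axis=0)                     # mean spectrum
    D = (np.abs(X) ** 2).mean(axis=0)      # average power spectrum
    return m / (D + eps)

def correlate(scene, h):
    """Correlation surface of a scene with the filter (peak height ~ match quality)."""
    S = np.fft.fft2(scene)
    return np.real(np.fft.ifft2(S * np.conj(h)))

# Illustrative use with random stand-ins for target chips and a scene.
rng = np.random.default_rng(0)
chips = rng.normal(size=(5, 64, 64))
surface = correlate(rng.normal(size=(64, 64)), mach_filter(chips))
print(surface.max())
```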
Wezel AP van; Vlaardingen P van; CSR
2001-01-01
This report presents maximum permissible concentrations and negligible concentrations that have been derived for various antifouling substances used as substitutes for TBT. Included here are Irgarol 1051, dichlofluanide, ziram, chlorothalonil and TCMTB.
Zhang Zhang
2009-06-01
A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision in the estimation of clusters than did the existing empirical cumulative distribution function statistics.
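The model-selection machinery the abstract invokes is standard; a small sketch of the criteria and of Akaike-weight model averaging, assuming each candidate clustering model reports a maximized log-likelihood logL with k parameters fitted to n sites (all fit values are hypothetical):

```python
import numpy as np

def aic(logL, k):
    return -2.0 * logL + 2.0 * k

def aicc(logL, k, n):
    return aic(logL, k) + 2.0 * k * (k + 1) / (n - k - 1)

def bic(logL, k, n):
    return -2.0 * logL + k * np.log(n)

def akaike_weights(criteria):
    """Weights w_i ~ exp(-delta_i / 2), normalized over candidate models."""
    c = np.asarray(criteria, dtype=float)
    w = np.exp(-0.5 * (c - c.min()))
    return w / w.sum()

# Illustrative: three candidate models on n = 200 sites.
fits = [(-512.3, 4), (-508.9, 7), (-507.8, 11)]   # (logL, k) pairs
n = 200
scores = [aicc(L, k, n) for L, k in fits]
print(akaike_weights(scores))   # weights for model-averaging the site profiles
```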
Dependence of maximum concentration from chemical accidents on release duration
Hanna, Steven; Chang, Joseph
2017-01-01
Chemical accidents often involve releases of a total mass, Q, of stored material in a tank over a time duration, td, of less than a few minutes. The value of td is usually uncertain because of lack of knowledge of key information, such as the size and location of the hole and the pressure and temperature of the chemical. In addition, it is rare that eyewitnesses or video cameras are present at the time of the accident. For inhalation hazards, serious health effects (such as damage to the respiratory system) are determined by short-term averages. Examples of pressurized liquefied chlorine releases from tanks are given, focusing on scenarios from the Jack Rabbit I (JR I) field experiment. The analytical calculations and the predictions of the SLAB dense gas dispersion model agree that the ratio of maximum C for two different td's is greatest (as much as a factor of ten) near the source. At large distances (beyond a few km for the JR I scenarios), where the travel time tt exceeds both td's, the ratio of maximum C approaches unity.
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of the model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and of MLBMA were analyzed to investigate their predictive performance. The predictive log-score results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy for achieving more robust model predictions than a single model provides. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies can improve predictive performance: retaining structurally distinct models, or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the…
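A minimal sketch of how BMA-style averaging combines individual model predictions into a mixture mean and variance, given posterior model probabilities. How those probabilities are computed in MLBMA (e.g., from information-criterion values) is left out, and all numbers are illustrative:

```python
import numpy as np

def bma_mixture(p, means, variances):
    """Mixture mean and variance of a prediction under model averaging.

    Var combines within-model variance and between-model spread:
    Var = sum_k p_k * (var_k + (mean_k - mean)^2).
    """
    p, m, v = map(np.asarray, (p, means, variances))
    mean = np.sum(p * m)
    var = np.sum(p * (v + (m - mean) ** 2))
    return mean, var

# Three alternative reactive transport models (hypothetical values).
p = [0.5, 0.3, 0.2]              # posterior model probabilities
means = [1.2, 0.9, 2.0]          # predicted U(VI) concentration, mg/L
variances = [0.04, 0.09, 0.25]   # within-model predictive variances

print(bma_mixture(p, means, variances))
```

The between-model spread term is what makes the averaged prediction more honest about uncertainty than any single model's variance.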
U.S. Geological Survey, Department of the Interior — This data set represents the average monthly maximum temperature in Celsius multiplied by 100 for 2002 compiled for every catchment of NHDPlus for the conterminous...
Maximum Permissible Concentrations and Negligible Concentrations for Rare Earth Elements (REEs)
Sneller FEC; Kalf DF; Weltje L; Wezel AP van; CSR
2000-01-01
In this report maximum permissible concentrations (MPCs) and negligible concentrations (NCs) are derived for Rare Earth Elements (REEs), which are also known as lanthanides. The REEs selected for derivation of environmental risk limits in this report are Yttrium (Y), Lanthanum (La), Cerium (Ce), Praseodymium (Pr)…
The effects of disjunct sampling and averaging time on maximum mean wind speeds
Larsén, Xiaoli Guo; Mann, J.
2006-01-01
Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours; we call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time period before being saved. In either case, the extreme wind will be underestimated. This paper investigates the effects of the disjunct sampling interval and the averaging time on the attenuation of the extreme wind estimate by means of a simple theoretical approach as well as measurements…
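A small simulation sketch of the effect: annual maxima taken from disjunctly sampled 10-min means come out systematically lower than maxima over all 10-min means. The AR(1)-like wind surrogate and all parameters are illustrative, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
N_YEARS, PER_DAY = 20, 144          # 144 ten-minute means per day
n = 365 * PER_DAY

annual_max_full, annual_max_disjunct = [], []
for _ in range(N_YEARS):
    # Crude correlated wind-speed surrogate (AR(1) around an 8 m/s mean).
    e = rng.normal(0.0, 1.0, n)
    u = np.empty(n)
    u[0] = 8.0
    for i in range(1, n):
        u[i] = 8.0 + 0.95 * (u[i - 1] - 8.0) + e[i]
    annual_max_full.append(u.max())
    annual_max_disjunct.append(u[::18].max())   # one sample every 3 h

print(np.mean(annual_max_full), np.mean(annual_max_disjunct))
# The disjunctly sampled mean annual maximum is biased low.
```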
40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?
2010-07-01
§ 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported is determined by the formula [published as graphic ER26FE07.012], where: Bavg = average benzene concentration for the applicable averaging period (volume…
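The formula itself is not reproduced in this snippet (it appears as graphic ER26FE07.012 in the CFR); a sketch under the assumption that it is the usual batch-volume-weighted average of benzene content:

```python
def average_benzene_concentration(batches):
    """Volume-weighted average benzene concentration over an averaging period.

    Assumed form (the CFR publishes the formula as a graphic):
        Bavg = sum(V_i * B_i) / sum(V_i)
    batches: iterable of (volume, benzene volume percent) pairs.
    """
    total_vb = sum(v * b for v, b in batches)
    total_v = sum(v for v, _ in batches)
    return total_vb / total_v

# Illustrative batches: (gallons, vol% benzene).
print(average_benzene_concentration([(50_000, 0.62), (80_000, 0.55), (30_000, 0.71)]))
```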
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
…organisms and the development of sound criteria for data interpretation when the exposure of organisms has exceeded the MTD. While the MTD approach is well established for oral, topical, inhalational or injection exposure routes in mammalian toxicology, we propose that for exposure of aquatic organisms via immersion, the term Maximum Tolerated Concentration (MTC) is more appropriate.
Sung Woo Park; Byung Kwan Oh; Hyo Seon Park
2015-03-01
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value, to prevent the beam from reaching a limit state of failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured with vibrating wire strain gauges (VWSGs), the sensors most frequently used in the field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured by VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
77 FR 34411 - Branch Technical Position on Concentration Averaging and Encapsulation
2012-06-11
NUCLEAR REGULATORY COMMISSION. Branch Technical Position on Concentration Averaging and Encapsulation. AGENCY: Nuclear Regulatory Commission. … its Branch Technical Position on Concentration Averaging and Encapsulation (CA BTP). An earlier draft … bases for its concentration averaging positions. It also needs to be revised to incorporate new…
Cavalli, Andrea; Camilloni, Carlo; Vendruscolo, Michele
2013-03-07
In order to characterise the dynamics of proteins, a well-established method is to incorporate experimental parameters as replica-averaged structural restraints into molecular dynamics simulations. Here, we justify this approach in the case of interproton distance information provided by nuclear Overhauser effects by showing that it generates ensembles of conformations according to the maximum entropy principle. These results indicate that the use of replica-averaged structural restraints in molecular dynamics simulations, given a force field and a set of experimental data, can provide an accurate approximation of the unknown Boltzmann distribution of a system.
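A minimal sketch of the idea behind replica-averaged restraints: the penalty acts on the average over replicas rather than on each replica separately, which is what ties the scheme to the maximum entropy principle. The harmonic form, the force constant, and the plain arithmetic mean are illustrative simplifications (NOE-derived distances are usually averaged as r^-3 or r^-6):

```python
import numpy as np

def replica_averaged_penalty(distances, d_exp, k=10.0):
    """Harmonic restraint on the replica-averaged interproton distance.

    distances: the same distance measured in each replica.
    Only the ensemble average is restrained toward the NOE-derived d_exp,
    so individual replicas remain free to fluctuate.
    """
    d_avg = np.mean(distances)
    return 0.5 * k * (d_avg - d_exp) ** 2

# Four replicas of one interproton distance (nm, illustrative values).
d = np.array([0.31, 0.45, 0.38, 0.52])
print(replica_averaged_penalty(d, d_exp=0.40))   # small: the mean sits near d_exp
print(0.5 * 10.0 * np.sum((d - 0.40) ** 2))      # a per-replica restraint is much stiffer
```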
40 CFR Table 1 to Subpart A of... - Maximum Concentration of Constituents for Groundwater Protection
2010-07-01
Table 1 to Subpart A of Part 192: Maximum Concentration of Constituents for Groundwater Protection. Constituent / maximum concentration: Arsenic, 0.05; Barium…
Assessment of Average Tracer Concentration Approach for Flow Rate Measurement and Field Calibration
P. Sidauruk
2015-12-01
The tracer method is one of the methods available for open-channel flow rate measurement, for example in irrigation canals. The average tracer concentration approach is an instantaneous-injection method based on the average tracer concentration at the sampling point. If the procedures are correct and the scientific considerations are justified, the tracer method gives relatively high measurement accuracy. The accuracy of the average tracer concentration approach has been assessed both in the laboratory and in the field. Accuracy tests of open-channel flow conducted at the Center for Application of Isotopes and Radiation Laboratory-BATAN showed that the accuracy of the average concentration approach was better than 90% relative to the true (volumetric) flow rate. The approach was also assessed during its application to measure the flow rate of the Mrican irrigation canals, as an effort to perform field calibration of the existing weirs. Both the average tracer concentration approach and the weirs predict the trend of the flow correctly. However, discrepancies between the weir measurements and the predictions of the average tracer concentration approach were as high as 27%. The discrepancies might be due to degraded performance of the weirs caused by previous floods and the high sediment content of the flow.
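A sketch of the mass balance behind instantaneous-injection tracer gauging: all of the injected mass M passes the sampling point, so Q = M / ∫C dt; with the average concentration C̄ over the passage interval T this is Q = M / (C̄·T). The breakthrough curve and numbers below are synthetic:

```python
import numpy as np

def flow_rate_from_tracer(mass_g, times_s, conc_g_per_m3):
    """Discharge from an instantaneous tracer injection: Q = M / integral(C dt)."""
    c, t = np.asarray(conc_g_per_m3), np.asarray(times_s)
    integral = np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t))   # trapezoid rule, g s / m^3
    return mass_g / integral                                  # m^3 / s

# Synthetic breakthrough curve at the sampling point (illustrative numbers).
t = np.linspace(0.0, 600.0, 301)                       # s
c = 0.2 * np.exp(-0.5 * ((t - 240.0) / 60.0) ** 2)     # g/m^3
print(flow_rate_from_tracer(100.0, t, c))              # ~3.3 m^3/s for these numbers
```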
Pannone, Marilena
2012-08-01
This paper shows how an exact analytical solution for the transient-state spatial moments of the cross-sectional average tracer concentration in large open channel flows can be derived from the depth-averaged advection-diffusion equation by resorting to the method of Green's functions. The derivation requires no simplifying assumption about the regularity of the actual concentration field, the smallness of the fluctuations, or the large space-time scale of variation of the average concentration gradient (which would justify the a priori localization of the problem); these assumptions were the basis of the classic Taylor dispersion theory. The results reveal that, in agreement with the findings by Aris (1956) and later by others for flows within a conduit, there are an initial centroid displacement and a variance deficit dependent on the specific position and dimension of the initial injection. The second central moment asymptotically tends to the linearly increasing function predictable on the basis of Taylor's classic theory, and the skewness, which is constantly zero for the cross-sectionally uniform injection, in the case of nonuniform initial distributions tends to slowly vanish after having reached a maximum. Thus, the persistent asymmetry exhibited by field concentration data, as well as the retardations and accelerations in the peak trajectory, can be justified without making any a priori assumption about the physical mechanism underlying their appearance, such as transient storage phenomena, simply by rigorously solving the governing equation for the cross-sectional average concentration in the presence of nonuniform, asymmetrically located solute injections.
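The quantities tracked here (centroid, variance, skewness of the cross-sectional average concentration) are ordinary distribution moments; a small sketch of how they are computed from a sampled concentration profile (the discretization and test profile are illustrative):

```python
import numpy as np

def spatial_moments(x, c):
    """Centroid, variance, and skewness of a 1-D concentration profile c(x)."""
    w = c / np.sum(c)                    # treat concentration as a density
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)
    skew = np.sum(w * (x - mean) ** 3) / var ** 1.5
    return mean, var, skew

# Asymmetric test profile: the skewness should come out positive.
x = np.linspace(0.0, 100.0, 501)
c = np.exp(-0.5 * ((x - 30.0) / 5.0) ** 2) + 0.4 * np.exp(-0.5 * ((x - 50.0) / 10.0) ** 2)
print(spatial_moments(x, c))
```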
38 CFR 4.76a - Computation of average concentric contraction of visual fields.
2010-07-01
§ 4.76a Computation of average concentric contraction of visual fields. Table III—Normal Visual Field Extent at 8 Principal Meridians (meridian: normal degrees): Temporally, 85; Down temporally, 85; Down…
Collares-Pereira, M.; Rabl, A.
1978-06-01
The Liu and Jordan method of calculating the long-term average energy collection of flat plate collectors is simplified (by about a factor of 4) and generalized to all collectors, concentrating and nonconcentrating. The only meteorological inputs needed are the long-term average daily total hemispherical insolation H_h on a horizontal surface and, for thermal collectors, the average ambient temperature. The collector is characterized by optical efficiency, heat loss (or U-value), heat extraction efficiency, concentration ratio and tracking mode. An average operating temperature is assumed. Interaction with storage can be included by combining the present model with the f-chart method of Beckman, Klein and Duffie. Formulas and examples are presented for five collector types: flat plate, compound parabolic concentrator, concentrator with east-west tracking axis, concentrator with polar tracking axis, and concentrator with two-axis tracking. The examples show that even for relatively low temperature applications and cloudy climates (50 °C in New York in February), concentrating collectors can outperform the flat plate. The method has been validated against hourly weather data (with measurements of hemispherical and beam insolation), and has been found to have an average accuracy better than 3% for the long-term average radiation available to solar collectors. The suitability of this method for comparison studies is illustrated by comparing, in a location-independent manner, the radiation availability for several collector types and operating conditions: two-axis tracking versus one-axis tracking; polar tracking axis versus east-west tracking axis; fixed versus tracking flat plate; effect of ground reflectance; and acceptance of diffuse radiation as a function of concentration ratio.
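A sketch of the kind of daily average-output estimate such a method produces, using the familiar balance Q = η0·I − U·(T_op − T_amb) clipped at zero. The parameter values and the flat daily irradiance profile are illustrative simplifications, not the paper's correlations:

```python
def daily_useful_energy(H_daily_MJ, sunshine_h, eta0, U, T_op, T_amb):
    """Very simplified daily useful collection per m^2 of aperture.

    q = eta0 * I - U * (T_op - T_amb), clipped at zero, with I approximated
    as the daily insolation spread evenly over the sunshine hours.
    """
    I = H_daily_MJ * 1e6 / (sunshine_h * 3600.0)        # average irradiance, W/m^2
    q = max(eta0 * I - U * (T_op - T_amb), 0.0)         # useful flux, W/m^2
    return q * sunshine_h * 3600.0 / 1e6                # MJ/m^2 per day

# Illustrative February-like day: 8 MJ/m^2, 6 sunshine hours, 50 C operation.
print(daily_useful_energy(8.0, 6.0, eta0=0.72, U=4.0, T_op=50.0, T_amb=5.0))  # ~1.9 MJ/m^2
```

The trade-off the abstract describes falls out of this balance: a concentrator has a smaller effective U per unit aperture, so it keeps collecting at operating temperatures where a flat plate's losses cancel its gains.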
Variation in the annual average radon concentration measured in homes in Mesa County, Colorado
Rood, A.S.; George, J.L.; Langner, G.H. Jr.
1990-04-01
The purpose of this study is to examine the variability in the annual average indoor radon concentration. The TMC has been collecting annual average radon data for the past 5 years in 33 residential structures in Mesa County, Colorado. This interim report presents the data collected to date; the plan is to continue the study in the future. 62 refs., 3 figs., 12 tabs.
Li, Run-Kui; Zhao, Tong; Li, Zhi-Peng; Ding, Wen-Jun; Cui, Xiao-Yong; Xu, Qun; Song, Xian-Feng
2014-04-01
On-road vehicle emissions have become the main source of urban air pollution and have attracted broad attention. The vehicle emission factor is a basic parameter reflecting the status of vehicle emissions, but measured emission factors are difficult to obtain, and simulated emission factors are not localized for China. Based on the synchronized increases of traffic flow and air pollutant concentration in the morning rush hour, while meteorological conditions and the background air pollution concentration remain relatively stable, the relationship between the increase in traffic and the increase in air pollution concentration close to a road is established. An infinite line source Gaussian dispersion model was inverted to estimate average vehicle emission factors. A case study was conducted on a main road in Beijing. Traffic flow, meteorological data and carbon monoxide (CO) concentrations were collected to estimate average vehicle emission factors for CO. The average emission factors estimated by the proposed approach and by the COPERT4 model in August were 2.0 g km⁻¹ and 1.2 g km⁻¹, respectively, and in December were 5.5 g km⁻¹ and 5.2 g km⁻¹, respectively. The emission factors from the proposed approach and from COPERT4 showed close values and similar seasonal trends. The proposed method for average emission factor estimation eliminates the disturbance of background concentrations and potentially provides real-time access to vehicle fleet emission factors.
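A sketch of the inversion idea under textbook assumptions: for a ground-level infinite line source with wind perpendicular to the road, C = sqrt(2/π)·qL/(σz·u); solving for the line strength qL from the rush-hour concentration increment and dividing by the traffic increment gives a fleet-average emission factor. The formula choice, σz, and all numbers are illustrative, not the paper's exact treatment:

```python
import math

def emission_factor_g_per_km(delta_C_mg_m3, u_m_s, sigma_z_m, delta_N_veh_h):
    """Fleet-average emission factor from a roadside concentration increment.

    Ground-level infinite line source: C = sqrt(2/pi) * qL / (sigma_z * u)
    => qL [g m^-1 s^-1] = C * sigma_z * u / sqrt(2/pi).
    """
    C = delta_C_mg_m3 * 1e-3                              # mg/m^3 -> g/m^3
    qL = C * sigma_z_m * u_m_s / math.sqrt(2.0 / math.pi)
    g_per_m_per_veh = qL / (delta_N_veh_h / 3600.0)       # per vehicle passage
    return g_per_m_per_veh * 1000.0                       # g/km per vehicle

# Illustrative rush-hour increments: +0.3 mg/m^3 CO, +1800 veh/h, u = 1.5 m/s.
print(emission_factor_g_per_km(0.3, u_m_s=1.5, sigma_z_m=2.5, delta_N_veh_h=1800))  # ~2.8 g/km
```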
U.S. Geological Survey, Department of the Interior — This data set represents the 30-year (1971-2000) average annual maximum temperature in Celsius multiplied by 100 compiled for every catchment of NHDPlus for the...
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States)
2016-02-22
The objectives of this report are: to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; to estimate the maximum concentration in a well located outside of the fill material; and to perform a sensitivity analysis of key parameters.
Volumetric Concentration Maximum of Cohesive Sediment in Waters: A Numerical Study
Jisun Byun
2014-12-01
Cohesive sediment has different characteristics from non-cohesive sediment: the density and size of a cohesive sediment aggregate (a so-called floc) change continuously through the flocculation process. The variation of floc size and density can cause a change of volumetric concentration under the condition of constant mass concentration. This study investigates how the volumetric concentration is affected by different conditions such as flow velocity, water depth, and sediment suspension. A previously verified, one-dimensional vertical numerical model is utilized here, and the flocculation process is considered through a floc growth type flocculation model. Idealized conditions are assumed for the numerical experiments. The simulation results show that the volumetric concentration profile of cohesive sediment differs from the Rouse profile: the volumetric concentration decreases near the bed, showing an elevated maximum, in the cases of both current and oscillatory flow. The density and size of flocs show, respectively, minimum and maximum values near the elevation of the volumetric concentration maximum. This study also shows that the flow velocity and the critical shear stress have significant effects on the elevated maximum of volumetric concentration. As mechanisms of the elevated maximum, strong turbulence intensity and increased mass concentration are considered, because they enhance the flocculation process. This study relies on numerical experiments; to the best of our knowledge, no laboratory or field experiments on the elevated maximum have been carried out to date, and well-controlled laboratory experiments are needed in the near future.
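For reference, the Rouse profile against which the simulated volumetric concentration is compared; a small sketch of the standard form (the reference height and all parameter values are illustrative):

```python
import numpy as np

def rouse_profile(z, h, z_a, C_a, w_s, u_star, kappa=0.4):
    """Rouse suspended-sediment profile:

    C(z) = C_a * [ ((h - z)/z) * (z_a/(h - z_a)) ]**P,  P = w_s / (kappa * u_star),
    with h the water depth and C_a the concentration at reference height z_a.
    """
    P = w_s / (kappa * u_star)
    return C_a * (((h - z) / z) * (z_a / (h - z_a))) ** P

# Illustrative: 2 m depth, reference concentration 1.0 g/L at z_a = 0.05 m.
z = np.linspace(0.05, 1.95, 5)
print(rouse_profile(z, h=2.0, z_a=0.05, C_a=1.0, w_s=0.002, u_star=0.05))
```

The Rouse form decreases monotonically away from the bed, which is exactly what the simulated volumetric profiles with an elevated maximum do not do.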
Cosemans, G.; Kretzschmar, J. [Flemish Inst. for Technological Research (Vito), Mol (Belgium)
2004-07-01
Pollutant roses are polar diagrams that show how air pollution depends on wind direction. If an ambient air quality monitoring station is markedly influenced by a source of the pollutant measured, the pollutant rose shows a peak towards the local source. When both wind direction and pollutant concentration are measured as (1/2)-hourly averages, the pollutant rose is mathematically well defined and the computation is simple. When the pollutant data are averages over 24 h, as is the case for heavy metals, dioxin levels, and in many cases PM10 levels in ambient air, the pollutant rose is mathematically well defined, but the computational scheme is not obvious. In this paper, two practical methods to maximize the information content of pollutant roses based on 24 h pollutant concentrations are presented. These methods are applied to time series of 24 h SO2 concentrations derived from the 1/2-hourly SO2 concentrations measured in the Antwerp harbour, industrial, urban and rural regions by the Telemetric Air Quality Monitoring Network of the Flemish Environmental Agency (VMM). The pollutant roses computed from the 1/2-hourly SO2 concentrations constitute reference or control roses for evaluating the representativeness or truthfulness of the pollutant roses obtained by the presented methods. The presented methodology is very useful in model validations that have to be based on measured daily averaged concentrations as the only available real ambient levels. While the methods give good pollutant roses in general, this paper especially deals with the case of pollutant roses with 'false' peaks.
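One plausible way to build a pollutant rose from 24 h concentrations when half-hourly wind directions are still available: credit each day's concentration to the wind sectors in proportion to how often the wind blew from them that day. This is an illustrative scheme in the spirit of the paper, not necessarily either of its two methods, and all data are random stand-ins:

```python
import numpy as np

def pollutant_rose(daily_conc, halfhour_dirs, n_sectors=12):
    """Sector-average concentration rose from daily means plus half-hourly directions.

    daily_conc: (n_days,) daily mean concentrations.
    halfhour_dirs: (n_days, 48) wind directions in degrees.
    Each day's concentration is credited to each sector with a weight equal
    to that sector's occupancy during the day.
    """
    edges = np.linspace(0.0, 360.0, n_sectors + 1)
    conc_sum = np.zeros(n_sectors)
    weight_sum = np.zeros(n_sectors)
    for c, dirs in zip(daily_conc, halfhour_dirs):
        occupancy = np.histogram(dirs % 360.0, bins=edges)[0] / dirs.size
        conc_sum += c * occupancy
        weight_sum += occupancy
    return np.where(weight_sum > 0, conc_sum / np.maximum(weight_sum, 1e-12), np.nan)

rng = np.random.default_rng(2)
rose = pollutant_rose(rng.lognormal(3, 0.5, 60), rng.uniform(0, 360, (60, 48)))
print(rose.round(1))
```

Schemes of this kind can produce the 'false' peaks the paper warns about: a sector that happens to co-occur with high-concentration days gets credited even if the true source lies elsewhere.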
Dowdall, A; Murphy, P; Pollard, D; Fenton, D
2017-04-01
In 2002, a National Radon Survey (NRS) in Ireland established that the geographically weighted national average indoor radon concentration was 89 Bq m⁻³. Since then a number of developments have taken place which are likely to have impacted on the national average radon level. Key among these was the introduction of amending Building Regulations in 1998 requiring radon preventive measures in new buildings in High Radon Areas (HRAs). In 2014, the Irish Government adopted the National Radon Control Strategy (NRCS) for Ireland. A knowledge gap identified in the NRCS was to update the national average for Ireland given the developments since 2002. The updated national average would also be used as a baseline metric to assess the effectiveness of the NRCS over time. A new national survey protocol was required that would measure radon in a sample of homes representative of radon risk and geographical location. The design of the survey protocol took into account that it is not feasible to repeat the 11,319 measurements carried out for the 2002 NRS due to time and resource constraints. However, the existence of that comprehensive survey allowed a new protocol to be developed, involving measurements carried out in unbiased, randomly selected volunteer homes. This paper sets out the development and application of that survey protocol. The results of the 2015 survey show that the current national average indoor radon concentration for homes in Ireland is 77 Bq m⁻³, a decrease from the 89 Bq m⁻³ reported in the 2002 NRS. Analysis of the results by build date demonstrates that the introduction of the amending Building Regulations in 1998 has led to a reduction in the average indoor radon level in Ireland.
Parameterization of Time-Averaged Suspended Sediment Concentration in the Nearshore
Hyun-Doug Yoon
2015-11-01
To quantify the effect of wave-breaking turbulence on sediment transport in the nearshore, the vertical distribution of time-averaged suspended sediment concentration (SSC) in the surf zone was parameterized in terms of the turbulent kinetic energy (TKE) at different cross-shore locations, including the bar crest, bar trough, and inner surf zone. Using data from a large-scale laboratory experiment, a simple relationship was developed between the time-averaged SSC and the time-averaged TKE. The vertical variation of the time-averaged SSC was fitted to an equation analogous to the turbulent dissipation rate term. At the bar crest, the proposed equation was slightly modified to incorporate the effect of near-bed sediment processes and yielded reasonable agreement. The parameterization yielded the best agreement at the bar trough, with a coefficient of determination R² ≥ 0.72 above the bottom boundary layer. The time-averaged SSC in the inner surf zone showed good agreement near the bed but poor agreement near the water surface, suggesting that a different sedimentation mechanism controls the SSC in the inner surf zone.
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States). Biological, Environmental, and Climate Sciences Dept.
2014-12-02
ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. Residual radioactive particles from the plant need to be brought down to levels such that an individual who receives water from the new treatment plant does not receive a radioactive dose in excess of 25 mrem/y. The objectives of this report are: (a) to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (c) to estimate the maximum concentration in a well located outside of the fill material; and (d) to perform a sensitivity analysis of key parameters.
Asymmetric multifractal detrending moving average analysis in time series of PM2.5 concentration
Zhang, Chen; Ni, Zhiwei; Ni, Liping; Li, Jingming; Zhou, Longfei
2016-09-01
In this paper, we propose the asymmetric multifractal detrending moving average analysis (A-MFDMA) method to explore asymmetric correlations in non-stationary time series. The proposed method is applied to explore the asymmetric correlation of PM2.5 daily average concentrations with uptrends or downtrends in China. In addition, shuffling and phase randomization procedures are applied to detect the sources of multifractality. The results show that asymmetric correlations exist and that they are multifractal. Further, the multifractal scaling behavior in Chinese PM2.5 is caused not only by long-range correlation but also by the fat-tailed distribution, with the fat-tailed distribution being the major source of multifractality.
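A compact sketch of the symmetric, monofractal core of detrending moving average analysis: detrend the profile with a moving average, measure the fluctuation F(n) at several window sizes, and read the scaling exponent off a log-log fit. The asymmetric multifractal extension splits the statistics by local trend sign and generalizes the second moment to order q; none of that is shown here:

```python
import numpy as np

def dma_exponent(x, windows=(8, 16, 32, 64, 128)):
    """Scaling exponent from centered detrending moving average analysis."""
    y = np.cumsum(x - np.mean(x))                     # profile of the series
    F = []
    for n in windows:
        trend = np.convolve(y, np.ones(n) / n, mode="same")   # centered moving average
        F.append(np.sqrt(np.mean((y - trend) ** 2)))
    # Slope of log F(n) vs log n estimates the Hurst-like exponent.
    return np.polyfit(np.log(windows), np.log(F), 1)[0]

rng = np.random.default_rng(3)
print(dma_exponent(rng.normal(size=4096)))   # ~0.5 for uncorrelated noise
```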
40 CFR 63.7943 - How do I determine the average VOHAP concentration of my remediation material?
2010-07-01
§ 63.7943 How do I determine the average VOHAP concentration of my remediation material? (a) General requirements. You must determine the average total VOHAP concentration of a…
Plassche EJ van de; Polder MD; Canton JH
1992-01-01
In this report Maximum Permissible Concentrations (MPC) are derived for 9 trace metals based on ecotoxicological data. The elements are: antimony, barium, beryllium, cobalt, molybdenum, selenium, thallium, tin, and vanadium. The study was carried out in the framework of the project "Setting integrated environmental quality objectives". For the aquatic environment MPCs could be derived for all trace elements. These values were based on toxicity data for freshwater as well as saltwater…
G. M. J. HASAN
2014-10-01
Climate, one of the major controlling factors for the well-being of the world's inhabitants, has been changing in accordance with natural forcing and man-made activities. Bangladesh, one of the most densely populated countries in the world, is under threat due to climate change caused by excessive use or abuse of ecology and natural resources. This study examines rainfall patterns and their associated changes in the north-eastern part of Bangladesh, mainly Sylhet city, through statistical analysis of daily rainfall data for the period 1957-2006. A good correlation is observed between the monthly mean and daily maximum rainfall. A linear regression analysis of the data is found to be significant for all months. Key statistical parameters, namely the mean values of the Coefficient of Variability (CV), Relative Variability (RV) and Percentage Inter-annual Variability (PIV), have been studied and found to vary. Monthly, yearly and seasonal variations in the number of rainy days were also analysed to check for any significant changes.
Favret, Eduardo A; Fuentes, Néstor O; Molina, Ana M; Setten, Lorena M
2008-10-01
During the last few years, the RIMAPS technique has been used to characterize the micro-relief of metallic surfaces and has recently also been applied to biological surfaces. RIMAPS is an image analysis technique that uses the rotation of an image and calculates its average power spectrum. Here, it is presented as a tool for describing the morphology of the trichodium net found in some grasses, which develops on the epidermal cells of the lemma. Three different species of grasses (herbarium samples) are analyzed: Podagrostis aequivalvis (Trin.) Scribn. & Merr., Bromidium hygrometricum (Nees) Nees & Meyen and Bromidium ramboi (Parodi) Rúgolo. Simple schemes representing the real microstructure of the lemma are proposed and studied. RIMAPS spectra of the schemes and of the real microstructures are compared. These results allow inferring how similar the proposed geometrical schemes are to the real microstructures. Each geometrical pattern could be used as a reference for classifying other species. Finally, this kind of analysis is used to determine the morphology of the trichodium net of Agrostis breviculmis Hitchc. As the dried sample had shrunk and the microstructure was not clear, two kinds of morphology are proposed for the trichodium net of Agrostis L., one elliptical and the other rectilinear, the former being the most suitable.
DeVita, W.; Crunkilton, R. [Univ. of Wisconsin, Stevens Point, WI (United States)
1995-12-31
Semipermeable polymeric membrane devices (SPMDs) were deployed for 30-day periods to monitor polycyclic aromatic hydrocarbons (PAHs) in an urban stream that receives much of its flow from urban runoff. SPMDs are capable of effectively sampling several liters of water per day for some PAHs. Unlike conventional methods, SPMDs sample only those non-polar organic contaminants that are truly dissolved and available for bioconcentration. Also, SPMDs may concentrate contaminants from episodic events such as stormwater discharge. The State of Wisconsin has established surface water quality criteria, based on a human lifetime cancer risk, of 23 ppt for benzo(a)pyrene and 23 ppt for the sum of nine other potentially carcinogenic PAHs. Bulk water samples analyzed by conventional methodology were routinely well above these criteria, but contained particulate-bound PAHs as well as PAHs bound to dissolved organic carbon (DOC), which are not available for bioconcentration. Average water concentrations of dissolved PAHs determined using SPMDs were also above these criteria. Variables used for determining water concentration included the sampling rate at the exposure temperature, the length of exposure, and an estimate of biofouling of the SPMD surface.
Urbina-Villalba, German; García-Sucre, Máximo; Toro-Mendoza, Jhoan
2003-12-01
In order to account for the hydrodynamic interaction (HI) between suspended particles in an average way, Honig et al. [J. Colloid Interface Sci. 36, 97 (1971)] and more recently Heyes [Mol. Phys. 87, 287 (1996)] proposed different analytical forms for the diffusion constant. While the formalism of Honig et al. strictly applies to a binary collision, the one from Heyes accounts for the dependence of the diffusion constant on the local concentration of particles. However, the analytical expression of the latter approach is more complex and depends on the particular characteristics of each system. Here we report a combined methodology, which incorporates the formula of Honig et al. at very short distances and a simple local volume-fraction correction at longer separations. As will be shown, the flocculation behavior calculated from Brownian dynamics simulations employing the present technique is found to be similar to that of Batchelor's tensor [J. Fluid. Mech. 74, 1 (1976); 119, 379 (1982)]. However, it corrects the anomalous coalescence found in concentrated systems as a result of the overestimation of many-body HI.
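A sketch of the near-field part of such a scheme: the widely used Honig et al. correction divides the Stokes-Einstein diffusion constant by B(u) = (6u² + 13u + 2)/(6u² + 4u), with u the surface-to-surface gap in particle radii. The long-range (1 − φ) crowding factor below is a deliberately crude stand-in for Heyes-type concentration corrections, not the paper's expression:

```python
def honig_factor(u: float) -> float:
    """Honig et al. hydrodynamic retardation B(u); u = gap / particle radius.

    B -> 1 at large separation; B -> infinity at contact, so diffusion vanishes.
    """
    return (6.0 * u**2 + 13.0 * u + 2.0) / (6.0 * u**2 + 4.0 * u)

def effective_diffusion(D0: float, u: float, phi_local: float) -> float:
    """Gap- and concentration-corrected diffusion constant (illustrative blend)."""
    return D0 / honig_factor(u) * (1.0 - phi_local)

D0 = 4.9e-13  # m^2/s, e.g. a ~0.5 um sphere in water (illustrative)
for u in (0.01, 0.1, 1.0, 10.0):
    print(u, effective_diffusion(D0, u, phi_local=0.2))
```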
Dang, Viet D.; Walters, David; Lee, Cindy M.
2016-01-01
Conifers are often used as "air passive samplers", but few studies have focused on the use of broadleaf evergreens to monitor atmospheric semivolatile organic compounds such as polychlorinated biphenyls (PCBs). In this study, we used Rhododendron maximum (rhododendron) growing next to a contaminated stream to assess atmospheric PCB concentrations. The study area was located in a rural setting approximately 2 km downstream of a former Sangamo-Weston (S-W) plant. Leaves from the same mature shrubs were collected in late fall 2010 and in winter and spring 2011. PCBs were detected in the collected leaves, suggesting that rhododendron can be used as an air passive sampler in rural areas where active sampling is impractical. Estimated ΣPCB (47 congeners) concentrations in the atmosphere decreased from fall 2010 to spring 2011, with concentration means of 3990, 2850, and 931 pg m⁻³ in fall 2010, winter 2011, and spring 2011, respectively. These results indicate that the atmospheric concentrations at this location continue to be high despite termination of active discharge from the former S-W plant. Leaves had a consistent pattern of high concentrations of tetra- and penta-CBs, similar to the congener distribution in polyethylene (PE) passive samplers deployed in the water column, suggesting that volatilized PCBs from the stream were the primary source of contaminants in rhododendron leaves.
Da Cruz, Manuela; Van Schoors, Laetitia; Colin, Xavier; Benzarti, Karim
2014-05-01
The aim of this research project is to investigate the oxidation mechanism of high density polyethylene (HDPE) used in outdoor applications, in order to establish in a near future, a non-empirical kinetic model for lifetime prediction. The present paper focuses on the changes in the hydroperoxide (POOH) concentration induced by thermo-oxidative ageing, and on their relationship with the evolution of the weight average molar mass (Mw) due both to chain scission and crosslinking processes. Thin HDPE films were aged at 110 and 140°C in air under atmospheric pressure. In a first part, changes in the POOH concentration versus ageing time were assessed by three different analytical methods previously reported in the literature: modulated differential scattering calorimetry (MDSC), Fourier transform Infra-Red spectrometry after chemical derivatization treatment with gaseous sulfur dioxide (SO2-FTIR), and iodometry. A comparison of experimental results revealed that these three methods provide very similar quantitative data on POOH accumulation, whereas iodometry tends to strongly underestimate the subsequent stage of POOH decomposition. It was thus suspected that iodometry does not only titrate POOH, but also other chemical species (presumably double bonds) formed when POOH decompose. Therefore, only MDSC and SO2-FTIR were considered as relevant methods for POOH titration. In a second part, changes in Mw versus ageing time were monitored by size exclusion chromatography (SEC). A sharp drop of Mw was first observed at the beginning of exposure, which was assigned to an intensive chain scission process. Then, in a second stage, a stabilization or even a substantial re-increase in Mw was observed, suggesting a competition between chain scission and crosslinking processes. As this second stage starts at the same time as POOH decomposition, it was concluded that there is a strong correlation between both phenomena, occurring respectively at the macromolecular and molecular
Time weighted average concentration monitoring based on thin film solid phase microextraction.
Ahmadi, Fardin; Sparham, Chris; Boyaci, Ezel; Pawliszyn, Janusz
2017-03-02
Time weighted average (TWA) passive sampling with thin film solid phase microextraction (TF-SPME) and liquid chromatography tandem mass spectrometry (LC-MS/MS) was used for collection, identification, and quantification of benzophenone-3, benzophenone-4, 2-phenylbenzimidazole-5-sulphonic acid, octocrylene, and triclosan in the aquatic environment. Two types of TF-SPME passive samplers, a retracted thin film device using a hydrophilic-lipophilic balance (HLB) coating and an open bed configuration with an octadecyl silica-based (C18) coating, were evaluated in an aqueous standard generation (ASG) system. Laboratory calibration results indicated that the retracted thin film device with the HLB coating is suitable for determining TWA concentrations of polar analytes in water, with an uptake that was linear for up to 70 days. In the open bed form, a one-calibrant kinetic calibration technique was accomplished by loading benzophenone-3-d5 as calibrant on the C18 coating to quantify all non-polar compounds. The experimental results showed that the one-calibrant kinetic calibration technique can be used for determination of classes of compounds in cases where deuterated counterparts are either unavailable or expensive. The developed passive samplers were deployed in wastewater-dominated reaches of the Grand River (Kitchener, ON) to verify their feasibility for determination of TWA concentrations in on-site applications. Field trial results indicated that these devices are suitable for long-term and short-term monitoring of compounds varying in polarity, such as UV blockers and biocides in water, and the data were in good agreement with literature data.
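A sketch of the relation underlying TWA passive sampling in the linear-uptake regime: the accumulated amount n grows as n = Rs·C·t, so the TWA concentration follows from the sampler's calibrated sampling rate. The values below are illustrative, not the paper's calibration, and the kinetic-calibration refinement for curved uptake is not shown:

```python
def twa_concentration(n_accumulated_ng, sampling_rate_mL_day, days):
    """TWA water concentration (ng/L) from a linear-uptake passive sampler.

    n = Rs * C * t  =>  C = n / (Rs * t), with Rs converted to liters/day.
    """
    Rs_L_day = sampling_rate_mL_day / 1000.0
    return n_accumulated_ng / (Rs_L_day * days)

# Illustrative deployment: 450 ng accumulated over 30 days at Rs = 25 mL/day.
print(twa_concentration(450.0, 25.0, 30.0))   # 600 ng/L
```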
Vacca, G. [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires
1967-07-01
The task group of Committee II of the International Commission on Radiological Protection (ICRP), chaired by P.E. Morrow, published in 1966 the results of its work on aerosol dynamics in the respiratory tract. In that report a model was proposed for the deposition of dust in the lungs and lymph nodes and for its clearance into the blood and/or the gastrointestinal tract. The present report gathers the maximum permissible concentration values resulting from the application of this model. (author)
Rubanov, G.P.; Grebtsov, E.M.; Kurnosov, V.K.; Tolstoi, G.I.
1988-06-01
The recommendations of the 'Instructions on determining the pneumoconiosis hazard of mine work in coal mines' for choosing a basic point for measuring the maximum single dust concentration along scraper longwalls do not make it possible to evaluate dust loads at working places objectively. In the Instructions, a point 10 to 15 m from the longwall exit on the ventilation drift, in the emerging air stream, is proposed for the dust probe. This designated point does not take into account the influence of the wall ventilation scheme on the formation of dust currents. Investigations of probes taken at different places along the longwall (at the beginning, 15 m from the beginning, 15 m from the end, at the niche, and at the transfer point of the longwall), using direct and reverse flow ventilation schemes, showed that the best point for determining the maximum single dust concentration is in the middle of the longwall, where dust currents are the same for both ventilation systems. Use of a new method of calculating the dust load, by testing at many different positions along the scraper longwall, makes it possible to determine the category of pneumoconiosis hazard for workers at scraper longwalls.
Orr, John L.
1997-01-01
In many ways, the typical approach to the handling of bibliographic material for generating review articles and similar manuscripts has changed little since xerographic reproduction became widespread. The basic approach is to collect reprints of the relevant material and place them in folders or stacks based on their dominant content. As the amount of information available increases with the passage of time, the viability of this mechanical approach to bibliographic management decreases. The personal computer revolution has changed the way we deal with many familiar tasks. For example, word processing on personal computers has supplanted the typewriter for many applications. Similarly, spreadsheets have not only replaced many routine uses of calculators but have also made possible new applications, because the cost of calculation is extremely low. Objective: The objective of this research was to use personal computer bibliographic software technology to support the determination of spacecraft maximum acceptable concentration (SMAC) values. Specific aims: The specific aims were to produce draft SMAC documents for hydrogen sulfide and tetrachloroethylene, taking maximum advantage of the bibliographic software.
A comparison of muscle activity in concentric and counter movement maximum bench press.
van den Tillaar, Roland; Ettema, Gertjan
2013-01-01
The purpose of this study was to compare the kinematics and muscle activation patterns of the regular free-weight bench press (counter movement) with pure concentric lifts in the ascending phase of a successful one repetition maximum (1-RM) attempt in the bench press. Our aim was to evaluate whether diminishing potentiation could be the cause of the sticking region. Since diminishing potentiation cannot occur in pure concentric lifts, the occurrence of a sticking region in this type of muscle action would support the hypothesis that the sticking region is due to a poor mechanical position. Eleven male participants (age 21.9 ± 1.7 yrs, body mass 80.7 ± 10.9 kg, body height 1.79 ± 0.07 m) conducted 1-RM lifts in counter movement and in pure concentric bench presses in which kinematics and EMG activity were measured. In both conditions, a sticking region occurred. However, the start of the sticking region differed between the two bench presses. In addition, in four of six muscles, the muscle activity was higher in the counter movement bench press than in the concentric one. Based on the activity of the six muscles during the maximal lifts, it was concluded that the diminishing effect of force potentiation in the counter movement bench press, combined with delayed muscle activation, is unlikely to explain the existence of the sticking region in a 1-RM bench press. Most likely, the sticking region is the result of a poor mechanical force position.
Kabala, Z. J.
1997-08-01
Under the assumption that local solute dispersion is negligible, a new general formula (in the form of a convolution integral) is found for the arbitrary k-point ensemble moment of the local concentration of a solute convected in arbitrary m spatial dimensions with general sure (deterministic) initial conditions. From this general formula, new closed-form solutions in m = 2 spatial dimensions are derived for 2-point ensemble moments of the local solute concentration for the impulse (Dirac delta) and Gaussian initial conditions. When integrated over an averaging window, these solutions lead to new closed-form expressions for the first two ensemble moments of the volume-averaged solute concentration and to the corresponding concentration coefficients of variation (CV). Also, for the impulse (Dirac delta) solute concentration initial condition, the second ensemble moment of the solute point concentration in two spatial dimensions and the corresponding CV are demonstrated to be unbounded. For impulse initial conditions, the CVs for volume-averaged concentrations are compared with each other for a tracer from the Borden aquifer experiment. The point-concentration CV is unacceptably large in the whole domain, implying that the ensemble mean concentration is inappropriate for predicting the actual concentration values. The volume-averaged concentration CV decreases significantly with an increasing averaging volume. Since local dispersion is neglected, the new solutions should be interpreted as upper limits for the yet-to-be-derived solutions that account for local dispersion, and so should the presented CVs for the Borden tracers. The new analytical solutions may be used to test the accuracy of Monte Carlo simulations or other numerical algorithms that deal with stochastic solute transport. They may also be used to determine the size of the averaging volume needed to make a quasi-sure statement about the solute mass contained in it.
Mente, Scot; Doran, Angela; Wager, Travis T
2012-06-14
The objective of this work was to establish that unbound maximum concentrations may be reasonably predicted from a combination of computed molecular properties, assuming subcutaneous (SQ) dosing. Additionally, we show that the maximum unbound plasma and brain concentrations may be projected from a mixture of in vitro absorption, distribution, metabolism, and excretion (ADME) experimental parameters in combination with computed properties (volume of distribution, fraction unbound in microsomes). Finally, we demonstrate the utility of the underlying equations by showing that the maximum total plasma concentrations can be projected from the experimental parameters for a set of compounds with data collected from clinical research.
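As a back-of-the-envelope illustration of this kind of projection (not the authors' actual equations), the total maximum plasma concentration after a fully absorbed subcutaneous dose can be bounded by Dose/Vd under one-compartment assumptions, and the unbound maximum obtained by scaling with the plasma fraction unbound. All values below are hypothetical.

dose_mg, body_kg = 10.0, 70.0
vd_l_per_kg = 2.5      # predicted volume of distribution (L/kg), hypothetical
fu_plasma = 0.15       # fraction unbound in plasma, hypothetical

cmax_total = dose_mg / (vd_l_per_kg * body_kg)   # mg/L, crude upper bound on total Cmax
cmax_unbound = fu_plasma * cmax_total            # mg/L, corresponding unbound bound
print(cmax_total, cmax_unbound)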
Pedersen, Ken Steen; Skrubel, Rikke; Stege, Helle;
2012-01-01
Background: The objective of this study was to investigate the association between average daily gain and the number of Lawsonia intracellularis bacteria in faeces of growing pigs with different levels of diarrhoea. Methods: A longitudinal field study (n = 150 pigs) was performed in a Danish herd f...
Metaproteome of the viral concentrates from the deep chlorophyll maximum of the South China Sea
Xie, Zhang-Xian; Chen, Feng; Zhang, Shu-Feng; Wang, Ming-Hua; Zhang, Hao; Kong, Ling-Fen; Dai, Min-Han; Hong, Hua-Sheng; Lin, Lin; Wang, Da-Zhi
2016-04-01
Viral concentrates (VCs) have been commonly used for studying viral diversity, viral metagenomics and virus-host interactions in natural ecosystems. However, the protein characteristics of VCs have not been explored. Here, we applied shotgun proteomics to characterize the proteins of VCs collected from the oligotrophic deep chlorophyll maximum of the South China Sea. We found that 34% of the identified proteins were assigned to viruses, mainly those of SAR11-related bacteria, cyanobacteria and picophytoeukaryotes. The remaining 66% were non-viral proteins mostly originating from diverse bacteria, such as SAR324, SAR11 and the Alteromonadales, and were functionally dominated by transport, translation, sulfur metabolism and one-carbon metabolism. Among the non-viral proteins, 28% were extracellular proteins and 10% were identified exclusively in the VCs, suggesting that non-viral entities might exist in the VCs. This study demonstrates that metaproteomics provides a valuable avenue to explore not only the diversity and structure of a viral community but also the novel ecological functions associated with microbes in the natural environment.
Kozak, K., E-mail: Krzysztof.Kozak@ifj.edu.pl [Institute of Nuclear Physics PAN, Radzikowskiego 152, 31-342 Krakow (Poland); Mazur, J. [Institute of Nuclear Physics PAN, Radzikowskiego 152, 31-342 Krakow (Poland); Kozłowska, B. [University of Silesia, Bankowa 12, 40-007 Katowice (Poland); Karpińska, M. [Medical University of Bialystok, Jana Kilinskiego 1, 15-089 Białystok (Poland); Przylibski, T.A. [Wrocław University of Technology, Wybrzeże S. Wyspiańskiego 27, 50-370 Wrocław (Poland); Mamont-Cieśla, K. [Central Laboratory for Radiological Protection, Konwaliowa 7, 03-194 Warszawa (Poland); Grządziel, D. [Institute of Nuclear Physics PAN, Radzikowskiego 152, 31-342 Krakow (Poland); Stawarz, O. [Central Laboratory for Radiological Protection, Konwaliowa 7, 03-194 Warszawa (Poland); Wysocka, M. [Central Mining Institute, Plac Gwarków 1, 40-166 Katowice (Poland); Dorda, J. [University of Silesia, Bankowa 12, 40-007 Katowice (Poland); Zebrowski, A. [Wrocław University of Technology, Wybrzeże S. Wyspiańskiego 27, 50-370 Wrocław (Poland); Olszewski, J. [Nofer Institute of Occupational Medicine, Św. Teresy od Dzieciątka Jezus 8, 91-348 Łódź (Poland); Hovhannisyan, H. [Institute of Nuclear Physics PAN, Radzikowskiego 152, 31-342 Krakow (Poland); Dohojda, M. [Institute of Building Technology (ITB), Filtrowa 1, 00-611 Warszawa (Poland); Kapała, J. [Medical University of Bialystok, Jana Kilinskiego 1, 15-089 Białystok (Poland); Chmielewska, I. [Central Mining Institute, Plac Gwarków 1, 40-166 Katowice (Poland); Kłos, B. [University of Silesia, Bankowa 12, 40-007 Katowice (Poland); Jankowski, J. [Nofer Institute of Occupational Medicine, Św. Teresy od Dzieciątka Jezus 8, 91-348 Łódź (Poland); Mnich, S. [Medical University of Bialystok, Jana Kilinskiego 1, 15-089 Białystok (Poland); Kołodziej, R. [Central Mining Institute, Plac Gwarków 1, 40-166 Katowice (Poland)
2011-10-15
The method for the calculation of correction factors is presented, which can be used for the assessment of the mean annual radon concentration on the basis of 1-month or 3-month indoor measurements. Annual radon concentration is an essential value for the determination of the annual dose due to radon inhalation. The measurements have been carried out in 132 houses in Poland over a period of one year. The passive method of track detectors with CR-39 foil was applied. Four thermal-precipitation regions in Poland were established and correction factors were calculated for each region, separately for houses with and without basements. Highlights: correction factors were calculated from radon concentration results in houses; factors were calculated for each month and for two house types in different regions of Poland; they enable the evaluation of the average annual radon concentration in a house from a 1- or 3-month detector exposure.
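A minimal sketch of how such correction factors are applied in practice, assuming hypothetical factor values (the paper's regional factors are not reproduced here): the short-term average is multiplied by the factor matching the region, house type, and exposure window.

# Hypothetical 3-month correction factors for one region and house type,
# indexed by the starting month of the detector exposure; not the paper's values.
CORRECTION_FACTORS = {"Jan": 0.78, "Apr": 0.95, "Jul": 1.31, "Oct": 0.89}

def annual_mean_estimate(measured_bq_m3, start_month):
    """Scale a 3-month average radon concentration (Bq/m3) to an annual mean."""
    return measured_bq_m3 * CORRECTION_FACTORS[start_month]

print(annual_mean_estimate(120.0, "Jul"))  # summer exposure corrected upward here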
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Sample sizes at smaller area levels are insufficient, so direct estimation of poverty indicators produces high standard errors, and analyses based on it are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with auxiliary data is required. One method often used for this is Small Area Estimation (SAE). Among the many SAE methods is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP procedure based on maximum likelihood (ML) does not account for the loss of degrees of freedom incurred by estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with direct estimation. Results show that the EBLUP method reduced the MSE in small area estimation.
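The following sketch illustrates the EBLUP-with-REML idea on a toy Fay-Herriot-type area-level model, with a parametric bootstrap for the MSE; it is a generic illustration under invented data, not the BPS poverty data or the paper's exact model.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
m = 30
X = np.column_stack([np.ones(m), rng.normal(size=m)])  # auxiliary variables
D = rng.uniform(0.5, 2.0, size=m)                      # known sampling variances
theta = X @ np.array([1.0, 0.5]) + rng.normal(scale=1.0, size=m)
y = theta + rng.normal(scale=np.sqrt(D))               # direct (survey) estimates

def gls_beta(y, X, D, s2):
    w = 1.0 / (s2 + D)
    XtWX = X.T @ (X * w[:, None])
    return np.linalg.solve(XtWX, X.T @ (w * y)), XtWX

def neg_reml(s2, y, X, D):
    # Negative restricted log-likelihood for the variance component s2.
    beta, XtWX = gls_beta(y, X, D, s2)
    r, w = y - X @ beta, 1.0 / (s2 + D)
    return 0.5 * (np.sum(np.log(s2 + D)) + np.linalg.slogdet(XtWX)[1] + np.sum(w * r**2))

def fit_eblup(y, X, D):
    s2 = minimize_scalar(neg_reml, bounds=(1e-6, 50.0), args=(y, X, D), method="bounded").x
    beta, _ = gls_beta(y, X, D, s2)
    gamma = s2 / (s2 + D)                              # shrinkage weights
    return gamma * y + (1 - gamma) * (X @ beta), beta, s2

eblup, beta_hat, s2_hat = fit_eblup(y, X, D)

B, sq_err = 200, np.zeros(m)                           # parametric bootstrap for MSE
for _ in range(B):
    theta_b = X @ beta_hat + rng.normal(scale=np.sqrt(s2_hat), size=m)
    eblup_b, _, _ = fit_eblup(theta_b + rng.normal(scale=np.sqrt(D)), X, D)
    sq_err += (eblup_b - theta_b) ** 2
print((sq_err / B).mean(), D.mean())   # bootstrap MSE vs. direct-estimate variance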
BAI Jun-hong; OUYANG Hua; WANG Qing-gai; ZHOU Cai-ping; XU Xiao-feng
2005-01-01
Horizontal and vertical variations of daily average CO2 concentration above the wetland surface were studied in Xianghai National Nature Reserve of China in August 2000. The primary purpose was to study the spatial distribution characteristics of CO2 concentration at four heights (0.1 m, 0.6 m, 1.2 m and 2 m) and to compare the differences in CO2 concentration under different land covers. Results showed that the daily average CO2 concentration above the wetland surface in Xianghai National Nature Reserve was lower than that above other wetlands in northeast China as well as the worldwide average, suggesting that Xianghai wetland absorbed CO2 in August and acted as a "sink" for CO2. The horizontal variations at the four heights along the latitude were distinct and showed a tendency of decreasing after increasing with increasing height. The areas with obvious variations were consistent across heights, and the highest variations appeared above the shore, the sloping field, the Typha wetland and the Phragmites wetland. The vertical variations differed greatly, being higher in the Phragmites and Typha wetlands and lower near the shore and in the sloping field. Spatial variations of daily average CO2 concentration above the wetland surface were affected by surface qualities and land covers.
Smit CE; Wezel AP van; Jager T; Traas TP; CSR
2000-01-01
The impact of secondary poisoning on the Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) of cadmium, copper and mercury in water, sediment and soil has been evaluated. Field data on accumulation of these elements by fish, mussels and earthworms were used to derive MPCs
P. Jančík
2013-10-01
The goal of the article is to present an analysis of the metallurgical industry's contribution to annual average PM10 concentrations in the Moravian-Silesian Region, based on air pollution modelling in accordance with the Czech reference methodology SYMOS'97.
Recovery of the histogram of hourly ozone distribution from weekly average concentrations
Olcese, Luis E. [Departamento de Fisico Quimica/INFIQC, Facultad de Ciencias Quimicas, Universidad Nacional de Cordoba, 5000 Cordoba (Argentina)]. E-mail: lolcese@fcq.unc.edu.ar; Toselli, Beatriz M. [Departamento de Fisico Quimica/INFIQC, Facultad de Ciencias Quimicas, Universidad Nacional de Cordoba, 5000 Cordoba (Argentina)
2006-05-15
A simple method is presented for estimating the hourly distribution of air pollutants, based on data collected by passive sensors on a weekly or bi-weekly basis, with no need for previous measurements at a site. For this method to be applied to locations where no hourly records are available, reference data from other sites are required to generate calibration histograms. The proposed procedure allows one to obtain the histogram of hourly ozone values during a given week with an error of about 30%, which is good considering the simplicity of the approach. This method can be a valuable tool for sites that lack previous hourly records of ambient pollutant concentrations, where it can be used to verify compliance with regulations or to estimate the AOT40 index with an acceptable degree of accuracy.
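One simple way to realize the idea, sketched under invented numbers (real calibration histograms come from monitored reference sites): rescale a reference week of hourly values so that its mean matches the passive sampler's weekly average, then histogram the rescaled values.

import numpy as np

# Hypothetical hourly ozone values (ppb) for one week at a monitored reference site.
calib_hours = np.random.default_rng(1).gamma(shape=4.0, scale=10.0, size=7 * 24)
weekly_avg_passive = 32.0                   # ppb, passive sampler at the target site

scaled = calib_hours * (weekly_avg_passive / calib_hours.mean())  # force matching means

hist, edges = np.histogram(scaled, bins=np.arange(0, 121, 10))
print(dict(zip(edges[:-1].astype(int), hist)))       # estimated hourly distribution
print(np.clip(scaled - 40.0, 0, None).sum() / 1000)  # rough AOT40-style index, ppm*h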
Larsen, Ryan J.; Newman, Michael; Nikolaidis, Aki
2016-11-01
Multiple methods have been proposed for using Magnetic Resonance Spectroscopy Imaging (MRSI) to measure representative metabolite concentrations of anatomically-defined brain regions. Generally these methods require spectral analysis, quantitation of the signal, and reconciliation with anatomical brain regions. However, to simplify processing pipelines, it is practical to only include those corrections that significantly improve data quality. Of particular importance for cross-sectional studies is knowledge about how much each correction lowers the inter-subject variance of the measurement, thereby increasing statistical power. Here we use a data set of 72 subjects to calculate the reduction in inter-subject variance produced by several corrections that are commonly used to process MRSI data. Our results demonstrate that significant reductions of variance can be achieved by performing water scaling, accounting for tissue type, and integrating MRSI data over anatomical regions rather than simply assigning MRSI voxels with anatomical region labels.
Nutrient maximums related to low oxygen concentrations in the southern Canada Basin
JIN Ming-ming; SHI Jiuxin; LU Yong; CHEN Jianfang; GAO Guoping; WU Jingfeng; ZHANG Haisheng
2005-01-01
The phenomenon of nutrient maximums at 70–200 m occurs, among the world's oceans, only in the Canada Basin region. The prevailing hypothesis was that direct injection of low-temperature, high-nutrient brines from the Chukchi Sea shelf (<50 m) in winter produced the nutrient maximums. However, we found five problems with the direct injection process. Jin et al. formerly considered that the formation of nutrient maximums could be a process of locally long-term regeneration. Here we propose a regeneration-mixture process. Data on temperature, salinity, oxygen and nutrients were collected at three stations in the southern Canada Basin during the summer 1999 cruise. We identified the cores of the surface, near-surface and potential temperature maximum waters and the Arctic Bottom Water from diagrams and vertical profiles of salinity, potential temperature, oxygen and nutrients. The historical ¹²⁹I data indicated that the surface and near-surface waters were of Pacific origin, whereas the waters below the potential temperature maximum core depth were of Atlantic origin. Together with the correlation of nutrient maximums with very low oxygen contents in the near-surface water, we hypothesize that putative organic matter was decomposed to inorganic nutrients and that the Pacific water mixed with the Atlantic water in the transition zone. The idea of the regeneration-mixture process agrees with the historical observations of no apparent seasonal changes, the smooth nutrient profiles, the lowest saturation of CaCO3 above 400 m, the low rate of CFC-11 ventilation and ³H-³He ages of 8–18 a around the nutrient maximum depths.
Ortiz-García, E. G.; Salcedo-Sanz, S.; Pérez-Bellido, A. M.; Gascón-Moreno, J.; Portilla-Figueras, A.
In this paper we present the application of a support vector regression algorithm to a real problem of maximum daily tropospheric ozone forecasting. The support vector regression approach proposed is hybridized with a heuristic for optimal selection of hyper-parameters. The prediction of maximum daily ozone is carried out at all stations of the air quality monitoring network of Madrid. In the paper we analyze how the ozone prediction depends on meteorological variables such as solar radiation and temperature, and we also perform a comparison against the results obtained using a multi-layer perceptron neural network on the same prediction problem.
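A minimal sketch of this modelling setup with scikit-learn, using synthetic stand-ins for the meteorological predictors and a plain grid search in place of the paper's hyper-parameter heuristic; none of the data or parameter grids below come from the Madrid network.

import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))    # stand-ins for solar radiation, temperature, wind
y = 60 + 15 * X[:, 0] + 10 * X[:, 1] + rng.normal(scale=5, size=300)  # synthetic max O3

search = GridSearchCV(
    make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    param_grid={"svr__C": [1, 10, 100],
                "svr__gamma": [0.01, 0.1, 1.0],
                "svr__epsilon": [0.1, 1.0]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))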
Kelfoun, Karim
2017-06-01
Pyroclastic currents are very destructive and their complex behavior makes the related hazards difficult to predict. A new numerical model has been developed to simulate the emplacement of both the concentrated and the dilute parts of pyroclastic currents using two coupled depth-averaged approaches. Interaction laws allow the concentrated current (pyroclastic flow) to generate a dilute current (pyroclastic surge) and, inversely, the dilute current to form a concentrated current or a deposit. The density of the concentrated current is assumed to be constant during emplacement, whereas the density of the dilute current changes depending on the particle supply from the concentrated current and the mass lost through sedimentation. The model is explored theoretically using simplified geometries as proxies for natural source conditions and topographies. It reproduces the relationships observed in the field between the surge genesis and the topography: the increase in surge production in constricted valleys, the decoupling between the concentrated and the dilute currents, and the formation of surge-derived concentrated flows. The strong nonlinear link between the surge genesis and the velocity of the concentrated flow beneath it could explain the sudden occurrence of powerful and destructive surges and the difficulty of predicting this occurrence. A companion paper compares the results of the model with the field data for the eruption of Merapi in 2010 and demonstrates that the approach is able to reproduce the natural emplacement of the concentrated and the dilute pyroclastic currents studied with good accuracy.
Okkonen, Jarkko; Neupauer, Roseanna M.
2016-05-01
Capture zones of water supply wells are most often delineated based on travel times of water or solute to the well, with the assumption that if the travel time is sufficiently large, the concentration of chemical at the well will not exceed the drinking water standards. In many situations, the likely source concentrations or release masses of contamination from the potential sources are unknown; therefore, the exact concentration at the well cannot be determined. In situations in which the source mass can be estimated with some accuracy, the delineation of the capture zone should be based on the maximum chemical concentration that can be expected at the well, rather than on an arbitrary travel time. We present a new capture zone delineation methodology that is based on this maximum chemical concentration. The method delineates capture zones by solving the adjoint of the advection-dispersion-reaction equation and relating the adjoint state and the known release mass to the expected chemical concentration at the well. We demonstrate the use of this method through a case study in which soil heat exchange systems are potential sources of contamination. The heat exchange fluid mixtures contain known fluid volumes and chemical concentrations; thus, in the event of a release, the release mass of the chemical is known. We also demonstrate the use of a concentration basis in quantifying other measures of well vulnerability including exposure time and time to exceed a predefined threshold concentration at the well.
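The flavor of a concentration-based delineation can be conveyed with a deliberately simplified 1D stand-in for the authors' adjoint formulation: for an instantaneous release of known mass at a given distance upgradient, compute the peak concentration ever seen at the well from the 1D advection-dispersion solution, and call the source location captured if that peak exceeds the standard. All parameter values are hypothetical.

import numpy as np

def peak_conc(M, L, v=1.0, D=0.5, A=10.0):
    """Peak over time of C(t) = M/(A*sqrt(4*pi*D*t)) * exp(-(L - v*t)^2 / (4*D*t))."""
    t = np.linspace(0.01, 2000.0, 200000)
    c = M / (A * np.sqrt(4 * np.pi * D * t)) * np.exp(-(L - v * t) ** 2 / (4 * D * t))
    return c.max()

M, standard = 5.0, 0.01            # release mass (kg) and threshold (kg/m3), hypothetical
distances = np.arange(5.0, 500.0, 5.0)
inside = [L for L in distances if peak_conc(M, L) > standard]
print(max(inside))   # farthest upgradient source that still violates the standard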
Tran Duy, A.; Schrama, J.W.; Dam, van A.A.; Verreth, J.A.J.
2008-01-01
Feed intake and satiation in fish are regulated by a number of factors, of which dissolved oxygen concentration (DO) is important. Since fish take up oxygen through the limited gill surface area, all processes that need energy, including food processing, depend on their maximum oxygen uptake capacity...
Silva, Cleomacio Miguel da; Amaral, Romilton dos Santos; Santos Junior, Jose Araujo dos; Vieira, Jose Wilson; Leoterio, Dilmo Marques da Silva [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear. Grupo de Radioecologia (RAE)], E-mail: cleomaciomiguel@yahoo.com.br; Amaral, Ademir [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear. Grupo de Estudos em Radioprotecao e Radioecologia
2007-07-01
The distribution of natural radionuclides in samples from typically anomalous environments generally shows significant asymmetry as a result of outliers. To diminish statistical fluctuation, researchers in radioecology commonly use the geometric mean or the median, since the arithmetic average is not stable under the effect of outliers. As the median is not affected by anomalous values, this parameter of central tendency is the one most frequently employed for evaluating a data set containing discrepant values. On the other hand, Efron presented a non-parametric method, the so-called bootstrap, that can be used to decrease the dispersion around the central-tendency value. Generally, in radioecology, statistical procedures are used to reduce the effect of anomalous values on averages. In this context, the present study evaluated the application of the non-parametric bootstrap method (BM) for determining the average concentration of ²²⁶Ra in forage palms (Opuntia spp.) cultivated in soils with uranium anomalies on dairy farms located in the cities of Pedra and Venturosa, Pernambuco, Brazil, and discusses the utilization of this method in radioecology. The ²²⁶Ra results in forage palm samples varied from 1,300 to 25,000 mBq.kg⁻¹ (dry matter), with an arithmetic average of 5,965.86 ± 5,903.05 mBq.kg⁻¹. The average obtained using the BM was 5,963.82 ± 1,202.96 mBq.kg⁻¹ (dry matter). The use of the BM allowed an automatic filtration of the experimental data, without eliminating outliers, leading to a reduction of the dispersion around the average. As a result, the BM yielded an arithmetic average that is stable against the effects of the outliers. (author)
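A minimal sketch of the bootstrap averaging described above, using Efron's non-parametric resampling on hypothetical 226Ra values (not the study's raw data): the mean of the resample means stays close to the plain average while its dispersion drops sharply, without deleting any outlier.

import numpy as np

rng = np.random.default_rng(42)
# Invented concentrations (mBq/kg dry matter) spanning the reported range, with one outlier.
ra226 = np.array([1300, 2100, 2900, 3500, 4200, 5100, 6400, 8800, 14000, 25000], float)

boot_means = np.array([rng.choice(ra226, size=ra226.size, replace=True).mean()
                       for _ in range(10000)])
print(ra226.mean(), ra226.std(ddof=1))       # plain average and its large spread
print(boot_means.mean(), boot_means.std())   # bootstrap average with much smaller dispersion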
Collignan, Bernard; Powaga, Emilie
2014-11-01
Risk assessment of indoor radon exposure is based on the annual average indoor radon activity concentration. To assess the radon exposure in a building, measurement is generally performed over at least two months during the heating period in order to be representative of the annual average value, because the presence of radon indoors can be highly variable over time. This measurement protocol is fairly reliable but can be limiting in radon risk management, particularly during a real estate transaction, owing to the duration of the measurement and the restriction on the measurement period. A previous field study defined a rapid methodology to characterize radon entry into dwellings. The objective of this study was, first, to test this methodology in various dwellings to assess its relevance with a daily test. Second, a ventilation model was used to assess numerically the air renewal of a building, the indoor air quality throughout the year, and the annual average indoor radon activity concentration, based on local meteorological conditions, some building characteristics, and in-situ characterization of indoor pollutant emission laws. Experimental results obtained in thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of the annual average indoor radon activity concentration generally showed good agreement with measured values. These results are encouraging for allowing a procedure with a short measurement time to characterize the long-term radon potential of dwellings.
Segal, A.; Epstein, M.
2009-08-01
A central solar plant based on beam-down optics is composed of a field of heliostats, a tower reflector and a ground receiver. The tower reflector is an optical system comprising a quadric surface mirror (hyperboloid), whose upper focal point coincides with the aim point of the heliostat field and whose lower focal point is located at a specified height, coinciding with the entrance plane of the ground receiver. The optics of a tower reflector requires the use of a ground secondary concentrator, composed of a cluster of CPCs, because the quadric surface mirror always magnifies the sun image. There is an intrinsic correlation between the tower reflector position and size on one hand, and the geometry, dimensions and reflective area of the secondary concentrator on the other; both are related to the heliostat field reflective area. Obviously, when one wishes to have a smaller tower reflector by placing it closer to the upper focal point, the image created at the lower focus will be larger, resulting in a larger secondary ground concentrator. The present work analyses ways to substantially decrease the size of the ground concentrator cluster (and, implicitly, the concentrator area) via truncation, without significant sacrifice of performance, although some increase of the optical losses is inevitable. This offers a method for cost-effective design of future central solar plants utilizing beam-down optics.
Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.
2013-12-01
Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances and deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems was deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first
Kleeman, M.; Mahmud, A.
2008-12-01
California has one of the worst particulate air pollution problems in the nation, with some estimates predicting more than 5000 premature deaths each year attributed to air pollution. Climate change will modify weather patterns in California with unknown consequences for PM2.5. Previous down-scaling exercises carried out for the entire United States have typically not resolved the details associated with California's mountain-valley topography and mixture of urban-rural emissions characteristics. Detailed studies carried out for California have identified strong effects acting in opposite directions on PM2.5 concentrations, making the net prediction for climate effects on PM2.5 somewhat uncertain. More research is needed to reduce this uncertainty so that we can truly understand climate impacts on PM2.5 and public health. The objective of this research is to predict climate change effects on annual average concentrations of particulate matter (PM2.5) in California with sufficient resolution to capture the details of California's air basins. Business-as-usual scenarios generated by the Parallel Climate Model (PCM) will be down-scaled to 4-km meteorology using the Weather Research and Forecasting (WRF) model. The CIT/UCD source-oriented photochemical air quality model will be employed to predict PM2.5 concentrations throughout the entire state of California. The modeled annual average total and speciated PM2.5 concentrations for the future (2047-2049) and present-day (2004-2006) periods will be compared to determine climate change effects. The results from this study will improve our understanding of global climate change effects on PM2.5 concentrations in California.
Alcala, F. J.; Were, A.; Serrano-Ortiz, P.; Canton, Y.; Sole, A.; Villagarcia, L.; Contreras, S.; Kowalski, A. S.; Marrero, R.; Puigdefabregas, J.; Domingo, F.
2009-07-01
The chloride mass balance (CMB) method was applied in the unsaturated zone to estimate potential recharge (R_T) from rainfall in two small catchments on the southern mid-to-high slopes of the Sierra de Gador carbonate aquifer (SE Spain), in the average hydrological year 2003-04 and the unusually dry 2004-05. The unknown fractions of diffuse (R_D) and concentrated (R_C) recharge within R_T were first evaluated to fit average and lower R_T thresholds for subsequent long-term recharge modeling. Daily rainfall and actual evapotranspiration (AET) from the eddy covariance (EC) technique provided yearly R_T of 189 mm year⁻¹ in 2003-04 and 8 mm year⁻¹ in 2004-05.
2010-07-01
Table C-1 to Subpart C of Part 53: Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specification (40 CFR Part 53, Subpart C). Effective Date Note: At 75 FR 35601, June 22, 2010, Table C-1 to...
Alfafara, C G; Miura, K; Shimizu, H; Shioya, S; Suga, K; Suzuki, K
1993-02-20
A fuzzy logic controller (FLC) for the control of ethanol concentration was developed and utilized to realize the maximum production of glutathione (GSH) in yeast fed-batch culture. A conventional fuzzy controller, which uses the control error and its rate of change in the premise part of the linguistic rules, worked well when the initial error of ethanol concentration was small. However, when the initial error was large, controller overreaction resulted in an overshoot. An improved fuzzy controller was obtained to avoid controller overreaction by diagnostic determination of "glucose emergency states" (i.e., glucose accumulation or deficiency); appropriate emergency control action was then obtained by the use of weight coefficients and modification of the linguistic rules to decrease the overreaction of the controller when the fermentation was in the emergency state. The improved fuzzy controller was able to maintain a constant ethanol concentration under conditions of large initial error. The improved fuzzy control system was used in the GSH production phase of the optimal operation to indirectly control the specific growth rate μ to its critical value μc. In the GSH production phase of the fed-batch culture, the optimal solution was to control μ to μc in order to maintain a maximum specific GSH production rate. The value of μc also coincided with the critical specific growth rate at which no ethanol formation occurs. Therefore, the control of μ to μc could be done indirectly by maintaining a constant ethanol concentration, that is, zero net ethanol formation, through proper manipulation of the glucose feed rate. Maximum production of GSH was realized using the developed FLC; maximum production was a consequence of the substrate feeding strategy and cysteine addition, and the FLC was a simple way to realize the strategy.
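To make the controller structure concrete, here is a toy two-input fuzzy controller in the same spirit: triangular memberships on the ethanol error and its rate of change, a handful of rules, and a weighted-average defuzzification that outputs a relative glucose feed adjustment. Membership ranges, rules, and the single damping weight standing in for the "emergency" logic are all invented.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_feed_adjustment(error, derror, emergency_weight=1.0):
    # Memberships for ethanol error (g/L) and its rate of change (g/L/h); invented ranges.
    neg_e, zero_e, pos_e = tri(error, -2, -1, 0), tri(error, -1, 0, 1), tri(error, 0, 1, 2)
    neg_d, pos_d = tri(derror, -1, -0.5, 0), tri(derror, 0, 0.5, 1)
    # Ethanol above setpoint -> cut glucose feed; below setpoint -> raise it.
    rules = [(pos_e, -1.0), (neg_e, 1.0), (zero_e, 0.0),
             (min(pos_e, pos_d), -0.5),   # rising overshoot: cut harder
             (min(neg_e, neg_d), 0.5)]    # deepening deficit: feed more
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules) or 1.0
    return emergency_weight * num / den   # relative change in glucose feed rate

print(fuzzy_feed_adjustment(0.8, 0.3))                          # ethanol high and rising
print(fuzzy_feed_adjustment(-0.6, -0.2, emergency_weight=0.5))  # damped "emergency" response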
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
Lents, C A; Brown-Brandl, T M; Rohrer, G A; Oliver, W T; Freking, B A
2016-04-01
The objectives of this study were to determine the effect of sex, sire line, and litter size on concentrations of acyl-ghrelin and total ghrelin in plasma of grow-finish pigs and to understand the relationship of plasma concentrations of ghrelin with feeding behavior, average daily gain (ADG), and back fat in grow-finish swine. Yorkshire-Landrace crossbred dams were inseminated with semen from Yorkshire, Landrace, or Duroc sires. Within 24 h of birth, pigs were cross-fostered into litter sizes of normal (N; >12 pigs/litter) or small (S; ≤ 9 pigs/litter). At 8 wk of age, pigs (n = 240) were blocked by sire breed, sex, and litter size and assigned to pens (n = 6) containing commercial feeders modified with a system to monitor feeding behavior. Total time eating, number of daily meals, and duration of meals were recorded for each individual pig. Body weight was recorded every 4 wk. Back fat and loin eye area were recorded at the conclusion of the 12-wk feeding study. A blood sample was collected at week 7 of the study to quantify concentrations of acyl- and total ghrelin in plasma. Pigs from small litters weighed more than pigs from normal litters through the grow-finish phase. Barrows spent more time eating (P < 0.001) than gilts, but the number of meals and concentrations of ghrelin did not differ with sex of the pig. Pigs from Duroc and Yorkshire sires had lesser (P < 0.0001) concentrations of acyl-ghrelin than pigs from Landrace sires, but plasma concentrations of total ghrelin were not affected by sire breed. Concentrations of acyl-ghrelin were positively correlated with the number of meals and negatively correlated with meal length and ADG (P < 0.05). A larger number of short-duration meals may indicate that pigs with greater concentrations of acyl-ghrelin consumed less total feed, which likely explains why they were leaner and grew more slowly. Acyl-ghrelin is involved in regulating feeding behavior in pigs, and measuring acyl-ghrelin is important when trying to understand the role of this hormone in
Ahmadian, Radin
2010-09-01
This study investigated the relationship between anthocyanin concentration in different organic fruit species and the output voltage and current of a TiO2 dye-sensitized solar cell (DSSC), and hypothesized that fruits with greater anthocyanin concentration produce a higher maximum power point (MPP), which would lead to higher current and voltage. Anthocyanin dye solutions were made by crushing fresh fruits with different anthocyanin contents in 2 mL of de-ionized water and filtering. Using these test fruit dyes, multiple DSSCs were assembled such that light enters through the TiO2 side of the cell. The full current-voltage (I-V) co-variations were measured using a 500 Ω potentiometer as a variable load. Point-by-point current and voltage data pairs were measured at various incremental resistance values. The maximum power point (MPP) generated by the solar cell was defined as the dependent variable and the anthocyanin concentration of the fruit used in the DSSC as the independent variable. A regression model was used to investigate the linear relationship between the study variables. Regression analysis showed a significant linear relationship between MPP and anthocyanin concentration, with a p-value of 0.007. Fruits like blueberry and black raspberry, with the highest anthocyanin content, generated higher MPP. In a DSSC, a linear model may predict MPP based on anthocyanin concentration. This model is a first step toward finding organic anthocyanin sources in nature with the highest dye concentration for generating energy.
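The regression step itself is a one-variable linear fit; a sketch with invented data points (not the study's measurements) shows the kind of analysis reported:

from scipy.stats import linregress

anthocyanin_mg_per_100g = [25, 60, 95, 140, 245, 365]  # invented; e.g. strawberry ... black raspberry
mpp_microwatts = [11, 16, 22, 30, 47, 66]              # invented MPP readings

fit = linregress(anthocyanin_mg_per_100g, mpp_microwatts)
print(f"slope={fit.slope:.3f}  r2={fit.rvalue**2:.3f}  p={fit.pvalue:.4f}")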
Gabarro, Carolina; Turiel, Antonio; Elosegui, Pedro; Pla-Resina, Joaquim A.; Portabella, Marcos
2017-08-01
Monitoring sea ice concentration is required for operational and climate studies in the Arctic. Technologies used so far for estimating sea ice concentration have some limitations, for instance the impact of the atmosphere, the physical temperature of ice, and the presence of snow and melting. In recent years, L-band radiometry has been successfully used to study some properties of sea ice, remarkably sea ice thickness. However, the potential of satellite L-band observations for obtaining sea ice concentration had not yet been explored. In this paper, we present preliminary evidence showing that data from the Soil Moisture Ocean Salinity (SMOS) mission can be used to estimate sea ice concentration. Our method, based on a maximum-likelihood estimator (MLE), exploits the marked difference in the radiative properties of sea ice and seawater. In addition, the brightness temperatures of 100 % sea ice and 100 % seawater, as well as their combined values (polarization and angular difference), have been shown to be very stable during winter and spring, so they are robust to variations in physical temperature and other geophysical parameters. Therefore, we can use just two sets of tie points, one for summer and another for winter, for calculating sea ice concentration, leading to a more robust estimate. After analysing the full year 2014 in the entire Arctic, we found that the sea ice concentration obtained with our method agrees well with the Ocean and Sea Ice Satellite Application Facility (OSI SAF) dataset. However, when thin sea ice is present (ice thickness ≲ 0.6 m), the method underestimates the actual sea ice concentration. Our results open the way for a systematic exploitation of SMOS data for monitoring sea ice concentration, at least for specific seasons. Additionally, SMOS data can be synergistically combined with data from other sensors to monitor pan-Arctic sea ice conditions.
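The tie-point logic reduces, in its simplest form, to inverting a linear mixture: the observed multi-channel brightness temperature is modeled as C times the ice tie point plus (1 - C) times the water tie point, and under Gaussian noise the maximum-likelihood C is the least-squares projection, clipped to [0, 1]. The tie-point values below are hypothetical placeholders, not SMOS calibration values.

import numpy as np

tb_water = np.array([99.0, 64.0])   # K, hypothetical (V, H) tie points for open water
tb_ice = np.array([245.0, 235.0])   # K, hypothetical tie points for consolidated ice

def sea_ice_concentration(tb_obs):
    d = tb_ice - tb_water
    c = np.dot(tb_obs - tb_water, d) / np.dot(d, d)  # least-squares mixing fraction
    return float(np.clip(c, 0.0, 1.0))

print(sea_ice_concentration(np.array([190.0, 170.0])))  # mixed scene -> fractional concentration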
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States). Biological, Environmental and Climate Sciences Dept.
2014-12-10
ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. Residual radioactive particles from the plant need to be brought down to levels such that an individual who receives water from the new treatment plant does not receive a radiation dose in excess of 25 mrem/y, as specified in 10 CFR 20 Subpart E. The objectives of this report are: (a) to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on radionuclide concentrations in the fill material and in the water in the interstitial spaces of the fill; (b) to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building, for use by ZSRP in selecting ROCs for detailed dose assessment calculations.
Van Donkelaar, A.; Martin, R. V.; Brauer, M.; Kahn, R.; Levy, R.; Verduzco, C.; Villeneuve, P.
2010-01-01
Exposure to airborne particles can cause acute or chronic respiratory disease and can exacerbate heart disease, some cancers, and other conditions in susceptible populations. Ground stations that monitor fine particulate matter in the air (smaller than 2.5 microns, called PM2.5) are positioned primarily to observe severe pollution events in areas of high population density; coverage is very limited, even in developed countries, and is not well designed to capture long-term, lower-level exposure that is increasingly linked to chronic health effects. In many parts of the developing world, air quality observation is absent entirely. Instruments aboard NASA Earth Observing System satellites, such as the MODerate resolution Imaging Spectroradiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR), monitor aerosols from space, providing once-daily and about once-weekly coverage, respectively. However, these data are only rarely used for health applications, in part because they can retrieve the amount of aerosols only summed over the entire atmospheric column, rather than focusing just on the near-surface component, in the airspace humans actually breathe. In addition, air quality monitoring often includes detailed analysis of particle chemical composition, impossible from space. In this paper, near-surface aerosol concentrations are derived globally from the total-column aerosol amounts retrieved by MODIS and MISR. Here a computer aerosol simulation is used to determine how much of the satellite-retrieved total column aerosol amount is near the surface. The five-year average (2001-2006) global near-surface aerosol concentration shows that World Health Organization Air Quality standards are exceeded over parts of central and eastern Asia for nearly half the year.
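The core scaling step can be written in one line: a chemical transport simulation supplies, for each grid cell, the ratio of near-surface PM2.5 to total-column aerosol optical depth (AOD), and that ratio converts the satellite column retrieval into a surface concentration. The numbers below are placeholders.

model_pm25_over_aod = 85.0   # ug/m3 per unit AOD, from the simulation for one grid cell (hypothetical)
satellite_aod = 0.32         # MODIS/MISR column retrieval, dimensionless (hypothetical)

surface_pm25 = model_pm25_over_aod * satellite_aod
print(surface_pm25)          # ~27 ug/m3, to compare against air quality guidelines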
Arriagada, Manuel; Cipagauta, Carolina; Foppiano, Alberto
2013-05-01
A simple semi-empirical model to determine the maximum electron concentration in the ionosphere (NmF2) for South American locations is used to calculate NmF2 for a northern hemisphere station in the same longitude sector. NmF2 is determined as the sum of two terms, one related to photochemical and diffusive processes and the other one to transport mechanisms. The model gives diurnal variations of NmF2 representative for winter, summer and equinox conditions, during intervals of high and low solar activity. Model NmF2 results are compared with ionosonde observations made at Toluca-México (19.3°N; 260°E). Differences between model results and observations are similar to those corresponding to comparisons with South American observations. It seems that further improvement of the model could be made by refining the latitude dependencies of coefficients used for the transport term.
Dowling, Adam H
2011-06-01
The aim was to investigate the influence of the number average molecular weight and concentration of the poly(acrylic acid) (PAA) liquid constituent of a glass-ionomer (GI) restorative on the compressive fracture strength (σ) and modulus (E).
U.S. Environmental Protection Agency — The average concentrations of As, Cd, Cr, Hg, Ni and Pb in n=84 residential soil samples, in Rosia Montana area, analyzed by X-ray fluorescence spectrometry are...
S. Gannouni
2016-01-01
In a tunnel fire, the production of smoke and toxic gases remains the principal hazard to users. Heat is not considered a major direct danger to users, since temperatures at head height do not reach untenable levels until after a relatively long time, except near the fire source. However, the temperatures under the ceiling can exceed threshold conditions and can thus cause structural collapse of the infrastructure. This paper presents a numerical analysis of smoke hazard in tunnel fires with different aspect ratios by large eddy simulation. Results show that the CO concentration increases as the aspect ratio decreases and decreases with the longitudinal ventilation velocity. CFD-predicted maximum smoke temperatures are compared to the values calculated using the model of Li et al. and then to those given by the empirical equation proposed by Kurioka et al.; reasonably good agreement has been obtained. The backlayering length decreases as the ventilation velocity increases, and this decrease follows a good exponential decay. The dimensionless interface height and the region of bad visibility increase with the aspect ratio of the tunnel cross-sectional geometry.
高艳普; 王向东; 王冬青
2015-01-01
An algorithm for maximum likelihood parameter estimation of multivariable controlled autoregressive moving average (CARMA-like) systems is presented. The algorithm transforms the CARMA-like system into m identification models (where m is the number of outputs), each of which contains only one parameter vector to be estimated; maximum likelihood estimation of each identification model's parameter vector then yields the parameter estimates of the whole system. Simulation results verify the effectiveness of the proposed algorithm.
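A rough sketch of the decomposition idea using statsmodels: simulate a two-output controlled ARMA-like system, then fit each output channel separately by maximum likelihood with the lagged input as an exogenous regressor. SARIMAX fits a regression-with-ARMA-errors form, so it is only a stand-in for the paper's tailored algorithm; orders and data are invented.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
u = rng.normal(size=n)                       # shared input signal
e = rng.normal(size=(n, 2))
y = np.zeros((n, 2))
for t in range(1, n):                        # first-order CARMA-like simulation
    y[t, 0] = 0.6 * y[t-1, 0] + 1.0 * u[t-1] + e[t, 0] + 0.4 * e[t-1, 0]
    y[t, 1] = -0.3 * y[t-1, 1] + 0.5 * u[t-1] + e[t, 1] - 0.2 * e[t-1, 1]

u_lag = np.concatenate(([0.0], u[:-1]))      # lagged input as exogenous regressor
for i in range(2):                           # one ML fit per output channel
    res = sm.tsa.SARIMAX(y[:, i], exog=u_lag, order=(1, 0, 1)).fit(disp=False)
    print(f"output {i}:", np.round(res.params, 2))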
Mehdizadeh, Arash; Gardiner, Bruce S; Lavagnino, Michael; Smith, David W
2017-03-13
In this study, we propose a method for quantitative prediction of changes in concentrations of a number of key signaling, structural and effector molecules within the extracellular matrix of tendon. To achieve this, we introduce the notion of elementary cell responses (ECRs). An ECR defines a normal reference secretion profile of a molecule by a tenocyte in response to the tenocyte's local strain. ECRs are then coupled with a model for mechanical damage of tendon collagen fibers at different straining conditions of tendon and then scaled up to the tendon tissue level for comparison with experimental observations. Specifically, our model predicts relative changes in ECM concentrations of transforming growth factor beta, interleukin 1 beta, collagen type I, glycosaminoglycan, matrix metalloproteinase 1 and a disintegrin and metalloproteinase with thrombospondin motifs 5, with respect to tendon straining conditions that are consistent with the observations in the literature. In good agreement with a number of in vivo and in vitro observations, the model provides a logical and parsimonious explanation for how excessive mechanical loading of tendon can lead to under-stimulation of tenocytes and a degenerative tissue profile, which may well have bearing on a better understanding of tendon homeostasis and the origin of some tendinopathies.
Pannone, Marilena
2014-03-01
A large-time analytical solution is proposed for the spatial variance and coefficient of variation of the depth-averaged concentration due to instantaneous, cross sectionally uniform solute sources in pseudorectangular open channel flows. The mathematical approach is based on the use of the Green functions and on the Fourier decomposition of the depth-averaged velocities, coupled with the method of the images. The variance spatial trend is characterized by a minimum at the center of the mass and two mobile, decaying symmetrical peaks which, at very large times, are located at the inflexion points of the average Gaussian distribution. The coefficient of variation, which provides an estimate of the expected percentage deviation of the depth-averaged point concentrations about the section-average, exhibits a minimum at the center which decays like t⁻¹ and only depends on the river diffusive time scale. The defect of cross-sectional mixing quickly increases with the distance from the center, and almost linearly at large times. Accurate numerical Lagrangian simulations were performed to validate the analytical results in preasymptotic and asymptotic conditions, referring to a particularly representative sample case for which cross-sectional depth and velocity measurements were known from a field survey. In addition, in order to discuss the practical usefulness of computing large-time concentration spatial moments in river flows, and resorting to directly measured input data, the order of magnitude of section-averaged concentrations and corresponding coefficients of variation was estimated in field conditions and for hypothetical contamination scenarios, considering a unit normalized mass impulsively injected across the transverse section of 81 U.S. rivers.
Lee, Kab-Jae; Kim, Sol; Lee, Ju; Oh, Jae-Eung
2003-05-01
A brushless dc (BLDC) motor, which has a permanent magnet (PM) component, is a potential candidate for hybrid or electric vehicle applications. Minimizing the BLDC motor size is an important application requirement, usually satisfied by adopting a high-performance permanent magnet or improved winding methods. The PM configuration is also a critical design point. This article presents the effect of the PM configuration on motor performance, especially the maximum torque. Four representative BLDC motor types are analytically investigated under the condition that the volume of the PM and magnetic material is constant. An embedded interior permanent magnet motor has the best torque performance, with a maximum torque more than 1.5 times that of the surface-mounted permanent magnet motor. Back electromotive force and instantaneous torque performance are also investigated.
Oudyn, Frederik W; Lyons, David J; Pringle, M J
2012-01-01
Many scientific laboratories follow, as standard practice, a relatively short maximum holding time (within 7 days) for the analysis of total suspended solids (TSS) in environmental water samples. In this study we subsampled from bulk water samples stored at ∼4 °C in the dark, then analysed for TSS at time intervals up to 105 days after collection. The nonsignificant differences in TSS results observed over time demonstrate that storage at ∼4 °C in the dark is an effective method of preserving samples for TSS analysis, far beyond the 7-day standard practice. Extending the maximum holding time will ease the pressure on sample collectors and laboratory staff, who until now have had to determine TSS within an impractically short period.
Mary Hokazono
CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sβ0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sβ+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. Time-averaged maximum mean velocity was significantly different between the genotypes and correlated with hematological characteristics.
Regina A. A.; I. Mohammad Halim Shah
2010-12-01
The study models emission from a stack to estimate ground-level concentrations from a palm oil mill. The case study is a mill located in Kuala Langat, Selangor. The emission source is the boiler stacks. The exercise estimates the ground-level dust concentrations in the surrounding areas through the use of modelling software. The surrounding area is relatively flat, an industrial area surrounded by factories and with palm oil plantations in the outskirts. The model was used to gauge the worst-case scenario. Ambient air concentrations were gathered to calculate the increase over localized conditions. Keywords: emission, modelling, palm oil mill, particulate, POME
Wezel AP van; Posthumus R; Vlaardingen P van; Crommentuijn T; Plassche EJ van de; CSR
1999-01-01
In this report, maximum permissible concentrations (MPCs) and negligible concentrations (NCs) are derived for di-n-butylphthalate (DBP) and di(2-ethylhexyl)phthalate (DEHP). Phthalates are often mentioned as suspected endocrine disrupters. Data with endpoints related to the endocrine or reproductive
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Fleming, K; Thompson-Crispi, K A; Hodgins, D C; Miglior, F; Corredig, M; Mallard, B A
2016-03-01
The objective of this study was to evaluate IgG and β-lactoglobulin (β-LG) concentrations in colostrum and milk of Canadian Holsteins (n=108) classified as high (H), average (A), or low (L) for antibody-mediated (AMIR) or cell-mediated immune responses (CMIR) based on estimated breeding values. It was hypothesized that H-AMIR and H-CMIR cows produce colostrum (first milking) and milk (d 5 postcalving) with higher concentrations of IgG and β-LG. Data for IgG and β-LG in colostrum and milk were analyzed independently using mixed linear models. Least squares means were compared using Tukey's test. Cows classified as H-AMIR had higher IgG and β-LG concentrations in colostrum compared with A- and L-AMIR cows; 84% of H-AMIR, 69% of A-AMIR, and 68% of L-AMIR cows had over 5,000 mg/dL IgG in colostrum. No differences in IgG and β-LG concentrations in colostrum were noted among cows ranked on CMIR or in milk of cows ranked on AMIR. β-Lactoglobulin and IgG concentrations were positively correlated in colostrum. Breeding cows for H-AMIR status may reduce failure of passive transfer of IgG in their calves; β-LG may play a role in bovine immune defenses. Colostrum from H-AMIR cows may serve as a more economical feedstock source for manufacturing natural health products.
Hermes, Anna L.; Sikes, Elisabeth L.
2016-10-01
The pathway and fate of land-derived suspended particulate organic matter (POM) as it passes through estuaries remains a poorly constrained component of coastal carbon dynamics. The δ13C of bulk POC (particulate organic carbon; δ13C-POC) and n-alkane biomarkers were used to assess the proportion of algal- and land- (vascular plant) derived POM through the Delaware Estuary on five cruises in 2010-2011. We found that POC was highly correlated with suspended sediment concentrations (SSC). Higher SSC was present in bottom waters, causing bottom waters to have consistently higher concentrations of POC than surface waters, with the bottom waters of the estuarine turbidity maximum (ETM) exhibiting maximum POC concentrations for all seasons and flow regimes. Algal-derived POM seasonally affected the δ13C-POC and n-alkane geochemical signatures of surface waters, whereas bottom waters were dominated by vascular plant-derived POM. δ13C-POC results suggested a gradual loss of vascular plant-derived POM between the riverine and marine endmember stations. In contrast, n-alkane concentrations peaked in bottom waters of the ETM at 2-5 times surface water concentrations. Indices of the relative proportions of n-alkanes, and of n-alkanes as a proportion of total POC, decreased considerably downstream of the ETM. These biomarker analyses suggest enhanced loss of land-derived material across the ETM, and that the ETM acts as a geochemical filter for vascular plant-derived POM in a classic well-mixed estuary.
Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L
2014-08-01
Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique was used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
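The abstract does not reproduce ERG's algorithm, but the variance-reduction idea behind nearest-neighbor averaging can be sketched with a Currie-style critical level, assuming Poisson-distributed background counts:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
B = 100.0                                   # mean background counts per cell
counts = rng.poisson(B, size=(50, 50)).astype(float)

# 3x3 nearest-neighbor average of the scan map.
smoothed = uniform_filter(counts, size=3, mode="nearest")

# Currie critical level for a paired blank: Lc = 1.645 * sqrt(2 * var_B).
# Averaging N = 9 neighbors cuts the background variance roughly by 1/9,
# so Lc (and hence the MDC) drops by about a factor of 3 where the
# background is uniform.
lc_single = 1.645 * np.sqrt(2 * B)
lc_nna = 1.645 * np.sqrt(2 * B / 9)
print(lc_single, lc_nna, counts.std(), smoothed.std())
```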
Schiefelbein, Sarah; Fröhlich, Alexander; John, Gernot T; Beutler, Falco; Wittmann, Christoph; Becker, Judith
2013-08-01
Dissolved oxygen plays an essential role in aerobic cultivation, especially because of its low solubility. Under unfavorable conditions of mixing and vessel geometry it can become limiting. This, however, is difficult to predict, and thus the right choice of an optimal experimental set-up is challenging. To overcome this, we developed a method which allows a robust prediction of the dissolved oxygen concentration during aerobic growth. It integrates newly established mathematical correlations for the determination of the volumetric gas-liquid mass transfer coefficient (kLa) in disposable shake-flasks from the filling volume, the vessel size and the agitation speed. Tested for the industrial production organism Corynebacterium glutamicum, this enabled a reliable design of culture conditions and allowed prediction of the maximum possible cell concentration without oxygen limitation.
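The paper's fitted correlations are not given in the abstract; the sketch below assumes a generic power-law form for kLa (all coefficients hypothetical) and the standard oxygen balance, in which the maximum cell concentration is reached when oxygen uptake equals oxygen transfer:

```python
# Hypothetical stand-in for the paper's fitted shake-flask correlations.
def kla_shake_flask(n_rpm, v_fill_ml, v_flask_ml, a=1e-2, b=1.2, c=-0.85):
    """Assumed power-law correlation; returns kLa in 1/h."""
    return a * n_rpm**b * (v_fill_ml / v_flask_ml)**c

kla = kla_shake_flask(n_rpm=250, v_fill_ml=25, v_flask_ml=250)

c_star = 0.21e-3   # O2 saturation concentration, mol/L (medium-dependent)
c_crit = 0.02e-3   # critical dissolved O2 level, mol/L (assumed)
q_o2   = 2.5e-3    # specific O2 uptake rate, mol/(g h) (assumed)

# Oxygen-limited ceiling: q_O2 * X_max = kLa * (C* - C_crit).
x_max = kla * (c_star - c_crit) / q_o2     # g/L dry cell weight
print(f"kLa = {kla:.1f} 1/h, predicted X_max = {x_max:.2f} g/L")
```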
Naseeruddin, Shaik; Desai, Suseelendra; Venkateswar Rao, L
2016-02-01
Two grams of delignified substrate at 10% (w/v) loading was subjected to biphasic dilute acid hydrolysis using phosphoric acid, hydrochloric acid and sulfuric acid separately at 110 °C for 10 min in phase-I and 121 °C for 15 min in phase-II. Combinations of acid concentrations in the two phases were varied to maximize holocellulose hydrolysis while releasing fewer inhibitors, in order to select the most suitable acid and its concentration. Among the three acids, sulfuric acid at 1% and 2% (v/v) in the two phases hydrolyzed the maximum holocellulose, 25.44±0.44%, releasing 0.51±0.02 g/L of phenolics and 0.12±0.002 g/L of furans, respectively. Further hydrolysis of the delignified substrate using the selected acid, varying reaction time and temperature, hydrolyzed 55.58±1.78% of holocellulose, releasing 2.11±0.07 g/L and 1.37±0.03 g/L of phenolics and furans, respectively, at 110 °C for 45 min in phase-I and 121 °C for 60 min in phase-II.
Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L
2011-12-15
Time-zero 2D ¹³C HSQC (HSQC₀) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC₀ spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero ¹H-¹³C HSQC₀ in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC₀ with those obtained by the original manual phase-cycled HSQC₀ approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
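A minimal sketch of the two-reference step: with the stated linear volume-concentration relationship, the DSS and MES peak volumes define the calibration line, and any deconvoluted peak volume converts directly to an absolute concentration (all numbers hypothetical):

```python
import numpy as np

# Internal references: DSS at the low end, MES at the high end.
ref_conc = np.array([0.5, 10.0])       # mM (hypothetical)
ref_vol  = np.array([1.2e5, 2.35e6])   # HSQC0 peak volumes (hypothetical)

# Two-point linear calibration V = m*c + b.
m, b = np.polyfit(ref_conc, ref_vol, deg=1)

def volume_to_concentration(v):
    """Convert an FMLR-extracted peak volume to concentration (mM)."""
    return (v - b) / m

print(f"{volume_to_concentration(9.8e5):.2f} mM")
```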
Ovalle, E. M.; Bravo, M. A.; Villalobos, C. U.; Foppiano, A. J.
2013-10-01
Ionospheric variability observed prior to major earthquakes has been studied for decades. In particular, in many such studies the identification of ionospheric precursors of large earthquakes has been regarded as a specific goal. This paper analyses observations of the maximum electron concentration (NmF2) over Concepción (36.8°S; 73.0°W) and of the total electron content (TEC) for an area covering the rupture zone corresponding to the very large Chile earthquake of 27 February 2010. The analyses used here are similar to those published before for many earthquakes in Taiwan, Japan and Russia. Possible NmF2 and TEC precursors are compared with other precursors proposed for the same earthquake using different TEC determinations and satellite observations of electron/ion concentration, energetic particle bursts and electromagnetic emissions. Some possible precursors derived from the various observations are consistent with each other. However, none can be unambiguously associated with the Chilean earthquake.
Gang Li
2012-01-01
Vertical patterns of early summer chlorophyll a (Chl a) concentration in the Indian Ocean are presented, as well as the variations of depth and size-fractionated Chl a in the deep chlorophyll maximum (DCM). A total of 38 stations were investigated from 12 April to 5 May 2011, with 8 discrete-depth samples (7 fixed and 1 variable, at the real DCM) measured at each station. Depth-integrated Chl a concentration (∑Chl a) varied from 11.5 to 26.8 mg m⁻², whereas Chl a content at the DCM ranged from 0.17 to 0.57 μg L⁻¹, with picophytoplankton (<3 μm) accounting for 82% to 93%. The DCM depth varied from 55.6 to 91 m and shoaled northward. Moreover, our results indicated that ∑Chl a could be underestimated by up to 9.3% with a routine sampling protocol of collecting samples only at the 7 fixed depths, as the real DCM was missed. The underestimation was negatively correlated with the DCM depth when it varied from 55.6 to 71.3 m (r=−0.63, P<0.05) but positively correlated when it ranged from 75.8 to 91 m (r=0.68, P<0.01). This indicates that in the Indian Ocean, the greater the departure of the DCM from 75 m depth, the greater the underestimation of integrated Chl a concentration if the real DCM is missed.
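The underestimation mechanism is easy to reproduce: trapezoidal integration of a discrete profile that misses the DCM sample lowers the depth integral. A sketch with a hypothetical profile:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal depth integration of a discrete profile."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Hypothetical Chl a profile (mg m^-3) at the 7 fixed depths (m) ...
fixed_depths = np.array([0.0, 25, 50, 75, 100, 150, 200])
fixed_chla   = np.array([0.08, 0.10, 0.15, 0.30, 0.20, 0.05, 0.02])

# ... plus one extra sample at the real DCM, as in the survey protocol.
dcm_depth, dcm_chla = 62.0, 0.45
i = np.searchsorted(fixed_depths, dcm_depth)
full_depths = np.insert(fixed_depths, i, dcm_depth)
full_chla   = np.insert(fixed_chla, i, dcm_chla)

with_dcm    = trapezoid(full_chla, full_depths)
without_dcm = trapezoid(fixed_chla, fixed_depths)
print(f"integrated Chl a underestimated by "
      f"{100 * (1 - without_dcm / with_dcm):.1f}%")
```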
Foote, A P; Tait, R G; Keisler, D H; Hales, K E; Freetly, H C
2016-04-01
The objective of this experiment was to determine the association of circulating plasma leptin concentrations with production and body composition measures of finishing beef steers and heifers, and to determine whether multiple sampling time points improve the associations of plasma leptin concentrations with production and body composition traits. Individual dry matter intake (DMI) and ADG were determined over 84 d using steers and heifers (n = 127 steers and n = 109 heifers). Blood was collected on day 0, day 42, and day 83 for determination of plasma leptin concentrations. Leptin concentrations were greater in heifers than in steers on day 0; leptin concentrations increased in both sexes over time but did not differ between sexes on day 83. Leptin concentrations at all 3 time points, and their mean, were positively associated with DMI (P ≤ 0.006), with the mean leptin concentration explaining 8.3% of the variance of DMI. Concentrations of leptin at day 42, day 83, and the mean of all 3 time points were positively associated with ADG (P ≤ 0.011). Mean leptin concentration was negatively associated with gain:feed ratio and positively associated with residual feed intake (RFI), indicating that more efficient cattle had lower leptin concentrations. However, leptin concentrations explained very little of the variation in RFI (≤3.2% of the variance). Leptin concentrations were positively associated with body fat measured by ultrasonography at the 12th rib and over the rump, with mean leptin concentration explaining 21.9% and 12.7% of the variance in 12th-rib and rump fat thickness, respectively. The same trend was observed with carcass composition, where leptin concentrations were positively associated with 12th-rib fat thickness, USDA-calculated yield grade (YG), and marbling score (P ≤ 0.006), and mean leptin concentration explained 16.8, 18.2, and 4.6% of the variance for 12th-rib fat thickness, yield grade, and marbling score, respectively.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversion…
Rodrigues, Elsa Teresa; Pardal, Miguel Ângelo; Gante, Cristiano; Loureiro, João; Lopes, Isabel
2017-02-01
The main goal of the present study was to determine and validate an aquatic Maximum Acceptable Concentration-Environmental Quality Standard (MAC-EQS) value for the agricultural fungicide azoxystrobin (AZX). Assessment factors were applied to short-term toxicity data using the lowest EC50, and the Species Sensitivity Distribution (SSD) method was applied as well. Both ways of EQS generation were applied to a freshwater toxicity dataset for AZX based on available data, and to marine toxicity datasets for AZX and Ortiva® (a commercial formulation of AZX) obtained in the present study. A high interspecific variability in AZX sensitivity was observed in all datasets, with the copepod Eudiaptomus graciloides (LC50,48h = 38 μg L⁻¹) and the gastropod Gibbula umbilicalis (LC50,96h = 13 μg L⁻¹) being the most sensitive freshwater and marine species, respectively. MAC-EQS values derived using the lowest EC50 (≤0.38 μg L⁻¹) were more protective than those derived using the SSD method (≤3.2 μg L⁻¹). After comparing the MAC-EQS values estimated in the present study to the smallest AA-EQS available, which protects against prolonged exposure to AZX, the MAC-EQS values derived using the lowest EC50 were considered overprotective, and a MAC-EQS of 1.8 μg L⁻¹ was validated and recommended for AZX for the water column. This value was derived from marine toxicity data, which highlights the importance of testing marine organisms. Moreover, Ortiva affects the most sensitive marine species to a greater extent than AZX, and marine species are more sensitive than freshwater species to AZX. A risk characterization ratio higher than one led to the conclusion that AZX might pose a high risk to the aquatic environment. In a wider conclusion, before new pesticides are approved, we suggest improving the Tier 1 prospective Ecological Risk Assessment by increasing the number of short-term data and applying the SSD approach, in order to ensure the safety of…
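A minimal sketch of the SSD route to a MAC-EQS: fit a log-normal distribution to the acute EC50/LC50 values, take its 5th percentile (HC5), and divide by an assessment factor. Only the two toxicity values quoted above are from the study; the rest are hypothetical placeholders:

```python
import numpy as np
from scipy import stats

# Acute EC50/LC50 values in ug/L; first two from the abstract, the
# remainder hypothetical stand-ins for the full dataset.
ec50 = np.array([13.0, 38.0, 120.0, 210.0, 450.0, 900.0])

log_ec50 = np.log10(ec50)
mu, sigma = log_ec50.mean(), log_ec50.std(ddof=1)

# HC5: concentration hazardous to only 5% of species under the fitted SSD.
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)

assessment_factor = 10.0          # choice depends on data quality (assumed)
mac_eqs = hc5 / assessment_factor
print(f"HC5 = {hc5:.2f} ug/L, MAC-EQS = {mac_eqs:.2f} ug/L")
```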
NONE
2015-11-01
The book on the MAK (maximum permissible concentrations at the place of work) and BAT (biological tolerance values for working materials) value list 2015 includes the following chapters: (a) Maximum permissible concentrations at the place of work: definition, application and determination of MAK values, list of materials; carcinogenic working materials, sensitizing working materials, aerosols, limiting the exposition peaks, skin resorption, MAK values during pregnancy, germ cell mutagens, specific working materials; (b) Biological tolerance values for working materials: definition and application of BAT values, list of materials, carcinogenic working materials, biological guide values, biological working material reference values.
NONE
2013-08-01
The book on the MAK (maximum permissible concentrations at the place of work) and BAT (biological tolerance values for working materials) value list 2013 includes the following chapters: (a) Maximum permissible concentrations at the place of work: definition, application and determination of MAK values, list of materials; carcinogenic working materials, sensitizing working materials, aerosols, limiting the exposition peaks, skin resorption, MAK values during pregnancy, germ cell mutagens, specific working materials; (b) Biological tolerance values for working materials: definition and application of BAT values, list of materials, carcinogenic working materials, biological guide values, biological working material reference values.
NONE
2014-11-01
The book on the MAK (maximum permissible concentrations at the place of work) and BAT (biological tolerance values for working materials) value list 2014 includes the following chapters: (a) Maximum permissible concentrations at the place of work: definition, application and determination of MAK values, list of materials; carcinogenic working materials, sensitizing working materials, aerosols, limiting the exposition peaks, skin resorption, MAK values during pregnancy, germ cell mutagens, specific working materials; (b) Biological tolerance values for working materials: definition and application of BAT values, list of materials, carcinogenic working materials, biological guide values, biological working material reference values.
NONE
2017-08-01
The MAK and BAT values list 2017 includes the maximum permissible concentrations at the place of work and biological tolerance values for working materials. The following working materials are covered: carcinogenic working materials, sensitizing materials and aerosols. The report discusses the restriction of exposure peaks, skin resorption, MAK (maximum working place concentration) values during pregnancy, germ cell mutagens and specific working materials. Importance and application of BAT (biological working material tolerance) values, list of materials, carcinogens, biological guide values and reference values are also included.
Kim, Seung-Kyu; Park, Jong-Eun
2014-06-01
Despite remarkable achievements for some chemicals, a field-measurement technique has not been advanced for volatile hydrophobic organic chemicals (HOCs) that are the subject of international concern. This study assesses the applicability of passive air sampling (PAS) by comparing PUF-PAS and its modified SIP-PAS, which was made by impregnating XAD-4 powder into PUF; it overviews the principles of PAS, screens sensitive parameters, and determines the uncertainty range of PAS-derived concentrations. The PAS air sampling rate determined in this study, corrected against a co-deployed low-volume active air sampler (LAS) using neutral PFCs as model chemicals, was ~1.2 m³ day⁻¹. Our assessment shows that the improved sorption capacity of a SIP lengthens the PAS deployment duration by expanding the linear uptake range, and thereby enlarges the effective air sampling volume and the detection frequency of chemicals at trace levels. Consequently, volatile chemicals can be collected over sufficiently long times without reaching equilibrium when using SIP, while this is not possible for PUF. The parameter that most strongly influences the PAS-derived air concentration (CA) is the air-side mass transfer coefficient (kA), implying the necessity of spiking depuration chemicals (DCs), because this parameter is strongly related to meteorological conditions. Uncertainty in the partition coefficients (KPSM-A or KOA) influences the PAS-derived CA to a greater extent for lower-KPSM-A chemicals. The PAS-derived CA has an uncertainty range of one-half to three times the calculated value. This work is expected to establish solid grounds for improving field-measurement techniques for HOCs.
Maringer, F.J. [Bundesversuchs- und Forschungsanstalt Arsenal, Vienna (Austria); Akis, M.C.; Stadtmann, H. [Oesterreichisches Forschungszentrum Seibersdorf GmbH (Austria); Kaineder, H. [Amt der Oberoesterreichischen Landesregierung, Linz (Austria); Kindl, P. [Technische Univ., Graz (Austria); Kralik, C. [Bundesanstalt fuer Lebensmitteluntersuchung und -forschung, Vienna (Austria); Lettner, H.; Winkler, R. [Salzburg Univ. (Austria); Ringer, W. [Salzburg Univ. (Austria); Atominstitut der Oesterreichischen Universitaeten, Vienna (Austria)]
1998-12-31
Within the Austrian radon mitigation project 'SARAH', different methods of radon diagnosis were used. For these investigations a 'Blower-Door' was employed to apply a low pressure (-50 Pa) and to look for radon entry paths. During radon sniffing, the team had the idea of measuring the radon concentration in the Blower-Door exhaust air to obtain an estimate of the long-term average radon concentration in the building. In this paper the new method and its possible applications are presented. The estimation of the average radon entry rate and the average long-term radon concentration ('annual mean value'), and the evaluation of mitigation success (the extent of radon reduction), are described and discussed. The advantage of this procedure is that an estimate of the annual mean indoor radon concentration of a building is obtained after only about three hours of measurement.
杨帆; 周亮; 林蔚; 徐建刚
2016-01-01
Based on NASA's global annual average atmospheric PM2.5 concentration grids, we computed zonal mean PM2.5 concentrations for African countries and built a spatial database, then used a gravity model, an ESDA model and GIS spatial statistical analysis to explore the evolution of the spatial pattern of atmospheric PM2.5 pollution across 52 main African countries (regions) from 2001 to 2010, classifying them into 8 types according to their time-series features. The results show that: (1) From 2001 to 2010, African PM2.5 pollution broadly exhibited a spatial pattern of "high in the centre, low in the north and south; high in the west, low in the east"; high-value areas were concentrated in Nigeria, Congo and Cameroon near the Gulf of Guinea in West Africa, while low-value areas were widely distributed across North Africa, South Africa and the Indian Ocean coastal areas and islands of southeastern Africa. (2) Spatial autocorrelation analysis based on the ESDA model found that "high-high" PM2.5 hotspots clustered near the Gulf of Guinea, while "low-low" cold spots concentrated in South Africa, Mozambique and Madagascar along the southeastern Indian Ocean coast. (3) Temporally, the annual average PM2.5 concentration in Africa showed a clear downward trend from 2001 to 2010, with 32 countries recording lower annual average PM2.5 concentrations in 2010 than in 2001. (4) The main causes of this spatial pattern can be analysed in terms of natural environmental conditions and socioeconomic factors: the Gulf of Guinea coast is the most heavily PM2.5-polluted region of Africa because it is densely populated and highly dependent on the petroleum industry, whereas southeastern Africa is the least polluted, owing to its favourable natural environment and low-pollution pillar industries.
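The hot/cold-spot analysis referred to above rests on spatial autocorrelation statistics such as Moran's I. A minimal sketch with hypothetical country means and a hypothetical contiguity matrix:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I: (n / sum(w)) * (z'Wz) / (z'z)."""
    z = x - x.mean()
    return (len(x) / w.sum()) * (z @ w @ z) / (z @ z)

# Hypothetical PM2.5 means (ug/m^3) for five countries and a binary
# neighbor (contiguity) matrix; both are illustrations only.
pm25 = np.array([38.0, 35.0, 12.0, 10.0, 11.0])
w = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

print(f"Moran's I = {morans_i(pm25, w):.3f}")  # > 0 suggests clustering
```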
Mascarenhas-Melo, Filipa; Palavra, Filipe; Marado, Daniela; Sereno, José; Teixeira-Lemos, Edite; Freitas, Isabel; Isabel-Mendonça, Maria; Pinto, Rui; Teixeira, Frederico; Reis, Flávio
2013-01-01
This study intended to determine the impact of HDL-c and/or TG levels on patients with average LDL-c concentration, focusing on lipid, oxidative, inflammatory, and angiogenic profiles. Patients with cardiovascular risk factors (n = 169) were divided into 4 subgroups, combining normal and low HDL-c with normal and high TGs. The following data were analyzed: BP, BMI, waist circumference and serum glucose, Total-c, TGs, LDL-c, oxidized LDL, total HDL-c and HDL subpopulations, paraoxonase-1 (PON1) activity, hsCRP, uric acid, TNF-α, adiponectin, VEGF, and iCAM1. The two populations with increased TG levels, regardless of normal or low HDL-c, presented obesity and higher waist circumference, Total-c, LDL-c, Ox-LDL, and uric acid. Adiponectin concentration was significantly lower and VEGF was higher in the population with cumulatively low values of HDL-c and high values of TGs, while HDL quality was reduced in the populations with impaired values of HDL-c and/or TGs, as reflected in reduced large and increased small HDL subfractions. In conclusion, in a population with cardiovascular risk factors, low HDL-c and/or high TG concentrations seem to be associated with a poor cardiometabolic profile, despite average LDL-c levels. This condition, often called residual risk, is better evidenced by using both traditional and nontraditional CV biomarkers, including large and small HDL subfractions, Ox-LDL, adiponectin, VEGF, and uric acid.
Maximum permissible concentrations of uranium in air
Adams, N
1973-01-01
The retention of uranium by bone and kidney has been re-evaluated, taking account of recently published data for a man who had been occupationally exposed to natural uranium aerosols and for adults who had ingested uranium at normal dietary levels. For life-time occupational exposure to uranium aerosols the new retention functions yield a greater retention in bone and a smaller retention in kidney than the earlier ones, which were based on acute intakes of uranium by terminal patients. Hence bone replaces kidney as the critical organ. The (MPC)ₐ for uranium-238 on radiological considerations, using the current (1959) ICRP lung model with the new retention functions, is slightly smaller than for the earlier functions, but the (MPC)ₐ determined by chemical toxicity remains the most restrictive.
Steffy, D. A.; Nichols, A.; Morgan, J.; Gibbs, R.
2013-12-01
Sediment samples were collected during the fall of 2010 and 2011 from across the Gulf of Mexico outer continental shelf (OCS). A Tukey range test was used to compare samples from the relict sand deposits of the northern Gulf OCS with the relict carbonate sediments of the western Florida OCS. Tests indicate that nickel, vanadium, and lead were significantly higher (p < 0.05) in seasonal average concentrations in the relict sand deposits closer to the Deepwater Horizon well. These metals also significantly decreased (p < 0.05) from 2010 to 2011 in each region. These changes can be explained by the presence of a new source for these metals in the crude oil released from the Deepwater Horizon oil spill during the spring of 2010. Chromium and thallium did not vary seasonally or between the two areas of the OCS investigated.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Siegel, Irving H.
The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong… approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion…
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
赵明; 谢松梅; 杨劲; 魏敏吉
2014-01-01
In the assessment of bioequivalence in China, the equivalence interval limits for the maximum concentration (Cmax) are in a transition phase between old and new standards. When a Cmax result falls between the old and new standards, how to make a review decision requires careful consideration. This paper introduces principles and lines of thought for bioequivalence assessment of the area under the curve (AUC) and Cmax, illustrated with two drug review examples, which may be helpful for the development and review of generics.
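For context, the internationally common acceptance criterion requires the 90% confidence interval of the test/reference geometric mean ratio to fall within 80.00-125.00%. A simplified paired-data sketch on log-transformed Cmax ratios (values hypothetical; a real crossover analysis would also model period and sequence effects):

```python
import numpy as np
from scipy import stats

# Hypothetical within-subject test/reference Cmax ratios from a crossover.
ratios = np.array([1.05, 0.92, 1.10, 0.97, 1.02, 0.88,
                   1.15, 0.95, 1.01, 0.99, 1.07, 0.93])
d = np.log(ratios)

n = len(d)
mean, se = d.mean(), d.std(ddof=1) / np.sqrt(n)
t90 = stats.t.ppf(0.95, df=n - 1)          # two one-sided tests at 5%

lo, hi = np.exp([mean - t90 * se, mean + t90 * se]) * 100
print(f"90% CI for Cmax ratio: {lo:.1f}% - {hi:.1f}%")
print("bioequivalent" if lo >= 80.0 and hi <= 125.0 else "not shown")
```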
Kethireddy, V; Oey, I; Jowett, Tim; Bremer, P
2016-09-16
Sub-lethal injury within a microbial population, due to processing treatments or environmental stress, is often assessed as the difference between the number of cells recovered on non-selective media and the number recovered on a "selective medium" containing a predetermined maximum non-inhibitory concentration (MNIC) of a selective agent. However, as knowledge of the cell metabolic response to injury, population diversity and population dynamics has increased, the rationale behind this conventional approach to quantifying sub-lethal injury must be scrutinized further. This study reassessed the methodology used to quantify sub-lethal injury for Saccharomyces cerevisiae cells (≈4.75 log CFU/mL) exposed to either a mild thermal treatment (45 °C for 0, 10 and 20 min) or a mild pulsed electric field treatment (field strengths of 8.0-9.0 kV/cm and energy levels of 8, 14 and 21 kJ/kg). Treated cells were plated onto either Yeast Malt agar (YM) or YM containing NaCl as a selective agent at 5-15% in 1% increments. The impact of the sub-lethal stress due to the initial processing, the stress due to selective agents in the plating media, and the subsequent variation in inhibition following the treatments was assessed based on the CFU count (cell numbers). ANOVA and a generalised least squares model indicated significant effects of media, treatments, and their interaction (P<0.05) on cell numbers. It was shown that the concentration of the selective agent used dictated the extent of sub-lethal injury recorded, owing to the interaction effects of the selective component (NaCl) in the recovery media. Our findings highlight a potential common misunderstanding of how culture conditions impact on sub-lethal injury. Interestingly, for S. cerevisiae cells the number of cells recovered at different NaCl concentrations in the media appears to provide valuable information about the mode of injury, the comparative efficacy of different processing regimes and the inherent degree of resistance within a population. This…
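The conventional quantification that the study re-examines is a simple difference in recovery between non-selective and selective media. A sketch with hypothetical counts:

```python
import numpy as np

# Hypothetical CFU/mL recovered after a mild treatment.
cfu_nonselective = 4.2e4                                  # YM agar
nacl_percent  = np.array([5, 8, 11, 15])
cfu_selective = np.array([3.9e4, 2.8e4, 1.1e4, 2.0e3])    # YM + NaCl

injury_pct = 100 * (1 - cfu_selective / cfu_nonselective)
log_diff   = np.log10(cfu_nonselective) - np.log10(cfu_selective)

for n, p, d in zip(nacl_percent, injury_pct, log_diff):
    print(f"{n:>2}% NaCl: injury = {p:5.1f}%  (delta log CFU = {d:.2f})")
# The study's point: the apparent extent of injury depends strongly on
# which NaCl level is adopted as the 'maximum non-inhibitory concentration'.
```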
Young, Vershawn Ashanti
2004-01-01
"Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
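The structure of the AMA estimator is that of a control variate: average the cheap approximation over many source points and correct its bias with a few exact measurements. A toy sketch with the lattice-QCD specifics replaced by scalar functions:

```python
import numpy as np

rng = np.random.default_rng(1)

def exact(x):    # stand-in for the exact (expensive) observable
    return np.sin(x) + 0.1 * x

def approx(x):   # stand-in for the relaxed-CG approximation: small bias
    return np.sin(x) + 0.1 * x + 0.02 * np.cos(3 * x)

x_many = rng.uniform(0, 2 * np.pi, size=10_000)  # cheap evaluations
x_few  = x_many[:50]                             # exact evaluations

# AMA-style estimator: cheap ensemble average plus unbiasing correction.
o_ama   = approx(x_many).mean() + (exact(x_few) - approx(x_few)).mean()
o_naive = exact(x_few).mean()
print(f"AMA: {o_ama:.4f}, naive (same exact cost): {o_naive:.4f}")
```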
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously…
Negative Average Preference Utilitarianism
Roger Chao
2012-03-01
For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the "harmful" event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current "positive" forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).
Sever, Peter S; Dahlöf, Björn; Poulter, Neil R;
2003-01-01
The lowering of cholesterol concentrations in individuals at high risk of cardiovascular disease improves outcome. No study, however, has assessed benefits of cholesterol lowering in the primary prevention of coronary heart disease (CHD) in hypertensive patients who are not conventionally deemed...
Sever, Peter S; Dahlöf, Björn; Poulter, Neil R;
2003-01-01
The lowering of cholesterol concentrations in individuals at high risk of cardiovascular disease improves outcome. No study, however, has assessed benefits of cholesterol lowering in the primary prevention of coronary heart disease (CHD) in hypertensive patients who are not conventionally deemed ...
Sever, Peter S; Dahlöf, Björn; Poulter, Neil R;
2004-01-01
The lowering of cholesterol concentrations in individuals at high risk of cardiovascular disease improves outcome. No study, however, has assessed benefits of cholesterol lowering in the primary prevention of coronary heart disease (CHD) in hypertensive patients who are not conventionally deemed...
Ensemble Averaged Gravity Theory
Khosravi, Nima
2016-01-01
We put forward the idea that all the theoretically consistent models of gravity have a contribution to the observed gravity interaction. In this formulation each model comes with its own Euclidean path integral weight, where general relativity (GR) automatically has the maximum weight in high-curvature regions. We employ this idea in the framework of Lovelock models and show that in four dimensions the result is a specific form of $f(R,G)$ model. This specific $f(R,G)$ satisfies the stability conditions and has a self-accelerating solution. Our model is consistent with the local tests of gravity, since its behavior is the same as GR in high-curvature regimes. In the low-curvature regime the gravity force is weaker than in GR, which can be interpreted as the existence of a repulsive fifth force at very large scales. Interestingly, there is an intermediate-curvature regime where the gravity force is stronger in our model than in GR. The different behavior of our model in comparison with GR in both the low- and intermediate-curvature regimes…
Independence, Odd Girth, and Average Degree
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Remarks on the Lower Bounds for the Average Genus
Yi-chao Chen
2011-01-01
Let G be a graph of maximum degree at most four. By using the overlap matrix method which is introduced by B. Mohar, we show that the average genus of G is not less than 1/3 of its maximum genus, and the bound is best possible. Also, a new lower bound of average genus in terms of girth is derived.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Average subentropy, coherence and entanglement of random mixed quantum states
Zhang, Lin; Singh, Uttam; Pati, Arun K.
2017-02-01
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states, invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this specific class of mixed states.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th…
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Physical Theories with Average Symmetry
Alamino, Roberto C.
2013-01-01
This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violat...
Average Convexity in Communication Situations
Slikker, M.
1998-01-01
In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graphrestricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the underlyin
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over…
Concentrations of methoxyflurane and nitrous oxide in veterinary operating rooms
Ward, G.S.; Byland, R.R.
1982-02-01
The surgical rooms of 14 private veterinary practices were monitored to determine methoxyflurane (MOF) concentrations during surgical procedures under routine working conditions. The average room volume for these 14 rooms was 29 m³. The average MOF value for all rooms was 2.3 ppm, with a range of 0.7 to 7.4 ppm. Four of the 14 rooms exceeded the maximum recommended concentration of 2 ppm. Six rooms which had 6 or more air changes/hr averaged 1.1 ppm, whereas 8 rooms with fewer than 6 measurable air changes/hr averaged 3.2 ppm. Operating rooms that had oxygen flows of more than 1,000 cm³/min averaged 4.4 ppm, whereas those with flows of less than 1,000 cm³/min averaged 1.5 ppm. The average time spent during a surgical procedure using MOF, for all 14 facilities, was 2 hours. Nitrous oxide (N₂O) concentrations were determined in 4 veterinary surgical rooms. The average N₂O concentration for the 3 rooms without waste anesthetic gas scavenging was 138 ppm. The N₂O concentration in the waste anesthetic gas-scavenged surgical room was 14 ppm, which was below the maximum recommended concentration of 25 ppm.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels among the training samples, because the loss is applied equally to all samples. To solve this problem, we propose to learn the class label predictor by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameters to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
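A minimal sketch of the idea under stated assumptions (squared-error residuals inside a Gaussian correntropy kernel, plain gradient ascent, toy data): the kernel weight automatically down-weights samples whose labels were flipped.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task with 15% label noise.
X = rng.normal(size=(200, 2))
y = np.sign(X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=200))
y[rng.random(200) < 0.15] *= -1

sigma, lam, lr = 1.0, 0.01, 0.1
w = np.zeros(2)

# Maximize sum_i exp(-(y_i - w.x_i)^2 / (2 sigma^2)) - lam * ||w||^2.
for _ in range(500):
    r = y - X @ w                          # residuals
    g = np.exp(-r**2 / (2 * sigma**2))     # per-sample correntropy weight
    grad = (X * (g * r)[:, None]).sum(0) / sigma**2 - 2 * lam * w
    w += lr * grad / len(y)

print("learned weights:", w)   # close to the noise-free direction
```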
Sampling Based Average Classifier Fusion
Jian Hou
2014-01-01
Although many classifier fusion algorithms have been proposed in the literature, average fusion is almost always selected as the baseline for comparison. Little has been done on exploring the potential of average fusion and proposing a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. As a result, we find that, by proper sampling of soft labels and classifiers, the average fusion performance can be evidently improved. This result presents sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
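A minimal sketch contrasting plain average fusion with a sampling-based variant; the sampling scheme below (random classifier subsets plus majority vote) is an illustrative assumption, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# soft_labels[c, s, k]: classifier c's posterior for class k on sample s.
n_clf, n_samples, n_classes = 10, 5, 3
soft_labels = rng.dirichlet(np.ones(n_classes), size=(n_clf, n_samples))

# Plain average fusion: mean over all classifiers, then argmax.
plain = soft_labels.mean(axis=0).argmax(axis=1)

# Sampling-based variant: average over random classifier subsets, then
# majority-vote the resulting decisions.
votes = []
for _ in range(25):
    subset = rng.choice(n_clf, size=n_clf // 2, replace=False)
    votes.append(soft_labels[subset].mean(axis=0).argmax(axis=1))
sampled = np.array([np.bincount(col).argmax() for col in np.array(votes).T])

print(plain, sampled)
```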
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power-law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that is used to mitigate the intersymbol interference introduced by bandlimited channels. This detector, named the equalized near-maximum-likelihood detector, combines a nonlinear equalizer and a near-maximum-likelihood detector. Simulation results show that the performance of the equalized near-maximum-likelihood detector is better than that of the nonlinear equalizer alone but worse than that of the near-maximum-likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
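A minimal sketch of the contrast, using the classic dice-style example: classic MaxEnt pins the mean constraint to its observed value, while the generalized approach propagates a Gaussian uncertainty on that value through the MaxEnt map (the constraint distribution below is an assumption for illustration):

```python
import numpy as np
from scipy import optimize

states = np.arange(1, 7)   # die faces

def maxent_dist(mean_value):
    """Classic MaxEnt solution p_i ~ exp(-lam * x_i) matching the mean."""
    def mean_gap(lam):
        w = np.exp(-lam * states)
        return (states * w).sum() / w.sum() - mean_value
    lam = optimize.brentq(mean_gap, -5.0, 5.0)
    w = np.exp(-lam * states)
    return w / w.sum()

# Classic MaxEnt: treat the empirical mean 4.5 as exact.
p_point = maxent_dist(4.5)

# Generalized (sketch): mean ~ N(4.5, 0.2^2); push samples through MaxEnt.
rng = np.random.default_rng(0)
p_samples = np.array([maxent_dist(m) for m in rng.normal(4.5, 0.2, 200)])
print(p_point)
print(p_samples.mean(axis=0), p_samples.std(axis=0))
```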
Physical Theories with Average Symmetry
Alamino, Roberto C
2013-01-01
This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented.
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of each subject's five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
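The reported gains from extra trials and days are consistent with the Spearman-Brown prophecy formula, r_k = k r / (1 + (k - 1) r), applied to the paper's own single-unit reliabilities:

```python
def spearman_brown(r1, k):
    """Reliability of the average of k parallel measurements."""
    return k * r1 / (1 + (k - 1) * r1)

# One trial on one day: 0.939 -> five trials:
print(spearman_brown(0.939, 5))   # ~0.987, matching the reported value

# Five trials on one day: 0.836 -> averaged over 2 and 3 days:
print(spearman_brown(0.836, 2))   # ~0.911, as reported
print(spearman_brown(0.836, 3))   # ~0.939, close to the reported 0.935
```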
1990-11-01
The report uses "Marquardt methods" to perform linear and nonlinear estimations. One idea in this area by Box and Jenkins (1976) was the "backcasting" procedure to evaluate…
Tehsin, Sara; Rehman, Saad; Awan, Ahmad B.; Chaudry, Qaiser; Abbas, Muhammad; Young, Rupert; Asif, Afia
2016-04-01
Sensitivity to the variations in the reference image is a major concern when recognizing target objects. A combinational framework of correlation filters and logarithmic transformation has been previously reported to resolve this issue alongside catering for scale and rotation changes of the object in the presence of distortion and noise. In this paper, we have extended the work to include the influence of different logarithmic bases on the resultant correlation plane. The meaningful changes in correlation parameters along with contraction/expansion in the correlation plane peak have been identified under different scenarios. Based on our research, we propose some specific log bases to be used in logarithmically transformed correlation filters for achieving suitable tolerance to different variations. The study is based upon testing a range of logarithmic bases for different situations and finding an optimal logarithmic base for each particular set of distortions. Our results show improved correlation and target detection accuracies.
Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.
2010-01-01
We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power, truncated…
Quantized average consensus with delay
Jafarian, Matin; De Persis, Claudio
2012-01-01
The average consensus problem is a special case of cooperative control in which the agents of the network asymptotically converge to the average state (i.e., position) of the network by transferring information via a communication topology. One of the issues in large-scale networks is the cost of communication…
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm…
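A toy illustration of the estimator in a Robbins-Monro setting (not the SAMCMC algorithm itself): the average of the trajectory is typically a better estimate than the last iterate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy root-finding problem: E[f(theta) + noise] = 0 at theta* = 2.
theta, running_sum = 0.0, 0.0
n_iter = 10_000
for n in range(1, n_iter + 1):
    noisy_grad = (theta - 2.0) + rng.normal()
    theta -= noisy_grad / n**0.7          # slowly decaying step size
    running_sum += theta                  # in practice, a burn-in period
                                          # is often discarded first

theta_bar = running_sum / n_iter          # trajectory-averaging estimator
print(f"last iterate: {theta:.4f}, trajectory average: {theta_bar:.4f}")
```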
Solar concentrator with a toroidal relay module.
Lin, Jhe-Syuan; Liang, Chao-Wen
2015-10-01
III-V multijunction solar cells require solar concentrators with a high concentration ratio to reduce per-watt cost and to increase solar energy conversion efficiency. This paper discusses a novel solar concentrator design that features a high concentration ratio, high transfer efficiency, a thin profile, and a high solar acceptance angle. The optical design of the concentrator utilizes a toroidal relay module, which includes both the off-axis relay lens and the field lens design in a single concentric toroidal lens shape. The optical design concept of the concentrator is discussed and the simulation results are shown. The exemplary design has an aspect ratio of 0.24, a high average optical concentration ratio of 1230×, a maximum efficiency of 76.8%, and a solar acceptance angle of ±0.9°.
Gaussian moving averages and semimartingales
Basse-O'Connor, Andreas
2008-01-01
In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results...... are constructive - meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence that vocal attractiveness increases by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation contributed by a disc is on average 63% of the observed maximum rotation. However, this criterion cannot be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Cycle Average Peak Fuel Temperature Prediction Using CAPP/GAMMA+
Tak, Nam-il; Lee, Hyun Chul; Lim, Hong Sik [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
In order to obtain a cycle average maximum fuel temperature without rigorous efforts, a neutronics/thermo-fluid coupled calculation with depletion capability is needed. Recently, a CAPP/GAMMA+ coupled code system has been developed and the initial core of PMR200 was analyzed using the CAPP/GAMMA+ code system. The GAMMA+ code is a system thermo-fluid analysis code and the CAPP code is a neutronics code. General Atomics proposed that the design limit of the fuel temperature under normal operating conditions should be a cycle-averaged maximum value. Nonetheless, the existing works of the Korea Atomic Energy Research Institute (KAERI) only calculated the maximum fuel temperature at a fixed time point, e.g., the beginning of cycle (BOC), because the calculation capability for a cycle average value was not yet available. In this work, a cycle average maximum fuel temperature has been calculated using the CAPP/GAMMA+ code system for the equilibrium core of PMR200. The CAPP/GAMMA+ coupled calculation was carried out for the equilibrium core of PMR200 from BOC to the end of cycle (EOC) to obtain a cycle average peak fuel temperature. The peak fuel temperature was predicted to be 1372 °C near the middle of cycle (MOC). However, the cycle average peak fuel temperature was calculated as 1181 °C, which is below the design target of 1250 °C.
Averaged Electroencephalic Audiometry in Infants
Lentz, William E.; McCandless, Geary A.
1971-01-01
Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)
Ergodic averages via dominating processes
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary ...
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from that of the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max - X_0 = (0.59 ± 0.02)·Y_X/P·C.
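A worked reading of the reported equation, with hypothetical inputs; the function name and the numerical values below are illustrative, not the study's data:

```python
# Worked use of the reported prediction equation X_max - X_0 = 0.59 * Y_X/P * C.
def predict_max_biomass(x0_g_per_l, yield_x_per_lactate, mic_lactate_g_per_l):
    """X_max = X_0 + 0.59 * Y_X/P * C (coefficient 0.59 +/- 0.02 per the study)."""
    return x0_g_per_l + 0.59 * yield_x_per_lactate * mic_lactate_g_per_l

# e.g. inoculum 0.1 g/L, yield 0.12 g biomass per g lactate, lactate MIC 40 g/L
print(predict_max_biomass(0.1, 0.12, 40.0))   # ~2.9 g/L (illustrative only)
```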
High average power supercontinuum sources
J C Travers
2010-11-01
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 mW/nm and over 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.
Dependability in Aggregation by Averaging
Jesus, Paulo; Almeida, Paulo Sérgio
2010-01-01
Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues, when prospecting their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised, when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...
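A minimal sketch of the class of algorithms this abstract discusses: randomized pairwise averaging, whose correctness hinges on the "mass conservation" invariant that message loss breaks, which is exactly the dependability issue raised above. The network size, values, and iteration count are illustrative.

```python
# Gossip-style aggregation by averaging: random pairs replace their values
# with the pair mean, so the global average ("mass") is invariant. Losing one
# message of the exchange would break precisely this invariant.
import random

values = [10.0, 2.0, 7.0, 5.0, 1.0]
true_avg = sum(values) / len(values)
random.seed(0)
for _ in range(500):
    i, j = random.sample(range(len(values)), 2)
    m = (values[i] + values[j]) / 2.0
    values[i] = values[j] = m               # atomic pairwise exchange
print(values, "->", true_avg)               # every node converges to 5.0
```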
Measuring Complexity through Average Symmetry
Alamino, Roberto C.
2015-01-01
This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle - measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalised, including to continuous cases an...
陈嘉; 李拥军; 杨文萍
2009-01-01
Objective To study the effects of carbon disulfide exposure within the national maximum allowable concentration (MAC) on blood pressure and electrocardiogram, and associations with selected factors. Methods Workers in a chemical fiber factory were divided into two groups based on the type of work: a high exposure group (HEG) of 821 individuals and a low exposure group (LEG) of 259. The CS2 concentration at the workplace was controlled under the national MAC. A set of 250 randomly selected people taking routine physical check-ups in the same period and hospital constituted the control group. The systolic blood pressure (SBP) and diastolic blood pressure (DBP) were measured on the arm, and the pulse pressure (PP) and mean arterial blood pressure (MABP) were calculated from SBP and DBP. The blood pressure data, along with the results of routine 12-lead electrocardiography taken at rest and records on gender, age, years of work, type of work, and concentrations of triglycerol, cholesterol, and glucose in blood, were compiled for analyses. Risk factors upon CS2 exposure for the increase of blood pressure and the occurrence of electrocardiogram abnormalities were identified and rationalized. Results Significant differences (P<0.01) in the average values of SBP, DBP, MABP, and the corresponding abnormality incident rates were found between HEG and LEG, and between HEG and the control group. For both HEG and LEG, the incident rate of DBP abnormality (high DBP) was nearly two times as high as that of SBP. Type of work was the largest risk factor in both the high SBP and high DBP subgroups, with odds ratios (OR) of 2.086 and 2.331 respectively, and high CS2 exposure presented more than double the risk of low exposure. In the incident rate of ECG abnormalities, both exposure groups differed significantly (P<0.01) from the control group. High SBP in LEG and high DBP in HEG were found to be significant risk factors (OR = 3.531 and 1.638 respectively), while blood glucose
Mirror averaging with sparsity priors
Dalalyan, Arnak
2010-01-01
We consider the problem of aggregating the elements of a (possibly infinite) dictionary for building a decision procedure, that aims at minimizing a given criterion. Along with the dictionary, an independent identically distributed training sample is available, on which the performance of a given procedure can be tested. In a fairly general set-up, we establish an oracle inequality for the Mirror Averaging aggregate based on any prior distribution. This oracle inequality is applied in the context of sparse coding for different problems of statistics and machine learning such as regression, density estimation and binary classification.
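The sketch below illustrates the flavor of a mirror-averaging-style aggregate for a finite dictionary: exponential weights computed from cumulative losses are averaged over time. The dictionary of candidate predictors, the temperature beta, and the synthetic data are illustrative assumptions; the sparsity-prior machinery of the paper is not reproduced.

```python
# Exponentially weighted aggregation over a finite dictionary, with the
# weight sequence averaged over time ("mirror averaging"-style).
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 200)
y = 1.7 * x + 0.1 * rng.normal(size=200)     # data; true slope 1.7
dictionary = np.linspace(-3, 3, 61)          # candidate predictors f_k(x) = k * x
beta = 4.0
cum_loss = np.zeros_like(dictionary)
w_sum = np.zeros_like(dictionary)
for t in range(len(x)):
    w = np.exp(-beta * cum_loss)             # weights from losses on data seen so far
    w /= w.sum()
    w_sum += w                               # average the weight sequence over time
    cum_loss += (dictionary * x[t] - y[t]) ** 2
w_avg = w_sum / len(x)
print(float(dictionary @ w_avg))             # aggregated slope, close to 1.7
```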
Gale, Robert W.
2007-01-01
The Commonwealth of Virginia Department of Environmental Quality, working closely with the State of West Virginia Department of Environmental Protection and the U.S. Environmental Protection Agency, is undertaking a polychlorinated biphenyl source assessment study for the Bluestone River watershed. The study area extends from the Bluefield area of Virginia and West Virginia, targets the Bluestone River and tributaries suspected of contributing to polychlorinated biphenyl, polychlorinated dibenzo-p-dioxin and dibenzofuran contamination, and includes sites near the confluences of Big Branch, Brush Fork, and Beaver Pond Creek. The objectives of this study were to gather information about the concentrations, patterns, and distribution of these contaminants at specific study sites to expand current knowledge about polychlorinated biphenyl impacts and to identify potential new sources of contamination. Semipermeable membrane devices were used to integratively accumulate the dissolved fraction of the contaminants at each site. Performance reference compounds were added prior to deployment and used to determine site-specific sampling rates, enabling estimation of time-weighted average water concentrations during the deployment period. Minimum estimated concentrations of polychlorinated biphenyl congeners in water were about 1 picogram per liter per congener, and total concentrations at study sites ranged from 130 to 18,000 picograms per liter. The lowest concentration, 130 picograms per liter, was about threefold greater than total hypothetical concentrations from background levels in field blanks. Polychlorinated biphenyl concentrations in water fell into three groups of sites: low (130-350 picograms per liter), medium (640-3,500 picograms per liter), and high (11,000-18,000 picograms per liter). Concentrations at the high sites, Beacon Cave and Beaverpond Branch at the Resurgence, were about four- to sixfold higher than concentrations estimated for the medium group of sites.
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
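The single-constraint computation behind both of these abstracts can be made explicit with a standard Lagrange-multiplier argument; the following is a sketch of the textbook calculation, not of the RGF model itself:

```latex
% Maximize H = -\sum_k p_k \ln p_k subject to normalization and a fixed
% average logarithm \sum_k p_k \ln k = \chi.
\mathcal{L} = -\sum_k p_k \ln p_k
            + \lambda \Big( 1 - \sum_k p_k \Big)
            + \mu \Big( \chi - \sum_k p_k \ln k \Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_k}
  = -\ln p_k - 1 - \lambda - \mu \ln k = 0
\;\Rightarrow\;
p_k = e^{-(1+\lambda)}\, e^{-\mu \ln k} \propto k^{-\mu}.
```

Fixing the mean of the logarithm thus yields a pure power law, with the exponent set by the multiplier of that single constraint.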
MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS
Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert
2003-05-01
A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron paths) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
Intensity contrast of the average supergranule
Langfellner, J; Gizon, L
2016-01-01
While the velocity fluctuations of supergranulation dominate the spectrum of solar convection at the solar surface, very little is known about the fluctuations in other physical quantities like temperature or density at supergranulation scale. Using SDO/HMI observations, we characterize the intensity contrast of solar supergranulation at the solar surface. We identify the positions of $\sim 10^4$ outflow and inflow regions at supergranulation scales, from which we construct average flow maps and co-aligned intensity and magnetic field maps. In the average outflow center, the maximum intensity contrast is $(7.8 \pm 0.6) \times 10^{-4}$ (there is no corresponding feature in the line-of-sight magnetic field). This corresponds to a temperature perturbation of about $1.1 \pm 0.1$ K, in agreement with previous studies. We discover an east-west anisotropy, with a slightly deeper intensity minimum east of the outflow center. The evolution is asymmetric in time: the intensity excess is larger 8 hours before the reference t...
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Co-operation and Development (OECD), the OECD has developed an MRL Calculator.
[Effect of Fe2+ concentration on kinetics of biohydrogen production].
Wan, Wei; Wang, Jian-long
2008-09-01
The effect of Fe2+ concentration ranging from 0 to 1500 mg/L on the kinetics of fermentative hydrogen production by mixed microbial culture was investigated. The results showed that, at 35 degrees C and initial pH 7.0, using glucose as substrate, hydrogen production potential and average hydrogen production rate increased with increasing Fe2+ concentration from 0 to 300 mg/L, with the maximum hydrogen production potential of 302.3 mL and maximum average hydrogen production rate of 30.0 mL/h being obtained at Fe2+ concentration of 300 mg/L. Hydrogen yield increased with increasing Fe2+ concentration from 0 to 350 mg/L, with the maximum hydrogen yield of 311.2 mL/g glucose being obtained at Fe2+ concentration of 350 mg/L. Modified Logistic model could describe the progress of cumulative hydrogen production in the batch tests successfully. Modified Han-Levenspiel model could describe the effect of Fe2+ concentrations on average hydrogen production rate successfully.
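A hedged sketch of the curve-fitting step named in this abstract: the modified Logistic form commonly used for cumulative hydrogen production, H(t) = P / (1 + exp(4·Rm·(λ − t)/P + 2)), is fitted to synthetic data. The functional form is the usual one in this literature; the data, noise level, and initial guesses are illustrative, not the paper's.

```python
# Fit a modified-logistic cumulative hydrogen curve to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def modified_logistic(t, P, Rm, lam):
    # P: hydrogen production potential (mL); Rm: max rate (mL/h); lam: lag (h)
    return P / (1.0 + np.exp(4.0 * Rm * (lam - t) / P + 2.0))

t = np.linspace(0, 30, 31)                          # hours
h_obs = modified_logistic(t, 302.3, 30.0, 5.0)      # "true" curve, illustrative
h_obs = h_obs + np.random.default_rng(3).normal(0, 3, t.size)  # measurement noise
(P, Rm, lam), _ = curve_fit(modified_logistic, t, h_obs, p0=(300, 20, 4))
print(P, Rm, lam)                                   # recovered parameters
```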
Ensemble average theory of gravity
Khosravi, Nima
2016-12-01
We put forward the idea that all theoretically consistent models of gravity contribute to the observed gravitational interaction. In this formulation, each model comes with its own Euclidean path-integral weight, where general relativity (GR) automatically has the maximum weight in high-curvature regions. We employ this idea in the framework of Lovelock models and show that in four dimensions the result is a specific form of the f(R,G) model. This specific f(R,G) satisfies the stability conditions and possesses self-accelerating solutions. Our model is consistent with the local tests of gravity since its behavior is the same as in GR for the high-curvature regime. In the low-curvature regime the gravitational force is weaker than in GR, which can be interpreted as the existence of a repulsive fifth force at very large scales. Interestingly, there is an intermediate-curvature regime where the gravitational force is stronger in our model compared to GR. The different behavior of our model in comparison with GR in both the low- and intermediate-curvature regimes makes it observationally distinguishable from ΛCDM.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William
2016-04-19
To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and for the first time accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside of a validation radius r_v was performed, and the R^2 values between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, were obtained for the OBS scenario using ozone observations only, in contrast with the RAMP and Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach is able to extract more information from CTM predictions than the CAMP approach, which assumes that model performance does not change across space and time: the R^2 increase is over 12 times larger for the DM8A and over 3.5 times larger for the D24A ozone concentrations.
Effect of ammonia concentration on fermentative hydrogen production by mixed cultures.
Wang, Bo; Wan, Wei; Wang, Jianlong
2009-02-01
The effect of ammonia concentration ranging from 0 to 10 g N/L on fermentative hydrogen production by mixed cultures was investigated in batch tests using glucose as substrate at 35 degrees C and initial pH 7.0. The experimental results showed that during the fermentative hydrogen production, the substrate degradation efficiency increased with increasing ammonia concentration from 0 to 0.01 g N/L. The hydrogen production potential, hydrogen yield and average hydrogen production rate increased with increasing ammonia concentration from 0 to 0.1 g N/L. The maximum hydrogen production potential of 291.4 mL, maximum hydrogen yield of 298.8 mL/g glucose and maximum average hydrogen production rate of 8.5 mL/h were all obtained at an ammonia concentration of 0.1 g N/L.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
On the maximum sufficient range of interstellar vessels
Cartin, Daniel
2011-01-01
This paper considers the likely maximum range of space vessels providing the basis of a mature interstellar transportation network. Using the principle of sufficiency, it is argued that this range will be less than three parsecs for the average interstellar vessel. This maximum range provides access from the Solar System to a large majority of nearby stellar systems, with total travel distances within the network not excessively greater than actual physical distance.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true; however, for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Averaging and exact perturbations in LTB dust models
Sussman, Roberto A
2012-01-01
We introduce a scalar weighted average ("q-average") acting on concentric comoving domains in spherically symmetric Lemaitre-Tolman-Bondi (LTB) dust models. The resulting averaging formalism allows for an elegant coordinate-independent dynamical study of the models, providing as well valuable theoretical insight into the properties of scalar averaging in inhomogeneous spacetimes. The q-averages of those covariant scalars common to FLRW models (the "q-scalars") identically satisfy FLRW evolution laws and determine for every domain a unique FLRW background state. All curvature and kinematic proper tensors and their invariant contractions are expressible in terms of the q-scalars and their linear and quadratic local fluctuations, which convey the effects of inhomogeneity through the ratio of Weyl to Ricci curvature invariants and the magnitude of radial gradients. We define also non-local fluctuations associated with the intuitive notion of a "contrast" with respect to FLRW reference averaged values assigned to a...
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
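The Hargreaves step named in this abstract lends itself to a one-line formula; the sketch below uses the standard Hargreaves equation with illustrative inputs (the function name and the numerical values are assumptions, not the study's gridded data):

```python
# Hargreaves estimate of atmospheric evaporative demand:
# ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
# with Ra expressed as its evaporation equivalent (mm/day).
def hargreaves_et0(t_max_c, t_min_c, ra_mm_day):
    t_mean = 0.5 * (t_max_c + t_min_c)
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * (t_max_c - t_min_c) ** 0.5

# Illustrative Andean grid cell: Tmax 18 C, Tmin 4 C, Ra ~ 14 mm/day
et0 = hargreaves_et0(18.0, 4.0, 14.0)
print(et0, "mm/day")   # the cell's water balance is then precipitation - ET0
```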
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step solves our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
Measurement of the average lifetime of b hadrons
Adriani, O.; Aguilar-Benitez, M.; Ahlen, S.; Alcaraz, J.; Aloisio, A.; Alverson, G.; Alviggi, M. G.; Ambrosi, G.; An, Q.; Anderhub, H.; Anderson, A. L.; Andreev, V. P.; Angelescu, T.; Antonov, L.; Antreasyan, D.; Arce, P.; Arefiev, A.; Atamanchuk, A.; Azemoon, T.; Aziz, T.; Baba, P. V. K. S.; Bagnaia, P.; Bakken, J. A.; Ball, R. C.; Banerjee, S.; Bao, J.; Barillère, R.; Barone, L.; Baschirotto, A.; Battiston, R.; Bay, A.; Becattini, F.; Bechtluft, J.; Becker, R.; Becker, U.; Behner, F.; Behrens, J.; Bencze, Gy. L.; Berdugo, J.; Berges, P.; Bertucci, B.; Betev, B. L.; Biasini, M.; Biland, A.; Bilei, G. M.; Bizzarri, R.; Blaising, J. J.; Bobbink, G. J.; Bock, R.; Böhm, A.; Borgia, B.; Bosetti, M.; Bourilkov, D.; Bourquin, M.; Boutigny, D.; Bouwens, B.; Brambilla, E.; Branson, J. G.; Brock, I. C.; Brooks, M.; Bujak, A.; Burger, J. D.; Burger, W. J.; Busenitz, J.; Buytenhuijs, A.; Cai, X. D.; Capell, M.; Caria, M.; Carlino, G.; Cartacci, A. M.; Castello, R.; Cerrada, M.; Cesaroni, F.; Chang, Y. H.; Chaturvedi, U. K.; Chemarin, M.; Chen, A.; Chen, C.; Chen, G.; Chen, G. M.; Chen, H. F.; Chen, H. S.; Chen, M.; Chen, W. Y.; Chiefari, G.; Chien, C. Y.; Choi, M. T.; Chung, S.; Civinini, C.; Clare, I.; Clare, R.; Coan, T. E.; Cohn, H. O.; Coignet, G.; Colino, N.; Contin, A.; Costantini, S.; Cotorobai, F.; Cui, X. T.; Cui, X. Y.; Dai, T. S.; D'Alessandro, R.; de Asmundis, R.; Degré, A.; Deiters, K.; Dénes, E.; Denes, P.; DeNotaristefani, F.; Dhina, M.; DiBitonto, D.; Diemoz, M.; Dimitrov, H. R.; Dionisi, C.; Ditmarr, M.; Djambazov, L.; Dova, M. T.; Drago, E.; Duchesneau, D.; Duinker, P.; Duran, I.; Easo, S.; El Mamouni, H.; Engler, A.; Eppling, F. J.; Erné, F. C.; Extermann, P.; Fabbretti, R.; Fabre, M.; Falciano, S.; Fan, S. J.; Fackler, O.; Fay, J.; Felcini, M.; Ferguson, T.; Fernandez, D.; Fernandez, G.; Ferroni, F.; Fesefeldt, H.; Fiandrini, E.; Field, J. H.; Filthaut, F.; Fisher, P. H.; Forconi, G.; Fredj, L.; Freudenreich, K.; Friebel, W.; Fukushima, M.; Gailloud, M.; Galaktionov, Yu.; Gallo, E.; Ganguli, S. N.; Garcia-Abia, P.; Gele, D.; Gentile, S.; Gheordanescu, N.; Giagu, S.; Goldfarb, S.; Gong, Z. F.; Gonzalez, E.; Gougas, A.; Goujon, D.; Gratta, G.; Gruenewald, M.; Gu, C.; Guanziroli, M.; Guo, J. K.; Gupta, V. K.; Gurtu, A.; Gustafson, H. R.; Gutay, L. J.; Hangarter, K.; Hartmann, B.; Hasan, A.; Hauschildt, D.; He, C. F.; He, J. T.; Hebbeker, T.; Hebert, M.; Hervé, A.; Hilgers, K.; Hofer, H.; Hoorani, H.; Hu, G.; Hu, G. Q.; Ille, B.; Ilyas, M. M.; Innocente, V.; Janssen, H.; Jezequel, S.; Jin, B. N.; Jones, L. W.; Josa-Mutuberria, I.; Kasser, A.; Khan, R. A.; Kamyshkov, Yu.; Kapinos, P.; Kapustinsky, J. S.; Karyotakis, Y.; Kaur, M.; Khokhar, S.; Kienzle-Focacci, M. N.; Kim, J. K.; Kim, S. C.; Kim, Y. G.; Kinnison, W. W.; Kirkby, A.; Kirkby, D.; Kirsch, S.; Kittel, W.; Klimentov, A.; Klöckner, R.; König, A. C.; Koffeman, E.; Kornadt, O.; Koutsenko, V.; Koulbardis, A.; Kraemer, R. W.; Kramer, T.; Krastev, V. R.; Krenz, W.; Krivshich, A.; Kuijten, H.; Kumar, K. S.; Kunin, A.; Landi, G.; Lanske, D.; Lanzano, S.; Lebedev, A.; Lebrun, P.; Lecomte, P.; Lecoq, P.; Le Coultre, P.; Lee, D. M.; Lee, J. S.; Lee, K. Y.; Leedom, I.; Leggett, C.; Le Goff, J. M.; Leiste, R.; Lenti, M.; Leonardi, E.; Li, C.; Li, H. T.; Li, P. J.; Liao, J. Y.; Lin, W. T.; Lin, Z. Y.; Linde, F. L.; Lindemann, B.; Lista, L.; Liu, Y.; Lohmann, W.; Longo, E.; Lu, Y. S.; Lubbers, J. M.; Lübelsmeyer, K.; Luci, C.; Luckey, D.; Ludovici, L.; Luminari, L.; Lustermann, W.; Ma, J. M.; Ma, W. 
G.; MacDermott, M.; Malik, R.; Malinin, A.; Maña, C.; Maolinbay, M.; Marchesini, P.; Marion, F.; Marin, A.; Martin, J. P.; Martinez-Laso, L.; Marzano, F.; Massaro, G. G. G.; Mazumdar, K.; McBride, P.; McMahon, T.; McNally, D.; Merk, M.; Merola, L.; Meschini, M.; Metzger, W. J.; Mi, Y.; Mihul, A.; Mills, G. B.; Mir, Y.; Mirabelli, G.; Mnich, J.; Möller, M.; Monteleoni, B.; Morand, R.; Morganti, S.; Moulai, N. E.; Mount, R.; Müller, S.; Nadtochy, A.; Nagy, E.; Napolitano, M.; Nessi-Tedaldi, F.; Newman, H.; Neyer, C.; Niaz, M. A.; Nippe, A.; Nowak, H.; Organtini, G.; Pandoulas, D.; Paoletti, S.; Paolucci, P.; Pascale, G.; Passaleva, G.; Patricelli, S.; Paul, T.; Pauluzzi, M.; Paus, C.; Pauss, F.; Pei, Y. J.; Pensotti, S.; Perret-Gallix, D.; Perrier, J.; Pevsner, A.; Piccolo, D.; Pieri, M.; Piroué, P. A.; Plasil, F.; Plyaskin, V.; Pohl, M.; Pojidaev, V.; Postema, H.; Qi, Z. D.; Qian, J. M.; Qureshi, K. N.; Raghavan, R.; Rahal-Callot, G.; Rancoita, P. G.; Rattaggi, M.; Raven, G.; Razis, P.; Read, K.; Ren, D.; Ren, Z.; Rescigno, M.; Reucroft, S.; Ricker, A.; Riemann, S.; Riemers, B. C.; Riles, K.; Rind, O.; Rizvi, H. A.; Ro, S.; Rodriguez, F. J.; Roe, B. P.; Röhner, M.; Romero, L.; Rosier-Lees, S.; Rosmalen, R.; Rosselet, Ph.; van Rossum, W.; Roth, S.; Rubbia, A.; Rubio, J. A.; Rykaczewski, H.; Sachwitz, M.; Salicio, J.; Salicio, J. M.; Sanders, G. S.; Santocchia, A.; Sarakinos, M. S.; Sartorelli, G.; Sassowsky, M.; Sauvage, G.; Schegelsky, V.; Schmitz, D.; Schmitz, P.; Schneegans, M.; Schopper, H.; Schotanus, D. J.; Shotkin, S.; Schreiber, H. J.; Shukla, J.; Schulte, R.; Schulte, S.; Schultze, K.; Schwenke, J.; Schwering, G.; Sciacca, C.; Scott, I.; Sehgal, R.; Seiler, P. G.; Sens, J. C.; Servoli, L.; Sheer, I.; Shen, D. Z.; Shevchenko, S.; Shi, X. R.; Shumilov, E.; Shoutko, V.; Son, D.; Sopczak, A.; Soulimov, V.; Spartiotis, C.; Spickermann, T.; Spillantini, P.; Starosta, R.; Steuer, M.; Stickland, D. P.; Sticozzi, F.; Stone, H.; Strauch, K.; Stringfellow, B. C.; Sudhakar, K.; Sultanov, G.; Sun, L. Z.; Susinno, G. F.; Suter, H.; Swain, J. D.; Syed, A. A.; Tang, X. W.; Taylor, L.; Terzi, G.; Ting, Samuel C. C.; Ting, S. M.; Tonutti, M.; Tonwar, S. C.; Tóth, J.; Tsaregorodtsev, A.; Tsipolitis, G.; Tully, C.; Tung, K. L.; Ulbricht, J.; Urbán, L.; Uwer, U.; Valente, E.; Van de Walle, R. T.; Vetlitsky, I.; Viertel, G.; Vikas, P.; Vikas, U.; Vivargent, M.; Vogel, H.; Vogt, H.; Vorobiev, I.; Vorobyov, A. A.; Vuilleumier, L.; Wadhwa, M.; Wallraff, W.; Wang, C.; Wang, C. R.; Wang, X. L.; Wang, Y. F.; Wang, Z. M.; Warner, C.; Weber, A.; Weber, J.; Weill, R.; Wenaus, T. J.; Wenninger, J.; White, M.; Willmott, C.; Wittgenstein, F.; Wright, D.; Wu, S. X.; Wynhoff, S.; Wysłouch, B.; Xie, Y. Y.; Xu, J. G.; Xu, Z. Z.; Xue, Z. L.; Yan, D. S.; Yang, B. Z.; Yang, C. G.; Yang, G.; Ye, C. H.; Ye, J. B.; Ye, Q.; Yeh, S. C.; Yin, Z. W.; You, J. M.; Yunus, N.; Yzerman, M.; Zaccardelli, C.; Zaitsev, N.; Zemp, P.; Zeng, M.; Zeng, Y.; Zhang, D. H.; Zhang, Z. P.; Zhou, B.; Zhou, G. J.; Zhou, J. F.; Zhu, R. Y.; Zichichi, A.; van der Zwaan, B. C. C.; L3 Collaboration
1993-11-01
The average lifetime of b hadrons has been measured using the L3 detector at LEP, running at √s ≈ M_Z. A b-enriched sample was obtained from 432538 hadronic Z events collected in 1990 and 1991 by tagging electrons and muons from semileptonic b hadron decays. From maximum likelihood fits to the electron and muon impact parameter distributions, the average b hadron lifetime was measured to be τ_b = (1535 ± 35 ± 28) fs, where the first error is statistical and the second includes both the experimental and the theoretical systematic uncertainties.
2010-01-01
7 CFR 1209.12 (2010): On average. "On average" means a rolling average of production or imports during the last two...
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to (I - A)/T, where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
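A toy version of the likelihood machinery this abstract describes: per-event PDF values for each process enter a mixture log-likelihood that is maximized over the process fractions. Two Gaussian stand-in processes replace PEN's five, and all numbers below are illustrative, not PEN's PDFs.

```python
# Mixture-fraction fit by maximum likelihood over per-event process PDFs.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(69.8, 2.0, 980),    # signal-like energy peak
                    rng.normal(52.8, 5.0, 20)])    # background-like component

p_sig = norm.pdf(x, 69.8, 2.0)                     # per-event PDF values, process 1
p_bkg = norm.pdf(x, 52.8, 5.0)                     # per-event PDF values, process 2

def neg_log_like(f_sig):
    # total log-likelihood of the two-component mixture
    return -np.sum(np.log(f_sig * p_sig + (1 - f_sig) * p_bkg))

res = minimize_scalar(neg_log_like, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x)                                       # fitted signal fraction, ~0.98
```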
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log_2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
Maziero, G C; Baunwart, C; Toledo, M C
2001-05-01
The theoretical maximum daily intakes (TMDI) of the phenolic antioxidants butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tert-butylhydroquinone (TBHQ) in Brazil were estimated using food consumption data derived from a household economic survey and a packaged goods market survey. The estimates were based on the maximum levels of use of the food additives specified in national food standards. The calculated intakes of the three additives for the mean consumer were below the ADIs. Estimates of TMDI for BHA, BHT and TBHQ ranged from 0.09 to 0.15, 0.05 to 0.10 and 0.07 to 0.12 mg/kg of body weight, respectively. To check whether the additives are actually used at their maximum authorized levels, analytical determinations of these compounds in selected food categories were carried out using HPLC with UV detection. BHT and TBHQ concentrations in foodstuffs considered to be representative sources of these antioxidants in the diet were below the respective maximum permitted levels. BHA was not detected in any of the analysed samples. Based on the maximal approach and on the analytical data, it is unlikely that the current ADI of BHA (0.5 mg/kg body weight), BHT (0.3 mg/kg body weight) and TBHQ (0.7 mg/kg body weight) will be exceeded in practice by the average Brazilian consumer.
Morrison, Glenn; Shaughnessy, Richard; Shu, Shi
2011-02-01
A Monte Carlo analysis of indoor ozone levels in four cities was applied to provide guidance to regulatory agencies on setting maximum ozone emission rates from consumer appliances. Measured distributions of air exchange rates, ozone decay rates and outdoor ozone levels at monitoring stations were combined with a steady-state indoor air quality model, resulting in emission rate distributions (mg h^-1) as a function of the percentage of building hours protected from exceeding a target maximum indoor concentration of 20 ppb. Whole-year, summer and winter results for Elizabeth, NJ, Houston, TX, Windsor, ON, and Los Angeles, CA exhibited strong regional differences, primarily due to differences in air exchange rates. Infiltration of ambient ozone at higher average air exchange rates significantly reduces allowable emission rates, even though air exchange also dilutes emissions from appliances. For Houston, TX and Windsor, ON, which have lower average residential air exchange rates, emission rates ranged from -1.1 to 2.3 mg h^-1 for scenarios that protect 80% or more of building hours from experiencing ozone concentrations greater than 20 ppb in summer. For Los Angeles, CA and Elizabeth, NJ, with higher air exchange rates, only negative emission rates were allowable to provide the same level of protection. For the 80th percentile residence, we estimate that an 8-h average limit concentration of 20 ppb would be exceeded, even in the absence of an indoor ozone source, 40 or more days per year in any of the cities analyzed. The negative emission rates emerging from the analysis suggest that only a zero-emission rate standard is prudent for Los Angeles, Elizabeth, NJ and other regions with higher summertime air exchange rates. For regions such as Houston with lower summertime air exchange rates, the higher emission rates would likely increase occupant exposure to the undesirable products of ozone reactions, thus reinforcing the need for a zero-emission rate standard.
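The analysis pipeline can be sketched by inverting the steady-state mass balance C_in = (λ·C_out + E/V)/(λ + k) for the emission rate E that holds the indoor concentration at the 20 ppb target, then propagating parameter distributions through it. The distribution parameters and house volume below are illustrative assumptions, not the paper's fitted inputs.

```python
# Monte Carlo over air exchange (lam), ozone decay (k) and outdoor ozone,
# solving the steady-state model for the allowable emission rate E (mg/h).
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
lam = rng.lognormal(mean=np.log(0.5), sigma=0.6, size=n)      # air exchange, 1/h
k = rng.lognormal(mean=np.log(2.8), sigma=0.4, size=n)        # ozone decay, 1/h
c_out = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=n)   # outdoor O3, ppb

ppb_to_mg_m3 = 48.0 / 24450.0        # ~0.00196 mg/m3 per ppb O3 at 25 C
V = 300.0                            # house volume, m3 (illustrative)
c_target = 20.0                      # indoor target, ppb
E = V * ppb_to_mg_m3 * ((lam + k) * c_target - lam * c_out)   # allowable mg/h
print(np.percentile(E, 20))          # emission rate protecting 80% of hours
```

Negative values of E, as in the paper, signal conditions where infiltration alone already exceeds the target, so no indoor source can be tolerated.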
Bravo, J. L [Instituto de Geofisica, UNAM, Mexico, D.F. (Mexico); Nava, M. M [Instituto Mexicano del Petroleo, Mexico, D.F. (Mexico); Gay, C [Centro de Ciencias de la Atmosfera, UNAM, Mexico, D.F. (Mexico)
2001-07-01
We developed a procedure to forecast, 2 or 3 hours in advance, the daily maximum of surface ozone concentrations. It involves the adjustment of Autoregressive Integrated Moving Average (ARIMA) models to daily ozone maximum concentrations at 10 atmospheric monitoring stations in Mexico City during a one-year period. A one-day forecast is made and adjusted with the meteorological and solar radiation information acquired during the 3 hours preceding the occurrence of the maximum value. The relative importance for forecasting of the history of the process and of meteorological conditions is evaluated. Finally, an estimate of the daily probability of exceeding a given ozone level is made. [Translated from Spanish] A procedure based on the ARIMA methodology is applied to predict, 2 or 3 hours in advance, the maximum daily surface ozone concentration. It is based on autoregressions and moving averages applied to the daily maxima of surface ozone from 10 atmospheric monitoring stations in Mexico City, obtained over one year of sampling. The one-day forecast is adjusted with the meteorological and solar radiation information from a period that precedes the expected occurrence of the maximum value by at least three hours. The relative importance for the forecast of the history of the process and of prior meteorological conditions is compared. Finally, the daily probability that a regulatory or pre-established ozone contingency level will be exceeded is estimated.
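A minimal sketch of the ARIMA forecasting step using statsmodels, on a synthetic series of daily ozone maxima; the (1,0,1) order and the series are illustrative, and the same-morning meteorological adjustment described above is not reproduced.

```python
# Fit an ARIMA model to daily ozone maxima and make a one-day-ahead forecast.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
n = 365
noise = rng.normal(0, 10, n)
series = np.empty(n)                      # synthetic daily O3 maxima, ppb
series[0] = 120.0
for t in range(1, n):
    # mean-reverting AR(1)-like daily maxima around 60 ppb
    series[t] = 60.0 + 0.5 * (series[t - 1] - 60.0) + noise[t]

model = ARIMA(series, order=(1, 0, 1)).fit()
forecast = model.forecast(steps=1)        # next day's expected maximum
print(float(forecast[0]))
```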
Maximum Permissible Concentrations and Negligible Concentrations for Rare Earth Elements (REEs)
Sneller FEC; Kalf DF; Weltje L; Wezel AP van; CSR
2000-01-01
In this report, maximum permissible risk levels (MTR) and negligible risk levels (VR) are derived for rare earth elements (REEs). The selected REEs are yttrium (Y), lanthanum (La), cerium (Ce), praseodymium (Pr), neodymium (Nd), samarium (Sm), gadolinium (Gd), and dysprosium (Dy
Wezel AP van; Vlaardingen P van; CSR
2001-01-01
In this report, maximum permissible concentrations and negligible concentrations are derived for various antifouling agents that are used as replacements for TBT, such as Irgarol 1051, dichlofluanid, ziram, chlorothalonil and TCMTB.
340 W average power output of diode-pumped composite ceramic YAG/Nd:YAG disk laser
Jia, Kai; Jiang, Yong; Yang, Feng; Deng, Bo; Hou, Tianjin; Guo, Jiawei; Chen, Dezhang; Wang, Hongyuan; Yang, Chuang; Peng, Chun
2016-11-01
We report on a diode-pumped composite ceramic disk laser. The composite ceramic YAG/Nd:YAG disk consists of 4 mm thick pure YAG and 2 mm thick Nd:YAG with 1.0 at.% doping concentration. The slope efficiency of the composite ceramic disk laser is 36.6%, corresponding to a maximum optical-to-optical efficiency of 29.2%. Furthermore, 340 W of average power output was achieved at an absorbed pump power of 1290 W.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize the mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
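A toy version of the idea, under our own assumptions (hinge loss, a histogram-based plug-in MI estimator, and a numerical gradient in place of the paper's analytic entropy-based gradient), might look like this:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # toy binary problem

def mutual_information(scores, labels, bins=8):
    """Plug-in MI estimate between binned responses and binary labels."""
    edges = np.quantile(scores, np.linspace(0, 1, bins + 1))
    b = np.clip(np.digitize(scores, edges[1:-1]), 0, bins - 1)
    joint = np.histogram2d(b, labels, bins=[bins, 2])[0] / len(labels)
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def objective(w, lam=0.1, gamma=0.5):
    s = X @ w
    hinge = np.mean(np.maximum(0.0, 1.0 - (2 * y - 1) * s))  # class. error
    return hinge + lam * w @ w - gamma * mutual_information(s, y)

# Numerical gradient descent; a coarse step keeps the (piecewise-constant)
# histogram MI term visible. The paper instead differentiates a smooth
# entropy estimate analytically.
w, h, lr = np.zeros(5), 1e-2, 0.1
for _ in range(200):
    g = np.array([(objective(w + h * e) - objective(w - h * e)) / (2 * h)
                  for e in np.eye(5)])
    w -= lr * g
print("training accuracy:", np.mean((X @ w > 0).astype(int) == y))
```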
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
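For reference, the classical principle under discussion can be stated as follows (a standard textbook form, our paraphrase rather than the authors' exact formulation):

```latex
% Hopf's strong maximum principle, classical form (zeroth-order term omitted):
\textbf{Theorem (Hopf).} Let $u \in C^2(\Omega)$ satisfy
\[
  Lu \;=\; \sum_{i,j=1}^{n} a_{ij}(x)\,\partial_{ij}u
        \;+\; \sum_{i=1}^{n} b_i(x)\,\partial_i u \;\ge\; 0
\]
in a domain $\Omega \subset \mathbb{R}^n$, where $L$ is locally uniformly
elliptic with locally bounded coefficients. If $u$ attains its supremum at an
interior point of $\Omega$, then $u$ is constant on $\Omega$.
```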
Level sets of multiple ergodic averages
Ai-Hua, Fan; Ma, Ji-Hua
2011-01-01
We propose to study multiple ergodic averages from multifractal analysis point of view. In some special cases in the symbolic dynamics, Hausdorff dimensions of the level sets of multiple ergodic average limit are determined by using Riesz products.
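For orientation, the averages in question have the following standard form (the paper works in symbolic dynamics with its own notation):

```latex
% Multiple ergodic averages for a transformation T and functions f_1,...,f_k;
% the paper studies the Hausdorff dimension of the level sets of the limit:
\[
  A_n(x) \;=\; \frac{1}{n}\sum_{m=1}^{n} f_1(T^{m}x)\, f_2(T^{2m}x)\cdots f_k(T^{km}x),
  \qquad
  E(\alpha) \;=\; \bigl\{\, x : \lim_{n\to\infty} A_n(x) = \alpha \,\bigr\}.
\]
```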
Computer simulation of concentrated solid solution strengthening
Kuo, C. T. K.; Arsenault, R. J.
1976-01-01
The interaction forces between a straight edge dislocation and a random array of solute atoms were determined as the dislocation moved through a three-dimensional block. The yield stress at 0 K was obtained by determining the average maximum solute-dislocation interaction force encountered by the edge dislocation, and an expression relating the yield stress to the length of the dislocation and the solute concentration is provided. The magnitude of the solid solution strengthening due to solute atoms can be determined directly from the numerical results, provided the dislocation line length that moves as a unit is specified.
Accurate Switched-Voltage voltage averaging circuit
金光, 一幸; 松本, 寛樹
2006-01-01
This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit, presented to compensate for the NMOS mismatch error in MOS differential-type voltage averaging circuits. The proposed circuit consists of a voltage averaging circuit and an SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. The performance of the circuit is verified by PSpice simulations.
Spectral averaging techniques for Jacobi matrices
del Rio, Rafael; Schulz-Baldes, Hermann
2008-01-01
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
无
2011-01-01
[Objective] The research aimed to analyze the temporal and spatial variation characteristics of temperature in Shangqiu City during 1961-2010. [Method] Based on temperature data from eight meteorological stations in Shangqiu during 1961-2010, and using the trend analysis method, the temporal and spatial evolution characteristics of the annual average temperature, annual average maximum and minimum temperatures, annual extreme maximum and minimum temperatures, and daily range of the annual average temperature in Shangqiu City were analy...
Average-Time Games on Timed Automata
Jurdzinski, Marcin; Trivedi, Ashutosh
2009-01-01
An average-time game is played on the infinite graph of configurations of a finite timed automaton. The two players, Min and Max, construct an infinite run of the automaton by taking turns to perform a timed transition. Player Min wants to minimise the average time per transition and player Max wants to maximise it. A solution of average-time games is presented using a reduction to an average-price game on a finite graph. A direct consequence is an elementary proof of determinacy for average-tim...
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
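A minimal integration of the classic Watson-Lovelock daisyworld, using the customary textbook parameter values rather than anything from this paper, shows the self-regulation being discussed:

```python
# Watson-Lovelock daisyworld with the standard parameter values.
import numpy as np

SIGMA, S, q, gamma = 5.67e-8, 917.0, 2.06e9, 0.3   # SB const, flux, coupling, death rate
alb = {"bare": 0.5, "white": 0.75, "black": 0.25}  # albedos

def step(aw, ab, L, dt=0.01):
    x = max(1.0 - aw - ab, 0.0)                    # bare ground fraction
    A = x * alb["bare"] + aw * alb["white"] + ab * alb["black"]
    Te4 = S * L * (1.0 - A) / SIGMA                # planetary T^4
    growth = {}
    for a, frac in (("white", aw), ("black", ab)):
        T = (q * (A - alb[a]) + Te4) ** 0.25       # local temperature (K)
        beta = max(1.0 - 0.003265 * (295.5 - T) ** 2, 0.0)
        growth[a] = frac * (x * beta - gamma)
    return aw + dt * growth["white"], ab + dt * growth["black"]

aw, ab = 0.01, 0.01
for L in np.linspace(0.6, 1.6, 200):               # slowly brightening sun
    for _ in range(500):
        aw, ab = step(aw, ab, L)
    aw, ab = max(aw, 0.01), max(ab, 0.01)          # re-seeding floor
print(f"final cover: white={aw:.2f}, black={ab:.2f}")
```

Tracking the heat fluxes of such a run against the MEP state is exactly the kind of comparison the paper discusses.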
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
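The background-only versus background-plus-source comparison can be illustrated with a scalar Poisson toy model; the real tool fits spatial models through Sherpa with per-observation PSFs, so everything below (counts, background predictions) is schematic:

```python
# Schematic likelihood-ratio test between "background only" and
# "background plus source" for one candidate region in a stack.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

observed = np.array([7, 9, 6])            # counts per stacked observation
bkg_expected = np.array([4.2, 5.0, 3.9])  # background-model prediction per obs.

def neg_loglike(src):
    mu = bkg_expected + src               # add a flat source contribution
    return -poisson.logpmf(observed, mu).sum()

fit = minimize_scalar(neg_loglike, bounds=(0.0, 50.0), method="bounded")
TS = 2.0 * (neg_loglike(0.0) - fit.fun)   # likelihood-ratio test statistic
print(f"best-fit source counts/obs: {fit.x:.2f}, TS = {TS:.2f}")
```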
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Yasso, B; Li, Y; Alexander, A; Mel'nikova, N B; Mukhina, I V
2014-01-01
A comparison of the relative bioavailability and penetration intensity of glucosamine sulfate under oral, injection and topical administration of the dosage form Hondroxid Maximum, a cream containing a micellar system for transdermal delivery of glucosamine, was carried out in experiments on Sprague-Dawley rats. On the basis of the pharmacokinetic profiles of glucosamine in rat blood plasma, comparing administration of the cream Hondroxid Maximum at 400 mg/kg three times a day for 1 week with a single injection of 4% glucosamine sulfate solution at 400 mg/kg, the relative bioavailability was found to be 61.6%. The calculated penetration rate of glucosamine into the plasma through rat skin over 4 hours was 26.9 μg/(cm²·h), and the fraction of glucosamine penetrating through the skin into the plasma 4 hours after a single dose of cream was 4.12%. Comparative analysis of literature and experimental data, and calculations based on them, suggests that Hondroxid Maximum, a cream with a transdermal glucosamine complex, can, when used in accordance with the instructions, provide an average glucosamine concentration in the synovial fluid of an inflamed joint in the range 0.7-1.5 μg/ml, much higher than the concentration of endogenous glucosamine in human synovial joint fluid (0.02-0.07 μg/ml). Theoretical calculations taking the experimental data into account show that Hondroxid Maximum can reach the bioavailability level of modern injection forms and exceed the bioavailability of modern oral forms of glucosamine by up to 2 times.
WIDTHS AND AVERAGE WIDTHS OF SOBOLEV CLASSES
刘永平; 许贵桥
2003-01-01
This paper concerns the problem of the Kolmogorov n-width, the linear n-width, the Gel'fand n-width and the Bernstein n-width of Sobolev classes of periodic multivariate functions in the space Lp(Td), and the average Bernstein σ-width, average Kolmogorov σ-width, and average linear σ-width of Sobolev classes of multivariate functions.
Stochastic averaging of quasi-Hamiltonian systems
朱位秋
1996-01-01
A stochastic averaging method is proposed for quasi-Hamiltonian systems (Hamiltonian systems with light damping subject to weak stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method of quasi-Hamiltonian systems, and the results obtained by this method for several examples prove its effectiveness.
NOAA Average Annual Salinity (3-Zone)
California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
Average Transmission Probability of a Random Stack
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
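The effect the abstract highlights, that the mean transmission and the typical (log-averaged) transmission differ, is easy to reproduce numerically with plane-wave transfer matrices; all parameters below (index, thickness, wavelength, gap distribution) are illustrative:

```python
# Monte Carlo estimate of <T> versus exp(<ln T>) for a random stack.
import numpy as np

rng = np.random.default_rng(3)
k0, n_slab, d_slab, n_slabs = 2 * np.pi / 0.5e-6, 1.5, 0.2e-6, 20

def interface(n1, n2):
    r, t = (n1 - n2) / (n1 + n2), 2 * n1 / (n1 + n2)   # Fresnel amplitudes
    return np.array([[1, r], [r, 1]]) / t

def propagate(n, d):
    ph = np.exp(1j * k0 * n * d)
    return np.array([[1 / ph, 0], [0, ph]])

def transmission(gaps):
    M = np.eye(2, dtype=complex)
    for g in gaps:                                     # slab, then random gap
        M = M @ interface(1, n_slab) @ propagate(n_slab, d_slab) \
              @ interface(n_slab, 1) @ propagate(1.0, g)
    return abs(1 / M[0, 0]) ** 2

T = np.array([transmission(rng.uniform(0, 1e-6, n_slabs)) for _ in range(2000)])
print(f"<T> = {T.mean():.3f}   exp(<ln T>) = {np.exp(np.log(T).mean()):.3f}")
```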
Average sampling theorems for shift invariant subspaces
无
2000-01-01
The sampling theorem is one of the most powerful results in signal analysis. In this paper, we study the average sampling on shift invariant subspaces, e.g. wavelet subspaces. We show that if a subspace satisfies certain conditions, then every function in the subspace is uniquely determined and can be reconstructed by its local averages near certain sampling points. Examples are given.
Testing linearity against nonlinear moving average models
de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.
1998-01-01
Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small-sample performance of the proposed tests is evaluated in a Monte Carlo study and compared
Averaging Einstein's equations : The linearized case
Stoeger, William R.; Helmi, Amina; Torres, Diego F.
2007-01-01
We introduce a simple and straightforward averaging procedure, which is a generalization of one which is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW situ
Average excitation potentials of air and aluminium
Bogaardt, M.; Koudijs, B.
1951-01-01
By means of a graphical method the average excitation potential I may be derived from experimental data. Average values for Iair and IAl have been obtained. It is shown that in representing range/energy relations by means of Bethe's well known formula, I has to be taken as a continuously changing fu
2010-07-19
... CFR Part 3015, Subpart V, and the final rule related notice published at 48 FR 29114, June 24, 1983... Average Payments/Maximum Reimbursement Rates AGENCY: Food and Nutrition Service, USDA. ACTION: Notice. SUMMARY: This Notice announces the annual adjustments to the ``national average payments,'' the amount...
New results on averaging theory and applications
Cândido, Murilo R.; Llibre, Jaume
2016-08-01
The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations, to find the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function in it is zero, the classical averaging theory does not provide information about the periodic solution associated to a non-simple zero. Here we provide sufficient conditions in order that the averaging theory can be applied also to non-simple zeros for studying their associated periodic solutions. Additionally, we do two applications of this new result for studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
Analogue Divider by Averaging a Triangular Wave
Selvam, Krishnagiri Chinnathambi
2017-08-01
A new analogue divider circuit based on averaging a triangular wave using operational amplifiers is explained in this paper. The reference triangular waveform is shifted from the zero voltage level up towards the positive power supply voltage level. Its positive portion is obtained by a positive rectifier and its average value is obtained by a low pass filter. The same triangular waveform is shifted from the zero voltage level down towards the negative power supply voltage level. Its negative portion is obtained by a negative rectifier and its average value is obtained by another low pass filter. Both averaged voltages are combined in a summing amplifier and the summed voltage is given to an op-amp as the negative input. This op-amp is configured to work in a negative feedback closed loop. The op-amp output is the divider output.
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds. Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and according to STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds. Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The testing of explosions of wood dust clouds showed that the maximum pressure was reached at a concentration of 450 g/m³, with a value of 7.95 bar. The fastest pressure rise was also observed at a concentration of 450 g/m³, with a value of 68 bar/s.
Picosecond mid-infrared amplifier for high average power.
Botha, LR
2007-04-01
... are similar. The saturation fluence for a multi-level system can be written as E_sat = hν/(2σz), with σ the stimulated emission cross section and P the pressure of the laser; 1/z is essentially the average number of populated rotational levels. For our case z = 0.07 and σ = 1.45 × 10⁻¹⁸ cm². Thus for a 10 atm laser the saturation fluence is E_sat = (6.626 × 10⁻³⁴ × 2.9 × 10¹³)/(2 × 1.45 × 10⁻¹⁸ × 0.07) ≈ 95 mJ/cm². The maximum...
The Average-Case Area of Heilbronn-Type Triangles
Jiang, T.; Li, Ming; Vitányi, Paul
1999-01-01
From among ${n \choose 3}$ triangles with vertices chosen from $n$ points in the unit square, let $T$ be the one with the smallest area, and let $A$ be the area of $T$. Heilbronn's triangle problem asks for the maximum value assumed by $A$ over all choices of $n$ points. We consider the average case: if the $n$ points are chosen independently and at random (with a uniform distribution), then there exist positive constants $c$ and $C$ such that $c/n^3 < \mu_n < C/n^3$ for all large enough val...
Recent advances in phase shifted time averaging and stroboscopic interferometry
Styk, Adam; Józwik, Michał
2016-08-01
Classical time averaging and stroboscopic interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require extensive measurement and data-processing strategies in order to evaluate the information on the maximum amplitude at a given load of the vibrating object. In this paper, modified data-processing strategies for both techniques are introduced. These modifications allow fast and reliable calculation of the searched value without additional complication of the measurement systems. Both approaches are discussed and experimentally verified.
Role of spatial averaging in multicellular gradient sensing
Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew
2016-06-01
Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
Pushing concentration of stationary solar concentrators to the limit.
Winston, Roland; Zhang, Weiya
2010-04-26
We give the theoretical limit of concentration allowed by nonimaging optics for stationary solar concentrators after reviewing sun-earth geometry in direction cosine space. We then discuss the design principles that we follow to approach the maximum concentration, along with examples including a hollow CPC trough, a dielectric CPC trough, and a 3D dielectric stationary solar concentrator which concentrates sunlight four times (4x), eight hours per day, year-round.
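The étendue limit being approached is the standard nonimaging-optics bound; in our notation (θ_a the acceptance half-angle, n the dielectric refractive index):

```latex
% Phase-space (etendue) limits of nonimaging concentration:
\[
  C_{2D}^{\max} \;=\; \frac{n}{\sin\theta_a}, \qquad
  C_{3D}^{\max} \;=\; \frac{n^2}{\sin^2\theta_a}.
\]
% A stationary (non-tracking) collector must accept the sun's roughly
% +/- 23.45 deg seasonal declination band, which is what caps the
% achievable concentration for designs like those above.
```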
Predicting Maximum Sunspot Number in Solar Cycle 24
Nipa J Bhatt; Rajmal Jain; Malini Aggarwal
2009-03-01
A few prediction methods have been developed based on the precursor technique, which is found to be successful for forecasting solar activity. Taking the geomagnetic activity aa indices during the descending phase of the preceding solar cycle as the precursor, we predict the maximum amplitude of the annual mean sunspot number in cycle 24 to be 111 ± 21. This suggests that the maximum amplitude of the upcoming cycle 24 will be less than those of cycles 21-22. Further, we estimate the annual mean geomagnetic activity aa index for the solar maximum year in cycle 24 to be 20.6 ± 4.7, and the average of the annual mean sunspot number during the descending phase of cycle 24 is estimated to be 48 ± 16.8.
Optimization of agitation and aeration conditions for maximum virginiamycin production.
Shioya, S; Morikawa, M; Kajihara, Y; Shimizu, H
1999-02-01
To maximize the productivity of virginiamycin, which is a commercially important antibiotic as an animal feed additive, an empirical approach was employed in the batch culture of Streptomyces virginiae. Here, the effects of dissolved oxygen (DO) concentration and agitation speed on the maximum cell concentration at the production phase, as well as on the productivity of virginiamycin, were investigated. To maintain the DO concentration in the fermentor at a certain level, either the agitation speed or the inlet oxygen concentration of the supply gas was manipulated. It was found that increasing the agitation speed had a positive effect on the antibiotic productivity independent of the DO concentration. The optimum DO concentration, agitation speed and addition of an autoregulator, virginiae butanolide C (VB-C), were determined to maximize virginiamycin productivity. The optimal strategy was to start the cultivation at 450 rpm and to continue until the DO concentration reached 80%. After reaching 80%, the DO concentration was maintained at this level by changing the agitation speed, up to a maximum of 800 rpm. The addition of an optimal amount of the autoregulator VB-C in an experiment resulted in the maximal production of virginiamycin M (399 mg/l), which was about 1.8-fold those obtained previously.
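The reported optimal strategy is essentially a simple supervisory control loop. A sketch, with a hypothetical proportional controller and a made-up gain (the paper does not specify the control law), is:

```python
# Start at 450 rpm; once dissolved oxygen (DO) reaches 80%, hold it there
# by manipulating agitation speed up to the 800 rpm ceiling. The
# proportional law, gain and sensor interface are our own assumptions.
def do_control_step(do_percent, rpm, holding, setpoint=80.0, kp=5.0,
                    rpm_min=450.0, rpm_max=800.0):
    """One sampling step; returns (new_rpm, holding_flag)."""
    if not holding:
        if do_percent < setpoint:
            return rpm_min, False        # start-up phase: fixed 450 rpm
        holding = True                   # DO reached 80%: switch to control
    rpm += kp * (setpoint - do_percent)  # DO below setpoint -> agitate faster
    return min(max(rpm, rpm_min), rpm_max), True

# Usage (read_do() is a hypothetical sensor read, called every interval):
# rpm, holding = 450.0, False
# rpm, holding = do_control_step(read_do(), rpm, holding)
```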
Averaged Lemaître-Tolman-Bondi dynamics
Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried
2016-01-01
We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor which represents a derivation of the simplest phenomenological solution of Buchert's equations in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model but it does not adequately describe our "real" Universe.
Average-passage flow model development
Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark
1989-01-01
A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low-speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.
FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW
Haiying WANG; Xinyu ZHANG; Guohua ZOU
2009-01-01
In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without consideration of the uncertainty from the selection process. This often leads to underreporting of variability and overly optimistic confidence sets. Model averaging estimation is an alternative to this procedure, which incorporates model uncertainty into the estimation process. In recent years, there has been rising interest in model averaging from the frequentist perspective, and some important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.
Averaging of Backscatter Intensities in Compounds
Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.
2002-01-01
Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based of the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging. PMID:27446752
Experimental Demonstration of Squeezed State Quantum Averaging
Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
The Average Lower Connectivity of Graphs
Ersin Aslan
2014-01-01
For a vertex v of a graph G, the lower connectivity, denoted by s_v(G), is the smallest number of vertices in a set that contains v and whose deletion from G produces a disconnected or a trivial graph. The average lower connectivity, denoted by κ_av(G), is the value (∑_{v∈V(G)} s_v(G))/|V(G)|. It is shown that this parameter can be used to measure the vulnerability of networks. This paper contains results on bounds for the average lower connectivity and obtains the average lower connectivity of some graphs.
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.
Changing mortality and average cohort life expectancy
Schoen, Robert; Canudas-Romo, Vladimir
2005-01-01
of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL) has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure......, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate...
R Wave Extraction Based on the Maximum First Derivative plus the Maximum Value of the Double Search
Wen-po Yao; Wen-li Yao; Min Wu; Tie-bing Liu
2016-01-01
R-wave detection is the main approach for heart rate variability analysis and clinical applications based on the R-R interval. The maximum first derivative plus the maximum value of the double search algorithm is applied to electrocardiogram (ECG) records from the MIT-BIH Arrhythmia Database to extract the R wave. Through a study of the algorithm's characteristics and the R-wave detection method, the data segmentation method is modified to improve the detection accuracy. After the segmentation modification, the average accuracy rate on 6 sets of short ECG data increases from 82.51% to 93.70%, and the average accuracy rate on 11 groups of long-duration data is 96.61%. Test results prove that the algorithm and segmentation method can accurately locate the R wave and have good effectiveness and versatility, although some beats may remain undetected due to the algorithm implementation.
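The two-stage search itself is straightforward; a sketch under our own choices of segment length and search window (the paper's modified segmentation is not reproduced here):

```python
import numpy as np

def detect_r_peaks(ecg, fs=360, search_ms=100):
    """Per segment: find the maximum first derivative (QRS upstroke),
    then search a short window after it for the maximum value (R peak)."""
    seg = fs                                   # one-second segments (assumed)
    win = int(fs * search_ms / 1000)
    deriv = np.diff(ecg)
    peaks = []
    for start in range(0, len(ecg) - seg, seg):
        i = start + int(np.argmax(deriv[start:start + seg]))  # max slope
        j = i + int(np.argmax(ecg[i:i + win]))                # max value
        peaks.append(j)
    return np.array(peaks)

# Demo on a synthetic trace with one sharp positive wave per second:
fs = 360
t = np.arange(10 * fs) / fs
ecg = np.maximum(np.sin(2 * np.pi * t), 0.0) ** 8
print(detect_r_peaks(ecg, fs) / fs)            # peak times, in seconds
```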
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
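The Toeplitz/Levinson step mentioned above is standard; a generic Levinson-Durbin recursion (not the authors' full receiver-function code) looks like this:

```python
import numpy as np

def levinson_durbin(r, order):
    """Prediction-error filter from autocorrelation r[0..order], solving
    the Toeplitz normal equations by the Levinson-Durbin recursion."""
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for m in range(1, order + 1):
        k = -(r[m] + a[1:m] @ r[m - 1:0:-1]) / err  # reflection coefficient
        a[1:m + 1] += k * a[m - 1::-1][:m]          # tap update (sets a[m] = k)
        err *= 1.0 - k * k                          # prediction-error power
    return a, err

# Example: a length-3 filter from a sample autocorrelation sequence.
rho = np.array([1.0, 0.5, 0.1, -0.05])
coef, e = levinson_durbin(rho, 3)
print(coef, e)
```

Note the stability property the abstract mentions: each reflection coefficient k satisfies |k| < 1 for a valid autocorrelation, so the error power stays positive.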
Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.
Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick
2013-01-01
Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contamination on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays to estimate the extent of a benzene groundwater contamination plume. We used a maximum entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests that MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand.
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Appeals Council Requests - Average Processing Time
Social Security Administration — This dataset provides annual data from 1989 through 2015 for the average processing time (elapsed time in days) for dispositions by the Appeals Council (AC) (both...
Average Vegetation Growth 1990 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1997 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1992 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2001 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1995 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1995 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2000 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2000 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1998 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1994 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Average Vegetation Growth 1996 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1996 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2005 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2005 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1993 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Spacetime Average Density (SAD) Cosmological Measures
Page, Don N
2014-01-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmolo...
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
Rotational averaging of multiphoton absorption cross sections
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Monthly snow/ice averages (ISCCP)
National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...
Average Annual Precipitation (PRISM model) 1961 - 1990
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (the voltage of maximum power, the current of maximum power, and the maximum power itself) is plotted as a function of the time of day.
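The differentiation step can equally be done numerically on a standard single-diode panel model; the parameters below are illustrative, not the project's measurements:

```python
# Sweep module voltage, compute P = I*V, and locate the maximum power
# point (where dP/dV = 0) for a single-diode model of a 36-cell panel.
import numpy as np

I_L, I_0, Ns, nVt = 5.0, 1e-5, 36, 0.05    # photocurrent (A), saturation
                                           # current (A), cells, n*V_T (V)
V = np.linspace(0.0, 25.0, 5000)           # module voltage sweep
I = np.clip(I_L - I_0 * (np.exp(V / (Ns * nVt)) - 1.0), 0.0, None)
P = V * I
k = int(np.argmax(P))                      # maximum power point
print(f"Vmp = {V[k]:.2f} V, Imp = {I[k]:.2f} A, Pmax = {P[k]:.1f} W")
```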
Symmetric Euler orientation representations for orientational averaging.
Mayerhöfer, Thomas G
2005-09-01
A new kind of orientation representation called symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes that are based on conventional Euler orientation representations. To that aim, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations leads to a dependence of the result from the specific Euler orientation representation that was utilized and from the initial position of the crystal. The latter problem can be overcome partly by the introduction of a weighing factor, but only for two-axes-type Euler orientation representations. In case of a numerical evaluation of the average, a residual difference remains also if a two-axes type Euler orientation representation is used despite of the utilization of a weighing factor. In contrast, this problem does not occur if a symmetric Euler orientation representation is used as a matter of principle, while the result of the averaging for both types of orientation representations converges with increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighing factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. The symmetrical Euler orientation representations are therefore ideally suited for the use in orientational averaging procedures.
Average Bandwidth Allocation Model of WFQ
Tomáš Balogh
2012-01-01
We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We validate the model with examples and with simulation results obtained using the NS2 simulator.
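The fluid-level intuition behind such a calculation can be sketched as iterative weighted fair sharing, where underloaded flows are capped at their input rate and the freed capacity is redistributed by weight; this is a generic WFQ approximation, not the paper's exact model:

```python
def wfq_average_bandwidth(capacity, weights, input_rates):
    """Iterative weighted fair share with per-flow input-rate caps."""
    flows = list(range(len(weights)))
    alloc = [0.0] * len(weights)
    while flows:
        total_w = sum(weights[i] for i in flows)
        share = {i: capacity * weights[i] / total_w for i in flows}
        capped = [i for i in flows if input_rates[i] <= share[i]]
        if not capped:                   # all remaining flows are backlogged:
            for i in flows:              # they keep their weighted fair share
                alloc[i] = share[i]
            return alloc
        for i in capped:                 # underloaded flows get their input
            alloc[i] = input_rates[i]    # rate; the leftover capacity is
            capacity -= input_rates[i]   # redistributed in the next pass
            flows.remove(i)
    return alloc

print(wfq_average_bandwidth(100.0, [1, 2, 3], [10.0, 80.0, 80.0]))
# flow 0 is capped at 10; the remaining 90 splits 2:3 -> [10, 36, 54]
```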
FUNDAMENTALS OF TRANSMISSION FLUCTUATION SPECTROMETRY WITH VARIABLE SPATIAL AVERAGING
Jianqi Shen; Ulrich Riebel; Marcus Breitenstein; Udo Kr(a)uter
2003-01-01
The transmission signal of radiation through a suspension of particles, measured with high spatial and temporal resolution, shows significant fluctuations, which are related to the physical properties of the particles and to the process of spatial and temporal averaging. Exploiting this connection, it is possible to calculate the particle size distribution (PSD) and the particle concentration. This paper presents an approach to transmission fluctuation spectrometry (TFS) with variable spatial averaging. The transmission fluctuations are expressed in terms of the expectancy of the transmission square (ETS) and are obtained as a spectrum, a function of the variable beam diameter. The reversal point and the depth of the spectrum contain the information on particle size and particle concentration, respectively.
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
Averaged controllability of parameter dependent conservative semigroups
Lohéac, Jérôme; Zuazua, Enrique
2017-02-01
We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
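In symbols, the averaged controllability requirement for a parameter-dependent system with parameter ζ distributed according to a probability measure μ (our paraphrase of the set-up) reads:

```latex
% Averaged controllability: one parameter-independent control steers the
% mu-average of the states of x'(t;zeta) = A(zeta) x(t;zeta) + B(zeta) u(t)
% to a prescribed target:
\[
  \exists\, u \in L^2(0,T;U), \ \text{independent of } \zeta, \ \text{such that}
  \qquad \int x(T;\zeta)\, \mathrm{d}\mu(\zeta) \;=\; \bar{x}_T .
\]
```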
Average Temperatures in the Southwestern United States, 2000-2015 Versus Long-Term Average
U.S. Environmental Protection Agency — This indicator shows how the average air temperature from 2000 to 2015 has differed from the long-term average (1895–2015). To provide more detailed information,...
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm, which uses two maximum dynamic flow algorithms, is proposed to solve the problem.
Cosmic structure, averaging and dark energy
Wiltshire, David L
2013-01-01
These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...
Books average previous decade of economic misery.
R Alexander Bentley
Full Text Available For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of the atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and that they prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
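The correlated quantity is a trailing (previous-decade) moving average; a sketch of the windowing and the search for the best-fitting lag, with hypothetical array names:

    import numpy as np

    def trailing_mean(x, window):
        """Moving average over the previous `window` values (uncentered)."""
        x = np.asarray(x, dtype=float)
        return np.array([x[i - window:i].mean() for i in range(window, len(x) + 1)])

    def best_window(literary, economic, windows=range(1, 21)):
        """Window length (years) maximizing the correlation between the literary
        misery index and the trailing mean of the economic misery index."""
        literary = np.asarray(literary, dtype=float)
        return max(windows, key=lambda w: np.corrcoef(
            literary[w - 1:], trailing_mean(economic, w))[0, 1])

    # economic misery index per year = inflation rate + unemployment rate;
    # the paper reports the goodness-of-fit peak at an 11-year window.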
High Average Power Yb:YAG Laser
Zapata, L E; Beach, R J; Payne, S A
2001-05-23
We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these, the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3, a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real
A singularity theorem based on spatial averages
J M M Senovilla
2007-07-01
Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear, decisive difference between singular and non-singular cosmologies.
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS
K. L. Goluoglu
2000-06-09
The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.
An approximate analytical approach to resampling averages
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...
Grassmann Averages for Scalable Robust PCA
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t
Model averaging and muddled multimodel inferences
Cade, Brian S.
2015-01-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the
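A sketch of the standardization step recommended above: compute each predictor's partial standard deviation (following Bring's 1994 definition as used by Cade) and scale the coefficient by it; the data layout and helper name are our assumptions:

    import numpy as np

    def partial_sds(X):
        """Partial standard deviation of each predictor column of X
        (Bring 1994): s_j * sqrt(1/VIF_j) * sqrt((n-1)/(n-p))."""
        n, p = X.shape
        out = np.empty(p)
        for j in range(p):
            xj = X[:, j]
            A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(A, xj, rcond=None)
            resid = xj - A @ beta
            r2 = 1.0 - resid.var() / xj.var()       # R^2 of x_j on the other predictors
            vif = 1.0 / max(1.0 - r2, 1e-12)        # variance inflation factor
            out[j] = xj.std(ddof=1) * np.sqrt(1.0 / vif) * np.sqrt((n - 1) / (n - p))
        return out

    # scaling a model's coefficients commensurately across models: beta_j * partial_sds(X)[j]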
Rolling bearing feature frequency extraction using extreme average envelope decomposition
Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli
2016-09-01
The vibration signal contains a wealth of sensitive information reflecting the running status of the equipment. Decomposing the signal and extracting the effective information properly is one of the most important steps in precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from mode mixing, low decomposition accuracy, and related problems. To address these problems, the extreme average envelope decomposition (EAED) method is presented, based on EMD. The EAED method has three advantages. Firstly, it is built on a midpoint envelope rather than the separate maximum and minimum envelopes used in EMD, so the average variability of the signal can be described accurately. Secondly, to reduce envelope errors during signal decomposition, a strategy of replacing two envelopes with one envelope is presented. Thirdly, the similar-triangle principle is utilized to calculate the times of the extreme average points accurately, so the influence of the sampling frequency on the calculation results can be significantly reduced. Experimental results show that EAED can separate single frequency components from a complex signal gradually. EAED can not only isolate three kinds of typical bearing fault characteristic frequency components but also requires fewer decomposition layers. By replacing quadratic enveloping with a single envelope, EAED isolates the fault characteristic frequency with fewer decomposition layers, and the precision of signal decomposition is thereby improved.
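A minimal sketch of the midpoint-envelope idea under our own assumptions (cubic-spline interpolation through extrema midpoints and a fixed sifting count); it is not the authors' implementation:

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    def mean_envelope(t, x):
        """One average envelope through midpoints of adjacent extrema
        (the EAED idea), instead of EMD's separate max/min envelopes."""
        ext = np.sort(np.concatenate([argrelextrema(x, np.greater)[0],
                                      argrelextrema(x, np.less)[0]]))
        tm = 0.5 * (t[ext[:-1]] + t[ext[1:]])     # midpoint abscissae
        xm = 0.5 * (x[ext[:-1]] + x[ext[1:]])     # midpoint ordinates
        return CubicSpline(tm, xm)(t)

    def eaed(t, x, n_components=3, n_sift=8):
        """Peel off oscillatory components by subtracting the mean envelope."""
        comps, residue = [], x.astype(float).copy()
        for _ in range(n_components):
            c = residue.copy()
            for _ in range(n_sift):
                c -= mean_envelope(t, c)
            comps.append(c)
            residue -= c
        return comps, residue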
On the average uncertainty for systems with nonlinear coupling
Nelson, Kenric P.; Umarov, Sabir R.; Kon, Mark A.
2017-02-01
The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy when transformed to the probability domain is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale (i.e. at the width of the distribution). The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Rényi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling.
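For the Shannon case, the transformation to the probability domain described above is just the exponential of negative entropy (our notation):

    \bar{P} \;=\; e^{-H(\mathbf{p})} \;=\; \exp\Big(\sum_i p_i \ln p_i\Big) \;=\; \prod_i p_i^{\,p_i},

i.e. the weighted geometric mean of the probabilities, with the probabilities themselves as weights.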
2013-01-01
A concentration device (2) for filter filtration concentration of particles (4) from a volume of a fluid (6). The concentration device (2) comprises a filter (8) configured to filter particles (4) of a predefined size in the volume of the fluid (6). The concentration device (2) comprises...
Parameterized Traveling Salesman Problem: Beating the Average
Gutin, G.; Patel, V.
2016-01-01
In the traveling salesman problem (TSP), we are given a complete graph Kn together with an integer weighting w on the edges of Kn, and we are asked to find a Hamilton cycle of Kn of minimum weight. Let h(w) denote the average weight of a Hamilton cycle of Kn for the weighting w. Vizing in 1973 asked
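For context, h(w) admits a closed form: each edge of Kn lies in the same number, (n-2)!, of the (n-1)!/2 Hamilton cycles, so the average weight is (a standard counting argument, added here for the reader):

    h(w) \;=\; \frac{(n-2)!}{(n-1)!/2}\sum_{e \in E(K_n)} w(e) \;=\; \frac{2}{n-1}\sum_{e \in E(K_n)} w(e).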
On averaging methods for partial differential equations
Verhulst, F.
2001-01-01
The analysis of weakly nonlinear partial differential equations, both qualitatively and quantitatively, is emerging as an exciting field of investigation. In this report we consider specific results related to averaging, but we do not aim at completeness. The sections … and … contain important material which
Discontinuities and hysteresis in quantized average consensus
Ceragioli, Francesca; Persis, Claudio De; Frasca, Paolo
2011-01-01
We consider continuous-time average consensus dynamics in which the agents' states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of "practical consensus". To cope with undesired chattering
Bayesian Averaging is Well-Temperated
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation...
A Functional Measurement Study on Averaging Numerosity
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Generalized Jackknife Estimators of Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic li...
Bootstrapping Density-Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...
Quantum Averaging of Squeezed States of Light
Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequen
High average-power induction linacs
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.
1989-03-15
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to >100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.
High Average Power Optical FEL Amplifiers
Ben-Zvi, I; Litvinenko, V
2005-01-01
Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Li...
Full averaging of fuzzy impulsive differential inclusions
Natalia V. Skripnik
2010-09-01
Full Text Available In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend the similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).
Materials for high average power lasers
Marion, J.E.; Pertica, A.J.
1989-01-01
Unique materials properties requirements for solid state high average power (HAP) lasers dictate a materials development research program. A review of the desirable laser, optical and thermo-mechanical properties for HAP lasers precedes an assessment of the development status for crystalline and glass hosts optimized for HAP lasers. 24 refs., 7 figs., 1 tab.
A dynamic analysis of moving average rules
C. Chiarella; X.Z. He; C.H. Hommes
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type use
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
Full Text Available We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of the night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can be observed also at very high latitudes, not only in the beginning and at the end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.
Rose, Caspar
2014-01-01
This entry summarizes the main theoretical contributions and empirical findings in relation to concentrated ownership from a law and economics perspective. The various forms of concentrated ownership are described as well as analyzed from the perspective of the legal protection of investors, especially minority shareholders. Concentrated ownership is associated with benefits and costs. Concentrated ownership may reduce agency costs by increased monitoring of top management. However, concentrated ownership may also provide dominating owners with private benefits of control.
Anonymous
2007-01-01
The growth of, and interspecies competition between, two red tide algal species, Thalassiosira pseudonana Hasle et Heimdal and Gymnodinium sp., were studied under different nitrogen-to-phosphorus concentration ratios in algal batch culture experiments. Physiological and biochemical indexes were measured periodically, including the maximum specific growth rate, relative growth rate, average doubling time and chlorophyll a concentration. The results showed that when the nitrogen-to-phosphorus ratio was 16:1, the maximum specific growth rate, relative growth rate and chlorophyll a concentration of Thalassiosira pseudonana all reached their highest values, and the average doubling time was the shortest; the optimal nitrogen-to-phosphorus ratio for Thalassiosira pseudonana is therefore 16:1. When the nitrogen-to-phosphorus ratio was 6:1, the maximum specific growth rate, relative growth rate and chlorophyll a concentration of Gymnodinium sp. reached their highest values and the average doubling time was the shortest, so the optimal nitrogen-to-phosphorus ratio for Gymnodinium sp. is 6:1. The growth curves, whether expressed as cell density or as chlorophyll a concentration, indicate that the influence of the nitrogen-to-phosphorus ratio on the chlorophyll a concentration and on the cell density is almost the same. Different nitrogen-to-phosphorus ratios had a weak influence on community succession and on the competition between the two algae. Gymnodinium sp. may use intracellular phosphorus for growth, so attention should be paid to this hidden phosphorus pool in order to avoid outbreaks of red tide. On the basis of the importance of nitrogen and phosphorus and of their concentration ratio, the possible red tide outbreak mechanism of the two algae is also discussed.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Kos, Bor; Valič, Blaž; Kotnik, Tadej; Gajšek, Peter
2012-10-07
Induction heating equipment is a source of strong and nonhomogeneous magnetic fields, which can exceed occupational reference levels. We investigated a case of an induction tempering tunnel furnace. Measurements of the emitted magnetic flux density (B) were performed during its operation and used to validate a numerical model of the furnace. This model was used to compute the values of B and the induced in situ electric field (E) for 15 different body positions relative to the source. For each body position, the computed B values were used to determine their maximum and average values, using six spatial averaging schemes (9-285 averaging points) and two averaging algorithms (arithmetic mean and quadratic mean). Maximum and average B values were compared to the ICNIRP reference level, and E values to the ICNIRP basic restriction. Our results show that in nonhomogeneous fields, the maximum B is an overly conservative predictor of overexposure, as it yields many false positives. The average B yielded fewer false positives, but as the number of averaging points increased, false negatives emerged. The most reliable averaging schemes were obtained for averaging over the torso with quadratic averaging, with no false negatives even for the maximum number of averaging points investigated.
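The arithmetic-mean and quadratic-mean averaging algorithms compared above are elementary; a minimal sketch (the function name and example values are ours):

    import numpy as np

    def spatial_averages(b_samples):
        """Arithmetic and quadratic (RMS) means of magnetic flux density
        over a set of body-averaging points."""
        b = np.asarray(b_samples, dtype=float)
        arithmetic = b.mean()
        quadratic = np.sqrt(np.mean(b ** 2))   # RMS; weights strong local peaks more
        return arithmetic, quadratic

    # nonhomogeneous exposure: one strong local peak among weak values (made-up numbers)
    b_points = [0.05, 0.06, 0.04, 0.90, 0.05]  # mT at 5 averaging points
    print(spatial_averages(b_points))          # the quadratic mean exceeds the arithmetic mean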
Radon and radon-daughter concentrations in air in the vicinity of the Anaconda Uranium Mill
Momeni, M H; Lindstrom, J B; Dungey, C E; Kisieleski, W E
1979-11-01
Radon concentration, working level, and meteorological variables were measured continuously from June 1977 through June 1978 at three stations in the vicinity of the Anaconda Uranium Mill, with measurements integrated to hourly intervals. Both radon and daughters show strong variations associated with low wind velocities and stable atmospheric conditions, and diurnal variations associated with thermal inversions. Average radon concentration shows seasonal dependence, with highest concentrations observed during fall and winter. Comparison of radon concentrations and working levels between three stations shows strong dependence on wind direction and velocity. Radon concentrations and working-level distributions for each month and each station were analyzed. The average maximum, minimum, and modal concentration and working levels were estimated with observed frequencies. The highest concentration is 11,000 pCi/m³ on the tailings. Working-level variations parallel radon variations but lag by less than one hour. The highest working levels were observed at night, when conditions of higher secular radioactive equilibrium for radon daughters exist. Background radon concentration was measured at two stations, each located about 25 km from the mill, and the average is 408 pCi/m³. Average working-level background is 3.6 × 10⁻³.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Maximum power analysis of photovoltaic module in Ramadi city
Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)
2013-07-01
Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output of a PV module and on energy yield. In this paper, the maximum PV power obtainable in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using the Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is very essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data are measured at the earth's surface in the campus area of Anbar University. Actual average data readings were taken from the data logger of the sun tracker system, which is set to save the average reading for each two minutes, based on readings taken every second. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the sizing of the PV system can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
Maximum power analysis of photovoltaic module in Ramadi city
Majid Shahatha Salim, Jassim Mohammed Najim, Salih Mohammed Salih
2013-01-01
Full Text Available Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output of a PV module and on energy yield. In this paper, the maximum PV power obtainable in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using the Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is very essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data are measured at the earth's surface in the campus area of Anbar University. Actual average data readings were taken from the data logger of the sun tracker system, which is set to save the average reading for each two minutes, based on readings taken every second. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the sizing of the PV system can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
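The averaging scheme described (two-minute averages built from one-second samples) is plain block averaging; a sketch with assumed array shapes and synthetic numbers:

    import numpy as np

    def block_average(samples_1s, block=120):
        """Average consecutive 1-second irradiance samples into block-length means
        (120 s = 2 min, as in the logger setup described above)."""
        n = len(samples_1s) // block * block          # drop a trailing partial block
        return np.asarray(samples_1s[:n], dtype=float).reshape(-1, block).mean(axis=1)

    # one hour of synthetic 1 Hz irradiance readings (W/m^2), for illustration only
    rng = np.random.default_rng(0)
    irradiance = 800 + 50 * rng.standard_normal(3600)
    two_minute_means = block_average(irradiance)      # 30 values
    daily_max = two_minute_means.max()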
Phytochelatin concentrations in the equatorial Pacific
Ahner, Beth A.; Lee, Jennifer G.; Price, Neil M.; Morel, François M. M.
1998-11-01
Phytochelatin, an intracellular metal-binding polypeptide synthesized in eucaryotic algae in response to metals such as Cd and Cu, was measured in particulate samples collected from the equatorial Pacific. The concentrations in these samples (normalized to total particulate chl a) were unexpectedly high compared to laboratory culture data and were on average slightly higher than in coastal areas where the metal concentrations are typically much greater. In part, the high field concentrations can be explained by the low cellular concentrations of chlorophyll a resulting from very low ambient Fe, but laboratory experiments provide a possible explanation for the rest of the difference. At low concentrations of inorganic Cd (Cd' = 3 pM), increasing amounts of phytochelatin were induced by decreasing Zn concentrations in the culture medium of two diatoms: Thalassiosira weissflogii, a coastal species, and T. parthenaia, an isolate from the equatorial Pacific. In all previous studies, phytochelatin production had been directly correlated with increasing metal concentrations. Decreasing Co also resulted in higher phytochelatin concentrations in T. weissflogii and Emiliania huxleyi. Replicating the field concentrations of Zn, Co, and Cd in the laboratory results in cellular concentrations (amol cell⁻¹) that are very similar to those estimated for the field. Contrary to the expectation that high metal concentrations in the equatorial upwelling would cause elevated phytochelatin concentrations, there was no increase in phytochelatin concentrations from 20°S to 10°N: near-surface samples were roughly the same at all stations. Also, most of the depth profiles had a distinct subsurface maximum. Neither of these features is readily explained by the available Zn and Cd data. Incubations with additions of Cd and Cu performed on water sampled at four separate stations induced significantly higher concentrations of phytochelatins than those in controls in a majority of the samples
Averaged Extended Tree Augmented Naive Classifier
Aaron Meehan
2015-07-01
Full Text Available This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN, which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN and Averaged One-Dependence Estimator (AODE classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Carmen BOGHEAN
2013-12-01
Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average labour productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average labour productivity by the factors affecting it is conducted by means of the u-substitution method.
Time-average dynamic speckle interferometry
Vladimirov, A. P.
2014-05-01
For the study of microscopic processes occurring at the structural level in solids and in thin biological objects, the method of dynamic speckle interferometry has been successfully applied. However, the method has disadvantages. The purpose of this report is to acquaint colleagues with the method of time averaging in dynamic speckle interferometry of microscopic processes, which eliminates these shortcomings. The main idea of the method is to choose an averaging time that exceeds the characteristic correlation (relaxation) time of the most rapid process. The theory of the method for a thin phase object and for a reflecting object is given. Results are presented from an experiment on the high-cycle fatigue of steel and from an experiment estimating the biological activity of a monolayer of cells cultivated on a transparent substrate. It is shown that the method allows one to visualize in real time the accumulation of fatigue damage and to reliably estimate the activity of cells with and without viruses.
Effects of bruxism on the maximum bite force
Todić Jelena T.
2017-01-01
Full Text Available Background/Aim. Bruxism is a parafunctional activity of the masticatory system, characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism was registered using a specific clinical questionnaire on bruxism and a physical examination. The subjects from both groups underwent measurement of maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying the maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism than in those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males than in the females in all segments of the research. Conclusion. The presence of bruxism increases the maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
Climate Sensitivity of the Last Glacial Maximum from Paleoclimate Simulations and Observations
Otto-Bliesner, B. L.; Brady, E.; Kothavala, Z.
2004-12-01
Global coupled climate models run for future scenarios of increasing atmospheric CO2 give a range of response of the global average surface temperature. Regional responses, including the North Atlantic overturning circulation and tropical Pacific ENSO, also vary significantly among models. The second phase of the Paleoclimate Modeling Intercomparison Project (PMIP 2) is coordinating simulations and data syntheses for the Last Glacial Maximum (21,000 years before present) to allow another assessment of climate sensitivity. Atmospheric CO2 concentrations at the Last Glacial Maximum (LGM) have been estimated using measurements from ice cores to be 185 ppmv, approximately 50% of present-day values. Global, annual mean surface temperature simulated by the slab ocean version of the National Center for Atmospheric Research (NCAR) Community Climate System Model (CCSM3) shows a cooling of -2.8°C for LGM CO2 levels and a warming of 2.5°C for a doubling of CO2. Slab and coupled CCSM3 simulations that include the reductions of the other atmospheric trace gases and the large ice sheets covering North America and Eurasia at LGM give cooling in agreement with proxy inferences and indicate that LGM CO2 explains about half of the global cooling at LGM. Regional signatures of the climate system to changed LGM forcing are also an important measure of climate sensitivity and results from the fully coupled version of CCSM3 will be shown.
Average Annual Rainfall over the Globe
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Endogenous average cost based access pricing
Fjell, Kenneth; Foros, Øystein; Pal, Debashis
2006-01-01
We consider an industry where a downstream competitor requires access to an upstream facility controlled by a vertically integrated and regulated incumbent. The literature on access pricing assumes the access price to be exogenously fixed ex-ante. We analyze an endogenous average cost based access pricing rule, where both firms realize the interdependence among their quantities and the regulated access price. Endogenous access pricing neutralizes the artificial cost advantag...
The Ghirlanda-Guerra identities without averaging
Chatterjee, Sourav
2009-01-01
The Ghirlanda-Guerra identities are one of the most mysterious features of spin glasses. We prove the GG identities in a large class of models that includes the Edwards-Anderson model, the random field Ising model, and the Sherrington-Kirkpatrick model in the presence of a random external field. Previously, the GG identities were rigorously proved only 'on average' over a range of temperatures or under small perturbations.
Average Annual Rainfall over the Globe
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Average Light Intensity Inside a Photobioreactor
Herby Jean
2011-01-01
Full Text Available For energy production, microalgae are one of the few alternatives with high potential. Similar to plants, algae require energy acquired from light sources to grow. This project uses calculus to determine the light intensity inside a photobioreactor filled with algae. Under preset conditions, along with estimated values, we applied the Lambert-Beer law to formulate an equation to calculate how much light intensity escapes the photobioreactor and to determine the average light intensity present inside the reactor.
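For a slab reactor of depth L with attenuation coefficient k, the calculation described reduces to averaging the Lambert-Beer profile over depth (our notation, not necessarily the project's exact setup):

    \bar{I} \;=\; \frac{1}{L}\int_0^L I_0\,e^{-kx}\,dx \;=\; \frac{I_0}{kL}\left(1 - e^{-kL}\right),

with I(L) = I_0 e^{-kL} the intensity escaping the far side of the reactor.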
Geomagnetic effects on the average surface temperature
Ballatore, P.
Several results have previously shown that solar activity can be related to cloudiness and the surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database) that represent the averaged surface temperature with a spatial resolution of 0.5°x0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.
Unscrambling The "Average User" Of Habbo Hotel
Mikael Johnson
2007-01-01
Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer's reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding of categorization practices in design through a case study of the virtual community Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer in disregarding marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers' role in governing different users' interests.
On Backus average for generally anisotropic layers
Bos, Len; Slawinski, Michael A; Stanoev, Theodore
2016-01-01
In this paper, following the Backus (1962) approach, we examine expressions for the elasticity parameters of a homogeneous generally anisotropic medium that is long-wave-equivalent to a stack of thin generally anisotropic layers. These expressions reduce to the results of Backus (1962) for the case of isotropic and transversely isotropic layers. In the over half a century since the publications of Backus (1962) there have been numerous publications applying and extending that formulation. However, neither George Backus nor the authors of the present paper are aware of further examinations of the mathematical underpinnings of the original formulation; hence, this paper. We prove that, within the long-wave approximation, if the thin layers obey stability conditions then so does the equivalent medium. We examine, within the Backus-average context, the approximation of the average of a product as the product of averages, and express it as a proposition in terms of an upper bound. In the presented examination we use the e...
A simple algorithm for averaging spike trains.
Julienne, Hannah; Houghton, Conor
2013-02-25
Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested on a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
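The three-step pipeline described (map trains to functions, average, greedily map back) can be sketched as follows; the Gaussian kernel, its width, and the greedy placement criterion are our assumptions, not necessarily the authors' choices:

    import numpy as np

    def to_function(spikes, t, tau=0.01):
        """Map a spike train to a function: a sum of Gaussian bumps at spike times."""
        return sum(np.exp(-0.5 * ((t - s) / tau) ** 2) for s in spikes)

    def central_spike_train(trials, t, tau=0.01):
        """Average the trial functions, then greedily place spikes to match the average."""
        target = np.mean([to_function(sp, t, tau) for sp in trials], axis=0)
        n_spikes = int(round(np.mean([len(sp) for sp in trials])))
        approx, train = np.zeros_like(target), []
        for _ in range(n_spikes):
            s = t[np.argmax(target - approx)]   # place the next spike at the largest residual
            train.append(s)
            approx += np.exp(-0.5 * ((t - s) / tau) ** 2)
        return sorted(train)

    t = np.linspace(0.0, 1.0, 2000)
    trials = [[0.101, 0.52, 0.80], [0.095, 0.50, 0.83], [0.11, 0.55, 0.79]]
    print(central_spike_train(trials, t))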
Changing mortality and average cohort life expectancy
Robert Schoen
2005-10-01
Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
Disk-averaged synthetic spectra of Mars
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
Spatial averaging infiltration model for layered soil
HU HePing; YANG ZhiYong; TIAN FuQiang
2009-01-01
To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.
Disk-averaged synthetic spectra of Mars
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2004-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk averaged synthetic spectra, light-cur...
Deng, Yiwen; Chen, Chao; Li, Qiong; Hu, Qinqiang; Yuan, Haoting; Li, Junmei; Li, Yan
2015-12-01
Urban tunnels located in city center areas can alleviate traffic pressure and provide more convenient travel. Vehicles emit pollutants that are significant contributors to air pollution inside and at the outlet of tunnels. Ventilation is the most widely used method to dilute pollutants in tunnels. To calculate the required design air volume flow accurately, vehicle emissions must be determined precisely. Emission factors are important parameters for estimating vehicle emissions. To characterize carbon monoxide (CO) and nitrogen oxides (NOX) emission factors for a mixed vehicle fleet under real-world driving conditions in urban China, we measured CO and NOX concentrations in the Shanghai East Yan'an Road tunnel and the Changsha Yingpan Road tunnel in 2012 and 2013. In-use fleet average CO and NOX emission factors were calculated according to tunnel pollutant mass balance models. The results showed that the maximum CO concentration in the Shanghai East Yan'an Road tunnel was 86 ppm in August and 45 ppm in October. The maximum concentrations of CO and NOX in the Changsha Yingpan Road tunnel were 33 ppm and 2 ppm, respectively. In-use fleet average CO emission factors of the East Yan'an Road tunnel, with gradients of -3% ∼ 3%, were 1.266 (±0.889) ∼ 3.974 (±2.189) g km-1 vehicle-1. In-use fleet average CO and NOX emission factors of the Yingpan Road tunnel, with gradients of -6% ∼ 6%, amounted to 0.754 (±0.561) ∼ 6.050 (±5.940) g km-1 vehicle-1 and 0.121 (±0.022) ∼ 0.818 (±0.755) g km-1 vehicle-1, respectively. The average CO and NOX emission factors increased with roadway gradient and decreased with vehicle speed. These findings provide a meaningful reference for the ventilation design and environmental assessment of urban tunnels, and supply basic data for formulating relevant standards and norms.
Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning of RBMs
Desjardins, Guillaume; Bengio, Yoshua
2010-01-01
Restricted Boltzmann Machines (RBM) have attracted a lot of attention of late, as one of the principal building blocks of deep networks. Training RBMs remains problematic however, because of the intractability of their partition function. The maximum likelihood gradient requires a very robust sampler which can accurately sample from the model despite the loss of ergodicity often incurred during learning. While using Parallel Tempering in the negative phase of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a trade-off between computational complexity and high ergodicity, and requires careful hand-tuning of the temperatures. In this paper, we show that this trade-off is unnecessary. The choice of optimal temperatures can be automated by minimizing average return time (a concept first proposed by [Katzgraber et al., 2006]) while chains can be spawned dynamically, as needed, thus minimizing the computational overhead. We show on a synthetic dataset, that this results in better likelihood ...
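The replica-swap move at the core of SML-PT is compact enough to sketch. The example below uses a toy double-well energy instead of an RBM and a hand-set temperature ladder; the paper's contribution, choosing that ladder automatically by minimizing average return time, is not implemented here:

```python
import numpy as np

rng = np.random.default_rng(0)
energy = lambda x: 5.0 * (x ** 2 - 1.0) ** 2   # toy double-well energy
betas = np.linspace(0.1, 1.0, 8)               # hand-set inverse temperatures
x = rng.normal(size=betas.size)                # one state per tempered chain

for step in range(5000):
    # Metropolis update within each tempered chain
    prop = x + rng.normal(scale=0.5, size=x.size)
    log_a = np.minimum(0.0, -betas * (energy(prop) - energy(x)))
    x = np.where(np.log(rng.random(x.size)) < log_a, prop, x)
    # attempt swaps between neighbouring temperatures
    for i in range(betas.size - 1):
        swap = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
        if np.log(rng.random()) < swap:
            x[i], x[i + 1] = x[i + 1], x[i]

print("target-temperature sample (near a mode at +/-1):", x[-1])
```

The hot chains cross the barrier freely and feed well-mixed states down to the target temperature via the swaps; the spacing of `betas` is exactly what the adaptive scheme tunes.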
Concentration of Swiss Elite Orienteers.
Seiler, Roland; Wetzel, Jorg
1997-01-01
A visual discrimination task was used to measure concentration among 43 members of Swiss national orienteering teams. Subjects were above average in the number of target objects dealt with and in duration of continuous concentration. For females only, ranking in orienteering performance was related to quality of concentration (ratio of correct to…
Analytic continuation average spectrum method for transport in quantum liquids
Kletenik-Edelman, Orly [School of Chemistry, Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978 (Israel); Rabani, Eran, E-mail: rabani@tau.ac.il [School of Chemistry, Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978 (Israel); Reichman, David R. [Department of Chemistry, Columbia University, 3000 Broadway, New York, NY 10027 (United States)
2010-05-12
Recently, we have applied the analytic continuation averaged spectrum method (ASM) to calculate collective density fluctuations in quantum liquids. Unlike the maximum entropy (MaxEnt) method, the ASM approach is capable of revealing resolved modes in the dynamic structure factor in agreement with experiments. In this work we further develop the ASM to study single-particle dynamics in quantum liquids with dynamical susceptibilities that are characterized by a smooth spectrum. Surprisingly, we find that for the power spectrum of the velocity autocorrelation function there are pronounced differences in comparison with the MaxEnt approach, even for this simple case of smooth unimodal dynamic response. We show that for liquid para-hydrogen the ASM is closer to the centroid molecular dynamics (CMD) result while for normal liquid helium it agrees better with the quantum mode coupling theory (QMCT) and with the MaxEnt approach.
Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis
LiMin Wang
2014-01-01
Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes will bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of seminaive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
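A maximum weighted spanning tree, as used in the superparent-selection step, can be obtained from any minimum-spanning-tree routine by negating the weights. The weight matrix below is a made-up stand-in for whatever attribute-dependency score the method actually uses:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical symmetric dependency scores between four attributes
# (higher = stronger dependency); purely illustrative numbers.
W = np.array([[0.0, 0.5, 0.1, 0.3],
              [0.5, 0.0, 0.4, 0.2],
              [0.1, 0.4, 0.0, 0.6],
              [0.3, 0.2, 0.6, 0.0]])

# The maximum weighted spanning tree of W is the minimum spanning
# tree of -W (zeros on the diagonal mean "no self edge").
mst = minimum_spanning_tree(-W)
edges = np.transpose(np.nonzero(mst.toarray()))
print("retained dependency edges:", edges.tolist())  # [[0,1],[1,2],[2,3]]
```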
The maximum intelligible range of the human voice
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
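The core arithmetic behind such estimates can be reproduced with a simple spherical-spreading model. The background noise level, the 0 dB SNR criterion (a crude stand-in for the STI threshold used in the dissertation), and the crowd density are all assumed values for illustration:

```python
import math

spl_1m = 90.0      # speaker level at 1 m in dBA (upper end measured in the study)
noise = 45.0       # assumed quiet-crowd background level, dBA
snr_needed = 0.0   # assumed minimum SNR for intelligibility, dB

# Spherical spreading: SPL(r) = SPL(1 m) - 20*log10(r)
r_max = 10 ** ((spl_1m - noise - snr_needed) / 20.0)   # metres
area = 0.5 * math.pi * r_max ** 2                      # semicircle facing the speaker
crowd = area * 1.0                                     # ~1 person per square metre
print(f"max radius {r_max:.0f} m, crowd ~ {crowd:,.0f}")
```

With these assumptions the estimate lands near the dissertation's ideal-conditions figure of roughly 50,000 listeners; a louder crowd or lower source level shrinks it quickly.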
Matić Vesna
2016-01-01
Concentration risk has been gaining a special dimension in the contemporary financial and economic environment. Financial institutions are exposed to this risk mainly in the field of lending, mostly through their credit activities and the concentration of credit portfolios. This refers to the concentration of different exposures within a single risk category (credit risk, market risk, operational risk, liquidity risk).
Mapping the MPM maximum flow algorithm on GPUs
Solomon, Steven; Thulasiraman, Parimala
2010-11-01
The GPU offers a high degree of parallelism and computational power that developers can exploit for general purpose parallel applications. As a result, a significant level of interest has been directed towards GPUs in recent years. Regular applications, however, have traditionally been the focus of work on the GPU. Only very recently has there been a growing number of works exploring the potential of irregular applications on the GPU. We present work investigating the feasibility of Malhotra, Pramodh Kumar and Maheshwari's "MPM" maximum flow algorithm on the GPU, which achieves an average speedup of 8 when compared to a sequential CPU implementation.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.)
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrices of spin and radiation systems, as well as to the determination of several parameters of interest in quantum optics.
De Luca, G.; Magnus, J.R.
2011-01-01
This article is concerned with the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian Model Averaging (BMA) estimator and the Weighted Average Least Squares (WALS) estimator.
Entanglement in random pure states: spectral density and average von Neumann entropy
Kumar, Santosh; Pandey, Akhilesh, E-mail: skumar.physics@gmail.com, E-mail: ap0700@mail.jnu.ac.in [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110 067 (India)
2011-11-04
Quantum entanglement plays a crucial role in quantum information, quantum teleportation and quantum computation. The information about the entanglement content between subsystems of the composite system is encoded in the Schmidt eigenvalues. We derive here closed expressions for the spectral density of Schmidt eigenvalues for all three invariant classes of random matrix ensembles. We also obtain exact results for average von Neumann entropy. We find that maximum average entanglement is achieved if the system belongs to the symplectic invariant class. (paper)
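The unitary-class result is straightforward to probe numerically: sample Gaussian random pure states, obtain Schmidt eigenvalues by SVD, and compare the mean von Neumann entropy with Page's asymptotic average. This is a sanity check under stated assumptions, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 16          # subsystem dimensions, m <= n
samples = 2000

entropies = []
for _ in range(samples):
    # Complex Gaussian matrix, normalized: a random pure state on an
    # m x n bipartite system (unitarily invariant class).
    G = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
    psi = G / np.linalg.norm(G)
    lam = np.linalg.svd(psi, compute_uv=False) ** 2   # Schmidt eigenvalues
    entropies.append(-np.sum(lam * np.log(lam)))

# Page's asymptotic average for the unitary class: ln(m) - m/(2n)
print(np.mean(entropies), np.log(m) - m / (2 * n))
```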
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the maximum-entropy requirement together with the characteristics of the system and the constraint conditions. It can be applied to the statistical description of closed and open systems. Examples are considered in which MENT is used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium.
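With a single moment constraint, the technique reduces to solving for one Lagrange multiplier. A minimal sketch for the classic loaded-die example (the states and the target mean are illustrative choices):

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)   # states of a die
mu = 4.5              # observed mean constraint

def mean_given_lam(lam):
    # Maximum-entropy form under a mean constraint: p_i proportional to exp(-lam*x_i)
    w = np.exp(-lam * x)
    return (w / w.sum()) @ x

# Solve for the multiplier that matches the constraint; the resulting p
# maximizes entropy subject to sum(p) = 1 and <x> = mu.
lam = brentq(lambda l: mean_given_lam(l) - mu, -5.0, 5.0)
w = np.exp(-lam * x)
print(w / w.sum())    # probabilities tilted toward the high faces
```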
19 CFR 114.23 - Maximum period.
2010-04-01
19 CFR 114.23 (Customs Duties; CARNETS, Processing of Carnets), Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, the structures of which depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
A sixth order averaged vector field method
Li, Haochen; Wang, Yushun; Qin, Mengzhao
2014-01-01
In this paper, based on the theory of rooted trees and B-series, we propose concrete formulas of the substitution law for trees of order ≤ 5. With the help of the new substitution law, we derive a B-series integrator extending the averaged vector field (AVF) method to high order. The new integrator turns out to be of order six and exactly preserves energy for Hamiltonian systems. Numerical experiments are presented to demonstrate the accuracy and the energy-preserving property of the s...
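For orientation, the base second-order AVF integrator that the paper extends replaces the vector field by its average along the chord from y_n to y_{n+1}. A minimal sketch for the pendulum follows; the quadrature order and fixed-point iteration count are illustrative choices, and energy is preserved only up to quadrature and iteration error for non-polynomial Hamiltonians:

```python
import numpy as np

def avf_step(f, y, h, quad_nodes=4, iters=30):
    """One step of the second-order AVF method:
       y1 = y + h * int_0^1 f((1-s)*y + s*y1) ds,
       with Gauss-Legendre quadrature and fixed-point iteration."""
    s, w = np.polynomial.legendre.leggauss(quad_nodes)
    s, w = 0.5 * (s + 1.0), 0.5 * w          # map nodes/weights to [0, 1]
    y1 = y.copy()
    for _ in range(iters):
        avg = sum(wi * f((1 - si) * y + si * y1) for si, wi in zip(s, w))
        y1 = y + h * avg
    return y1

# Pendulum: H(q, p) = p^2/2 - cos(q); vector field f = (p, -sin q)
f = lambda y: np.array([y[1], -np.sin(y[0])])
H = lambda y: 0.5 * y[1] ** 2 - np.cos(y[0])

y = np.array([2.0, 0.0])
e0 = H(y)
for _ in range(1000):
    y = avf_step(f, y, 0.1)
print("energy drift:", H(y) - e0)   # stays tiny over the whole run
```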
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Sparsity averaging for radio-interferometric imaging
Carrillo, Rafael E; Wiaux, Yves
2014-01-01
We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.
Fluctuations of wavefunctions about their classical average
Benet, L [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Flores, J [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Hernandez-Saldana, H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Izrailev, F M [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Leyvraz, F [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Seligman, T H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico)
2003-02-07
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.
The average free volume model for liquids
Yu, Yang
2014-01-01
In this work, the molar volume thermal expansion coefficient of 59 room temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with attractive forces, is proposed to explain the correlation in this study. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.
Grassmann Averages for Scalable Robust PCA
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium sized datasets. To address this, we introduce the Grassmann Average (GA), whic...
Least cost cusp concentrator design
Nanda, S.K.; Mullick, S.C.; Annamalai, M.; Iyer, M.V.; Nirmala, K.A.; Venkatesh, P.; Prasad, C.R.; Subramani, C.
1982-01-01
Cusp concentrators require larger reflector areas, but can be designed for larger acceptance angles, allowing large mirror tolerances. Design procedures are outlined to compute the optimum combination of acceptance angle and maximum mirror slope for any required concentration ratio, taking into account the material as well as fabrication costs. The cusps are compared with the parabola.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. The maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with the osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm were definitely female; for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
Oka, Yuka; Hirayama, Izumi; Yoshikawa, Mitsuhide; Yokoyama, Tomoko; Iida, Kenji; Iwakoshi, Katsushi; Suzuki, Ayana; Yanagihara, Midori; Segawa, Yukino; Kukimoto, Sonomi; Hamada, Humika; Matsuzawa, Satomi; Tabata, Setsuko; Sasamoto, Takeo
2017-01-01
A survey of nitrate-ion concentrations in plant-factory-cultured leafy vegetables was conducted. 344 samples of twenty-one varieties of raw leafy vegetables were examined using HPLC. The nitrate-ion concentrations in plant-factory-cultured leafy vegetables ranged from below the limit of detection (LOD) to 6,800 mg/kg. Furthermore, the average concentration values varied among different leafy vegetables. The average values for plant-factory-cultured leafy vegetables were higher than those of open-cultured leafy vegetables reported in previous studies, such as the values listed in the Standard Tables of Food Composition in Japan -2015- (Seventh revised edition). For some plant-factory-cultured leafy vegetables, such as salad spinach, the average values were above the maximum permissible levels of nitrate concentration in EC No 1258/2011; however, even when these plant-factory-cultured vegetables were routinely eaten, the human intake of nitrate ions did not exceed the ADI.
Synoptic and meteorological drivers of extreme ozone concentrations over Europe
Otero, Noelia Felipe; Sillmann, Jana; Schnell, Jordan L.; Rust, Henning W.; Butler, Tim
2016-04-01
The present work assesses the relationship between local and synoptic meteorological conditions and surface ozone concentration over Europe in spring and summer months, during the period 1998-2012, using a new interpolated data set of observed surface ozone concentrations over the European domain. Along with local meteorological conditions, the influence of large-scale atmospheric circulation on surface ozone is addressed through a set of airflow indices computed with a novel implementation of a grid-by-grid weather type classification across Europe. Drivers of surface ozone over the full distribution of maximum daily 8-hour average values are investigated, along with drivers of the extreme high percentiles and exceedances of air quality guideline thresholds. Three different regression techniques are applied: multiple linear regression to assess the drivers of maximum daily ozone, logistic regression to assess the probability of threshold exceedances and quantile regression to estimate the meteorological influence on extreme values, as represented by the 95th percentile. The relative importance of the input parameters (predictors) is assessed by a backward stepwise regression procedure that allows the identification of the most important predictors in each model. Spatial patterns of model performance exhibit distinct variations between regions. The inclusion of ozone persistence is particularly relevant over Southern Europe. In general, the best model performance is found over Central Europe, where the maximum temperature plays an important role as a driver of maximum daily ozone as well as its extreme values, especially during warmer months.
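The contrast between mean drivers and extreme-value drivers can be illustrated with off-the-shelf regression tools; the temperature-ozone relationship and noise model below are synthetic stand-ins, not the study's data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Synthetic stand-ins for daily maximum temperature and MDA8 ozone
tmax = rng.uniform(10, 35, 500)
ozone = 20 + 2.0 * tmax + rng.gamma(2.0, 5.0, 500)   # right-skewed errors

X = sm.add_constant(tmax)
ols = sm.OLS(ozone, X).fit()               # driver of the conditional mean
q95 = sm.QuantReg(ozone, X).fit(q=0.95)    # driver of the extreme values
print(ols.params, q95.params)              # intercepts differ; slopes comparable
```

The same pattern generalizes to the study's design: a logistic model on threshold exceedance indicators would complete the trio of regressions.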
Detrending moving average algorithm for multifractals
Gu, Gao-Feng; Zhou, Wei-Xing
2010-07-01
The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, which contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. It is found that the backward MFDMA algorithm also outperforms multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to analyzing the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
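The monofractal core of the method (profile, moving average, residual, fluctuation function) fits in a short routine. The window alignment follows the θ convention described above; the white-noise test, with expected scaling exponent near 0.5, is only a sanity check:

```python
import numpy as np

def dma_fluctuation(x, n, theta=0.0):
    """Fluctuation function F(n) of the detrending moving average algorithm;
    theta=0 is the backward window, 0.5 centred, 1 forward."""
    y = np.cumsum(x - np.mean(x))                   # profile of the series
    ma = np.convolve(y, np.ones(n) / n, mode="full")
    shift = int(round((n - 1) * theta))             # align the window per theta
    ma = ma[shift: shift + len(y)]
    eps = (y - ma)[n - 1: len(y) - n + 1]           # drop points without full windows
    return np.sqrt(np.mean(eps ** 2))

rng = np.random.default_rng(2)
x = rng.normal(size=2 ** 14)                        # white noise
ns = np.unique(np.logspace(1, 3, 12).astype(int))
F = [dma_fluctuation(x, n) for n in ns]
H = np.polyfit(np.log(ns), np.log(F), 1)[0]         # slope of log F vs log n
print("estimated scaling exponent:", round(H, 2))   # expect about 0.5
```

The multifractal generalization replaces the root-mean-square step with q-th order moments of the residuals over non-overlapping segments.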
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu⋯u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
Local average height distribution of fluctuating interfaces
Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.
2017-01-01
Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.
Asymptotic Time Averages and Frequency Distributions
Muhammad El-Taha
2016-01-01
Consider an arbitrary nonnegative deterministic process {X(t), t ≥ 0} (in a stochastic setting, a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
Borak, T.B.; Baynes, S.A. [Colorado State Univ., Ft. Collins, CO (United States). Dept. of Radiological Health Sciences
1999-04-01
Measurements were made of (222)Rn concentrations outdoors in Ft. Collins, Colorado, using a continuously sampling scintillation flask between January 1993 and December 1995. These data were analyzed for hourly, daily, and seasonal variations. The average (222)Rn concentration at 1 m above the ground was 18 ± 10 Bq m(-3) with a geometric mean of 15 Bq m(-3) and a geometric standard deviation of 1.7. Hourly averaged data indicated a diurnal pattern with the outdoor (222)Rn concentration reaching a maximum in the early morning between 4:00 a.m. and 6:00 a.m. and a broad minimum between 1:00 p.m. and 4:00 p.m. in the afternoon. An analysis also indicated that the outdoor (222)Rn concentrations were consistently lowest during the spring (March and April) and highest during the late summer (July-September).
STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS
Anonymous
2001-01-01
The maximum specific sludge activity of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors was investigated by batch tests. The limiting factors related to maximum specific sludge activity (diffusion, substrate type, substrate concentration and granule size) were studied. A general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacities of the IC and Biobed anaerobic reactors were analyzed and compared using the batch test results.
Averaging processes in granular flows driven by gravity
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
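The statistical point is easy to demonstrate: when concentration fluctuates across realizations and is correlated with velocity, the phasic (plain ensemble) average and the mass-weighted average of the same data differ. All numbers below are synthetic illustrations:

```python
import numpy as np

rng = np.random.default_rng(3)
realizations = 10000
# Per-realization solid concentration c and grain velocity u in a control
# volume holding only a handful of grains, so c fluctuates strongly.
c = rng.uniform(0.05, 0.45, realizations)
u = 2.0 + 5.0 * c + rng.normal(0, 0.1, realizations)  # u correlated with c

phasic = np.mean(u)                          # plain ensemble average
mass_weighted = np.mean(c * u) / np.mean(c)  # Favre-type mass-weighted average
print(phasic, mass_weighted)                 # differ because c and u correlate
```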
Preparation of high viscosity average molecular mass poly-L-lactide
ZHOU Zhi-hua; RUAN Jian-ming; ZOU Jian-peng; ZHOU Zhong-cheng; SHEN Xiong-jun
2006-01-01
Poly-L-lactide (PLLA) was synthesized by ring-opening polymerization from high purity L-lactide with tin octoate as initiator, and characterized by means of infrared and 1H-nuclear magnetic resonance spectroscopy. The influences of initiator concentration, polymerization temperature and polymerization time on the viscosity average molecular mass of PLLA were investigated. The effects of different purification methods on the concentration of initiator and the viscosity average molecular mass were also studied. PLLA with a viscosity average molecular mass of about 50.5×10^4 was obtained when polymerization was conducted for 24 h at 140 ℃ with a molar ratio of monomer to initiator of 12 000. After purification, the concentration of tin octoate decreases; however, the effect of different purification methods on the viscosity average molecular mass of PLLA differs, and the obtained PLLA is a typical amorphous polymeric material. The crystallinity of PLLA decreases with increasing viscosity average molecular mass.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
Averaged Null Energy Condition from Causality
Hartman, Thomas; Tajdini, Amirhossein
2016-01-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, $\\int du T_{uu}$, must be positive. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to $n$-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form $\\int du X_{uuu\\cdots u} \\geq 0$. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment ...
Average Gait Differential Image Based Human Recognition
Jinyan Chen
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
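The feature image itself is an accumulation of adjacent-frame silhouette differences; a minimal sketch follows (the paper's exact accumulation rule and normalization may differ):

```python
import numpy as np

def agdi(silhouettes):
    """Average gait differential image: mean of absolute differences
    between adjacent binary silhouette frames (T x H x W array)."""
    s = np.asarray(silhouettes, dtype=float)
    return np.mean(np.abs(np.diff(s, axis=0)), axis=0)

# Stand-in gait cycle: 30 random binary 64x44 silhouettes
frames = np.random.default_rng(4).integers(0, 2, size=(30, 64, 44))
feature = agdi(frames)     # would next be fed to 2DPCA for feature extraction
print(feature.shape)       # (64, 44)
```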
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros G; Wainwright, Martin J
2007-01-01
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log ...
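For contrast, the standard pairwise gossip that this scheme improves upon is a two-line update. On a ring its convergence is slow, which is precisely the inefficiency geographic gossip targets; the topology and iteration count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
x = rng.normal(size=n)                        # initial sensor measurements
true_mean = x.mean()
ring = [(i, (i + 1) % n) for i in range(n)]   # ring communication graph

for _ in range(100_000):
    i, j = ring[rng.integers(len(ring))]      # a random edge wakes up
    x[i] = x[j] = 0.5 * (x[i] + x[j])         # both nodes keep the pairwise average

print(np.max(np.abs(x - true_mean)))          # shrinks, but slowly on a ring
```

Each update conserves the sum, so the network can only converge to the true average; the slow mixing of the random walk on the ring is what makes the error decay sluggish.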
Bivariate phase-rectified signal averaging
Schumann, Aicko Y; Bauer, Axel; Schmidt, Georg
2008-01-01
Phase-Rectified Signal Averaging (PRSA) was shown to be a powerful tool for the study of quasi-periodic oscillations and nonlinear effects in non-stationary signals. Here we present a bivariate PRSA technique for the study of the inter-relationship between two simultaneous data recordings. Its performance is compared with traditional cross-correlation analysis, which, however, does not work well for non-stationary data and cannot distinguish the coupling directions in complex nonlinear situations. We show that bivariate PRSA allows the analysis of events in one signal at times where the other signal is in a certain phase or state; it is stable in the presence of noise and unaffected by non-stationarities.
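The bivariate idea can be sketched directly: anchor points are detected in one signal and windows of the other signal are averaged around them. The increase-event anchor rule is the simplest standard choice, and the coupled test signals are synthetic:

```python
import numpy as np

def bprsa(trigger, target, L=50):
    """Bivariate PRSA: anchors are defined on `trigger` (here, samples where
    it increases) and windows of `target` are averaged around those anchors."""
    anchors = np.where(trigger[1:] > trigger[:-1])[0] + 1
    anchors = anchors[(anchors >= L) & (anchors < len(target) - L)]
    return np.stack([target[a - L: a + L] for a in anchors]).mean(axis=0)

rng = np.random.default_rng(6)
t = np.arange(20000)
osc = np.sin(0.05 * t)
x = osc + rng.normal(0, 1, t.size)                 # signal 1, noisy
y = np.roll(osc, 10) + rng.normal(0, 1, t.size)    # signal 2, coupled with a lag
print(bprsa(x, y)[45:55])  # the shared oscillation survives the averaging
```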
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC used in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
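The stated bound converts to a maximum moment magnitude through the standard moment-magnitude relation; the rigidity modulus below is a typical crustal value and should be treated as an assumption:

```python
import math

def max_magnitude(injected_volume_m3, rigidity_pa=3e10):
    """Upper-bound moment magnitude from the relation M0_max = G * dV,
    converted to magnitude with Mw = (2/3) * (log10(M0) - 9.1)."""
    m0 = rigidity_pa * injected_volume_m3   # seismic moment in N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# e.g. 100,000 cubic metres of injected wastewater
print(round(max_magnitude(1e5), 2))   # about Mw 4.3 under these assumptions
```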
Messier, Kyle P; Campbell, Ted; Bradley, Philip J; Serre, Marc L
2015-08-18
Radon ((222)Rn) is a naturally occurring, chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium ((238)U), which is ubiquitous in rocks and soils worldwide. Exposure to (222)Rn via inhalation is likely the second leading cause of lung cancer after cigarette smoking; however, exposure through untreated groundwater contributes to both inhalation and ingestion routes. A land use regression (LUR) model for groundwater (222)Rn with anisotropic geological and (238)U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated (222)Rn across North Carolina. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater (222)Rn across North Carolina, including prediction uncertainty. The LUR-BME model of groundwater (222)Rn results in a leave-one-out cross-validation r(2) of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled (222)Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock (238)U content, defined on the basis of overlying stream-sediment (238)U concentrations, which constitute widely distributed and consistently analyzed point-source data.
Maximum embryo absorbed dose from intravenous urography: interhospital variations
Damilakis, J.; Perisinakis, K. [University of Crete (Greece). Dept. of Medical Physics; Koukourakis, M. [University of Crete (Greece). Dept. of Radiology; Gourtsoyiannis, N. [University Hospital of Iraklion, Crete (Greece). Dept. of Radiotherapy
1997-12-01
The purpose of this study was to determine the maximum embryo dose during intravenous urography (IVU) examinations, when inadvertent irradiation of a pregnant woman occurs, and to investigate the variation of doses received at different institutions. Doses at average embryo depth from IVU examinations were measured in four institutions using a Rando phantom and thermoluminescent crystals. In order to estimate the maximum range of embryo doses, radiologists were asked to carry out the examinations with the same technique as in female patients with acute ureteral obstruction. The range of doses estimated at embryo depth for the institutions participating in this study was 5.77 to 35.2 mGy. The considerable interhospital variation found in dose can be explained by the different equipment and techniques used. A simple method of estimating embryo dose from pelvic radiographs reported previously was found to be applicable to IVU examinations as well. The absorbed dose at 6 cm, the average embryo depth, was found to be significantly less than 50 mGy. (Author)
30 CFR 7.87 - Test to determine the maximum fuel-air ratio.
2010-07-01
30 Mineral Resources; Use in Underground Coal Mines; § 7.87 Test to determine the maximum fuel-air ratio. (a) Test procedure... several speed/torque conditions to determine the concentrations of CO and NOX, dry basis, in the...
Generation and applications of high average power Mid-IR supercontinuum in chalcogenide fibres
Petersen, Christian Rosenberg
2016-01-01
Mid-infrared supercontinuum with up to 54.8 mW average power and a maximum bandwidth of 1.77-8.66 μm is demonstrated by pumping tapered chalcogenide photonic crystal fibers with a MHz parametric source at 4 μm...
2013-08-05
From the Federal Register Online via the Government Publishing Office: DEPARTMENT OF AGRICULTURE, Food and Nutrition Service. National School Lunch, Special Milk, and School Breakfast Programs, National Average Payments/Maximum Reimbursement Rates; Correction. In notice document 2013-17990, appearing on...
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations, because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic degrades the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial approximation algorithm is proposed.
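A toy version of such a throughput LP, assuming a deliberately simplified interference model in which two unit-capacity flows share one joint rate limit that coding relaxes (the limits 1.2 and 1.8 are arbitrary placeholders, not values from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Toy maximum-multiflow LP: two unit-capacity flows share an interference
# constraint that limits their joint rate. Network coding is modelled
# (very crudely) as relaxing that joint limit.

c = np.array([-1.0, -1.0])          # maximize f1 + f2  ->  minimize -(f1 + f2)

def solve(joint_limit):
    A_ub = np.array([[1.0, 1.0]])   # interference: f1 + f2 <= joint_limit
    res = linprog(c, A_ub=A_ub, b_ub=[joint_limit], bounds=[(0, 1), (0, 1)])
    return -res.fun

print("throughput without coding:", round(solve(1.2), 3))
print("throughput with coding:   ", round(solve(1.8), 3))
```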
An improved maximum power point tracking method for photovoltaic systems
Tafticht, T.; Agbossou, K.; Doumbia, M.L.; Cheriti, A. [Institut de recherche sur l' hydrogene, Departement de genie electrique et genie informatique, Universite du Quebec a Trois-Rivieres, C.P. 500, Trois-Rivieres (QC) (Canada)
2008-07-15
In most of the maximum power point tracking (MPPT) methods currently described in the literature, the optimal operating point of photovoltaic (PV) systems is estimated by linear approximations. However, these approximations can lead to less than optimal operating conditions and hence considerably reduce the performance of the PV system. This paper proposes a new approach to determine the maximum power point (MPP) based on measurements of the open-circuit voltage of the PV modules, and a nonlinear expression for the optimal operating voltage is developed based on this open-circuit voltage. The approach is thus a combination of the nonlinear and perturbation and observation (P&O) methods. The experimental results show that the approach clearly improves the tracking efficiency of the maximum power available at the output of the PV modules. The new method reduces the oscillations around the MPP and increases the average efficiency of the MPPT obtained. The new MPPT method will deliver more power to any generic load or energy storage media.
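A bare-bones sketch of the P&O half of such a scheme on a made-up PV curve (the curve shape, step size, and starting point are all assumptions; the paper's nonlinear open-circuit-voltage seeding is omitted):

```python
# Perturb-and-observe MPPT sketch on a toy PV power curve. The curve and
# step size are illustrative only.

def pv_power(v, v_oc=40.0, i_sc=8.0):
    # Crude single-diode-like shape: current falls off sharply near v_oc.
    if v <= 0 or v >= v_oc:
        return 0.0
    i = i_sc * (1.0 - (v / v_oc) ** 8)
    return v * i

def perturb_and_observe(v0=20.0, step=0.5, iters=100):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:            # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ~ {v_mpp:.1f} V, {p_mpp:.1f} W")
```

The tracker climbs the power curve and then oscillates around the peak, which is exactly the oscillation the paper's nonlinear estimate is meant to reduce.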
Hemispheric average Cl atom concentration from 13C/12C ratios in atmospheric methane
D. Lowe
2004-05-01
Methane is a significant atmospheric trace gas in the context of greenhouse warming and climate change. The dominant sink of atmospheric methane is the hydroxyl radical (OH). Recently, a mechanism for production of chlorine radicals (Cl) in the marine boundary layer (MBL) via bromine autocatalysis has been proposed. The importance of this mechanism in producing a methane sink is not clear at present because of the difficulty of in-situ direct measurement of Cl. However, the large kinetic isotope effect of Cl compared with OH produces a large fractionation of 13C compared with 12C in atmospheric methane. This property can be used to estimate the likely size of the methane sink attributable to MBL Cl. By taking account of the mixing of MBL air into the free troposphere, we estimate that the global methane sink due to reaction with Cl atoms in the MBL could be as large as 19 Tg yr−1, or about 3.3% of the total CH4 sink.
Optimal Control of Polymer Flooding Based on Maximum Principle
Yang Lei
2012-01-01
Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, which involves the performance index as the maximum of the profit, the governing equations as the fluid flow equations of polymer flooding, and an inequality constraint as the polymer concentration limitation. To cope with the optimal control problem (OCP) of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin's weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, to maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound on the item sizes, for some integer k.
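To make the two named off-line heuristics concrete, here is a minimal First-Fit implementation run under increasing and decreasing item orders; the item list and capacity are arbitrary, and the paper's maximum-resource objective and legality constraint are not modeled:

```python
# First-Fit with increasing vs. decreasing item order. This sketch only
# counts bins opened by plain First-Fit under each ordering; the paper's
# maximum resource objective (maximize bins used) is more subtle.

def first_fit(items, capacity=1.0):
    bins = []                              # each entry is a bin's remaining capacity
    for item in items:
        for i, free in enumerate(bins):
            if item <= free + 1e-12:       # place item in first bin it fits
                bins[i] = free - item
                break
        else:
            bins.append(capacity - item)   # no fit: open a new bin
    return len(bins)

items = [0.6, 0.5, 0.4, 0.3, 0.2, 0.2, 0.1]
print("First-Fit-Increasing bins:", first_fit(sorted(items)))
print("First-Fit-Decreasing bins:", first_fit(sorted(items, reverse=True)))
```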
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method [ma96] in a search for point sources in excess of a model for the background radiation (e.g., [hu97]). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Industrial Applications of High Average Power FELs
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers of tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
A new approach for Bayesian model averaging
TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun
2012-01-01
Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that the BMA weights must add to one, and then use a limited-memory quasi-Newton algorithm to solve the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
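A compact illustration of the BMA-BFGS idea, assuming synthetic forecasts and observations and a softmax reparameterization in place of the sum-to-one constraint; scipy's L-BFGS-B plays the role of the limited-memory quasi-Newton solver:

```python
import numpy as np
from scipy.optimize import minimize

# Fit BMA weights and a common spread by maximizing the log-likelihood of
# a Gaussian mixture of K model forecasts. All data here are synthetic.

rng = np.random.default_rng(1)
T, K = 300, 3
F = rng.normal(0, 1, (T, K))                                # ensemble forecasts
y = 0.6 * F[:, 0] + 0.4 * F[:, 1] + rng.normal(0, 0.3, T)   # observations

def neg_log_lik(theta):
    w = np.exp(theta[:K]) / np.exp(theta[:K]).sum()   # softmax: weights sum to one
    sigma = np.exp(theta[K])                          # positive standard deviation
    dens = np.exp(-0.5 * ((y[:, None] - F) / sigma) ** 2) / (
        np.sqrt(2 * np.pi) * sigma)
    return -np.log(dens @ w + 1e-300).sum()

res = minimize(neg_log_lik, x0=np.zeros(K + 1), method="L-BFGS-B")
w = np.exp(res.x[:K]) / np.exp(res.x[:K]).sum()
print("BMA weights:", np.round(w, 3),
      "sigma:", round(float(np.exp(res.x[K])), 3))
```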
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
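In the notation used here (our own, not necessarily the paper's), the relation being exploited can be written schematically as:

```latex
% Schematic form of the average-force relation: the derivative of the
% free energy A along a generalized coordinate \xi equals minus the
% conditional average of the instantaneous force F_\xi.
\frac{\mathrm{d}A}{\mathrm{d}\xi} = -\bigl\langle F_{\xi} \bigr\rangle_{\xi},
\qquad
A(\xi_2) - A(\xi_1) = -\int_{\xi_1}^{\xi_2} \bigl\langle F_{\xi} \bigr\rangle_{\xi'} \,\mathrm{d}\xi' .
```

Integrating the averaged force along the coordinate then recovers the free energy profile, which is how the unconstrained variant of the method proceeds.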
Zhu, Zhilin; Sun, Xiaomin; Zhao, Fenghua; Meixner, Franz X
2015-08-01
Ozone (O3) concentration and flux (Fo) were measured using the eddy covariance technique over a wheat field in the Northwest-Shandong Plain of China. The O3-induced wheat yield loss was estimated by utilizing O3 exposure-response models. The results showed that: (1) During the growing season (7 March to 7 June, 2012), the minimum (16.1 ppbV) and maximum (53.3 ppbV) mean O3 concentrations occurred at approximately 6:30 and 16:00, respectively. The mean and maximum of all measured O3 concentrations were 31.3 and 128.4 ppbV, respectively. The variation of O3 concentration was mainly affected by solar radiation and temperature. (2) The mean diurnal variation of deposition velocity (Vd) can be divided into four phases, and the maximum occurred at noon (12:00). Averaged Vd during daytime (6:00-18:00) and nighttime (18:00-6:00) were 0.42 and 0.14 cm/s, respectively. The maximum measured Vd was about 1.5 cm/s. The magnitude of Vd was influenced by the wheat growing stage, and its variation was significantly correlated with both global radiation and friction velocity. (3) The maximum mean Fo appeared at 14:00, and the maximum measured Fo was -33.5 nmol/(m²·s). Averaged Fo during daytime and nighttime were -6.9 and -1.5 nmol/(m²·s), respectively. (4) Using O3 exposure-response functions obtained from the USA, Europe, and China, the O3-induced wheat yield reduction in the district was estimated as 12.9% on average (5.5%-23.3%). Large uncertainties were related to the statistical methods and environmental conditions involved in deriving the exposure-response functions.
Maximum-power-point tracking control of solar heating system
Huang, Bin-Juine
2012-11-01
The present study developed a maximum power point tracking (MPPT) control technology for a solar heating system to minimize the pumping power consumption at an optimal heat collection. The net solar energy gain Q_net (= Q_s - W_p/η_e) was experimentally found to be the cost function for MPPT with a maximum point. A feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter, derived from the thermal analytical model of the solar heating system, was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. The PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller, and the test results show good tracking performance with small tracking errors. The average mass flow rate for the specific test periods in five different days is between 18.1 and 22.9 kg/min with average pumping power between 77 and 140 W, which is greatly reduced compared to the standard flow rate of 31 kg/min and pumping power of 450 W based on the flow rate of 0.02 kg/s·m² defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW depending on weather conditions. The MPPT control of the solar heating system has been verified to be able to minimize the pumping energy consumption with optimal solar heat collection.
Volume calculation of the spur gear billet for cold precision forging with average circle method
Wangjun Cheng; Chengzhong Chi; Yongzhen Wang; Peng Lin; Wei Liang; Chen Li
2014-01-01
Forged spur gears are widely used in the driving systems of mining machinery and equipment due to their higher strength and dimensional accuracy. For the purpose of precisely calculating the volume of a cylindrical spur gear billet in cold precision forging, a new theoretical method named the average circle method was put forward. With this method, a series of gear billet volumes were calculated. Compared with accurate three-dimensional modeling, the maximum relative error of the average circle method was less than 1.5%, in good agreement with the experimental results. Relative errors between calculated and experimental gear billet volumes are larger for the reference circle method than for the average circle method. This shows that the average circle method possesses higher calculation accuracy than the traditional reference circle method and is worth popularizing in the calculation of spur gear billet volume.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic, and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper, buck, boost, and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of various topologies for MPPT is given, and selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p...
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}\,y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN² / (M_pl y_e⁵), where y_e is the electron Yukawa coupling, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
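A quick order-of-magnitude check of this scaling, using rough assumed values for T_BBN, M_pl, and y_e:

```python
# Rough numerical check of the quoted scaling v_h ~ T_BBN^2 / (M_pl * y_e^5).
# All inputs are order-of-magnitude assumptions, in GeV.

T_BBN = 1e-3          # ~1 MeV, start of Big Bang nucleosynthesis (assumed)
M_pl = 1.2e19         # Planck mass (assumed convention)
y_e = 2.9e-6          # electron Yukawa coupling (assumed)

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"v_h ~ {v_h:.0f} GeV")   # comes out at a few hundred GeV
```

With these inputs the estimate lands near 400 GeV, consistent with the quoted O(300 GeV).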
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum Phonation Time: Variability and Reliability
R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings
2010-01-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi-Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
杨树政; 林理彬
2002-01-01
We have found that the nonthermal radiation of a nonstationary Kerr-Newman black hole is affected by interstellar materials. In particular, interstellar gas strongly influences the average range of nonthermal radiation particles, and this average range depends on the maximum energy of the radiation and on the energy extent of the radiation.
42 CFR 495.308 - Net average allowable costs as the basis for determining the incentive payment.
2010-10-01
42 Public Health; § 495.308 Net average allowable costs as the basis for determining the incentive payment. (a) The first year of..., implementation or upgrade of certified electronic health records technology. (2) The maximum net...
Kimmel, David G.; McGlaughon, Benjamin D.; Leonard, Jeremy; Paerl, Hans W.; Taylor, J. Christopher; Cira, Emily K.; Wetz, Michael S.
2015-05-01
Estuaries often have distinct zones of high chlorophyll a concentration, known as the chlorophyll maximum (CMAX). The persistence of these features is often attributed to physical (mixing and light availability) and chemical (nutrient availability) factors, but the role of mesozooplankton grazing is rarely explored. We measured the spatial and temporal variability of the CMAX and the mesozooplankton community in the eutrophic Neuse River Estuary, North Carolina. We also conducted grazing experiments to determine the relative impact of mesozooplankton grazing on the CMAX during the phytoplankton growing season (spring through late summer). The CMAX was consistently located upriver of the zone of maximum zooplankton abundance, with an average spatial separation of 18 km. Grazing experiments in the CMAX region revealed a negligible effect of mesozooplankton on chlorophyll a during March, and no effect during June or August. These results suggest that the spatial separation of the peak in chlorophyll a concentration and mesozooplankton abundance results in minimal impact of mesozooplankton grazing, contributing to the persistence of the CMAX for prolonged time periods. In the Neuse River Estuary, the low mesozooplankton abundance in the CMAX region is attributed to the lack of a low-salinity-tolerant species, predation by the ctenophore Mnemiopsis leidyi, and/or physiological impacts on mesozooplankton growth rates due to temperature (in the case of low wintertime abundances). The consequences of this lack of overlap exacerbate the effects of eutrophication, namely a lack of trophic transfer to mesozooplankton in this region and the sinking of phytodetritus to the benthos that fuels hypoxia.
Herrmann, Richard A.
1974-01-01
By concentrating radioactivity contained on luminous dials, a teacher can make a high reading source for classroom experiments on radiation. The preparation of the source and its uses are described.
Jun He; Xin Yao
2004-01-01
Most work on the time complexity analysis of evolutionary algorithms has focused on artificial binary problems. The time complexity of such algorithms for combinatorial optimisation has not been well understood. This paper considers the time complexity of an evolutionary algorithm for a classical combinatorial optimisation problem: finding a maximum cardinality matching in a graph. It is shown that the evolutionary algorithm can produce a matching with nearly maximum cardinality in average polynomial time.
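A toy (1+1)-style evolutionary algorithm for maximum cardinality matching on a small hand-made graph; the encoding, mutation rate, and acceptance rule are illustrative choices, not the analysed algorithm's exact definitions:

```python
import random

# (1+1) EA sketch for maximum cardinality matching: flip random edges,
# keep the offspring if it is still a matching and no smaller.

random.seed(5)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]

def is_matching(subset):
    used = set()
    for i, j in subset:
        if i in used or j in used:
            return False
        used.update((i, j))
    return True

current = set()
for _ in range(2000):
    offspring = set(current)
    for e in edges:                      # flip each edge with probability 1/m
        if random.random() < 1.0 / len(edges):
            offspring.symmetric_difference_update({e})
    if is_matching(offspring) and len(offspring) >= len(current):
        current = offspring

print("matching size:", len(current), "edges:", sorted(current))
```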
Long-term average performance benefits of parabolic trough improvements
Gee, R.; Gaul, H.W.; Kearney, D.; Rabl, A.
1980-03-01
Improved parabolic trough concentrating collectors will result from better design, improved fabrication techniques, and the development and utilization of improved materials. The difficulty of achieving these improvements varies, as does their potential for increasing parabolic trough performance. The purpose of this analysis is to quantify the relative merit of various technology advancements in improving the long-term average performance of parabolic trough concentrating collectors. The performance benefits of improvements are determined as a function of operating temperature for north-south, east-west, and polar mounted parabolic troughs. The results are presented graphically to allow a quick determination of the performance merits of particular improvements. Substantial annual energy gains are shown to be attainable. Of the improvements evaluated, the development of stable back-silvered glass reflective surfaces offers the largest performance gain for operating temperatures below 150°C. Above 150°C, the development of trough receivers that can maintain a vacuum is the most significant potential improvement. The reduction of concentrator slope errors also has a substantial performance benefit at high operating temperatures.
M.A.G. Silva
2009-10-01
Changes in electrolytes, blood gases, osmolality, hematocrit, hemoglobin, titratable bases, and anion gap were studied in the venous blood of 11 untrained Purebred Arabian horses submitted to maximal and submaximal exercise on a high-speed treadmill. The animals underwent a three-day period of adaptation to the treadmill and afterwards performed two exercise tests, one of short (maximal) and one of long (submaximal) duration. Venous blood samples were collected before, immediately after, and 30 minutes after the end of the exercises. After maximal exercise, significant decreases in venous pH, PvCO2, HCO3, and cBase were observed, along with an elevation in the anion gap. Increases in K+, hematocrit, and hemoglobin were also detected. At the end of submaximal exercise, only significant increases in venous pH, cBase, SvO2, and PvO2 were observed. It is concluded that the horses submitted to maximal exercise developed metabolic acidosis with compensatory respiratory alkalosis, hyperkalemia, and increases in hematocrit and hemoglobin, whereas in submaximal exercise the animals presented hypochloremic metabolic alkalosis and no disturbance of the fluid-electrolyte balance.
Interpreting Sky-Averaged 21-cm Measurements
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. Second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves; (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects: for instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first-generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation; and (3) the independent constraints most likely to aid in the interpretation
Kim, Leonard, E-mail: kimlh@umdnj.edu [Department of Radiation Oncology, Cancer Institute of New Jersey, Robert Wood Johnson Medical School, University of Medicine and Dentistry of New Jersey, New Brunswick, NJ (United States); Narra, Venkat; Yue, Ning [Department of Radiation Oncology, Cancer Institute of New Jersey, Robert Wood Johnson Medical School, University of Medicine and Dentistry of New Jersey, New Brunswick, NJ (United States)
2013-07-01
Recent studies have reported potentially clinically meaningful dose differences when heterogeneity correction is used in breast balloon brachytherapy. In this study, we report on the relationship between heterogeneity-corrected and -uncorrected doses for 2 commonly used plan evaluation metrics: maximum point dose to skin surface and maximum point dose to ribs. Maximum point doses to skin surface and ribs were calculated using TG-43 and Varian Acuros for 20 patients treated with breast balloon brachytherapy. The results were plotted against each other and fit with a zero-intercept line. Max skin dose (Acuros) = max skin dose (TG-43) × 0.930 (R² = 0.995). The average magnitude of difference from this relationship was 1.1% (max 2.8%). Max rib dose (Acuros) = max rib dose (TG-43) × 0.955 (R² = 0.9995). The average magnitude of difference from this relationship was 0.7% (max 1.6%). Heterogeneity-corrected maximum point doses to the skin surface and ribs were proportional to TG-43-calculated doses. The average deviation from proportionality was 1%. The proportional relationship suggests that a different metric other than maximum point dose may be needed to obtain a clinical advantage from heterogeneity correction. Alternatively, if maximum point dose continues to be used in recommended limits while incorporating heterogeneity correction, institutions without this capability may be able to accurately estimate these doses by use of a scaling factor.
Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo
2014-08-07
Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels.
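The duty factor itself is a one-line computation once a power trace is available; the sketch below uses a synthetic trace and an assumed maximum transmitter power:

```python
import numpy as np

# Duty factor as defined above: time-averaged output power divided by the
# maximum output power of the transmitter setting. The power trace here is
# synthetic; real values come from base station counters.

rng = np.random.default_rng(6)
p_max = 20.0                                  # W, transmitter maximum (assumed)
power = rng.uniform(2.0, 12.0, size=24 * 60)  # one day of minute samples

duty_factor = power.mean() / p_max
print(f"duty factor ~ {duty_factor:.2f}")     # ~1/3 is what the study found for UMTS
```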
Antovic, Ivanka; Antovic, Nevenka M
2011-07-01
Concentration factors for Cs-137 and Ra-226 transfer from seawater, and from dried sediment or mud with detritus, were determined for whole, fresh-weight Chelon labrosus individuals and selected organs. Cesium was detected in 5 of 22 fish individuals, with activity ranging from 1.0 to 1.6 Bq kg⁻¹. Radium was detected in all fish and ranged from 0.4 to 2.1 Bq kg⁻¹, with an arithmetic mean of 1.0 Bq kg⁻¹. Among fish organs, cesium activity concentration was highest in muscle (maximum 3.7 Bq kg⁻¹), while radium was highest in the skeleton (maximum 25 Bq kg⁻¹). Among cesium concentration factors, those for muscle were the highest (from seawater, an average of 47; from sediment, an average of 3.3; from mud with detritus, an average of 0.8). Radium concentration factors were highest for the skeleton (from seawater, an average of 130; from sediment, an average of 1.8; from mud with detritus, an average of 1.5). Additionally, the annual intake of cesium and radium by human adults consuming muscle of this fish species was estimated to provide, in aggregate, an effective dose of about 4.1 μSv y⁻¹.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
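The l1-penalized Gaussian maximum likelihood problem described here is also what scikit-learn's GraphicalLasso estimator solves, so a small stand-in for the paper's own algorithms can be sketched as follows (the chain-structured precision matrix and alpha value are our choices):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Sparse inverse-covariance estimation by l1-penalized Gaussian MLE.

rng = np.random.default_rng(2)
n, p = 500, 8
# Sparse ground-truth precision matrix: a chain dependence structure.
prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

model = GraphicalLasso(alpha=0.05).fit(X)
est_prec = model.precision_
print("nonzero off-diagonal entries:",
      int((np.abs(est_prec) > 1e-4).sum() - p))
```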
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
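For example, the Ornstein-Uhlenbeck member of this class can be simulated with a few lines of Euler-Maruyama integration (parameter values are arbitrary illustrations):

```python
import numpy as np

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck position process,
# one of the movement models in the class described above.

rng = np.random.default_rng(3)
tau, sigma, dt, n = 5.0, 1.0, 0.01, 10_000   # relaxation time, noise scale
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = x[t - 1] - (x[t - 1] / tau) * dt + sigma * np.sqrt(dt) * rng.normal()

# Stationary variance should approach sigma^2 * tau / 2.
print("empirical var:", round(float(x[n // 2:].var()), 3),
      "theory:", sigma**2 * tau / 2)
```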
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
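One standard diagnostic in this spirit is the Hill estimator of the tail exponent; the sketch below grafts an artificial Pareto tail onto a lognormal body and recovers the tail index (all sample sizes and parameters are illustrative, and this is not the paper's maximum entropy test itself):

```python
import numpy as np

# Hill estimator of the tail exponent on a lognormal-body / Pareto-tail mix.

rng = np.random.default_rng(4)
body = rng.lognormal(mean=0.0, sigma=1.0, size=9_000)
tail = (rng.pareto(a=1.5, size=1_000) + 1.0) * body.max()  # Pareto tail grafted on
x = np.sort(np.concatenate([body, tail]))

k = 500                                  # number of upper order statistics
top = x[-k:]
hill_alpha = k / np.log(top / x[-k - 1]).sum()
print("Hill tail index ~", round(float(hill_alpha), 2))   # near 1.5 here
```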
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
Hashing is widely used for efficient large-scale similar-item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm, maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian white measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability, and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 M_Earth), the overall maximum radius a planet can have varies between 1.8 and 2.3 R_Earth. This maximum radius is reduced when considering planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state (market equilibrium) is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
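Computing the Estrada index from the adjacency spectrum is direct; the bicyclic example graph below is our own illustration, not one from the paper:

```python
import numpy as np

# Estrada index EE(G) = sum_i exp(lambda_i), from the adjacency spectrum.
# Example graph: two triangles sharing a vertex (n = 5, m = 6, bicyclic).

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

eigvals = np.linalg.eigvalsh(A)          # symmetric matrix: real eigenvalues
EE = np.exp(eigvals).sum()
print("Estrada index:", round(float(EE), 4))
```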
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Maximum solid solubility of transition metals in vanadium solvent
ZHANG Jin-long; FANG Shou-shi; ZHOU Zi-qiang; LIN Gen-wen; GE Jian-sheng; FENG Feng
2005-01-01
Maximum solid solubility (Cmax) of different transition metals in a metal solvent can be described by a semi-empirical equation using a function Zf that contains the electronegativity difference, atomic diameter, and electron concentration. The relation between Cmax and these parameters for transition metals in a vanadium solvent was studied. It is shown that the relation of Cmax and the function Zf can be expressed as ln Cmax = Zf = 7.3165 - 2.7805(ΔX)² - 71.278δ² - 0.85556 n^(2/3). The atomic size parameter has the largest effect on the Cmax of the V binary alloy, followed by the electronegativity difference; the electron concentration has the smallest effect among the three bond parameters. The function Zf is used for predicting unknown Cmax values of transition metals in the vanadium solvent. The results are compared with the Darken-Gurry theorem, which can be deduced from the obtained function Zf in this work.
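Evaluating the fitted relation is straightforward; the input values below are placeholders rather than data for any particular solute:

```python
import math

# Evaluating the fitted relation quoted above:
# ln Cmax = 7.3165 - 2.7805*(dX)^2 - 71.278*delta^2 - 0.85556*n^(2/3),
# where dX is the electronegativity difference, delta the atomic size
# parameter, and n the electron concentration. Inputs are placeholders.

def c_max(dX, delta, n):
    z = 7.3165 - 2.7805 * dX**2 - 71.278 * delta**2 - 0.85556 * n**(2.0 / 3.0)
    return math.exp(z)

print(f"Cmax ~ {c_max(dX=0.2, delta=0.05, n=5.0):.1f}")
```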
Boundary condition effects on maximum groundwater withdrawal in coastal aquifers.
Lu, Chunhui; Chen, Yiming; Luo, Jian
2012-01-01
Prevention of sea water intrusion in coastal aquifers subject to groundwater withdrawal requires optimization of well pumping rates to maximize the water supply while avoiding sea water intrusion. Boundary conditions and the aquifer domain size have significant influences on simulating flow and concentration fields and estimating maximum pumping rates. In this study, an analytical solution is derived based on potential-flow theory for evaluating maximum groundwater pumping rates in a domain with a constant-hydraulic-head landward boundary. An empirical correction factor, introduced by Pool and Carrera (2011) to account for mixing in the case of a constant-recharge-rate boundary condition, is found to be applicable to the constant-hydraulic-head case as well, and therefore greatly improves the usefulness of the sharp-interface analytical solution. Compared with the solution for a constant-recharge-rate boundary, we find that a constant-hydraulic-head boundary often yields larger estimates of the maximum pumping rate, and when the domain size is five times greater than the distance between the well and the coastline, the effect of setting different landward boundary conditions becomes insignificant, with a relative difference between the two solutions of less than 2.5%. These findings can serve as preliminary guidance for conducting numerical simulations and designing tank-scale laboratory experiments for studying groundwater withdrawal problems in coastal aquifers with minimized boundary condition effects.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies, which cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest-order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with the limited knowledge we have about the processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the superposition model.
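The maximum entropy step can be sketched as follows. With illustrative (hypothetical) per-species mean shower-maximum depths, the entropy-maximizing composition fractions consistent with an observed mean take the Gibbs form, leaving a single multiplier to solve for:

    import numpy as np
    from scipy.optimize import brentq

    # Hypothetical mean <Xmax> values (g/cm^2) for four primaries at a fixed
    # energy, decreasing with ln A in the spirit of a superposition model.
    species = ["p", "He", "N", "Fe"]
    xmax = np.array([780.0, 750.0, 720.0, 680.0])
    target = 740.0                        # hypothetical measured mean <Xmax>

    # MaxEnt fractions subject to sum(p) = 1 and sum(p * xmax) = target have
    # the Gibbs form p_A ∝ exp(-beta * xmax_A); solve for beta by bisection.
    def mean_at(beta):
        w = np.exp(-beta * (xmax - xmax.mean()))      # shifted for stability
        return (w @ xmax) / w.sum()

    beta = brentq(lambda b: mean_at(b) - target, -1.0, 1.0)
    p = np.exp(-beta * (xmax - xmax.mean()))
    p /= p.sum()
    for s, pi in zip(species, p):
        print(f"{s}: {pi:.3f}")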
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M such that each of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were roughly half those lengths (1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
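The quoted figures translate directly into per-generation proportional rates:

    import math

    # Per-generation rates implied by the abstract's terrestrial figures:
    # ln(size factor) / (minimum number of generations).
    cases = [(100, 1.6e6), (1_000, 5.1e6), (5_000, 10e6)]
    for factor, gens in cases:
        print(f"{factor}-fold in {gens:.1e} generations "
              f"-> {math.log(factor) / gens:.2e} ln-units per generation")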
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, being required instead to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent perception and action in humans, and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task, first to reproduce the same effect in a new domain, and second to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, which uses unsigned utility magnitudes in place of signed utilities in the bias function.
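The competing selection rules can be contrasted on a toy example (the posteriors and utilities below are hypothetical; the full Bayes-optimal rule, which integrates over action-percept pairs, is omitted):

    import numpy as np

    # Two candidate percepts with hypothetical posterior probabilities and
    # signed utilities of acting on each percept.
    posterior = np.array([0.6, 0.4])
    utility = np.array([-5.0, 2.0])

    rules = {
        "maximum a posteriori": posterior,
        "maximum utility": posterior * utility,
        "maximum salience": posterior * np.abs(utility),  # unsigned magnitudes
    }
    for name, score in rules.items():
        print(f"{name}: selects percept {int(np.argmax(score))}")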
Analytic continuation by averaging Padé approximants
Schött, Johan; Locht, Inka L. M.; Lundin, Elin; Grånäs, Oscar; Eriksson, Olle; Di Marco, Igor
2016-02-01
The ill-posed analytic continuation problem for Green's functions and self-energies is investigated by revisiting the Padé approximants technique. We propose to remedy the well-known problems of the Padé approximants by performing an average of several continuations, obtained by varying the number of fitted input points and Padé coefficients independently. The suggested approach is then applied to several test cases, including Sm and Pr atomic self-energies, the Green's functions of the Hubbard model for a Bethe lattice and of the Haldane model for a nanoribbon, as well as two special test functions. The sensitivity to numerical noise and the dependence on the precision of the numerical libraries are analyzed in detail. The present approach is compared to a number of other techniques, i.e., the nonnegative least-squares method, the nonnegative Tikhonov method, and the maximum entropy method, and is shown to perform well for the chosen test cases. This conclusion holds even when the noise on the input data is increased to reach values typical for quantum Monte Carlo simulations. The ability of the algorithm to resolve fine structures is finally illustrated for two relevant test functions.
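The averaging idea is easy to sketch numerically. The paper builds Padé approximants by recursion from the data; the sketch below substitutes a linearized least-squares rational fit (an assumption, not the authors' algorithm) but keeps the key step of averaging continuations obtained by independently varying the number of input points and coefficients:

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic test: a two-pole Green's function sampled on the Matsubara axis.
    poles, weights = np.array([-1.0, 0.8]), np.array([0.6, 0.4])
    G = lambda z: (weights / (z[:, None] - poles)).sum(axis=1)

    wn = 1j * np.pi * (2 * np.arange(64) + 1) / 20.0   # Matsubara points, T = 1/20
    data = G(wn) + 1e-6 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))

    def rational_fit(z, g, p_deg, q_deg):
        # Least-squares rational approximant P(z)/Q(z), q_0 normalized to 1,
        # from the linearized equations P(z_i) - g_i*(Q(z_i) - 1) = g_i.
        cols = [z**j for j in range(p_deg + 1)] + [-g * z**k for k in range(1, q_deg + 1)]
        c, *_ = np.linalg.lstsq(np.stack(cols, axis=1), g, rcond=None)
        p, q = c[:p_deg + 1], np.concatenate(([1.0], c[p_deg + 1:]))
        return lambda x: np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

    # Average continuations from several (points, coefficients) choices, then
    # read off the spectral function A(w) = -Im G(w + i*eta) / pi.
    x = np.linspace(-2.0, 2.0, 401) + 0.01j
    spectra = [-(rational_fit(wn[:npts], data[:npts], deg, deg + 1)(x)).imag / np.pi
               for npts in (40, 48, 56, 64) for deg in (3, 4, 5)]
    avg_spectrum = np.mean(spectra, axis=0)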
T. Aalto; J. Hatakka; J. Paatero; Tuovinen, J.-P.; Aurela, M.; Laurila, T.; Holmén, K.; Trivett, N.; Viisanen, Y.
2002-01-01
Diurnal and annual variations of CO2, O3, SO2, black carbon, and condensation nuclei and their source areas were studied by utilizing air parcel trajectories and tropospheric concentration measurements at a boreal GAW site in Pallas, Finland. The average growth trend of CO2 was about 2.5 ppm yr⁻¹ over a 4-yr measurement period starting in October 1996. The annual cycle of CO2 showed a concentration difference of about 19 ppm between the summer minimum and the winter maximum. The diurnal cycle...
Seismicity and average velocities beneath the Argentine Puna Plateau
Schurr, B.; Asch, G.; Rietbrock, A.; Kind, R.; Pardo, M.; Heit, B.; Monfret, T.
A network of 60 seismographs was deployed across the Andes at ∼23.5°S. The array was centered in the backarc, atop the Puna high plateau in NW Argentina. P and S arrival times of 426 intermediate-depth earthquakes were inverted for 1-D velocity structure and hypocentral coordinates. Average velocities and vP/vS in the crust are low. Average mantle velocities are high but difficult to interpret because of the presence of a high-velocity slab at depth. Although the hypocenters sharply define a 35°-dipping Benioff zone, seismicity in the slab is not continuous. The spatial clustering of earthquakes is thought to reflect inherited heterogeneities of the subducted oceanic lithosphere. Additionally, 57 crustal earthquakes were located. Seismicity concentrates in the fold-and-thrust belt of the foreland and Eastern Cordillera, and along and south of the El Toro-Olacapato-Calama Lineament (TOCL). Focal mechanisms of two earthquakes at this structure exhibit left-lateral strike-slip mechanisms similar to the suggested kinematics of the TOCL. We believe that the Puna north of the TOCL behaves like a rigid block with little internal deformation, whereas the area south of the TOCL is weaker and currently deforming.
Yearly average performance of the principal solar collector types
Rabl, A.
1981-01-01
The results of hour-by-hour simulations for 26 meteorological stations are used to derive universal correlations for the yearly total energy that can be delivered by the principal solar collector types: flat plate, evacuated tubes, CPC, single- and dual-axis tracking collectors, and central receiver. The correlations are first- and second-order polynomials in yearly average insolation, latitude, and threshold (= heat loss/optical efficiency). With these correlations, the yearly collectible energy can be found by multiplying the coordinates of a single graph by the collector parameters, which reproduces the results of hour-by-hour simulations with an accuracy (rms error) of 2% for flat plates and 2% to 4% for concentrators. This method can be applied to collectors that operate year-round in such a way that no collected energy is discarded, including photovoltaic systems, solar-augmented industrial process heat systems, and solar thermal power systems. The method is also recommended for rating collectors of different type or manufacturer by yearly average performance, evaluating the effects of collector degradation, the benefits of collector cleaning, and the gains from collector improvements (due to enhanced optical efficiency or decreased heat loss per absorber surface). For most of these applications, the method is accurate enough to replace a system simulation.
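A sketch of how such a correlation is used in practice follows; the polynomial coefficients below are hypothetical placeholders, not the fitted values from the paper:

    # Yearly-performance correlation of the kind described above: a low-order
    # polynomial in yearly average insolation H (GJ/m^2/yr), latitude, and
    # threshold x = (heat loss)/(optical efficiency). Coefficients c are
    # HYPOTHETICAL placeholders for illustration only.
    def yearly_energy_GJ(eta0, area_m2, H, lat_deg, x, c=(0.9, 0.5, -0.004, -1.2)):
        c0, c1, c2, c3 = c
        return eta0 * area_m2 * (c0 + c1 * H + c2 * lat_deg + c3 * x)

    # Flat-plate example: 60% optical efficiency, 10 m^2, H = 5 GJ/m^2/yr,
    # latitude 35 N, threshold 0.2.
    print(yearly_energy_GJ(0.60, 10.0, 5.0, 35.0, 0.2))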
Ozone concentrations in air flowing into New York State
Aleksic, Nenad; Kent, John; Walcek, Chris
2016-09-01
Ozone (O3) concentrations measured at Pinnacle State Park (PSPNY), very close to the southern border of New York State, are used to estimate concentrations in air flowing into New York. On 20% of the ozone-season (April-September) afternoons from 2004 to 2015, mid-afternoon 500-m back trajectories calculated from PSPNY crossed the New York border from the south and spent less than three hours in New York State, in this area of negligible local pollution emissions. One-hour (2 p.m.-3 p.m.) O3 concentrations during these inflowing conditions were 46 ± 13 ppb, and ranged from a minimum of 15 ppb to a maximum of 84 ppb. On average during 2004-2015, each year experienced 11.8 days with inflowing 1-hr O3 concentrations exceeding 50 ppb, 4.3 days with O3 > 60 ppb, and 1.5 days with O3 > 70 ppb. During the same period, 8-hr average concentrations (10 a.m. to 6 p.m.) exceeded 50 ppb on 10.0 days per season, 60 ppb on 3.9 days per season, and 70 ppb on 1.2 days per season. Two afternoons of minimal in-state emission influence with high ozone concentrations were analyzed in more detail. Synoptic and back-trajectory analysis, including comparison with upwind ozone concentrations, indicated that the two periods were characterized by photochemically aged air containing high inflowing O3 concentrations, most likely heavily influenced by pollution emissions from states upwind of New York, including Pennsylvania, Tennessee, West Virginia, and Ohio. These results suggest that New York state-level attempts to comply with National Ambient Air Quality Standards by regulating in-state O3 precursor NOx and organic emissions would be very difficult, since air frequently enters New York State very close to or in excess of Federal Air Quality Standards.
Hearing Office Average Processing Time Ranking Report, February 2016
Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...
ANTINOMY OF MODERN SECONDARY PROFESSIONAL EDUCATION
A. A. Listvin
2017-01-01
...of ways of resolving them, and options for a genuine upgrade of the SPE (secondary professional education) system that answers the requirements of the economy. The inefficiency of the single-level SPE concept, and its lack of competitiveness against the development of applied bachelor's degrees in higher education, is shown. It is proposed to differentiate basic-level programs for training skilled workers from advanced-level programs, built on the basic level, for training mid-level specialists (technicians, technologists), so as to form a single system of continuous professional training and effectively functioning regional systems of professional education. Such a system would help eliminate disproportions in the triad "worker - technician - engineer" and raise the quality of professional education. Furthermore, the need for polyprofessional education is indicated, which requires integrated educational structures that differ in the degree to which multi-level educational institutions are formed on the basis of network interaction, convergence, and integration. According to the author, the regions need to develop two types of SPE organizations: territorial multi-profile colleges with flexible variable programs, and organizations delivering programs of applied qualifications for specific industries (metallurgical, chemical, construction, etc.) according to the specifics of the economy of the territorial subjects. Practical significance: the results of the research can be useful to education-management specialists, to heads and teaching staff of SPE institutions, and to representatives of regional administrations and employers when organizing a multilevel network system for training skilled workers and mid-level specialists.
2010-07-01
Title 40 (Protection of Environment), Gasoline Sulfur Standards, § 80.205: How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Temporal Correlations of the Running Maximum of a Brownian Trajectory
Bénichou, Olivier; Krapivsky, P. L.; Mejía-Monasterio, Carlos; Oshanin, Gleb
2016-08-01
We study the correlations between the maxima m and M of a Brownian motion (BM) on the time intervals [0, t1] and [0, t2], with t2 > t1. We determine the exact forms of the distribution functions P(m, M) and P(G = M − m), and calculate the moments E[(M − m)^k] and the cross-moments E[m^l M^k] for arbitrary integers l and k. We show that correlations between m and M decay as √(t1/t2) when t2/t1 → ∞, revealing strong memory effects in the statistics of BM maxima. We also compute the Pearson correlation coefficient ρ(m, M) and the power spectrum of M_t, and we discuss the possibility of extracting the ensemble-averaged diffusion coefficient in single-trajectory experiments using a single realization of the maximum process.
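The decay of the correlations is easy to probe by direct simulation; the sketch below estimates the Pearson coefficient ρ(m, M) and prints the √(t1/t2) scale quoted above for comparison (no claim is made here about the prefactor):

    import numpy as np

    rng = np.random.default_rng(0)
    n_paths, n_steps, t1, t2 = 5_000, 1_000, 1.0, 16.0
    dt = t2 / n_steps
    i1 = int(n_steps * t1 / t2)

    # Sample Brownian paths and their running maxima on [0, t1] and [0, t2].
    paths = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)
    m = np.maximum(paths[:, :i1].max(axis=1), 0.0)   # max over [0, t1] (B_0 = 0)
    M = np.maximum(paths.max(axis=1), 0.0)           # max over [0, t2]

    rho = np.corrcoef(m, M)[0, 1]
    print(f"Pearson rho(m, M) = {rho:.3f}, sqrt(t1/t2) = {np.sqrt(t1 / t2):.3f}")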
Maximum caliber inference and the stochastic Ising model
Cafaro, Carlo; Ali, Sean Alan
2016-11-01
We investigate the maximum caliber variational principle as an inference algorithm used to predict dynamical properties of complex nonequilibrium, stationary, statistical systems in the presence of incomplete information. Specifically, we maximize the path entropy over discrete-time trajectories subject to normalization, stationarity, and detailed balance constraints, together with a path-dependent dynamical information constraint reflecting a given average global behavior of the complex system. A general expression for the transition probability values associated with the stationary random Markov processes describing the nonequilibrium stationary system is computed. By virtue of our analysis, we uncover that a convenient choice of the dynamical information constraint, together with a perturbative asymptotic expansion with respect to its corresponding Lagrange multiplier of the general expression for the transition probability, leads to a formal overlap with the well-known Glauber hyperbolic tangent rule for the transition probability for the stochastic Ising model in the limit of very high temperatures of the heat reservoir.
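For reference, Glauber's hyperbolic-tangent rule mentioned above has the standard form below; the snippet applies it in one sweep of single-spin-flip dynamics on a 1D Ising ring (a generic illustration, not the paper's maximum-caliber derivation):

    import numpy as np

    # Glauber single-spin-flip rule: the probability of flipping spin s_i in
    # local field h_i is w_i = (1/2) * (1 - s_i * tanh(beta * h_i)).
    def glauber_flip_prob(s_i, h_i, beta):
        return 0.5 * (1.0 - s_i * np.tanh(beta * h_i))

    rng = np.random.default_rng(0)
    J, beta = 1.0, 0.1                                # small beta: high temperature
    spins = rng.choice([-1, 1], size=50)
    for i in rng.permutation(50):                     # one sweep on a ring
        h = J * (spins[(i - 1) % 50] + spins[(i + 1) % 50])
        if rng.random() < glauber_flip_prob(spins[i], h, beta):
            spins[i] *= -1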
The Maximum Free Magnetic Energy Allowed in a Solar Active Region
Moore, Ronald L.; Falconer, David A.
2009-01-01
Two whole-active-region magnetic quantities that can be measured from a line-of-sight magnetogram are ^LWL_SG, a gauge of the total free energy in an active region's magnetic field, and ^Lθ, a measure of the active region's total magnetic flux. From these two quantities measured from 1865 SOHO/MDI magnetograms that tracked 44 sunspot active regions across the 0.5 R_Sun central disk, together with each active region's observed production of CMEs, X flares, and M flares, Falconer et al. (2009, ApJ, submitted) found that (1) active regions have a maximum attainable free magnetic energy that increases with the magnetic size ^Lθ of the active region, (2) in (log ^LWL_SG, log ^Lθ) space, CME/flare-productive active regions are concentrated in a straight-line main sequence along which the free magnetic energy is near its upper limit, and (3) X and M flares are restricted to large active regions. Here, from (a) these results, (b) the observation that even the greatest X flares produce at most only subtle changes in active-region magnetograms, and (c) measurements from MSFC vector magnetograms and from MDI line-of-sight magnetograms showing that practically all sunspot active regions have nearly the same area-averaged magnetic field strength, θ/A ≈ 300 G, where θ is the active region's total photospheric flux of field stronger than 100 G and A is the area of that flux, we infer that (1) the maximum allowed ratio of an active region's free magnetic energy to its potential-field energy is 1, and (2) any one CME/flare eruption releases no more than a small fraction (less than 10%) of the active region's free magnetic energy. This work was funded by NASA's Heliophysics Division and NSF's Division of Atmospheric Sciences.
The monthly-averaged and yearly-averaged cosine effect factor of a heliostat field
Al-Rabghi, O.M.; Elsayed, M.M. (King Abdulaziz Univ., Jeddah (Saudi Arabia). Dept. of Thermal Engineering)
1992-01-01
Calculations are carried out to determine the dependence of the monthly-averaged and the yearly-averaged daily cosine effect factor on the pertinent parameters. The results are plotted on charts for each month and for the full year. These results cover latitude angles between 0 and 45°N, for fields with radii up to 50 tower heights. In addition, the results are expressed as mathematical correlations to facilitate their use in computer applications. A procedure is outlined for using the present results to lay out a preliminary heliostat field and to predict the rated MW_th reflected by the heliostat field during a period of a month, several months, or a year. (author)
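The instantaneous cosine effect factor follows from the requirement that the mirror normal bisect the sun and heliostat-to-receiver directions; a minimal sketch with made-up field geometry is:

    import numpy as np

    def cosine_factor(sun_dir, helio_pos, receiver_pos):
        # The mirror normal bisects the sun direction and the direction to the
        # receiver, so the incidence cosine is cos(theta/2), where theta is the
        # angle between the two unit vectors.
        s = sun_dir / np.linalg.norm(sun_dir)
        t = receiver_pos - helio_pos
        t = t / np.linalg.norm(t)
        return np.sqrt(0.5 * (1.0 + s @ t))

    # Hypothetical geometry: heliostat 100 m north of a 50 m tower, sun at
    # 60 deg elevation in the south (x east, y north, z up).
    helio = np.array([0.0, 100.0, 0.0])
    tower = np.array([0.0, 0.0, 50.0])
    sun = np.array([0.0, -np.cos(np.radians(60)), np.sin(np.radians(60))])

    print(f"instantaneous cosine factor: {cosine_factor(sun, helio, tower):.3f}")

Monthly- and yearly-averaged factors of the kind correlated in the paper follow by averaging this quantity over the sun positions of the period.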
Lagrangian averages, averaged Lagrangians, and the mean effects of fluctuations in fluid dynamics.
Holm, Darryl D.
2002-06-01
We begin by placing the generalized Lagrangian mean (GLM) equations for a compressible adiabatic fluid into the Euler-Poincaré (EP) variational framework of fluid dynamics, for an averaged Lagrangian. This is the Lagrangian averaged Euler-Poincaré (LAEP) theorem. Next, we derive a set of approximate small-amplitude GLM equations (glm equations) at second order in the fluctuating displacement of a Lagrangian trajectory from its mean position. These equations express the linear and nonlinear back-reaction effects on the Eulerian mean fluid quantities by the fluctuating displacements of the Lagrangian trajectories, in terms of their Eulerian second moments. The derivation of the glm equations uses the linearized relations between Eulerian and Lagrangian fluctuations, in the tradition of Lagrangian stability analysis for fluids. The glm derivation also uses the method of averaged Lagrangians, in the tradition of wave-mean flow interaction. Next, the new glm EP motion equations for incompressible ideal fluids are compared with the Euler-alpha turbulence closure equations. An alpha model is a GLM (or glm) fluid theory with a Taylor hypothesis closure. Such closures are based on the linearized fluctuation relations that determine the dynamics of the Lagrangian statistical quantities in the Euler-alpha equations. Thus, by using the LAEP theorem, we bridge between the GLM equations and the Euler-alpha closure equations through the small-amplitude glm approximation in the EP variational framework. We conclude by highlighting a new application of the GLM, glm, and alpha-model results for Lagrangian averaged ideal magnetohydrodynamics.
Generalized Entropy Concentration for Counts
Oikonomou, Kostas N
2016-01-01
We consider the phenomenon of entropy concentration under linear constraints in a discrete setting, using the "balls and bins" paradigm, but without the assumption that the number of balls allocated to the bins is known. Therefore, instead of frequency vectors and ordinary entropy, we have count vectors with unknown sum and a certain generalized entropy. We show that if the constraints bound the allowable sums, this suffices for concentration to occur even in this setting. The concentration can be either in terms of deviation from the maximum generalized entropy value, or in terms of the norm of the difference from the maximum generalized entropy vector. Without any asymptotic considerations, we quantify the concentration in terms of various parameters, notably a tolerance on the constraints which ensures that they are always satisfied by an integral vector. Generalized entropy maximization is not only compatible with ordinary MaxEnt, but can also be considered an extension of it, as it allows us to address...
A maximum entropy model for opinions in social groups
Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo
2014-04-01
We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consequence between the personal beliefs and external opinions, and the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first and second-order phase transitions, depending on the ratio between the internal consequence and the interaction with others. We have found a critical value for the level of internal consequence, below which the personal beliefs of the agents seem to be irrelevant.
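A minimal Metropolis sketch of this kind of agent-based model follows; the energy function used here is a hypothetical stand-in (the paper derives its Hamiltonian from maximum entropy constraints), retained only to illustrate the joint sampling of opinions and free positions:

    import numpy as np

    rng = np.random.default_rng(0)
    n, beta, J = 60, 2.0, 1.0
    x = rng.uniform(0, 10, size=(n, 2))        # agent positions (no lattice)
    o = rng.choice([-1, 1], size=n)            # binary opinions

    def energy(x, o):
        # HYPOTHETICAL short-range opinion coupling, for illustration only.
        d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
        w = np.exp(-d) * (1 - np.eye(n))
        return -0.5 * J * np.sum(w * np.outer(o, o))

    e = energy(x, o)
    for _ in range(5000):
        i = rng.integers(n)
        o_new = o.copy()
        o_new[i] *= -1                          # propose an opinion flip...
        x_new = x.copy()
        x_new[i] += 0.1 * rng.standard_normal(2)  # ...and a small move
        e_new = energy(x_new, o_new)
        if rng.random() < np.exp(-beta * max(e_new - e, 0.0)):  # Metropolis
            x, o, e = x_new, o_new, e_new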
Stimulus-dependent maximum entropy models of neural population codes.
Granot-Atedgi, Einat; Tkačik, Gašper; Segev, Ronen; Schneidman, Elad
2013-01-01
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
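The SDME form can be made concrete for a small population by direct enumeration; the fields and couplings below are hypothetical:

    import numpy as np
    from itertools import product

    # Toy SDME distribution over binary codewords x in {0,1}^N:
    #   P(x | s) ∝ exp( sum_i h_i(s) x_i + sum_{i<j} J_ij x_i x_j ).
    rng = np.random.default_rng(0)
    N = 5
    J = np.triu(0.3 * rng.standard_normal((N, N)), 1)   # pairwise couplings
    filters = rng.standard_normal(N)                    # single-cell filters

    def sdme_probs(s):
        h = 0.8 * s * filters                           # stimulus-dependent fields
        x = np.array(list(product([0, 1], repeat=N)))   # all 2^N codewords
        logp = x @ h + np.einsum('ki,ij,kj->k', x, J, x)
        p = np.exp(logp - logp.max())
        return x, p / p.sum()

    words, p = sdme_probs(s=1.0)
    print("most likely codeword:", words[np.argmax(p)])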
Forecasting ozone daily maximum levels at Santiago, Chile
Jorquera, Héctor; Pérez, Ricardo; Cipriano, Aldo; Espejo, Andrés; Victoria Letelier, M.; Acuña, Gonzalo
In major urban areas, the impact of air pollution on health is serious enough to include it in the group of meteorological variables that are forecast daily. This work focuses on the comparison of different forecasting systems for daily maximum ozone levels at Santiago, Chile. The modelling tools used for these systems were linear time series, artificial neural networks, and fuzzy models. The structure of the forecasting model was derived from basic principles, and it includes a combination of persistence and daily maximum air temperature as input variables. Assessment of the models is based on two indices: their ability to forecast an episode correctly, and their tendency to forecast an episode that did not occur in the end (a false positive). All the models tried in this work showed good forecasting performance, with 70-95% of successful forecasts at two monitoring sites: Downtown (moderate impacts) and Eastern (downwind, highest impacts). The number of false positives was not negligible, but this may be improved by expressing the forecast in broad classes: low, average, high, and very high impacts; the fuzzy model was the most reliable forecast, with the lowest number of false positives among the different models evaluated. The quality of the results and the dynamics of ozone formation support the use of a forecast to warn people about excessive exposure during episode days at Santiago.
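The persistence-plus-temperature structure can be sketched as an ordinary least-squares regression (all data below are synthetic, used only to show the model structure):

    import numpy as np

    # Tomorrow's ozone maximum regressed on today's maximum (persistence)
    # and tomorrow's maximum temperature, as in the forecast structure above.
    rng = np.random.default_rng(0)
    days = 200
    tmax = 20 + 10 * rng.random(days)                  # deg C, synthetic
    o3 = np.empty(days)
    o3[0] = 80.0
    for t in range(1, days):                           # synthetic "truth"
        o3[t] = 0.5 * o3[t - 1] + 4.0 * tmax[t] - 40 + 10 * rng.standard_normal()

    X = np.column_stack([o3[:-1], tmax[1:], np.ones(days - 1)])
    coef, *_ = np.linalg.lstsq(X, o3[1:], rcond=None)
    print("fitted [persistence, temperature, intercept]:", np.round(coef, 2))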
IDENTIFICATION OF IDEOTYPES BY CANONICAL ANALYSIS IN Panicum maximum
Janaina Azevedo Martuscello
2015-04-01
Grouping of genotypes by canonical variable analysis is an important tool in breeding. It allows the grouping of individuals with similar characteristics that are associated with superior agronomic performance, and may indicate the ideal profile of a plant for the region. The objective of the present study was to define, by canonical analysis, the agronomic profile of Panicum maximum plants adapted to the Agreste region. The experiment was conducted in a completely randomized design with 28 treatments, 22 genotypes of Panicum maximum, and cultivars Mombasa, Tanzania, Massai, Milenio, BRS Zuri, and BRS Tamani, in triplicate in 4-m² plots. Plots were harvested five times and the following traits were evaluated: plant height; total, leaf, stem, and dead dry matter yields; leaf:stem ratio; leaf percentage; and volumetric density of forage. The analysis of canonical variables was performed based on the phenotypic means of the evaluated traits and on the residual variance and covariance matrix. Genotype PM34 showed higher mean leaf dry matter yield under the conditions of the Agreste of Alagoas (on average 53% higher than cultivars Mombasa, Tanzania, Milenio, and Massai). It was possible to summarize the variation observed in eight agronomic characteristics in only two canonical variables accounting for 81.44% of the data variation. The ideotype plant adapted to the conditions of the Agreste should be tall and present high leaf yield, leaf percentage, and leaf:stem ratio, and intermediate values of volumetric density of forage.
Study on the Correlation Between Chlorophyll Maximum and Remote Sensing Data
XIU Peng; LIU Yuguang
2006-01-01
Based on in situ optical measurements in the Bohai Sea of China, a typical case-2 water area, we studied the characteristics of the DCM (deep chlorophyll maximum), such as its spatial distribution and vertical profile. We found that when the depth of the chlorophyll maximum is comparatively small, even in turbid coastal water regions, there is always a good correlation between the concentrations of the chlorophyll maximum and the satellite-received signals in blue-green spectral bands; the correlation is even better than that between the surface chlorophyll concentrations and the satellite-received signals. The strong correlation existing even in turbid coastal water regions indicates that an ocean color model to retrieve the concentration of the DCM can be constructed for coastal waters if comprehensive knowledge of the vertical distribution of chlorophyll concentration in the Bohai Sea of China is available.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance for non-hyperspherical and complex data structures.
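One plausible reading of such a kernelized MEC iteration (not necessarily the authors' exact update rules) alternates maximum entropy soft assignments with kernel-space distance updates, computing all distances purely from the kernel matrix:

    import numpy as np

    def rbf(X, Y, gamma=0.5):
        d2 = ((X[:, None] - Y[None, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def kernel_mec(X, n_clusters=2, T=0.5, iters=50, seed=0):
        # Soft assignments p(c|x) ∝ exp(-d^2(phi(x), mu_c) / T), where mu_c is
        # the responsibility-weighted mean in feature space and
        # d^2[i,c] = K_ii - 2*(K W)_ic + (W^T K W)_cc.
        n = len(X)
        K = rbf(X, X)
        R = np.random.default_rng(seed).random((n, n_clusters))
        R /= R.sum(1, keepdims=True)
        for _ in range(iters):
            W = R / R.sum(0)                     # normalized cluster weights
            d2 = (np.diag(K)[:, None] - 2 * K @ W
                  + np.einsum('jc,jl,lc->c', W, K, W)[None, :])
            R = np.exp(-(d2 - d2.min(1, keepdims=True)) / T)
            R /= R.sum(1, keepdims=True)
        return R

    # Two synthetic blobs as a smoke test.
    X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (30, 2)),
                   np.random.default_rng(2).normal(2, 0.3, (30, 2))])
    labels = kernel_mec(X).argmax(1)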
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
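The bounded-variable construction can be sketched with an off-the-shelf LP solver: each variable is split into segments whose objective coefficients are the secant slopes of the entropy, which decrease by concavity, so the LP fills segments in order (scipy's HiGHS solver stands in here for the revised simplex code used in the paper):

    import numpy as np
    from scipy.optimize import linprog

    # Maximize H(x) = -sum x_i ln x_i subject to linear equality constraints,
    # via a piecewise linear approximation with bounded segment variables.
    n, K = 4, 40                       # number of variables, segments each
    w = 1.0 / K                        # segment width on [0, 1]
    b = np.linspace(0.0, 1.0, K + 1)
    f = np.where(b > 0, -b * np.log(np.where(b > 0, b, 1.0)), 0.0)
    slopes = np.diff(f) / w            # decreasing, since H is concave

    # Constraints: x is a probability vector with a prescribed mean of v.
    v = np.array([1.0, 2.0, 3.0, 4.0])
    A_eq = np.repeat(np.vstack([np.ones(n), v]), K, axis=1)  # act on segments
    b_eq = np.array([1.0, 2.2])

    res = linprog(c=-np.tile(slopes, n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, w)] * (n * K), method="highs")
    x = res.x.reshape(n, K).sum(axis=1)
    print("piecewise-LP maximum entropy solution:", np.round(x, 4))

The recovered x approximates the exact Gibbs-form maximum entropy distribution for the same constraints, with accuracy controlled by the number of segments K.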