47 CFR 64.1801 - Geographic rate averaging and rate integration.
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Geographic rate averaging and rate integration. 64.1801 Section 64.1801 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Geographic Rate Averaging and...
Bounding quantum gate error rate based on reported average fidelity
International Nuclear Information System (INIS)
Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C
2016-01-01
Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
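The gap between average fidelity and the fault-tolerance-relevant error rate can be made concrete with a little arithmetic. The sketch below uses the average gate infidelity r = 1 − F and a worst-case bound of the form sqrt(d(d+1)·r), a scaling that appears in the randomized-benchmarking literature; the constant here is illustrative, not the bound derived in this paper.

```python
import math

def average_infidelity(avg_fidelity: float) -> float:
    """Average gate infidelity r = 1 - F_avg."""
    return 1.0 - avg_fidelity

def worst_case_bound(avg_fidelity: float, d: int) -> float:
    """Illustrative upper bound on the worst-case error rate.

    The sqrt(d*(d+1)*r) form mirrors bounds in the randomized-benchmarking
    literature; the exact constant is an assumption, not this paper's result.
    """
    r = average_infidelity(avg_fidelity)
    return math.sqrt(d * (d + 1) * r)

# A 99.9% single-qubit fidelity (d = 2) leaves room for a much larger
# worst-case error rate than the 0.1% infidelity alone suggests.
r = average_infidelity(0.999)          # 0.001
bound = worst_case_bound(0.999, d=2)   # ~0.077
```

The point of the sketch is the square-root scaling: a tiny average infidelity does not by itself guarantee a comparably tiny worst-case error rate.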
Non-self-averaging nucleation rate due to quenched disorder
International Nuclear Information System (INIS)
Sear, Richard P
2012-01-01
We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)
Average expansion rate and light propagation in a cosmological Tardis spacetime
Energy Technology Data Exchange (ETDEWEB)
Lavinto, Mikko; Räsänen, Syksy [Department of Physics, University of Helsinki, and Helsinki Institute of Physics, P.O. Box 64, FIN-00014 University of Helsinki (Finland); Szybka, Sebastian J., E-mail: mikko.lavinto@helsinki.fi, E-mail: syksy.rasanen@iki.fi, E-mail: sebastian.szybka@uj.edu.pl [Astronomical Observatory, Jagellonian University, Orla 171, 30-244 Kraków (Poland)
2013-12-01
We construct the first exact statistically homogeneous and isotropic cosmological solution in which inhomogeneity has a significant effect on the expansion rate. The universe is modelled as a Swiss Cheese, with dust FRW background and inhomogeneous holes. We show that if the holes are described by the quasispherical Szekeres solution, their average expansion rate is close to the background under certain rather general conditions. We specialise to spherically symmetric holes and violate one of these conditions. As a result, the average expansion rate at late times grows relative to the background, i.e. backreaction is significant. The holes fit smoothly into the background, but are larger on the inside than a corresponding background domain: we call them Tardis regions. We study light propagation, find the effective equations of state and consider the relation of the spatially averaged expansion rate to the redshift and the angular diameter distance.
Average Rate of Heat-Related Hospitalizations in 23 States, 2001-2010
U.S. Environmental Protection Agency — This map shows the 2001–2010 average rate of hospitalizations classified as “heat-related” by medical professionals in 23 states that participate in CDC’s...
Medicare Readmission Rates Showed Meaningful Decline in 2012
U.S. Department of Health & Human Services — From 2007 through 2011, the national 30-day, all-cause, hospital readmission rate averaged 19 percent. During calendar year 2012, the readmission rate averaged 18.4...
Estimating average glandular dose by measuring glandular rate in mammograms
International Nuclear Information System (INIS)
Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru
2003-01-01
The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
System for evaluation of the true average input-pulse rate
International Nuclear Information System (INIS)
Eichenlaub, D.P.; Garrett, P.
1977-01-01
A digital radiation monitoring system is described that uses current digital circuitry and a microprocessor to rapidly process pulse data coming from remote radiation controllers. The system analyses the pulse rates to determine whether a new datum is statistically the same as those previously received, and hence determines the best possible averaging time for itself. As long as the true average pulse rate stays constant, the time over which the average is taken can increase until the statistical error falls below the desired level, e.g. 1%. When the digital processing of the pulse data indicates a change in the true average pulse rate, the averaging time is reduced so as to improve the response time of the system at the desired statistical error. This approach embodies a fixed compromise between statistical error and response time. [fr]
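The adaptive-averaging idea in this abstract can be sketched in a few lines: keep accumulating counts while each new batch is statistically consistent with the running average, and reset the accumulator when it is not. This is a hypothetical reconstruction of the general principle, not the system's actual implementation; the 3-sigma consistency test and the batch-update interface are assumptions.

```python
import math

class AdaptiveRateMeter:
    """Adaptive averaging of a Poisson pulse rate (a sketch of the idea).

    Counts accumulate as long as each new batch is statistically
    consistent with the running average; on a significant change the
    accumulator is reset, shortening the response time.
    """

    def __init__(self, n_sigma: float = 3.0):
        self.counts = 0.0
        self.time = 0.0
        self.n_sigma = n_sigma

    def update(self, batch_counts: float, batch_time: float) -> float:
        if self.time > 0.0:
            expected = self.rate() * batch_time
            # Poisson standard deviation of the expected batch counts
            sigma = math.sqrt(max(expected, 1.0))
            if abs(batch_counts - expected) > self.n_sigma * sigma:
                # Rate has changed: discard history for a fast response
                self.counts, self.time = 0.0, 0.0
        self.counts += batch_counts
        self.time += batch_time
        return self.rate()

    def rate(self) -> float:
        return self.counts / self.time if self.time > 0.0 else 0.0

    def relative_error(self) -> float:
        # 1/sqrt(N): relative fluctuation of a Poisson count estimate
        return 1.0 / math.sqrt(self.counts) if self.counts > 0 else float("inf")
```

With a steady 10 000 counts/s, the relative error reaches the 1% level once 10^4 counts have accumulated, while a sudden rate change discards the stale history immediately.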
Novel relations between the ergodic capacity and the average bit error rate
Yilmaz, Ferkan
2011-11-01
Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems, and recent research has shown the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, direct links between the two indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.
The Impact of Reviews and Average Rating on Hotel-Booking-Intention
DEFF Research Database (Denmark)
Buus, Line Thomassen; Jensen, Charlotte Thodberg; Jessen, Anne Mette Karnøe
2016-01-01
User-generated information types (ratings and reviews) are highly used when booking hotel rooms on Online Travel Agency (OTA) websites. The impact of user-generated information on decision-making is often investigated through quantitative research, thereby not examining in depth how and why travelers use this information. This paper therefore presents a qualitative study conducted to achieve a deeper understanding. We investigated the use of reviews and average rating in a hotel-booking context through a laboratory experiment, which involved a task of examining a hotel on a pre-designed OTA website, followed by an interview. We processed the data from the interview, and the analysis resulted in a model generalizing the use of reviews and average rating in the deliberation phase of a hotel-booking. The findings are overall consistent with related research. Yet, beyond this, the qualitative...
A new method for the measurement of two-phase mass flow rate using average bi-directional flow tube
International Nuclear Information System (INIS)
Yoon, B. J.; Uh, D. J.; Kang, K. H.; Song, C. H.; Paek, W. P.
2004-01-01
An average bi-directional flow tube was suggested for application in air/steam-water flow conditions. Its working principle is similar to that of a Pitot tube; however, it eliminates the cooling system that is normally needed to prevent flashing in the pressure impulse line of a Pitot tube used under depressurization conditions. The suggested flow tube was tested in an air-water vertical test section with an 80 mm inner diameter and 10 m length. The flow tube was installed at an L/D of 120 from the inlet of the test section. In the test, the pressure drop across the average bi-directional flow tube, the system pressure and the average void fraction were measured on the measuring plane. The fluid temperature and the injected mass flow rates of the air and water phases were also measured by an RTD and two Coriolis flow meters, respectively. To calculate the phasic mass flow rates from the measured differential pressure and void fraction, the Chexal drift-flux correlation was used, and a new correlation for the momentum exchange factor was suggested. The test results show that the suggested instrumentation, using the measured void fraction and the Chexal drift-flux correlation, can predict the mass flow rates within 10% of the measured data.
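The final step, splitting a measured differential pressure and void fraction into phasic mass flow rates, can be illustrated with a drift-flux calculation. The sketch below substitutes a simple Zuber-Findlay form (assumed C0 = 1.2, Vgj = 0.25 m/s) for the Chexal correlation used in the paper; the Pitot-type velocity relation, amplification factor and fluid properties are likewise illustrative assumptions.

```python
import math

def phasic_mass_flows(dp, void, rho_f=1000.0, rho_g=1.2,
                      area=math.pi * 0.04 ** 2, k_amp=1.0,
                      c0=1.2, v_gj=0.25):
    """Split a measured signal into phasic mass flow rates (kg/s).

    A simplified sketch: a Pitot-type relation converts the measured
    differential pressure dp (Pa) into a total volumetric flux, and a
    Zuber-Findlay drift-flux form (assumed C0, Vgj) stands in for the
    Chexal correlation. k_amp is the flow tube's amplification factor.
    """
    rho_m = void * rho_g + (1.0 - void) * rho_f   # mixture density
    j = math.sqrt(2.0 * dp / (k_amp * rho_m))     # total volumetric flux (m/s)
    u_g = c0 * j + v_gj                           # gas velocity from drift flux
    u_f = (j - void * u_g) / (1.0 - void)         # liquid velocity: j = a*u_g + (1-a)*u_f
    m_g = rho_g * void * u_g * area               # gas mass flow rate
    m_f = rho_f * (1.0 - void) * u_f * area       # liquid mass flow rate
    return m_f, m_g
```

At moderate void fractions the liquid phase carries nearly all of the mass, which is why the void-fraction measurement matters so much for the split.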
A Framework for Control System Design Subject to Average Data-Rate Constraints
DEFF Research Database (Denmark)
Silva, Eduardo; Derpich, Milan; Østergaard, Jan
2011-01-01
This paper studies discrete-time control systems subject to average data-rate limits. We focus on a situation where a noisy linear system has been designed assuming transparent feedback and, due to implementation constraints, a source-coding scheme (with unity signal transfer function) has to be ...
He, Ning; Sun, Hechun; Dai, Miaomiao
2014-05-01
To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, the extent of drug degradation, the number of humidity and temperature levels, the humidity and temperature ranges, and the average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from this proposed method were comparable to those from classical isothermal experiments at constant humidity. The estimates were more accurate and precise when the extent of drug degradation was controlled, the humidity and temperature ranges were changed, or the average temperature was set closer to room temperature. Compared with isothermal experiments at constant humidity, our proposed method saves time, labor, and materials.
Energy Technology Data Exchange (ETDEWEB)
Chumakov, G A; Slinko, M G
1979-05-01
The possibility of increasing the average rate of heterogeneous catalytic reactions by operating in the self-oscillating regime was demonstrated by analyzing a kinetic model of hydrogen interaction with oxygen over a metallic catalyst. Within a certain interval of partial pressures of oxygen, the average reaction rate over a period of oscillation may be over five times that of the steady-state reaction.
International Nuclear Information System (INIS)
Yun, B.J.; Kang, K.H.; Euh, D.J.; Song, C.H.; Baek, W.P.
2005-01-01
A new type of instrumentation, the average bi-directional flow tube, was proposed for application to single- and two-phase flow conditions. Its working principle is similar to that of the Pitot tube. The pressure measured at the front of the flow tube is equal to the total pressure, while that measured at the rear tube is slightly less than the static pressure of the flow field due to the suction effect downstream. This amplifies the pressure difference measured at the flow tube. The proposed instrumentation has the characteristics that it is applicable to low-flow conditions and can measure bidirectional flow. It was tested in air-water vertical and horizontal test sections with a 0.08 m inner diameter. The pressure difference across the average bi-directional flow tube, the system pressure, the average void fraction and the injected phasic mass flow rates were measured on the measuring plane. Tests were performed first in single-phase water and air flow conditions to obtain the amplification factor k of the flow tube. Tests were also performed in air-water two-phase flow conditions, covering the bubbly, slug and churn-turbulent flow regimes in the vertical pipe and stratified flow in the horizontal pipe. In order to calculate the phasic and total mass flow rates from the measured differential pressure, the Chexal drift-flux correlation and a momentum exchange factor between the two phases were introduced. The test results show that the suggested instrumentation, with the measured void fraction, the Chexal drift-flux correlation and Bosio and Malnes' momentum exchange model, can predict the phasic mass flow rates within 15% of the true values. A new momentum exchange model was also suggested; it improves the measured mass flow rates by up to 5% compared with the combination using Bosio and Malnes' model. (authors)
International Nuclear Information System (INIS)
Robinson, G.S.
1986-03-01
The EDITAR module of the AUS neutronics code system edits one- and two-dimensional flux data pools produced by other AUS modules to form reaction rates for materials and their constituent nuclides, and to average cross sections over space and energy. The module includes a B_L flux calculation for application to cell leakage. The STATUS data pool of the AUS system is used to enable the 'unsmearing' of fluxes and nuclide editing with minimal user input. The module distinguishes between neutron and photon groups, and printed reaction rates are formed accordingly. Bilinear weighting may be used to obtain material reactivity worths and to average cross sections. Bilinear weighting is at present restricted to diffusion theory leakage estimates made using mesh-average fluxes.
Should the average tax rate be marginalized?
Czech Academy of Sciences Publication Activity Database
Feldman, N. E.; Katuščák, Peter
-, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf
Directory of Open Access Journals (Sweden)
Tao Wang
2010-12-01
Isotopic fractionation is the basis of tracing the water cycle using hydrogen and oxygen isotopes. Isotopic fractionation factors in water evaporating from free water bodies are mainly affected by temperature and relative humidity, and vary significantly with these atmospheric factors over the course of a day. The evaporation rate (E) can reveal the effects of atmospheric factors. Therefore, there should be a certain functional relationship between isotopic fractionation factors and E. An average isotopic fractionation factor (α*) was defined to describe isotopic differences between vapor and liquid phases in evaporation over time intervals of days. The relationship between α* and E based on the isotopic mass balance was investigated through an evaporation pan experiment with no inflow. The experimental results showed that the isotopic compositions of residual water became more enriched with time; α* was affected by air temperature, relative humidity, and other atmospheric factors, and had a strong functional relation with E. The values of α* can be easily calculated with the known values of E, the initial volume of water in the pan, and the isotopic compositions of residual water.
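For a pan with no inflow and a constant evaporation rate, the isotopic mass balance reduces to a Rayleigh-type relation from which α* can be recovered. The sketch below assumes R/R0 = f^(1/α* − 1), with f the remaining water fraction; the exact functional form used in the paper may differ, so treat this as an illustration of the mass-balance idea only.

```python
import math

def average_fractionation_factor(v0, e_rate, days, r0, r_final):
    """Estimate the average isotopic fractionation factor alpha* from an
    evaporation-pan mass balance with no inflow.

    Assumes a Rayleigh-type relation R/R0 = f**(1/alpha* - 1), where f is
    the remaining water fraction (an assumption, not the paper's formula).

    v0          : initial water volume in the pan
    e_rate      : evaporation rate (volume per day), assumed constant
    days        : elapsed time in days
    r0, r_final : isotope ratios of the residual water at start and end
    """
    f = (v0 - e_rate * days) / v0                      # remaining fraction
    exponent = math.log(r_final / r0) / math.log(f)    # equals 1/alpha* - 1
    return 1.0 / (1.0 + exponent)
```

A round-trip check (pick α*, generate the enriched residual ratio, recover α*) confirms the algebra: with α* > 1 the residual water is enriched as f decreases, matching the experimental observation.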
Directory of Open Access Journals (Sweden)
Md Nuruzzaman Khan
Globally the rates of caesarean section (CS) have steadily increased in recent decades. This rise is not fully accounted for by increases in clinical factors which indicate the need for CS. We investigated the socio-demographic predictors of CS and the average annual rates of CS in Bangladesh between 2004 and 2014. Data were derived from four waves of the nationally representative Bangladesh Demographic and Health Survey (BDHS) conducted between 2004 and 2014. Rate of change analysis was used to calculate the average annual rate of increase in CS from 2004 to 2014, by socio-demographic categories. Multi-level logistic regression was used to identify the socio-demographic predictors of CS in a cross-sectional analysis of the 2014 BDHS data. CS rates increased from 3.5% in 2004 to 23% in 2014. The average annual rate of increase in CS was higher among women of advanced maternal age (≥35 years), in urban areas, and of relatively high socio-economic status; with higher education, and who regularly accessed antenatal services. The multi-level logistic regression model indicated that lower (≤19) and advanced maternal age (≥35), urban location, relatively high socio-economic status, higher education, birth of few children (≤2), antenatal healthcare visits, and overweight or obesity were the key factors associated with increased utilization of CS. Underweight was a protective factor for CS. The use of CS has increased considerably in Bangladesh over the survey years. This rising trend and the risk of having CS vary significantly across regions and socio-economic status. Very high use of CS among women of relatively high socio-economic status and the substantial urban-rural difference call for public awareness and practice guideline enforcement aimed at optimizing the use of CS.
Khan, Md Nuruzzaman; Islam, M Mofizul; Shariff, Asma Ahmad; Alam, Md Mahmudul; Rahman, Md Mostafizur
2017-01-01
Globally the rates of caesarean section (CS) have steadily increased in recent decades. This rise is not fully accounted for by increases in clinical factors which indicate the need for CS. We investigated the socio-demographic predictors of CS and the average annual rates of CS in Bangladesh between 2004 and 2014. Data were derived from four waves of nationally representative Bangladesh Demographic and Health Survey (BDHS) conducted between 2004 and 2014. Rate of change analysis was used to calculate the average annual rate of increase in CS from 2004 to 2014, by socio-demographic categories. Multi-level logistic regression was used to identify the socio-demographic predictors of CS in a cross-sectional analysis of the 2014 BDHS data. CS rates increased from 3.5% in 2004 to 23% in 2014. The average annual rate of increase in CS was higher among women of advanced maternal age (≥35 years), urban areas, and relatively high socio-economic status; with higher education, and who regularly accessed antenatal services. The multi-level logistic regression model indicated that lower (≤19) and advanced maternal age (≥35), urban location, relatively high socio-economic status, higher education, birth of few children (≤2), antenatal healthcare visits, overweight or obese were the key factors associated with increased utilization of CS. Underweight was a protective factor for CS. The use of CS has increased considerably in Bangladesh over the survey years. This rising trend and the risk of having CS vary significantly across regions and socio-economic status. Very high use of CS among women of relatively high socio-economic status and substantial urban-rural difference call for public awareness and practice guideline enforcement aimed at optimizing the use of CS.
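The headline numbers permit a quick check of the average annual rate of increase. One common definition is the geometric-mean annual growth rate; whether the authors' rate-of-change analysis uses exactly this form is an assumption here.

```python
# Geometric-mean annual growth rate -- one common way to express an
# "average annual rate of increase" (the paper's exact definition of
# rate-of-change analysis may differ).
def average_annual_rate(start, end, years):
    return (end / start) ** (1.0 / years) - 1.0

# CS prevalence rose from 3.5% (2004) to 23% (2014):
rate = average_annual_rate(3.5, 23.0, 10)
print(f"{rate:.1%}")  # roughly 21% per year
```

A simple linear reading (23 − 3.5 = 19.5 percentage points over 10 years, i.e. about 2 points per year) gives a very different picture, which is why the definition of "average annual rate" matters when comparing studies.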
Czech Academy of Sciences Publication Activity Database
Dušek, Libor; Kalíšková, Klára; Münich, Daniel
2013-01-01
Roč. 63, č. 6 (2013), s. 474-504 ISSN 0015-1920 R&D Projects: GA TA ČR(CZ) TD010033 Institutional support: RVO:67985998 Keywords : TAXBEN models * average tax rates * marginal tax rates Subject RIV: AH - Economics Impact factor: 0.358, year: 2013 http://journal.fsv.cuni.cz/storage/1287_dusek.pdf
Czech Academy of Sciences Publication Activity Database
Dušek, Libor; Kalíšková, Klára; Münich, Daniel
2013-01-01
Roč. 63, č. 6 (2013), s. 474-504 ISSN 0015-1920 R&D Projects: GA MŠk(CZ) SVV 267801/2013 Institutional support: PRVOUK-P23 Keywords : TAXBEN models * average tax rates * marginal tax rates Subject RIV: AH - Economics Impact factor: 0.358, year: 2013 http://journal.fsv.cuni.cz/storage/1287_dusek.pdf
International Nuclear Information System (INIS)
Boccaccini, L.V.
1986-07-01
To take advantage of semi-implicit computer models for solving the two-phase flow differential system, a proper averaging procedure is also needed for the source terms. In fact, in some cases the correlations normally used for the source terms, which are not time-averaged, fail when used with the theoretical time step that arises from the linear stability analysis of the right-hand side. Such a time-averaging procedure is developed here with reference to the bubbly flow regime. Moreover, the concept of the mass that must be exchanged to reach equilibrium from a non-equilibrium state is introduced to limit the mass transfer during a time step. Finally, some practical calculations are performed to compare the different correlations for the average mass transfer rate developed in this work. (orig.) [de]
Directory of Open Access Journals (Sweden)
Alexie M. F. Heimburger
2017-06-01
To effectively address climate change, aggressive mitigation policies need to be implemented to reduce greenhouse gas emissions. Anthropogenic carbon emissions are mostly generated from urban environments, where human activities are spatially concentrated. Improvements in uncertainty determinations and precision of measurement techniques are critical to permit accurate and precise tracking of emissions changes relative to the reduction targets. As part of the INFLUX project, we quantified carbon dioxide (CO2), carbon monoxide (CO) and methane (CH4) emission rates for the city of Indianapolis by averaging results from nine aircraft-based mass balance experiments performed in November-December 2014. Our goal was to assess the achievable precision of the aircraft-based mass balance method through averaging, assuming constant CO2, CH4 and CO emissions during a three-week field campaign in late fall. The averaging method leads to an emission rate of 14,600 mol/s for CO2, assumed to be largely fossil-derived for this period of the year, and 108 mol/s for CO. The relative standard error of the mean is 17% and 16%, for CO2 and CO, respectively, at the 95% confidence level (CL), i.e. a more than 2-fold improvement from the previous estimate of ~40% for single-flight measurements for Indianapolis. For CH4, the averaged emission rate is 67 mol/s, while the standard error of the mean at 95% CL is large, i.e. ±60%. Given the results for CO2 and CO for the same flight data, we conclude that this much larger scatter in the observed CH4 emission rate is most likely due to variability of CH4 emissions, suggesting that the assumption of constant daily emissions is not correct for CH4 sources. This work shows that repeated measurements using aircraft-based mass balance methods can yield sufficient precision of the mean to inform emissions reduction efforts by detecting changes over time in urban emissions.
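The precision gain from averaging repeated flights is ordinary standard-error arithmetic: the uncertainty of the mean shrinks roughly as 1/sqrt(n), which is how a ~40% single-flight spread becomes a ~17% uncertainty on the mean of nine flights. The sketch below uses synthetic emission estimates, not the INFLUX data, and Student's t critical value for n = 9 at the 95% level.

```python
import math

def mean_and_rel_ci(samples, t_crit=2.306):
    """Mean and relative 95% confidence half-width for repeated estimates.

    t_crit = 2.306 is Student's t for 8 degrees of freedom (n = 9
    samples) at the 95% level; adjust it for other sample sizes.
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    sem = math.sqrt(var / n)                               # standard error of the mean
    return mean, t_crit * sem / mean                       # relative half-width

# Nine synthetic CO2 emission estimates (mol/s) with realistic scatter:
flights = [14600, 11000, 18000, 13500, 16500, 12800, 15900, 14000, 17000]
mean, rel = mean_and_rel_ci(flights)
```

The same arithmetic explains the CH4 result: when the scatter between flights reflects genuinely variable emissions rather than measurement noise, averaging cannot shrink the uncertainty the same way.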
Baumgartner, W A; Baumgartner, A M
2016-04-01
Since 1985, at least nine studies of the average rate of cone loss in retinitis pigmentosa (RP) populations have yielded conflicting average rate constant values (-k), differing by 90-160%. This is surprising, since, except for the first two investigations, the Harvard or Johns Hopkins protocols used in these studies were identical with respect to: use of the same exponential decline model, calculation of average -k from individual patient k values, monitoring patients over similarly large time frames, and excluding data exhibiting floor and ceiling effects. A detailed analysis of Harvard's and Hopkins' protocols and data revealed two subtle differences: (i) Hopkins' use of half-life t0.5 (or t(1/e)) for expressing patient cone-loss rates rather than k as used by Harvard; (ii) Harvard obtaining substantially more +k values from improving fields due to dormant-cone recovery effects, and more "small -k" values, than Hopkins ("small -k" is defined as less than -0.040 year(-1)): e.g., 16% +k and 31% small -k vs. Hopkins' 3% and 6%, respectively. Since t0.5=0.693/k, it follows that when k=0, or is very small, t0.5 (or t(1/e)) is respectively infinity or a very large number. This unfortunate mathematical property (which also prevents construction of t0.5 (t(1/e)) histograms spanning -k to +k) caused Hopkins to delete all "small -k" and all +k values due to "strong leverage". Naturally this contributed to Hopkins' larger average -k. Difference (ii) led us to re-evaluate the Harvard/Hopkins exponential unchanging -k model. In its place we propose a model of increasing biochemical stresses from dying rods on cones during RP progression: increasing oxidative stresses, trophic factor deficiencies (e.g., RdCVF), and RPE malfunction. Our kinetic analysis showed rod loss to follow exponential kinetics with unchanging -k due to constant genetic stresses, thereby providing a theoretical basis for Clarke et al.'s empirical observation of such kinetics in eleven animal models of RP.
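The mathematical property at the heart of difference (i) is easy to demonstrate: half-life t0.5 = ln(2)/k diverges as k approaches zero and is undefined at k = 0, so near-zero and positive k values have no usable place on a half-life scale.

```python
import math

# Why half-life is an awkward unit for near-zero rate constants:
# t_half = ln(2) / k blows up as k -> 0 and is undefined for k = 0,
# so "small -k" and improving (+k) fields cannot be placed on a
# half-life histogram.
def half_life(k):
    return math.log(2) / k

print(half_life(0.040))   # at the "small -k" threshold: ~17.3 years
print(half_life(0.004))   # a ten-times-smaller k: ~173 years
```

Working directly in k avoids the divergence: k values near zero (and positive k from improving fields) sit on the same linear scale as the rest of the distribution, which is the crux of the Harvard/Hopkins discrepancy described above.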
2011-07-20
... adjustments to the national average payment rates for meals and snacks served in child care centers, outside... payment rates for meals and snacks served in day care homes; and the administrative reimbursement rates for sponsoring organizations of day care homes, to reflect changes in the Consumer Price Index...
Effect of gas temperature on flow rate characteristics of an averaging pitot tube type flow meter
Energy Technology Data Exchange (ETDEWEB)
Yeo, Seung Hwa; Lee, Su Ryong; Lee, Choong Hoon [Seoul National University of Science and Technology, Seoul (Korea, Republic of)
2015-01-15
The flow rate characteristics passing through an averaging Pitot tube (APT) while constantly controlling the flow temperature were studied through experiments and CFD simulations. At controlled temperatures of 25, 50, 75, and 100 °C, the flow characteristics, in this case the upstream, downstream and static pressure at the APT flow meter probe, were measured as the flow rate was increased. The flow rate through the APT flow meter was represented using the H-parameter (hydraulic height) obtained by a combination of the differential pressure and the air density measured at the APT flow meter probe. Four types of H-parameters were defined depending on the specific combination. The flow rate and the upstream, downstream and static pressures measured at the APT flow meter while changing the H-parameters were simulated by means of CFD. The flow rate curves showed different features depending on which type of H-parameter was used. When using the constant air density value in a standard state to calculate the H-parameters, the flow rate increased linearly with the H-parameter and the slope of the flow rate curve according to the H-parameter increased as the controlled target air temperature was increased. When using different air density levels corresponding to each target air temperature to calculate the H-parameter, the slope of the flow rate curve according to the H-parameter was constant and the flow rate curve could be represented by a single line. The CFD simulation results were in good agreement with the experimental results. The CFD simulations were performed while increasing the air temperature to 1200 K. The CFD simulation results for high air temperatures were similar to those at the lower temperatures ranging from 25 to 100 °C.
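The key observation, that temperature-corrected air density collapses the flow-rate curves onto a single line, follows from how the H-parameter is built. The sketch below uses the ideal-gas law for density and a hydraulic-height definition H = Δp/(ρg); the APT coefficient c_apt is an assumed placeholder, not a value from the paper.

```python
# Ideal-gas density correction that removes the temperature dependence
# of the H-parameter (a sketch; c_apt is an assumed placeholder).
R_AIR = 287.05  # specific gas constant of air, J/(kg K)
G = 9.80665     # standard gravity, m/s^2

def air_density(p_static, t_kelvin):
    """Density from the ideal-gas law at the actual gas temperature."""
    return p_static / (R_AIR * t_kelvin)

def h_parameter(dp, p_static, t_kelvin):
    """Hydraulic height H = dp / (rho * g), using the density at the
    actual gas temperature rather than a standard-state value."""
    rho = air_density(p_static, t_kelvin)
    return dp / (rho * G)

def flow_velocity(dp, p_static, t_kelvin, c_apt=0.8):
    """Pitot-type velocity from the APT differential pressure."""
    rho = air_density(p_static, t_kelvin)
    return c_apt * (2.0 * dp / rho) ** 0.5
```

Because both the velocity and H carry the same 1/ρ factor, evaluating ρ at each target temperature makes the velocity-versus-H slope temperature-independent, consistent with the single-line result reported above.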
Effect of gas temperature on flow rate characteristics of an averaging pitot tube type flow meter
International Nuclear Information System (INIS)
Yeo, Seung Hwa; Lee, Su Ryong; Lee, Choong Hoon
2015-01-01
The flow rate characteristics passing through an averaging Pitot tube (APT) while constantly controlling the flow temperature were studied through experiments and CFD simulations. At controlled temperatures of 25, 50, 75, and 100 °C, the flow characteristics, in this case the upstream, downstream and static pressure at the APT flow meter probe, were measured as the flow rate was increased. The flow rate through the APT flow meter was represented using the H-parameter (hydraulic height) obtained by a combination of the differential pressure and the air density measured at the APT flow meter probe. Four types of H-parameters were defined depending on the specific combination. The flow rate and the upstream, downstream and static pressures measured at the APT flow meter while changing the H-parameters were simulated by means of CFD. The flow rate curves showed different features depending on which type of H-parameter was used. When using the constant air density value in a standard state to calculate the H-parameters, the flow rate increased linearly with the H-parameter and the slope of the flow rate curve according to the H-parameter increased as the controlled target air temperature was increased. When using different air density levels corresponding to each target air temperature to calculate the H-parameter, the slope of the flow rate curve according to the H-parameter was constant and the flow rate curve could be represented by a single line. The CFD simulation results were in good agreement with the experimental results. The CFD simulations were performed while increasing the air temperature to 1200 K. The CFD simulation results for high air temperatures were similar to those at the lower temperatures ranging from 25 to 100 °C.
2011-07-26
... DEPARTMENT OF AGRICULTURE Food and Nutrition Service Child and Adult Care Food Program: National Average Payment Rates, Day Care Home Food Service Payment Rates, and Administrative Reimbursement Rates for Sponsoring Organizations of Day Care Homes for the Period July 1, 2011 Through June 30, 2012 Correction In notice document 2011-18257 appearin...
A generalization of the preset count moving average algorithm for digital rate meters
International Nuclear Information System (INIS)
Arandjelovic, Vojislav; Koturovic, Aleksandar; Vukanovic, Radomir
2002-01-01
A generalized definition of the preset count moving average algorithm for digital rate meters has been introduced. The algorithm is based on the knowledge of time intervals between successive pulses in random-pulse sequences. The steady state and transient regimes of the algorithm have been characterized. A measure for statistical fluctuations of the successive measurement results has been introduced. The versatility of the generalized algorithm makes it suitable for application in the design of the software of modern measuring/control digital systems
2010-07-19
...] Lunch and Centers Breakfast supper \\1\\ Snack Contiguous States: Paid 0.26 0.26 0.06 Reduced Price 1.18 2... adjustments to the national average payment rates for meals and snacks served in child care centers, outside... payment rates for meals and snacks served in day care homes; and the administrative reimbursement rates...
Shin, Sang Soo; Shin, Young-Jeon
2016-01-01
With an increasing number of studies highlighting regional social capital (SC) as a determinant of health, many studies use multi-level analysis with merged and averaged scores of community residents' survey responses calculated from community SC data. Sufficient examination is required to validate whether the merged and averaged data can represent the community. Therefore, this study analyzes the validity of the selected indicators and their applicability in multi-level analysis. Within and between analysis (WABA) was performed after creating community variables using merged and averaged data of community residents' responses from the 2013 Community Health Survey in Korea, with subjective self-rated health assessment as the dependent variable. Further analysis was performed following the model suggested by the WABA result. Both the E-test (1) and WABA (2) results revealed that single-level analysis needs to be performed using the qualitative SC variable with cluster mean centering. Through single-level multivariate regression analysis, qualitative SC with cluster mean centering showed a positive effect on self-rated health (0.054), unlike the analysis using SC variables without cluster mean centering or multi-level analysis. As variation in qualitative SC was larger within the community than between communities, we validate that relational analysis of individual self-rated health can be performed within the group, using cluster mean centering. Other tests besides the WABA can be performed in the future to confirm the validity of using community variables and their applicability in multi-level analysis.
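Cluster (group) mean centering, the transformation the WABA result calls for, subtracts each community's mean from its members' scores, leaving only within-community variation; a minimal illustration with hypothetical data (not the study's):

```python
import pandas as pd

# Hypothetical survey responses: one social-capital score per resident,
# grouped by community.
df = pd.DataFrame({
    "community": ["A", "A", "A", "B", "B", "B"],
    "sc_score":  [3.0, 4.0, 5.0, 1.0, 2.0, 3.0],
})
# Cluster mean centering: subtract each community's own mean score.
df["sc_centered"] = (
    df["sc_score"] - df.groupby("community")["sc_score"].transform("mean")
)
print(df["sc_centered"].tolist())  # -> [-1.0, 0.0, 1.0, -1.0, 0.0, 1.0]
```

After centering, communities A and B have identical distributions, so any remaining association with an outcome is purely within-group, which is exactly the comparison the single-level analysis above targets.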
Shamir, Yariv; Rothhardt, Jan; Hädrich, Steffen; Demmler, Stefan; Tschernajew, Maxim; Limpert, Jens; Tünnermann, Andreas
2015-12-01
Sources of long-wavelength, few-cycle, high-repetition-rate pulses are becoming increasingly important for a plethora of applications, e.g., in high-field physics. Here, we report on the realization of a tunable optical parametric chirped pulse amplifier at 100 kHz repetition rate. At a central wavelength of 2 μm, the system delivered 33 fs pulses and 6 W of average power, corresponding to 60 μJ pulse energy with gigawatt-level peak powers. Idler absorption and the resulting crystal heating are experimentally investigated for a BBO crystal. Strategies for further power scaling to several tens of watts of average power are discussed.
Gabel, Jon R; Whitmore, Heidi; Green, Matthew; Stromberg, Sam T; Weinstein, Daniel S; Oran, Rebecca
2015-12-01
Premiums for health insurance plans offered through the federally facilitated and state-based Marketplaces remained steady or increased only modestly from 2014 to 2015. We used data from the Marketplaces, state insurance departments, and insurer websites to examine patterns of premium pricing and the factors behind these patterns. Our data came from 2,964 unique plans offered in 2014 and 4,153 unique plans offered in 2015 in forty-nine states and the District of Columbia. Using descriptive and multivariate analysis, we found that the addition of a carrier in a rating area lowered average premiums for the two lowest-cost silver plans and the lowest-cost bronze plan by 2.2 percent. When all plans in a rating area were included, an additional carrier was associated with an average decline in premiums of 1.4 percent. Plans in the Consumer Operated and Oriented Plan Program and Medicaid managed care plans had lower premiums and average premium increases than national commercial and Blue Cross and Blue Shield plans. On average, premiums fell by an appreciably larger amount for catastrophic and bronze plans than for gold plans, and premiums for platinum plans increased. This trend of low premium increases overall is unlikely to continue, however, as insurers are faced with mounting medical claims.
Directory of Open Access Journals (Sweden)
Hyunwoo Lee
2018-01-01
Continuous cardiac monitoring has been developed to evaluate cardiac activity outside of clinical environments due to the advancement of novel instruments. Seismocardiography (SCG) is one of the vital components that could support such a monitoring system. Although SCG has exhibited lower accuracy, this novel cardiac indicator has been steadily proposed as an alternative to traditional methods such as electrocardiography (ECG). Thus, it is necessary to develop an enhanced method by combining the significant cardiac indicators. In this study, the six-axis signals of an accelerometer and a gyroscope were measured and integrated by the L2 normalization and multi-dimensional kineticardiography (MKCG) approaches, respectively. The waveforms of the accelerometer and gyroscope were standardized and combined via ensemble averaging, and the heart rate was calculated from the dominant frequency. Thirty participants (15 females) were asked to stand or sit in relaxed and aroused conditions. Their SCG was measured during the task. As a result, the proposed method showed higher accuracy than traditional SCG methods in all measurement conditions. The three main contributions are as follows: (1) the ensemble averaging enhanced heart rate estimation with the benefits of the six-axis signals; (2) the proposed method was compared with previous SCG methods that employ fewer axes; and (3) the method was tested in various measurement conditions for a more practical application.
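The core pipeline, combining the six axes by an L2 norm and reading the heart rate off the dominant spectral frequency, can be sketched as below; this is a simplified stand-in for the paper's ensemble-averaging method, and the synthetic signal and band limits are assumptions:

```python
import numpy as np

def heart_rate_from_6axis(acc, gyr, fs):
    """Heart rate (bpm) from six-axis SCG data.

    acc, gyr: arrays of shape (n_samples, 3). The channels are
    combined by an L2 norm across all six axes, the mean is removed,
    and the dominant spectral peak in a plausible cardiac band is
    converted to beats per minute.
    """
    sig = np.linalg.norm(np.hstack([acc, gyr]), axis=1)
    sig = sig - sig.mean()                      # remove the DC offset
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.0)      # roughly 42-180 bpm
    return 60.0 * freqs[band][np.argmax(spec[band])]

# Synthetic check: a 1.2 Hz (72 bpm) cardiac vibration riding on gravity.
fs = 100.0
t = np.arange(0, 30, 1 / fs)
beat = 0.1 * np.sin(2 * np.pi * 1.2 * t)
acc = np.column_stack([np.zeros_like(t), np.zeros_like(t), 9.81 + beat])
gyr = np.zeros((t.size, 3))
print(round(heart_rate_from_6axis(acc, gyr, fs)))  # -> 72
```

A real implementation would segment beats and ensemble-average them before the spectral step, which is what gives the paper its robustness across postures.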
Hinkelman, Laura M.; Evans, K. Franklin; Clothiaux, Eugene E.; Ackerman, Thomas P.; Stackhouse, Paul W., Jr.
2006-01-01
Cumulus clouds can become tilted or elongated in the presence of wind shear. Nevertheless, most studies of the interaction of cumulus clouds and radiation have assumed these clouds to be isotropic. This paper describes an investigation of the effect of fair-weather cumulus cloud field anisotropy on domain-averaged solar fluxes and atmospheric heating rate profiles. A stochastic field generation algorithm was used to produce twenty three-dimensional liquid water content fields based on the statistical properties of cloud scenes from a large eddy simulation. Progressively greater degrees of x-z plane tilting and horizontal stretching were imposed on each of these scenes, so that an ensemble of scenes was produced for each level of distortion. The resulting scenes were used as input to a three-dimensional Monte Carlo radiative transfer model. Domain-average transmission, reflection, and absorption of broadband solar radiation were computed for each scene along with the average heating rate profile. Both tilt and horizontal stretching were found to significantly affect calculated fluxes, with the amount and sign of flux differences depending strongly on sun position relative to cloud distortion geometry. The mechanisms by which anisotropy interacts with solar fluxes were investigated by comparisons to independent pixel approximation and tilted independent pixel approximation computations for the same scenes. Cumulus anisotropy was found to most strongly impact solar radiative transfer by changing the effective cloud fraction, i.e., the cloud fraction when the field is projected on a surface perpendicular to the direction of the incident solar beam.
Directory of Open Access Journals (Sweden)
Stephen Carstens
2008-11-01
Companies tend to outsource transport to fleet management companies to increase efficiencies if transport is a non-core activity. The provision of fleet management services on contract introduces a certain amount of financial risk to the fleet management company, specifically for fixed-rate maintenance contracts. The quoted rate needs to be sufficient but also competitive in the market. Currently, quoted maintenance rates are based on the maintenance specifications of the manufacturer and the risk management approach of the fleet management company. This is usually reflected in a contingency that is included in the quoted maintenance rate. An alternative methodology for calculating the average maintenance cost for a vehicle fleet is proposed, based on the actual maintenance expenditures of the vehicles and accepted statistical techniques. The proposed methodology results in accurate estimates (and associated confidence limits) of the true average maintenance cost and can be used as a basis for the maintenance quote.
International Nuclear Information System (INIS)
Gil, J.M.; Rodriguez, R.; Florido, R.; Rubiano, J.G.; Mendoza, M.A.; Nuez, A. de la; Espinosa, G.; Martel, P.; Minguez, E.
2013-01-01
In this work we present an analysis of the influence of the thermodynamic regime on the monochromatic emissivity, the radiative power loss and the radiative cooling rate for optically thin carbon plasmas over a wide range of electron temperature and density assuming steady state situations. Furthermore, we propose analytical expressions depending on the electron density and temperature for the average ionization and cooling rate based on polynomial fittings which are valid for the whole range of plasma conditions considered in this work. -- Highlights: ► We compute the average ionization, cooling rates and emissivities of carbon plasmas. ► We compare LTE and NLTE calculations of these magnitudes. ► We perform a parametrization of these magnitudes over a wide range of plasma conditions. ► We provide information about where the LTE regime assumption is accurate.
Fedoseev, V. N.; Pisarevsky, M. I.; Balberkina, Y. N.
2018-01-01
This paper presents the interconnection of the dynamic and average flow rates of the coolant in a channel of complex geometry, which forms the basis for a generalized model of experimental data on heat transfer in various porous structures. Formulas for calculating the heat transfer of fuel rods in transversal fluid flow are derived using the abovementioned model. It is shown that the model describes a limiting case of separated flows in twisting channels, where the coolant constantly changes its flow direction and mixes in the communicating channels with high intensity. It is suggested that the dynamic velocity be identified from the pumping power. The coefficient of proportionality in the general case depends on the geometry of the channel and the Reynolds number (Re). A calculation formula for the coefficient of proportionality for narrow-line rod packages is provided. The paper presents a comparison of experimental data and calculated values, which shows the usability of the suggested models and calculation formulas.
A Hybrid Islanding Detection Technique Using Average Rate of Voltage Change and Real Power Shift
DEFF Research Database (Denmark)
Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte
2009-01-01
The mainly used islanding detection techniques may be classified as active and passive techniques. Passive techniques do not perturb the system but they have larger nondetection zones, whereas active techniques have smaller nondetection zones but they perturb the system. In this paper, a new hybrid...... technique is proposed to solve this problem. An average rate of voltage change (passive technique) has been used to initiate a real power shift (active technique), which changes the real power of distributed generation (DG), when the passive technique cannot have a clear discrimination between islanding......
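The hybrid logic described, an average rate of voltage change (ROCOV) gate that hands ambiguous cases to the active real power shift, can be sketched as follows; the threshold values and window handling are assumptions, since the abstract does not give them:

```python
def classify(voltages, dt, low=0.1, high=0.5):
    """Hybrid islanding check (illustrative thresholds, not the
    paper's tuned values). The average rate of voltage change
    (p.u./s) over a sampling window is the passive stage; only
    the ambiguous middle band triggers the active stage.
    """
    n = len(voltages) - 1
    avg_rocov = sum(
        abs(voltages[i + 1] - voltages[i]) / dt for i in range(n)
    ) / n
    if avg_rocov < low:
        return "grid-connected"
    if avg_rocov > high:
        return "islanded"
    return "apply-real-power-shift"   # perturb DG output, then re-check

print(classify([1.000, 1.001, 0.999, 1.000], dt=0.02))  # -> grid-connected
```

The point of the hybrid is that the perturbing active stage runs only for the middle band, so the system is disturbed far less often than with a purely active scheme.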
Average Bandwidth Allocation Model of WFQ
Directory of Open Access Journals (Sweden)
Tomáš Balogh
2012-01-01
We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We prove the model outcome with examples and simulation results using the NS2 simulator.
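An iterative allocation of this kind can be illustrated with a weighted max-min fair share computation (an illustrative scheme in the same spirit, not the authors' exact model): each flow gets at most its input rate, and capacity left over by such capped flows is re-split among the rest in proportion to their WFQ weights until the assignment is stable.

```python
def wfq_shares(capacity, weights, rates):
    """Weighted max-min fair allocation of average bandwidth.

    Flows never receive more than their input rate; capacity left
    unused by capped flows is redistributed to the remaining flows
    in proportion to their WFQ weights.
    """
    alloc = {}
    active = list(range(len(weights)))
    cap = float(capacity)
    while active:
        wsum = sum(weights[i] for i in active)
        # flows whose demand is below their weighted share get capped
        capped = [i for i in active if rates[i] <= cap * weights[i] / wsum]
        if not capped:          # everyone can consume the full share
            for i in active:
                alloc[i] = cap * weights[i] / wsum
            break
        for i in capped:
            alloc[i] = rates[i]
            cap -= rates[i]
            active.remove(i)
    return [alloc[i] for i in range(len(weights))]

# 10 Mbit/s link, weights 1:1:2, input rates 2, 8 and 8 Mbit/s:
shares = wfq_shares(10.0, [1, 1, 2], [2.0, 8.0, 8.0])
print(shares)  # flow 0 capped at 2.0; the remaining 8.0 splits 1:2
```

Flow 0 only needs 2 Mbit/s of its 2.5 Mbit/s share, so the leftover is re-split, giving the other two flows roughly 2.67 and 5.33 Mbit/s.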
2013-07-26
...This notice announces the annual adjustments to the national average payment rates for meals and snacks served in child care centers, outside-school-hours care centers, at-risk afterschool care centers, and adult day care centers; the food service payment rates for meals and snacks served in day care homes; and the administrative reimbursement rates for sponsoring organizations of day care homes, to reflect changes in the Consumer Price Index. Further adjustments are made to these rates to reflect the higher costs of providing meals in the States of Alaska and Hawaii. The adjustments contained in this notice are made on an annual basis each July, as required by the laws and regulations governing the Child and Adult Care Food Program.
Impact of connected vehicle guidance information on network-wide average travel time
Directory of Open Access Journals (Sweden)
Jiangfeng Wang
2016-12-01
With the emergence of connected vehicle technologies, the potential positive impact of connected vehicle guidance on mobility has become a research hotspot, enabled by data exchange among vehicles, infrastructure, and mobile devices. This study focuses on micro-modeling and quantitatively evaluating the impact of connected vehicle guidance on network-wide travel time by introducing various affecting factors. To evaluate the benefits of connected vehicle guidance, a simulation architecture based on one engine is proposed to represent the connected vehicle-enabled virtual world, and a connected vehicle route guidance scenario is established through the development of a communication agent and intelligent transportation systems agents using the connected vehicle application programming interface, considering communication properties such as path loss and transmission power. The impact of connected vehicle guidance on network-wide travel time is analyzed by comparison with non-connected vehicle guidance for different market penetration rates, following rates, and congestion levels. The simulation results show that average network-wide travel time with connected vehicle guidance is significantly reduced: average network-wide travel time without connected vehicle guidance is 42.23% higher than with it, and average travel time variability (represented by the coefficient of variation) increases as the travel time increases. Other vital findings include that higher penetration rates and following rates generate bigger savings in average network-wide travel time. The savings in average network-wide travel time increase from 17% to 38% across congestion levels, and the savings in average travel time under more serious congestion show a more obvious improvement for the same penetration rate or following rate.
Environmental stresses can alleviate the average deleterious effect of mutations
Directory of Open Access Journals (Sweden)
Leibler Stanislas
2003-05-01
Background: Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results: We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions: Our results show a qualitative difference between various environmental stresses ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.
Directory of Open Access Journals (Sweden)
Ute Harrison
Antibiotic resistance in Helicobacter pylori is a factor preventing its successful eradication. Particularly in developing countries, resistance against commonly used antibiotics is widespread. Here, we present an epidemiological study from Nigeria with 111 isolates. We analyzed the associated disease outcome and performed a detailed characterization of the isolated strains with respect to their antibiotic susceptibility and virulence characteristics. Furthermore, statistical analysis was performed on the microbiological data as well as patient information and the results of the gastroenterological examination. We found that the variability in the production of virulence factors between strains was minimal, with 96.4% of isolates being CagA-positive and 92.8% producing detectable VacA levels. In addition, a high frequency of bacterial resistance was observed for metronidazole (99.1%), followed by amoxicillin (33.3%), clarithromycin (14.4%) and tetracycline (4.5%). In conclusion, this study indicated that the infection rate of H. pylori within the cohort of the present study was surprisingly low (36.6%). Furthermore, an average gastric pathology was observed by histological grading, and the bacterial isolates showed a uniform pathogenicity profile while exhibiting divergent antibiotic resistance rates.
A collisional-radiative average atom model for hot plasmas
International Nuclear Information System (INIS)
Rozsnyai, B.F.
1996-01-01
A collisional-radiative 'average atom' (AA) model is presented for the calculation of opacities of hot plasmas not in local thermodynamic equilibrium (LTE). The electron impact and radiative rate constants are calculated using the dipole oscillator strengths of the average atom. A key element of the model is the photon escape probability, which at present is calculated for a semi-infinite slab. The Fermi statistics render the rate equation for the AA level occupancies nonlinear, which requires iterations until the steady-state AA level occupancies are found. Detailed electronic configurations are built into the model after the self-consistent non-LTE AA state is found. The model shows a continuous transition from the non-LTE to the LTE state depending on the optical thickness of the plasma. 22 refs., 13 figs., 1 tab.
Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.
Directory of Open Access Journals (Sweden)
Jacinta Chan Phooi M'ng
The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA'), in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.
Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah
2016-01-01
The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
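The abstract does not give the exact AMA' formula, so as a rough illustration of a volatility-adjusted moving average, here is a Kaufman-style adaptive moving average in which an efficiency ratio (analogous in spirit to the Efficacy Ratio) controls the smoothing; all parameter values are illustrative assumptions:

```python
def adaptive_ma(prices, n=10, fast=2, slow=30):
    """Kaufman-style adaptive moving average.

    The efficiency ratio (net move / total move over the last n
    bars) pushes the smoothing constant toward the fast end in
    clean trends and toward the slow end in choppy range trading,
    which is how whipsaws are dampened.
    """
    out = [prices[0]]
    for i in range(1, len(prices)):
        if i < n:               # warm-up: not enough history yet
            out.append(prices[i])
            continue
        change = abs(prices[i] - prices[i - n])
        vol = sum(abs(prices[j] - prices[j - 1])
                  for j in range(i - n + 1, i + 1))
        er = change / vol if vol else 1.0
        sc = (er * (2 / (fast + 1) - 2 / (slow + 1)) + 2 / (slow + 1)) ** 2
        out.append(out[-1] + sc * (prices[i] - out[-1]))
    return out

prices = [float(p) for p in range(1, 31)]   # a steadily trending series
ama = adaptive_ma(prices)                   # hugs the trend with a small lag
```

On a monotone series the efficiency ratio is 1, so the average tracks price with its fastest smoothing; on a sideways series the ratio collapses toward 0 and the average barely moves.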
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Naess, Arvid; Saha, Nilanjan
2011-01-01
The paper explores a recently developed method for statistical response load (load effect) extrapolation for application to extreme response of wind turbines during operation. The extrapolation method is based on average conditional exceedance rates and is in the present implementation restricted to cases where the Gumbel distribution is the appropriate asymptotic extreme value distribution. However, two extra parameters are introduced by which a more general and flexible class of extreme value distributions is obtained with the Gumbel distribution as a subclass. The general method is implemented within a hierarchical model where the variables that influence the loading are divided into ergodic variables and time-invariant non-ergodic variables. The presented method for statistical response load extrapolation was compared with the existing methods based on peak extrapolation for the blade out...
Aarthi, G.; Ramachandra Reddy, G.
2018-03-01
In our paper, the impact of adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.
International Nuclear Information System (INIS)
Osei Poku, L.
2012-01-01
Most reactors incorporate out-of-core neutron detectors to monitor the reactor power. An accurate relationship between the powers indicated by these detectors and the actual core thermal power is required. This relationship is established by calibrating the thermal power. The most common method used in calibrating the thermal power of low-power reactors is the neutron activation technique. To enhance the principle of multiplicity and diversity in measuring the thermal neutron flux and/or power and the temperature difference and/or average core temperature of low-power research reactors, an alternative and complementary method has been developed in addition to the current method. The thermal neutron flux/power and temperature difference/average core temperature were correlated with the measured gamma dose rate. The thermal neutron flux and power predicted using the gamma dose rate measurement were in good agreement with the calibrated/indicated thermal neutron fluxes and powers. The predicted data were also in good agreement with thermal neutron fluxes and powers obtained using the activation technique. At an indicated power of 30 kW, the measured gamma dose rate predicted thermal neutron fluxes of (1 × 10^12 ± 0.00255 × 10^12) n/cm^2·s and (0.987 × 10^12 ± 0.00243 × 10^12) n/cm^2·s, which corresponded to powers of (30.06 ± 0.075) kW and (29.6 ± 0.073) kW for the normal pool water level and 40 cm below the normal level, respectively. At an indicated power of 15 kW, the measured gamma dose rate predicted thermal neutron fluxes of (5.07 × 10^11 ± 0.025 × 10^11) n/cm^2·s and (5.12 × 10^11 ± 0.024 × 10^11) n/cm^2·s, which corresponded to powers of (15.21 ± 0.075) kW and (15.36 ± 0.073) kW for the normal pool water level and 40 cm below the normal level, respectively. The power predicted by this work also compared well with the power obtained from a three-dimensional neutronic analysis of the GHARR-1 core. The predicted power also compares well with the calculated power using a correlation equation obtained from
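The correlation used here amounts to a linear calibration of power against gamma dose rate; a toy version with made-up calibration pairs (none of these numbers are from the study):

```python
# Least-squares line power = a * dose + b, fitted to hypothetical
# (gamma dose rate, indicated thermal power) calibration pairs,
# then used to predict power from a new dose-rate reading.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

dose = [10.0, 20.0, 30.0, 40.0]   # dose rate, arbitrary units
power = [5.1, 10.0, 15.2, 19.9]   # indicated power, kW (hypothetical)
a, b = fit_line(dose, power)
print(a * 25.0 + b)  # predicted power (kW) at a dose reading of 25, approx. 12.55
```

Once the line is fitted against the activation-calibrated power, any routine gamma dose-rate reading yields an independent power estimate, which is the diversity argument made above.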
Effect of temporal averaging of meteorological data on predictions of groundwater recharge
Directory of Open Access Journals (Sweden)
Batalha Marcia S.
2018-06-01
Accurate estimates of infiltration and groundwater recharge are critical for many hydrologic, agricultural and environmental applications. Anticipated climate change in many regions of the world, especially in tropical areas, is expected to increase the frequency of high-intensity, short-duration precipitation events, which in turn will affect the groundwater recharge rate. Estimates of recharge are often obtained using monthly or even annually averaged meteorological time series data. In this study we employed the HYDRUS-1D software package to assess the sensitivity of groundwater recharge calculations to using meteorological time series of different temporal resolutions (i.e., hourly, daily, weekly, monthly and yearly averaged precipitation and potential evaporation rates). Calculations were applied to three sites in Brazil having different climatological conditions: a tropical savanna (the Cerrado), a humid subtropical area (the temperate southern part of Brazil), and a very wet tropical area (Amazonia). To simplify our current analysis, we did not consider any land use effects by ignoring root water uptake. Temporal averaging of meteorological data was found to lead to significant bias in predictions of groundwater recharge, with much greater estimated recharge rates in the case of very uneven temporal rainfall distributions during the year involving distinct wet and dry seasons. For example, at the Cerrado site, using daily averaged data produced recharge rates up to 9 times greater than using yearly averaged data. In all cases, an increase in the averaging time of meteorological data led to lower estimates of groundwater recharge, especially at sites having coarse-textured soils. Our results show that temporal averaging limits the ability of simulations to predict deep penetration of moisture in response to precipitation, so that water remains in the upper part of the vadose zone, subject to upward flow and evaporation.
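The direction of this bias can be reproduced with a toy water-balance sketch (all parameters are illustrative; the study itself used HYDRUS-1D): when evaporative demand claims a fixed depth of water each day, only concentrated storms leave a surplus that can percolate to recharge, while the same total rain spread uniformly is consumed entirely.

```python
def recharge(rainfall, et_demand=3.0):
    """Toy daily water balance: evapotranspiration claims up to
    `et_demand` mm each day; only the surplus percolates below the
    root zone and counts as recharge. (Illustrative only -- a full
    variably-saturated flow model behaves far more richly.)
    """
    return sum(max(0.0, r - et_demand) for r in rainfall)

# Same 60 mm monthly total, two temporal distributions:
bursty = [20.0 if d % 10 == 0 else 0.0 for d in range(30)]   # 3 storms
uniform = [sum(bursty) / 30.0] * 30                          # 2 mm every day
print(recharge(bursty), recharge(uniform))  # -> 51.0 0.0
```

The bursty series sends 51 mm past the evaporation sink while the averaged series sends none, mirroring the finding that longer averaging windows systematically lower predicted recharge.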
Miglior, Filippo; Mallard, Bonnie A.
2013-01-01
The objective of this study was to compare the incidence rate of clinical mastitis (IRCM) between cows classified as high, average, or low for antibody-mediated immune responses (AMIR) and cell-mediated immune responses (CMIR). In collaboration with the Canadian Bovine Mastitis Research Network, 458 lactating Holsteins from 41 herds were immunized with a type 1 and a type 2 test antigen to stimulate adaptive immune responses. A delayed-type hypersensitivity test to the type 1 test antigen was used as an indicator of CMIR, and serum antibody of the IgG1 isotype to the type 2 test antigen was used for AMIR determination. By using estimated breeding values for these traits, cows were classified as high, average, or low responders. The IRCM was calculated as the number of cases of mastitis experienced over the total time at risk throughout the 2-year study period. High-AMIR cows had an IRCM of 17.1 cases per 100 cow-years, which was significantly lower than average and low responders, with 27.9 and 30.7 cases per 100 cow-years, respectively. Low-AMIR cows tended to have the most severe mastitis. No differences in the IRCM were noted when cows were classified based on CMIR, likely due to the extracellular nature of mastitis-causing pathogens. The results of this study demonstrate the desirability of breeding dairy cattle for enhanced immune responses to decrease the incidence and severity of mastitis in the Canadian dairy industry. PMID:23175290
International Nuclear Information System (INIS)
Park, J. P.; Jeong, J. H.; Yuna, B. J.; Jerng, D. W.
2013-01-01
The results show that the averaging BDFT is a promising flow meter for the accurate measurement of flow rates under fouling conditions in NPPs. A new instrument, an averaging BDFT, was proposed to measure accurate flow rates in a corrosion environment. In this study, to validate the applicability of the averaging BDFT under fouling conditions, flow analyses using a CFD code were performed. The analysis results show that the averaging BDFT does not lose its measuring performance even in a corrosion environment. Most NPPs adopt pressure-difference-type flow meters such as venturi and orifice meters for the measurement of feedwater flow rates to calculate reactor thermal power. However, corrosion products in the feedwater deposit on the flow meter as operating time increases. These effects lead to severe errors in the flow indication and hence in the determination of reactor thermal power. The averaging BDFT has the potential to minimize this problem. Therefore, it is expected that the averaging BDFT can replace the venturi-type meters in the feedwater pipes of the steam generators of NPPs. The present work compares the amplification factor, K, based on CFD calculation against the K obtained from experiments, in order to confirm whether a CFD code is applicable to the evaluation of the characteristics of the averaging BDFT. In addition, simulations taking the fouling effect into account were also carried out using a rough-wall option.
Ma, Cheng-Jiun; McNamara, B.; Nulsen, P.; Schaffer, R.
2011-09-01
X-ray observations of nearby clusters and galaxies have shown that energetic feedback from AGN is heating hot atmospheres and is probably the principal agent offsetting cooling flows. Here we examine AGN heating in distant X-ray clusters by cross-correlating clusters selected from the 400 Square Degree X-ray Cluster survey with radio sources in the NRAO VLA Sky Survey. The jet power for each radio source was determined using scaling relations between radio power and cavity power determined for nearby clusters, groups, and galaxies with atmospheres containing X-ray cavities. Roughly 30% of the clusters show radio emission above a flux threshold of 3 mJy within the central 250 kpc that is presumably associated with the brightest cluster galaxy. We find no significant correlation between radio power, hence jet power, and the X-ray luminosities of clusters in the redshift range 0.1–0.6. The detection frequency of radio AGN is inconsistent with the presence of strong cooling flows in 400SD clusters, but cannot rule out the presence of weak cooling flows. The average jet power of central radio AGN is approximately 2 × 10^44 erg/s, corresponding to an average heating of approximately 0.2 keV/particle for gas within R_500. Assuming the current AGN heating rate remained constant out to redshifts of about 2, these figures would rise by a factor of two. Our results show that the integrated energy injected by radio AGN outbursts in clusters is statistically significant compared to the excess entropy in hot atmospheres required for the breaking of self-similarity in cluster scaling relations. It is not clear that central AGN in 400SD clusters are maintained by a self-regulated feedback loop at the base of a cooling flow. However, they may play a significant role in preventing the development of strong cooling flows at early epochs.
DEFF Research Database (Denmark)
Jensen, Dan B.; Toft, Nils; Cornou, Cécile
2014-01-01
… the effects of wind shielding: linear mixed models were fitted to describe the average daily weight gain and feed conversion rate of 1271 groups (14 individuals per group) of purebred Duroc, Yorkshire and Danish Landrace boars, as a function of shielding (yes/no) and insert season (winter, spring, summer, autumn). For groups of pigs above the average start weight, a clear tendency toward higher growth rates at greater distances from the central corridor was observed, with the most significant differences being between groups placed in the 1st and 4th pen (p = 0.0001). A similar effect was not seen in smaller pigs. The effect could not be tested for Yorkshire and Danish Landrace due to lack of data on these breeds. Pen placement appears to have no effect on feed conversion rate. No interaction effects between shielding and distance to the corridor could be demonstrated. Furthermore, in models including both factors …
Directory of Open Access Journals (Sweden)
Lopez J.
2013-11-01
Full Text Available Ultrafast lasers provide an outstanding processing quality but their main drawback is the low removal rate per pulse compared to longer pulses. This limitation could be overcome by increasing both average power and repetition rate. In this paper, we report on the influence of high repetition rate and pulse duration on both ablation efficiency and processing quality on metals. All trials have been performed with a single tunable ultrafast laser (350 fs to 10 ps).
It isn't like this on TV: Revisiting CPR survival rates depicted on popular TV shows.
Portanova, Jaclyn; Irvine, Krystle; Yi, Jae Yoon; Enguidanos, Susan
2015-11-01
Public perceptions of cardiopulmonary resuscitation (CPR) can be influenced by the media. Nearly two decades ago, a study found that the rates of survival following CPR were far higher in popular TV shows than actual rates. In recent years, major strides toward enhanced education and communication around life sustaining interventions have been made. This study aimed to reassess the accuracy of CPR portrayed by popular medical TV shows. Additionally, we sought to determine whether these shows depicted discussions of care preferences and referenced advance directives. Three trained research assistants independently coded two leading medical dramas airing between 2010 and 2011, Grey's Anatomy and House. Patient characteristics, CPR survival rates, and goals of care discussions were recorded. CPR was depicted 46 times in the 91 episodes, with a survival rate of 69.6%. Among those immediately surviving following CPR, the majority (71.9%) survived to hospital discharge and 15.6% died before discharge. Advance directive discussions only occurred for two patients, and preferences regarding code status (8.7%), intubation (6.5%) and feeding (4.3%) rarely occurred. Both popular TV shows portrayed CPR as more effective than actual rates. Overall, the shows portrayed an immediate survival rate nearly twice that of actual survival rates. Inaccurate TV portrayal of CPR survival rates may misinform viewers and influence care decisions made during serious illness and at end of life. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
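The reported percentages can be cross-checked against the underlying counts. A quick sanity check; the counts are reconstructed by rounding, so treat them as inferred rather than reported:

```python
cpr_events = 46
immediate_survivors = round(0.696 * cpr_events)            # 32 of 46 events
to_discharge = round(0.719 * immediate_survivors)          # 23 of 32 survivors
died_before_discharge = round(0.156 * immediate_survivors) # 5 of 32 survivors

# The reconstructed counts reproduce the reported percentages.
assert abs(immediate_survivors / cpr_events - 0.696) < 0.001
assert abs(to_discharge / immediate_survivors - 0.719) < 0.001
assert abs(died_before_discharge / immediate_survivors - 0.156) < 0.001
```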
MIT extraction method for measuring average subchannel axial velocities in reactor assemblies
International Nuclear Information System (INIS)
Hawley, J.T.; Chiu, C.; Todreas, N.E.
1980-08-01
The MIT extraction method for obtaining flow split data for individual subchannels is described in detail. An analysis of the method is presented which shows that isokinetic values of the subchannel flow rates are obtained directly even though the method is non-isokinetic. Time-saving methods are discussed for obtaining the average value of the interior region flow split parameter. An analysis of the method at low bundle flow rates indicates that there is no inherent low flow rate limitation on the method and suggests a way to obtain laminar flow split data.
Parameterization of Time-Averaged Suspended Sediment Concentration in the Nearshore
Directory of Open Access Journals (Sweden)
Hyun-Doug Yoon
2015-11-01
Full Text Available To quantify the effect of wave breaking turbulence on sediment transport in the nearshore, the vertical distribution of time-averaged suspended sediment concentration (SSC in the surf zone was parameterized in terms of the turbulent kinetic energy (TKE at different cross-shore locations, including the bar crest, bar trough, and inner surf zone. Using data from a large-scale laboratory experiment, a simple relationship was developed between the time-averaged SSC and the time-averaged TKE. The vertical variation of the time-averaged SSC was fitted to an equation analogous to the turbulent dissipation rate term. At the bar crest, the proposed equation was slightly modified to incorporate the effect of near-bed sediment processes and yielded reasonable agreement. This parameterization yielded the best agreement at the bar trough, with a coefficient of determination R2 ≥ 0.72 above the bottom boundary layer. The time-averaged SSC in the inner surf zone showed good agreement near the bed but poor agreement near the water surface, suggesting that there is a different sedimentation mechanism that controls the SSC in the inner surf zone.
Directory of Open Access Journals (Sweden)
Peihua Wang
Full Text Available After the implementation of the universal salt iodization (USI) program in 1996, seven cross-sectional school-based surveys have been conducted to monitor iodine deficiency disorders (IDD) among children in eastern China. This study aimed to examine the correlation of total goiter rate (TGR) with average thyroid volume (Tvol) and urinary iodine concentration (UIC) in Jiangsu province after IDD elimination. Probability-proportional-to-size sampling was applied to select 1,200 children aged 8-10 years old in 30 clusters for each survey in 1995, 1997, 1999, 2001, 2002, 2005, 2009 and 2011. We measured Tvol using ultrasonography in 8,314 children and measured UIC (4,767 subjects) and salt iodine (10,184 samples) using methods recommended by the World Health Organization. Tvol was used to calculate TGR based on the reference criteria specified for sex and body surface area (BSA). TGR decreased from 55.2% in 1997 to 1.0% in 2009, and geometric means of Tvol decreased from 3.63 mL to 1.33 mL, while the UIC increased from 83 μg/L in 1995 to 407 μg/L in 1999, decreased to 243 μg/L in 2005, and then increased to 345 μg/L in 2011. In the low-goiter population, a UIC above 300 μg/L was associated with a smaller average Tvol in children. After IDD elimination in Jiangsu province in 2001, lower TGR was associated with smaller average Tvol. Average Tvol was more sensitive than TGR in detecting fluctuations of UIC. A UIC of 300 μg/L may be defined as a critical value for population-level iodine status monitoring.
Directory of Open Access Journals (Sweden)
Patricia Bouyer
2015-09-01
Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
Lead Time to Appointment and No-Show Rates for New and Follow-up Patients in an Ambulatory Clinic.
Drewek, Rupali; Mirea, Lucia; Adelson, P David
High rates of no-shows in outpatient clinics are problematic for revenue and for quality of patient care. Longer lead time to appointment has variably been implicated as a risk factor for no-shows, but the evidence within pediatric clinics is inconclusive. The goal of this study was to estimate no-show rates and test for association between appointment lead time and no-show rates for new and follow-up patients. Analyses included 534 new and 1920 follow-up patients from pulmonology and gastroenterology clinics at a freestanding children's hospital. The overall rate of no-shows was lower for visits scheduled within 0 to 30 days compared with 30 days or more (23% compared with 47%, P < .0001). Patient type significantly modified the association of appointment lead time; the rate of no-shows was higher among new patients (30%) than among follow-up patients (21%) with appointments scheduled within 30 days (P = .004). For appointments scheduled with 30 or more days' lead time, no-show rates were statistically similar for new patients (46%) and follow-up patients (48%). Time to appointment is a risk factor associated with no-shows, and further study is needed to identify and implement effective approaches to reduce appointment lead time, especially for new patients in pediatric subspecialties.
Large-Scale No-Show Patterns and Distributions for Clinic Operational Research
Directory of Open Access Journals (Sweden)
Michael L. Davies
2016-02-01
Full Text Available Patient no-shows for scheduled primary care appointments are common. Unused appointment slots reduce patient quality of care, access to services and provider productivity while increasing loss to follow-up and medical costs. This paper describes patterns of no-show variation by patient age, gender, appointment age, and type of appointment request for six individual service lines in the United States Veterans Health Administration (VHA. This retrospective observational descriptive project examined 25,050,479 VHA appointments contained in individual-level records for eight years (FY07-FY14 for 555,183 patients. Multifactor analysis of variance (ANOVA was performed, with no-show rate as the dependent variable, and gender, age group, appointment age, new patient status, and service line as factors. The analyses revealed that males had higher no-show rates than females to age 65, at which point males and females exhibited similar rates. The average no-show rates decreased with age until 75–79, whereupon rates increased. As appointment age increased, males and new patients had increasing no-show rates. Younger patients are especially prone to no-show as appointment age increases. These findings provide novel information to healthcare practitioners and management scientists to more accurately characterize no-show and attendance rates and the impact of certain patient factors. Future general population data could determine whether findings from VHA data generalize to others.
Fluctuation Dynamics of Exchange Rates on Indian Financial Market
Sarkar, A.; Barat, P.
Here we investigate the scaling behavior and the complexity of the average daily exchange rate returns of the Indian Rupee against four foreign currencies, namely the US Dollar, Euro, Great Britain Pound and Japanese Yen. Our analysis revealed that the average daily exchange rate return of the Indian Rupee against the US Dollar exhibits a persistent scaling behavior and follows a Levy stable distribution. On the contrary, the average daily exchange rate returns of the other three foreign currencies show randomness and follow a Gaussian distribution. Moreover, the complexity of the average daily exchange rate return of the Indian Rupee against the US Dollar is less than that of the other three exchange rate returns.
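Persistent scaling behavior of this kind is typically diagnosed with a Hurst exponent H (H ≈ 0.5 for an uncorrelated series, H > 0.5 for persistence). A minimal rescaled-range (R/S) estimator on synthetic returns — a sketch of the standard diagnostic, not the paper's own method or data:

```python
import math
import random

def rescaled_range(series):
    """R/S statistic of one window: range of cumulative deviations / std dev."""
    n = len(series)
    mean = sum(series) / n
    devs = [x - mean for x in series]
    cum, cums = 0.0, []
    for d in devs:
        cum += d
        cums.append(cum)
    r = max(cums) - min(cums)
    s = math.sqrt(sum(d * d for d in devs) / n)
    return r / s if s > 0 else 0.0

def hurst_rs(returns, window_sizes=(16, 32, 64, 128)):
    """Estimate the Hurst exponent by regressing log(R/S) on log(window size)."""
    xs, ys = [], []
    for n in window_sizes:
        rs_vals = [rescaled_range(returns[i:i + n])
                   for i in range(0, len(returns) - n + 1, n)]
        rs_vals = [v for v in rs_vals if v > 0]
        if rs_vals:
            xs.append(math.log(n))
            ys.append(math.log(sum(rs_vals) / len(rs_vals)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

random.seed(1)
iid_returns = [random.gauss(0.0, 1.0) for _ in range(2048)]
h = hurst_rs(iid_returns)  # near 0.5 for uncorrelated returns
```

Applied to actual daily log returns, H substantially above 0.5 would indicate the persistence the authors report for the Rupee/Dollar series.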
Spacetime averaging of exotic singularity universes
International Nuclear Information System (INIS)
Dabrowski, Mariusz P.
2011-01-01
Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
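For reference, the spacetime average invoked here is commonly defined by weighting with the metric volume element; a sketch of the definition (the precise integration domain used in the paper is an assumption here):

```latex
\langle A \rangle \;=\;
\frac{\int A(x)\,\sqrt{-g}\,\mathrm{d}^{4}x}{\int \sqrt{-g}\,\mathrm{d}^{4}x}
```

Roughly, for a flat FLRW metric \(\sqrt{-g} \propto a(t)^{3}\), so a big-rip (\(a \to \infty\) in finite time) can drive the average of a curvature invariant to infinity, while a big-bang (\(a \to 0\)) suppresses it to zero, consistent with the classification above.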
Ergodic averages via dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.
AVERAGE METALLICITY AND STAR FORMATION RATE OF Lyα EMITTERS PROBED BY A TRIPLE NARROWBAND SURVEY
International Nuclear Information System (INIS)
Nakajima, Kimihiko; Shimasaku, Kazuhiro; Ono, Yoshiaki; Okamura, Sadanori; Ouchi, Masami; Lee, Janice C.; Ly, Chun; Foucaud, Sebastien; Dale, Daniel A.; Salim, Samir; Finn, Rose; Almaini, Omar
2012-01-01
We present the average metallicity and star formation rate (SFR) of Lyα emitters (LAEs) measured from our large-area survey with three narrowband (NB) filters covering the Lyα, [O II]λ3727, and Hα+[N II] lines of LAEs at z = 2.2. We select 919 z = 2.2 LAEs from Subaru/Suprime-Cam NB data in conjunction with Magellan/IMACS spectroscopy. Of these LAEs, 561 and 105 are observed with KPNO/NEWFIRM near-infrared NB filters whose central wavelengths are matched to redshifted [O II] and Hα nebular lines, respectively. By stacking the near-infrared images of the LAEs, we successfully obtain average nebular-line fluxes of LAEs, the majority of which are too faint to be identified individually by NB imaging or deep spectroscopy. The stacked object has an Hα luminosity of 1.7 × 10^42 erg s^-1, corresponding to an SFR of 14 M_☉ yr^-1. We place, for the first time, a firm lower limit to the average metallicity of LAEs of Z ≳ 0.09 Z_☉ (2σ) based on the [O II]/(Hα+[N II]) index together with photoionization models and empirical relations. This lower limit of metallicity rules out the hypothesis that LAEs, so far observed at z ∼ 2, are extremely metal-poor (Z < 2 × 10^-2 Z_☉) galaxies at the 4σ level. This limit is higher than a simple extrapolation of the observed mass-metallicity relation of z ∼ 2 UV-selected galaxies toward lower masses (5 × 10^8 M_☉), but roughly consistent with a recently proposed fundamental mass-metallicity relation when the LAEs' relatively low SFR is taken into account. The Hα and Lyα luminosities of our NB-selected LAEs indicate that the escape fraction of Lyα photons is ∼12%-30%, much higher than the values derived for other galaxy populations at z ∼ 2.
Application of the Value Averaging Investment Method on the US Stock Market
Directory of Open Access Journals (Sweden)
Martin Širůček
2015-01-01
Full Text Available The paper focuses on empirical testing and use of regular investment, particularly the value averaging investment method, on real data from the US stock market in the years 1990–2013. The 23-year period was chosen because of a consistently interesting situation in the market, so this regular investment method could be tested in both a bull (expansion) period and a bear (recession) period. The analysis focuses on the results obtained by using this investment method from the viewpoint of return and risk on selected investment horizons (short-term 1 year, medium-term 5 years and long-term 10 years). The selected aim is reached by using the ratio between profit and risk. The revenue-risk profile is the ratio of the average annual profit rate, measured for each investment by the internal rate of return, to the average annual risk, expressed by the selective standard deviation. The obtained results show that regular investment is suitable for a long investment horizon: the longer the investment horizon, the better the revenue-risk ratio (Sharpe ratio). Based on the results obtained, specific investment recommendations are presented in the conclusion, e.g. whether this investment method is suitable for a long investment period, and whether it is better to use value averaging in a growing, sinking or sluggish market.
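Mechanically, value averaging fixes a target value path for the portfolio and trades whatever amount is needed to stay on it. A minimal sketch with a linear target path (the paper's exact parameterization is not given in the abstract, so the target rule here is an assumption):

```python
def value_averaging(prices, target_step=100.0):
    """Simulate value averaging: after period k, trade so that the
    portfolio is worth exactly k * target_step."""
    shares = 0.0
    invested = 0.0  # net cash paid in (negative trades are sales)
    for k, price in enumerate(prices, start=1):
        target_value = k * target_step
        trade = target_value / price - shares  # shares to buy (+) or sell (-)
        invested += trade * price
        shares += trade
    return shares, invested, shares * prices[-1]

shares, invested, final_value = value_averaging([10.0, 10.0, 10.0])
```

Falling prices force larger purchases and rising prices force sales, which is the mechanism behind the method's favorable revenue-risk profile on long horizons.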
Smolenskaya, N. M.; Smolenskii, V. V.
2018-01-01
The paper presents models for calculating the average propagation velocity of the flame front, obtained from the results of experimental studies. Experimental studies were carried out on a single-cylinder gasoline engine UIT-85 with hydrogen additions of up to 6% of the fuel mass. The article shows the influence of hydrogen addition on the average propagation velocity of the flame front in the main combustion phase, and presents the dependences of the turbulent propagation velocity of the flame front in the second combustion phase on the mixture composition and operating modes. The article also shows the influence of the normal combustion rate on the average flame propagation velocity in the third combustion phase.
75 FR 9257 - SBA Lender Risk Rating System
2010-03-01
... Liquidation Rate; 3. Gross Delinquency Rate; 4. Gross Past-Due Rate; 5. Six (6) Month Net Flow Indicator; 6.... The statistical analysis performed showed that incorporating the Portfolio Size/Age component improved...) Month Delinquency Rate; 3. Gross Delinquency Rate; 4. Gross Past-Due Rate; 5. Average Small Business...
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
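One way to realize such a closeness measure — a sketch of the idea, not necessarily the authors' exact definition — is a fixed-point iteration in which each node's number is the weighted harmonic mean of its neighbors' numbers plus the inverse edge weight:

```python
def gen_numbers(edges, root, iters=100):
    """Generalized-Erdös-style numbers relative to `root` on a weighted,
    undirected graph given as {(u, v): weight}. Each node's value is the
    weighted harmonic mean of (neighbor value + 1/weight) over its neighbors."""
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, {})[v] = w
        adj.setdefault(v, {})[u] = w
    E = {node: float("inf") for node in adj}
    E[root] = 0.0
    for _ in range(iters):
        for j in adj:
            if j == root:
                continue
            num = sum(adj[j].values())                      # sum of weights
            den = sum(w / (E[i] + 1.0 / w) for i, w in adj[j].items())
            E[j] = num / den if den > 0 else float("inf")
    return E

# Path graph a -(2.0)- b -(1.0)- c, closeness measured from node "a".
E = gen_numbers({("a", "b"): 2.0, ("b", "c"): 1.0}, root="a")
```

Because each node's value is pulled by all of its neighbors, re-rooting the computation at another node generally yields different relative values, which is how asymmetry between node pairs can arise.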
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina
2013-01-01
Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consists of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust enough to be applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models.
Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing
2017-09-01
The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which a fuzzy logic rule is used to determine the strength of the trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show that, firstly, the fuzzy moving average strategy obtains a more stable rate of return than the plain moving average strategies. Secondly, the holding-amount series is highly sensitive to the price series. Thirdly, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.
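The core mechanism — turning the strength of a moving-average signal into a trade size via fuzzy rules — can be sketched as follows. The rule shapes and recommended fractions below are illustrative placeholders, not the GA-optimized rule set from the paper:

```python
def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def membership(x, a, b, c):
    """Triangular fuzzy membership function with support (a, c) and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_volume(prices, short=5, long=20):
    """Map the relative short/long SMA gap to a trade fraction in [0, 1]
    using three illustrative buy rules, defuzzified by weighted average."""
    gap = (sma(prices, short) - sma(prices, long)) / sma(prices, long)
    rules = [  # (membership params, recommended fraction of capital)
        ((0.00, 0.01, 0.02), 0.2),  # weak signal  -> small position
        ((0.01, 0.03, 0.05), 0.5),  # medium signal
        ((0.03, 0.08, 1.00), 0.9),  # strong signal -> large position
    ]
    num = den = 0.0
    for (a, b, c), rec in rules:
        mu = membership(gap, a, b, c)
        num += mu * rec
        den += mu
    return num / den if den > 0 else 0.0

flat_volume = fuzzy_volume([100.0] * 30)               # no signal -> no trade
rising_volume = fuzzy_volume([100.0 + i for i in range(30)])
```

A crisp crossover rule would trade a fixed amount whenever the short SMA exceeds the long SMA; the fuzzy version instead scales the position with the size of the gap.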
Low average blister-rust infection rates may mean high control costs
Robert Marty
1965-01-01
The Northeastern Forest Experiment Station, in cooperation with Federal and State forest-pest-control agencies, undertook a survey of blister-rust infection rates in the white pine region of the East during 1962 and 1963. Those engaged in blister-rust-control activities will not be surprised at the survey's results. We found that infection rates were significantly...
Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients
Directory of Open Access Journals (Sweden)
Deming Yuan
2014-01-01
Full Text Available This paper considers the problem of solving the saddle-point problem over a network, which consists of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is only available to one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by some stochastic noises. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at some appropriate rate and the noises are zero-mean and have bounded variance.
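The method's skeleton — accumulate (noisy) subgradients and step against their running sum with a 1/√t weight, projecting back onto the feasible set — can be sketched on a toy scalar problem. The multi-agent network structure and the approximate-projection analysis are omitted; this is a plain single-agent dual-averaging sketch:

```python
import random

def dual_averaging(subgrad, project, x0, steps=4000, gamma=1.0, seed=0):
    """Dual averaging with stochastic subgradients: sum past subgradients
    in z, then set x = project(-z / (gamma * sqrt(t)))."""
    rng = random.Random(seed)
    x, z, avg_x = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = subgrad(x) + rng.gauss(0.0, 0.1)  # zero-mean, bounded-variance noise
        z += g
        x = project(-z / (gamma * t ** 0.5))
        avg_x += (x - avg_x) / t              # running average of the iterates
    return avg_x

f_subgrad = lambda x: 1.0 if x > 3.0 else -1.0  # subgradient of f(x) = |x - 3|
proj = lambda x: min(10.0, max(0.0, x))         # Euclidean projection onto [0, 10]
x_hat = dual_averaging(f_subgrad, proj, x0=9.0)  # approaches the minimizer x = 3
```

The averaged iterate converges at the standard O(1/√t) rate despite the noise, which is the behavior the paper shows is preserved under approximate projections.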
Energy Technology Data Exchange (ETDEWEB)
Molins, Sergi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Trebotich, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Steefel, Carl I. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Shen, Chaopeng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division
2012-03-30
The scale-dependence of geochemical reaction rates hinders their use in continuum scale models intended for the interpretation and prediction of chemical fate and transport in subsurface environments such as those considered for geologic sequestration of CO₂. Processes that take place at the pore scale, especially those involving mass transport limitations to reactive surfaces, may contribute to the discrepancy commonly observed between laboratory-determined and continuum-scale or field rates. In this study we investigate the dependence of mineral dissolution rates on the pore structure of the porous media by means of pore scale modeling of flow and multicomponent reactive transport. The pore scale model is composed of high-performance simulation tools and algorithms for incompressible flow and conservative transport combined with a general-purpose multicomponent geochemical reaction code. The model performs direct numerical simulation of reactive transport based on an operator-splitting approach to coupling transport and reactions. The approach is validated with a Poiseuille flow single-pore experiment and verified with an equivalent 1-D continuum-scale model of a capillary tube packed with calcite spheres. Using the case of calcite dissolution as an example, the high-resolution model is used to demonstrate that nonuniformity in the flow field at the pore scale has the effect of decreasing the overall reactivity of the system, even when systems with identical reactive surface area are considered. In conclusion, the effect becomes more pronounced as the heterogeneity of the reactive grain packing increases, particularly where the flow slows sufficiently such that the solution approaches equilibrium locally and the average rate becomes transport-limited.
International Nuclear Information System (INIS)
Tuyen, L A; Khiem, D D; Phuc, P T; Kajcsos, Zs; Lázár, K; Tap, T D
2013-01-01
Positron lifetime spectroscopy was used to study multi-wall carbon nanotubes. The measurements were performed in vacuum on the samples having different average diameters. The positron lifetime values depend on the nanotube diameter. The results also show an influence of the nanotube diameter on the positron annihilation intensity on the nanotube surface. The change in the annihilation probability is described and interpreted by the modified diffusion model introducing the positron escape rate from the nanotubes to their external surface.
Salecker-Wigner-Peres clock and average tunneling times
International Nuclear Information System (INIS)
Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.
2011-01-01
The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).
Averaging in SU(2) open quantum random walk
International Nuclear Information System (INIS)
Ampadu Clement
2014-01-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.
Averaging Robertson-Walker cosmologies
International Nuclear Information System (INIS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-01-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.
A high speed digital signal averager for pulsed NMR
International Nuclear Information System (INIS)
Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.
1978-01-01
A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of one sample per 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
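The 'stable averaging' algorithm keeps the displayed value equal to the true running mean after every sweep, so the display stays calibrated throughout. The quoted 36 dB maximum S/N improvement is consistent with 10·log₁₀(N) for N = 2¹² = 4096 sweeps. A minimal software sketch, assuming the standard per-channel incremental-mean form of the algorithm:

```python
import math

def stable_average(sweeps):
    """'Stable averaging': after each sweep the stored value equals the true
    mean of all sweeps so far, so the display stays calibrated at all times.
    Per channel: avg_k = avg_{k-1} + (x_k - avg_{k-1}) / k."""
    n_ch = len(sweeps[0])
    avg = [0.0] * n_ch
    for k, sweep in enumerate(sweeps, start=1):
        for c in range(n_ch):
            avg[c] += (sweep[c] - avg[c]) / k
    return avg

def snr_gain_db(n_sweeps):
    """Averaging n sweeps of a repetitive signal plus uncorrelated noise
    improves the power S/N by a factor of n, i.e. 10*log10(n) dB."""
    return 10.0 * math.log10(n_sweeps)
```

With `n_sweeps = 4096` the gain evaluates to about 36 dB, matching the instrument's stated maximum.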
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
Bayesian Averaging is Well-Temperated
DEFF Research Database (Denmark)
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation is l...
Corporate financing and anticipated credit rating changes
Hung, Chi-Hsiou D.; Banerjee, Anurag; Meng, Qingrui
2017-01-01
Firm circumstances change but rating agencies may not make timely revisions to their ratings, increasing information asymmetry between firms and the market. We examine whether firms time the securities market before a credit rating agency publicly reveals its decision to downgrade a firm's credit rating. Using quarterly data, we show that firms adjust their financing structures before credit rating downgrades are publicly revealed. More specifically, firms on average increase t...
Determining average path length and average trapping time on generalized dual dendrimer
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., when the trap is placed on a central node and when it is uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on trapping efficiency.
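For any unweighted connected graph, the APL referenced above can be computed exactly by running breadth-first search from every node and averaging the shortest-path lengths over all pairs. A small generic sketch (not the paper's closed-form derivation for dual dendrimers):

```python
from collections import deque

def average_path_length(adj):
    """Mean shortest-path length of an unweighted, connected graph given as
    {node: [neighbors]}. BFS from every source; summing over ordered pairs
    counts each unordered pair twice, which leaves the mean unchanged."""
    nodes = list(adj)
    total, pairs = 0, 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs
```

For a three-node path the APL is 4/3; for dendrimer-like trees the same routine would reproduce the logarithmic growth with network size that the abstract reports.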
Variable Rate, Adaptive Transform Tree Coding Of Images
Pearlman, William A.
1988-10-01
A tree code, asymptotically optimal for stationary Gaussian sources and squared error distortion [2], is used to encode transforms of image sub-blocks. The variance spectrum of each sub-block is estimated and specified uniquely by a set of one-dimensional auto-regressive parameters. The expected distortion is set to a constant for each block and the rate is allowed to vary to meet the given level of distortion. Since the spectrum and rate are different for every block, the code tree differs for every block. Coding simulations for target block distortion of 15 and average block rate of 0.99 bits per pel (bpp) show that very good results can be obtained at high search intensities at the expense of high computational complexity. The results at the higher search intensities outperform a parallel simulation with quantization replacing tree coding. Comparative coding simulations also show that the reproduced image with variable block rate and average rate of 0.99 bpp has 2.5 dB less distortion than a similarly reproduced image with a constant block rate equal to 1.0 bpp.
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
FPGA based computation of average neutron flux and e-folding period for start-up range of reactors
International Nuclear Information System (INIS)
Ram, Rajit; Borkar, S.P.; Dixit, M.Y.; Das, Debashis
2013-01-01
Pulse-processing instrumentation channels used for reactor applications play a vital role in ensuring nuclear safety in the start-up range of reactor operation, and also during fuel loading and the first approach to criticality. These channels are intended for continuous run-time computation of the equivalent reactor core neutron flux and the e-folding period. This paper focuses only on the computational part of these instrumentation channels, which is implemented in a single FPGA using a 32-bit floating-point arithmetic engine. The computations of the average count rate, the log of the average count rate, the log rate and the reactor period are done in VHDL using a digital circuit realization approach. The average count rate is computed using a fully adaptive window-size moving-average method, while a Taylor series expansion for logarithms is implemented in the FPGA to compute the log of the count rate, the log rate and the reactor e-folding period. This paper describes the block diagrams of the digital logic realization in the FPGA and the advantage of the fully adaptive window-size moving-average technique over the conventional fixed-size technique for the pulse processing of reactor instrumentation. (author)
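The chain of computations (averaged count rate → log rate → e-folding period) can be sketched in software. The adaptive window sizing is hardware-specific, so a fixed window stands in for it here; the period follows from the exponential model n(t) ∝ exp(t/T):

```python
import math
from collections import deque

def moving_average(counts, window):
    """Fixed-window moving average of per-interval pulse counts (the FPGA
    design adapts the window size at run time; a fixed size stands in here)."""
    q, out, s = deque(), [], 0.0
    for c in counts:
        q.append(c)
        s += c
        if len(q) > window:
            s -= q.popleft()
        out.append(s / len(q))
    return out

def e_folding_period(rate_old, rate_new, dt):
    """Reactor e-folding period T from two averaged count rates measured a
    time dt apart: n(t) ~ exp(t/T) implies T = dt / ln(rate_new / rate_old)."""
    return dt / math.log(rate_new / rate_old)
```

If the averaged rate grows by a factor of e over 5 seconds, the period is 5 s; the FPGA performs the same arithmetic with its Taylor-series logarithm.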
江原, 幸雄
2009-01-01
Shallow ground temperatures such as 1m depth temperature have been measured to delineate thermal anomalies of geothermal fields and also to estimate heat discharge rates from geothermal fields. As a result, a close linear relation between 1m depth temperature and average geothermal gradient at 75cm depth has been recognized in many geothermal fields and was used to estimate conductive heat discharge rates. However, such a linear relation may show that the shallow thermal regime in geothermal ...
Pass-through of Change in Policy Interest Rate to Market Rates
M. Idrees Khawaja; Sajawal Khan
2008-01-01
This paper examines the pass-through of the change in the policy interest rate of the central bank of Pakistan to market interest rates. The market rates examined include KIBOR, the six-month deposit rate and the weighted average lending rate. A more or less complete pass-through of the change in the policy rate to KIBOR is observed within one month. However, the pass-through to the deposit and lending rates is not only incomplete but also slow. The pass-through to the weighted average lending ...
International Nuclear Information System (INIS)
Poussier, E.; Rambaut, M.
1986-01-01
Detection consists of measuring a counting rate. A probability of wrong detection is associated with this counting rate and with an estimated average noise rate. Detection also involves comparing the wrong-detection probability with a predetermined wrong-detection rate. The comparison can use tabulated values. The method is applied to corpuscular radiation detection [fr]
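For a counting measurement with Poisson background of mean μ, the wrong-detection probability at a threshold of k counts is P(N ≥ k) = 1 − Σ_{i<k} e^(−μ) μ^i / i!, which is then compared against the predetermined rate. A sketch of that comparison; the threshold-style decision rule is an illustrative assumption, not the paper's tabulated procedure:

```python
import math

def p_false_alarm(k_threshold, mean_background):
    """Probability that pure Poisson background with the given mean count
    reaches the detection threshold: P(N >= k) = 1 - sum_{i<k} e^-mu mu^i / i!."""
    p_below = sum(math.exp(-mean_background) * mean_background ** i / math.factorial(i)
                  for i in range(k_threshold))
    return 1.0 - p_below

def detects(observed_counts, k_threshold):
    """Declare a detection when the observed count reaches a threshold chosen
    so that p_false_alarm stays below the predetermined wrong-detection rate."""
    return observed_counts >= k_threshold
```

For example, with a mean background of 1 count, a threshold of 1 gives a false-alarm probability of 1 − e⁻¹ ≈ 0.63, so a higher threshold would be needed for any stringent predetermined rate.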
Fractional averaging of repetitive waveforms induced by self-imaging effects
Romero Cortés, Luis; Maram, Reza; Azaña, José
2015-10-01
We report the theoretical prediction and experimental observation of averaging of stochastic events with a result equivalent to calculating the arithmetic mean (or sum) of a rational number of realizations of the process under test, not necessarily limited to an integer number of realizations, as discrete statistical theory dictates. This concept is enabled by a passive amplification process, induced by self-imaging (Talbot) effects. In the specific implementation reported here, a combined spectral-temporal Talbot operation is shown to achieve undistorted, lossless repetition-rate division of a periodic train of noisy waveforms by a rational factor, leading to local amplification, and the associated averaging process, by the fractional rate-division factor.
The estimated cost of "no-shows" in an academic pediatric neurology clinic.
Guzek, Lindsay M; Gentry, Shelley D; Golomb, Meredith R
2015-02-01
Missed appointments ("no-shows") represent an important source of lost revenue for academic medical centers. The goal of this study was to examine the costs of "no-shows" at an academic pediatric neurology outpatient clinic. This was a retrospective cohort study of patients who missed appointments at an academic pediatric neurology outpatient clinic during 1 academic year. Revenue lost was estimated based on average reimbursement for different insurance types and visit types. The yearly "no-show" rate was 26%. Yearly revenue lost from missed appointments was $257,724.57, and monthly losses ranged from $15,652.33 in October 2013 to $27,042.44 in January 2014. The yearly revenue lost from missed appointments at the academic pediatric neurology clinic represents funds that could have been used to improve patient access and care. Further work is needed to develop strategies to decrease the no-show rate to decrease lost revenue and improve patient care and access. Copyright © 2015 Elsevier Inc. All rights reserved.
Diedrichs, Phillippa C; Lee, Christina
2010-06-01
Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.
Interpreting Bivariate Regression Coefficients: Going beyond the Average
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
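The note's central observation can be checked in a few lines: regressing a variable on a constant alone yields its (weighted) arithmetic mean, and pre-transforming the data turns the same regression into the geometric or harmonic mean. A sketch under that interpretation (variable names are illustrative):

```python
import math

def ols_constant(y, w=None):
    """OLS of y on a constant only: beta minimizes sum(w * (y - beta)^2),
    giving beta = sum(w*y) / sum(w) -- the (weighted) arithmetic mean."""
    w = w if w is not None else [1.0] * len(y)
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

y = [1.0, 2.0, 4.0]
arithmetic = ols_constant(y)                                   # plain mean
geometric = math.exp(ols_constant([math.log(v) for v in y]))   # regress log(y)
harmonic = 1.0 / ols_constant([1.0 / v for v in y])            # regress 1/y
```

Here the three estimates come out to 7/3, 2, and 12/7 respectively, matching the textbook definitions while using nothing but the constant-only regression.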
Accelerated Distributed Dual Averaging Over Evolving Networks of Growing Connectivity
Liu, Sijia; Chen, Pin-Yu; Hero, Alfred O.
2018-04-01
We consider the problem of accelerating distributed optimization in multi-agent networks by sequentially adding edges. Specifically, we extend the distributed dual averaging (DDA) subgradient algorithm to evolving networks of growing connectivity and analyze the corresponding improvement in convergence rate. It is known that the convergence rate of DDA is influenced by the algebraic connectivity of the underlying network, where better connectivity leads to faster convergence. However, the impact of network topology design on the convergence rate of DDA has not been fully understood. In this paper, we begin by designing network topologies via edge selection and scheduling. For edge selection, we determine the best set of candidate edges that achieves the optimal tradeoff between the growth of network connectivity and the usage of network resources. The dynamics of network evolution is then governed by edge scheduling. Further, we provide a tractable approach to analyze the improvement in the convergence rate of DDA induced by the growth of network connectivity. Our analysis reveals the connection between network topology design and the convergence rate of DDA, and provides a quantitative evaluation of DDA acceleration for distributed optimization that is absent in the existing analysis. Lastly, numerical experiments show that DDA can be significantly accelerated using a sequence of well-designed networks, and our theoretical predictions are well matched to its empirical convergence behavior.
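The quantity driving the convergence rate here is the algebraic connectivity, the second-smallest eigenvalue λ₂ of the graph Laplacian, which grows as edges are added. A small pure-Python estimate of λ₂ by deflated power iteration (illustrative only, not the paper's edge-selection algorithm):

```python
import math

def algebraic_connectivity(n, edges, iters=2000):
    """Estimate the algebraic connectivity lambda_2 (Fiedler value) of an
    undirected graph via power iteration on c*I - L restricted to the subspace
    orthogonal to the all-ones vector; larger lambda_2 means faster DDA
    convergence."""
    # Build the graph Laplacian L = D - A.
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    c = 2.0 * n  # exceeds every Laplacian eigenvalue (<= 2 * max degree)
    x = [math.sin(i + 1.0) for i in range(n)]  # arbitrary non-degenerate start
    for _ in range(iters):
        m = sum(x) / n
        x = [xi - m for xi in x]  # deflate the all-ones eigenvector of L
        y = [c * x[i] - sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]
    # Rayleigh quotient of L at the converged vector gives lambda_2.
    Lx = [sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * Lx[i] for i in range(n)) / sum(v * v for v in x)
```

For a three-node path λ₂ = 1; closing it into a triangle raises λ₂ to 3, illustrating how a single added edge can sharply improve the DDA convergence rate.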
An average salary: approaches to the index determination
Directory of Open Access Journals (Sweden)
T. M. Pozdnyakova
2017-01-01
Full Text Available The article “An average salary: approaches to the index determination” studies various methods of calculating this index, both those used by the official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, and to propose additions that would help clarify this index. The information base of the research comprises laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», and scientific papers describing different approaches to average salary calculation. Data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. The research used analytical, statistical, computational-mathematical and graphical methods. The main result of the research is a proposed supplement, by means of a correction factor, to the method used by Goskomstat of Russia for calculating the average salary index within enterprises or organizations. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly those engaged in internal secondary jobs. The need for introducing this correction factor arises from the current working conditions in a wide range of organizations, where an employee is often forced, in addition to the main position, to fulfill additional job duties. As a result, the average salary at an enterprise is frequently difficult to assess objectively, because it combines multiple rates per staff member. In other words, the average salary of
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
Aperture averaging in strong oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus to improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented.
An average-based accounting approach to capital asset investments: The case of project finance
Carlo Alberto Magni
2014-01-01
Literature and textbooks on capital budgeting endorse Net Present Value (NPV) and generally treat accounting rates of return as not being reliable tools. This paper shows that accounting numbers can be reconciled with NPV and fruitfully employed in real-life applications. Focusing on project finance transactions, an Average Return On Investment (AROI) is drawn from the pro forma financial statements, obtained as the ratio of aggregate income to aggregate book value. It is shown that such a me...
A Group Neighborhood Average Clock Synchronization Protocol for Wireless Sensor Networks
Lin, Lin; Ma, Shiwei; Ma, Maode
2014-01-01
Clock synchronization is a very important issue for the applications of wireless sensor networks. The sensors need to keep a strict clock so that users can know exactly what happens in the monitoring area at the same time. This paper proposes a novel internal distributed clock synchronization solution using group neighborhood averaging. Each sensor node collects the offset and skew rate of its neighbors. Group averages of the offset and skew-rate values are calculated instead of using the conventional point-to-point averaging method. The sensor node then returns the compensated value to the neighbors. The propagation delay is considered and compensated. An analytical analysis of offset and skew compensation is presented. Simulation results validate the effectiveness of the protocol and reveal that the protocol allows sensor networks to quickly establish a consensus clock and maintain a small deviation from the consensus clock. PMID:25120163
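One round of the group-averaging update can be sketched as follows. This is a minimal model of the averaging step only; it omits the propagation-delay compensation that the protocol also performs:

```python
def group_average_update(my_offset, my_skew, neighbor_offsets, neighbor_skews):
    """One synchronization round: the node replaces its clock offset and skew
    rate with the average over its whole neighborhood group (itself included),
    rather than performing conventional point-to-point pairwise updates."""
    offsets = [my_offset] + list(neighbor_offsets)
    skews = [my_skew] + list(neighbor_skews)
    return sum(offsets) / len(offsets), sum(skews) / len(skews)
```

Iterating this update across all nodes drives every local clock toward a common consensus offset and skew, which is the behavior the simulations in the abstract report.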
Accurate measurement of imaging photoplethysmographic signals based camera using weighted average
Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji
2018-01-01
Imaging Photoplethysmography (IPPG) is an emerging technique for the extraction of vital signs of human beings using video recordings. With advantages such as non-contact measurement, low cost and easy operation, IPPG has become a research hot spot in the field of biomedicine. However, the noise disturbance caused by non-microarterial areas cannot be removed, because of the uneven distribution of micro-arteries and the different signal strength of each region, which results in a low signal-to-noise ratio of IPPG signals and low accuracy of heart rate estimation. In this paper, we propose a method for improving the signal-to-noise ratio of camera-based IPPG signals from each sub-region of the face using a weighted average. Firstly, we obtain the region of interest (ROI) of a subject's face from the camera. Secondly, each region of interest is tracked and matched using feature-based methods in each frame of the video. Each tracked region of the face is divided into 60×60-pixel blocks. Thirdly, the weight of the PPG signal of each sub-region is calculated from the signal-to-noise ratio of that sub-region. Finally, we combine the IPPG signals from all the tracked ROIs using a weighted average. Compared with the existing approaches, the results show that the proposed method has a modest but significant effect on improving the signal-to-noise ratio of camera-based PPG estimates and the accuracy of heart rate measurement.
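The fusion step reduces, in essence, to an SNR-weighted mean over sub-region traces. A sketch of that step (the weighting-by-SNR rule is taken from the abstract; the data layout is an illustrative assumption):

```python
def snr_weighted_average(signals, snrs):
    """Fuse per-subregion PPG traces into a single signal, weighting each
    trace by its estimated signal-to-noise ratio so that noisy
    (non-microarterial) regions contribute less to the combined result."""
    total = float(sum(snrs))
    length = len(signals[0])
    return [sum(w * s[i] for w, s in zip(snrs, signals)) / total
            for i in range(length)]
```

A region with three times the SNR of another contributes three times the weight, so the combined trace leans toward the cleanest micro-arterial areas.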
Poland, Michael P.
2014-01-01
Differencing digital elevation models (DEMs) derived from TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) synthetic aperture radar imagery provides a measurement of elevation change over time. On the East Rift Zone (EZR) of Kīlauea Volcano, Hawai‘i, the effusion of lava causes changes in topography. When these elevation changes are summed over the area of an active lava flow, it is possible to quantify the volume of lava emplaced at the surface during the time spanned by the TanDEM-X data—a parameter that can be difficult to measure across the entirety of an ~100 km2 lava flow field using ground-based techniques or optical remote sensing data. Based on the differences between multiple TanDEM-X-derived DEMs collected days to weeks apart, the mean dense-rock equivalent time-averaged discharge rate of lava at Kīlauea between mid-2011 and mid-2013 was approximately 2 m3/s, which is about half the long-term average rate over the course of Kīlauea's 1983–present ERZ eruption. This result implies that there was an increase in the proportion of lava stored versus erupted, a decrease in the rate of magma supply to the volcano, or some combination of both during this time period. In addition to constraining the time-averaged discharge rate of lava and the rates of magma supply and storage, topographic change maps derived from space-based TanDEM-X data provide insights into the four-dimensional evolution of Kīlauea's ERZ lava flow field. TanDEM-X data are a valuable complement to other space-, air-, and ground-based observations of eruptive activity at Kīlauea and offer great promise at locations around the world for aiding with monitoring not just volcanic eruptions but any hazardous activity that results in surface change, including landslides, floods, earthquakes, and other natural and anthropogenic processes.
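The volume-and-rate bookkeeping behind the reported ~2 m³/s figure is straightforward: sum the elevation change over the flow field, convert to a dense-rock-equivalent volume, and divide by the elapsed time. A sketch of that arithmetic (the explicit vesicularity correction is a generic assumption, not the paper's stated workflow):

```python
def time_averaged_discharge(dem_old, dem_new, pixel_area_m2, dt_seconds,
                            vesicularity=0.0):
    """Dense-rock-equivalent time-averaged lava discharge rate (m^3/s) from two
    gridded DEMs of the flow field: sum the elevation change per pixel, convert
    to volume, correct for void space (vesicularity), divide by elapsed time."""
    dv = sum((h_new - h_old) * pixel_area_m2
             for row_old, row_new in zip(dem_old, dem_new)
             for h_old, h_new in zip(row_old, row_new))
    return dv * (1.0 - vesicularity) / dt_seconds
```

Applied to successive TanDEM-X-derived DEMs spanning days to weeks, this is the kind of calculation that yields a mean discharge rate for each inter-acquisition interval.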
Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Co-Axial Supersonic Free-Jet Experiment
Baurle, R. A.; Edwards, J. R.
2009-01-01
Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The baseline value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was noted when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid simulation results showed the same trends as the baseline Reynolds-averaged predictions. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions are suggested as a remedy to this dilemma. Comparisons between resolved second-order turbulence statistics and their modeled Reynolds-averaged counterparts were also performed.
Regional averaging and scaling in relativistic cosmology
International Nuclear Information System (INIS)
Buchert, Thomas; Carfora, Mauro
2002-01-01
Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias
Exploiting scale dependence in cosmological averaging
International Nuclear Information System (INIS)
Mattsson, Teppo; Ronkainen, Maria
2008-01-01
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity-induced illusion.
A Group Neighborhood Average Clock Synchronization Protocol for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Lin Lin
2014-08-01
Full Text Available Clock synchronization is a very important issue for applications of wireless sensor networks. The sensors need to keep a strict clock so that users can know exactly what happens in the monitoring area at the same time. This paper proposes a novel internal distributed clock synchronization solution using a group neighborhood average. Each sensor node collects the offset and skew rate of its neighbors. Group averages of the offset and skew-rate values are calculated instead of using the conventional point-to-point averaging method. The sensor node then returns the compensated value back to the neighbors. Propagation delay is considered and compensated for. An analytical treatment of offset and skew compensation is presented. Simulation results validate the effectiveness of the protocol and reveal that it allows sensor networks to quickly establish a consensus clock and maintain a small deviation from that consensus clock.
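The group-averaging step described above can be sketched in a few lines. The update rule and values below are illustrative assumptions, not the paper's exact protocol (which also estimates and compensates propagation delay over radio links):

```python
def group_average_update(own, neighbors):
    """One synchronization round for a single node (illustrative rule).

    own: (offset, skew) estimate of this node's clock.
    neighbors: list of (offset, skew) pairs reported by neighbor nodes,
               assumed already compensated for propagation delay.
    Returns the node's new (offset, skew) after group averaging.
    """
    offsets = [own[0]] + [o for o, _ in neighbors]
    skews = [own[1]] + [s for _, s in neighbors]
    n = len(offsets)
    return (sum(offsets) / n, sum(skews) / n)

# Three fully connected nodes with slightly different clocks converge to
# a consensus clock after averaging:
nodes = [(0.0, 1.000), (4.0, 1.002), (-2.0, 0.999)]
for _ in range(5):
    nodes = [group_average_update(nodes[i],
                                  [nodes[j] for j in range(3) if j != i])
             for i in range(3)]
```

With a full mesh the nodes agree after a single round; in a sparse network the same rule converges gradually toward the network-wide mean.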
THE ASSESSMENT OF CORPORATE BONDS ON THE BASIS OF THE WEIGHTED AVERAGE
Directory of Open Access Journals (Sweden)
Victor V. Prokhorov
2014-01-01
Full Text Available The article considers the problem of assessing the interest rate of a public corporate bond issue. The subject of the research is techniques for evaluating the interest rates of corporate bonds. The article addresses the task of developing a methodology for assessing the market interest rate of a corporate bonded loan that takes into account both systematic and specific risks. A technique for evaluating the market interest rate of corporate bonds on the basis of weighted averages is proposed. This procedure uses in the calculation a cumulative barrier interest rate, a sectoral weighted average interest rate, and an interest rate determined on the basis of the CAPM (Capital Asset Pricing Model). The results suggest that the proposed methodology can be applied to assessing the market interest rate of a public corporate bond issue under Russian conditions. The results may be applicable for Russian industrial enterprises organizing public bond issues, as well as for investment companies acting as organizers of corporate securities loans and other organizations specializing in investments in Russian public corporate bond loans.
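The weighted-average construction can be illustrated with a toy calculation. The three component rates, the CAPM inputs and the equal weights below are hypothetical, not values from the article:

```python
def weighted_average_rate(components):
    """components: list of (rate, weight) pairs; weights need not sum to 1."""
    total_w = sum(w for _, w in components)
    return sum(r * w for r, w in components) / total_w

# Hypothetical inputs: a cumulative barrier rate, a sectoral weighted
# average rate, and a CAPM-based rate (all values illustrative only).
barrier = 0.085
sectoral = 0.092
risk_free, beta, market = 0.06, 1.2, 0.11
capm = risk_free + beta * (market - risk_free)   # CAPM required return

rate = weighted_average_rate([(barrier, 1.0), (sectoral, 1.0), (capm, 1.0)])
```

In practice the three weights would reflect the analyst's confidence in each component rather than being equal.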
Averaging models: parameters estimation with the R-Average procedure
Directory of Open Access Journals (Sweden)
S. Noventa
2010-01-01
Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
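A minimal sketch of the averaging model whose parameters R-Average estimates (an Anderson-style rule with an initial state of value s0 and weight w0; all numbers illustrative) shows the interaction-like pattern that an adding model cannot produce:

```python
def averaging_response(s0, w0, stimuli):
    """Anderson-style averaging model: s0 and w0 are the initial-state value
    and weight; stimuli is a list of (scale_value, weight) pairs."""
    num = w0 * s0 + sum(w * s for s, w in stimuli)
    den = w0 + sum(w for _, w in stimuli)
    return num / den

# Unlike an adding model, the averaging model predicts that appending a
# mildly positive item can LOWER the overall judgment (values made up):
high_only = averaging_response(0.0, 1.0, [(8.0, 2.0)])
high_plus_mild = averaging_response(0.0, 1.0, [(8.0, 2.0), (4.0, 2.0)])
```

Here adding the mildly positive stimulus pulls the averaged response down, the kind of effect that distinguishes averaging from adding in Functional Measurement experiments.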
High Average Power Fiber Laser for Satellite Communications, Phase I
National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...
Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment
Baurle, Robert A.; Edwards, Jack R.
2010-01-01
Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomena under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure models.
International Nuclear Information System (INIS)
Thompson, K.N.; Jackson, S.G.; Rooney, J.R.
1988-01-01
The relationship between body weight gain and the onset of bone aberrations (e.g. epiphysitis) is described. A model was derived which described the increase in transverse epiphyseal width, and the major factor found to affect epiphyseal width was average daily gain in body weight. In addition, a radiographic examination of the epiphyseal areas showed a larger number of bone aberrations in groups gaining weight at an above-average rate. Thus, a rapid increase in body weight can be suggested as a significant factor in the onset of epiphysitis
An axially averaged-radial transport model of tokamak edge plasmas
International Nuclear Information System (INIS)
Prinja, A.K.; Conn, R.W.
1984-01-01
A two-zone axially averaged-radial transport model for edge plasmas is described that incorporates parallel electron and ion conduction, localized recycling, parallel electron pressure gradient effects and sheath losses. Results for high recycling show that the radial electron temperature profile is determined by parallel electron conduction over short radial distances (∼3 cm). At larger radius where T_e has fallen appreciably, convective transport becomes equally important. The downstream density and ion temperature profiles are very flat over the region where electron conduction dominates. This is seen to result from a sharply decaying velocity profile that follows the radial electron temperature. A one-dimensional analytical recycling model shows that at high neutral pumping rates, the plasma density at the plate, n_ia, scales linearly with the unperturbed background density, n_io. When ionization dominates, n_ia/n_io ∝ exp(n_io), while in the intermediate regime n_ia/n_io ∝ exp(∼n_io). Such behavior is qualitatively in accord with experimental observations. (orig.)
Discrete rate and variable power adaptation for underlay cognitive networks
Abdallah, Mohamed M.
2010-01-01
We consider the problem of maximizing the average spectral efficiency of a secondary link in underlay cognitive networks. In particular, we consider the network setting whereby the secondary transmitter employs discrete rate and variable power adaptation under the constraints of maximum average transmit power and maximum average interference power allowed at the primary receiver due to the existence of an interference link between the secondary transmitter and the primary receiver. We first find the optimal discrete rates assuming a predetermined partitioning of the signal-to-noise ratio (SNR) of both the secondary and interference links. We then present an iterative algorithm for finding a suboptimal partitioning of the SNR of the interference link assuming a fixed partitioning of the SNR of secondary link selected for the case where no interference link exists. Our numerical results show that the average spectral efficiency attained by using the iterative algorithm is close to that achieved by the computationally extensive exhaustive search method for the case of Rayleigh fading channels. In addition, our simulations show that selecting the optimal partitioning of the SNR of the secondary link assuming no interference link exists still achieves the maximum average spectral efficiency for the case where the average interference constraint is considered. © 2010 IEEE.
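The rate-selection step in a discrete-rate scheme reduces to a threshold lookup over the SNR partition. The partition boundaries and rate set below are illustrative placeholders, not the optimized values derived in the paper:

```python
import bisect

def select_rate(snr, thresholds, rates):
    """Pick the highest discrete rate whose SNR threshold is met.

    thresholds: ascending SNR partition boundaries; rates has one more
    entry than thresholds, with rates[0] = 0 denoting outage.
    """
    return rates[bisect.bisect_right(thresholds, snr)]

thresholds = [1.0, 3.0, 7.0]     # illustrative SNR partition boundaries
rates = [0.0, 1.0, 2.0, 3.0]     # bits/s/Hz assigned to each region

r_outage = select_rate(0.5, thresholds, rates)   # below first threshold
r_mid = select_rate(5.0, thresholds, rates)      # falls in third region
r_top = select_rate(10.0, thresholds, rates)     # above all thresholds
```

Averaging the selected rate over the fading distributions of the secondary and interference links, subject to the power and interference constraints, gives the average spectral efficiency that the paper maximizes.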
Aperture averaging and BER for Gaussian beam in underwater oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-03-01
In an underwater wireless optical communication (UWOC) link, power fluctuations over a finite-sized collecting lens are investigated for a horizontally propagating Gaussian beam wave. The power scintillation index, also known as the irradiance flux variance, for the received irradiance is evaluated in weak oceanic turbulence by using the Rytov method. This lets us further quantify the associated performance indicators, namely, the aperture averaging factor and the average bit-error rate (BER). The effects on the UWOC link performance of the oceanic turbulence parameters, i.e., the rate of dissipation of kinetic energy per unit mass of fluid, the rate of dissipation of mean-squared temperature, the Kolmogorov microscale, and the ratio of temperature to salinity contributions to the refractive index spectrum, as well as system parameters, i.e., the receiver aperture diameter, Gaussian source size, laser wavelength and the link distance, are investigated.
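To make the role of aperture averaging concrete, here is a sketch of an average BER under a generic lognormal weak-turbulence fading model; the SNR and scintillation-index values are illustrative, and this is a textbook model rather than the paper's Rytov-based derivation:

```python
import math
import random

random.seed(0)

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def average_ber(snr, scint_index, n=100_000):
    """Monte Carlo average BER over lognormal irradiance fluctuations.
    scint_index is the power scintillation index seen by the receiver
    (reduced by aperture averaging for a larger collecting lens)."""
    sigma2 = math.log(1 + scint_index)   # log-irradiance variance, E[I] = 1
    total = 0.0
    for _ in range(n):
        I = math.exp(random.gauss(-sigma2 / 2, math.sqrt(sigma2)))
        total += q_function(math.sqrt(snr) * I)
    return total / n

# Aperture averaging lowers the scintillation index and hence the BER:
ber_point = average_ber(25.0, 0.5)   # small (point-like) aperture
ber_big = average_ber(25.0, 0.1)     # larger collecting lens
```

The aperture averaging factor of the paper is exactly the ratio of the large-aperture to point-aperture scintillation indices, which is what drives the BER improvement above.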
Average [O II] nebular emission associated with Mg II absorbers: dependence on Fe II absorption
Joshi, Ravi; Srianand, Raghunathan; Petitjean, Patrick; Noterdaeme, Pasquier
2018-05-01
We investigate the effect of Fe II equivalent width (W2600) and fibre size on the average luminosity of [O II] λλ3727, 3729 nebular emission associated with Mg II absorbers (at 0.55 ≤ z ≤ 1.3) in the composite spectra of quasars obtained with 3 and 2 arcsec fibres in the Sloan Digital Sky Survey. We confirm the presence of strong correlations between [O II] luminosity (L_{[O II]}) and the equivalent width (W2796) and redshift of Mg II absorbers. However, we show that L_{[O II]} and the average luminosity surface density suffer from fibre size effects. More importantly, for a given fibre size, the average L_{[O II]} depends strongly on the equivalent width of the Fe II absorption lines and is found to be higher for Mg II absorbers with R ≡ W2600/W2796 ≥ 0.5. In fact, we show that the observed strong correlations of L_{[O II]} with W2796 and z of Mg II absorbers are mainly driven by such systems. Direct [O II] detections also confirm the link between L_{[O II]} and R. Therefore, one has to pay attention to fibre losses and to the dependence of the redshift evolution of Mg II absorbers on W2600 before using them as a luminosity-unbiased probe of the global star formation rate density. We show that the [O II] nebular emission detected in the stacked spectrum is not dominated by a few direct detections (i.e. detections at the ≥3σ significance level). On average, the systems with R ≥ 0.5 and W2796 ≥ 2 Å are more reddened, showing a colour excess E(B - V) ∼ 0.02, with respect to the systems with R < 0.5, and most likely trace high H I column density systems.
Delineation of facial archetypes by 3D averaging.
Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G
2004-10-01
The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups (European and Japanese) and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome), as well as from a normal control group. The method involved averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face averaging techniques there was no warping or filling in of spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
Hilley, G. E.; Burgmann, R.; Dumitru, T. A.; Ebert, Y.; Fosdick, J. C.; Le, K.; Levine, N. M.; Wilson, A.; Gudmundsdottir, M. H.
2010-12-01
We present eleven Apatite Fission Track (AFT) and Apatite (U-Th)/He (A-He) analyses and eighteen catchment-averaged cosmogenic 10Be denudation rates from the Santa Cruz Mountains (SCM) that resolve the unroofing history of this range over the past several Myr. This range lies within a restraining bend in the San Andreas Fault (SAF), which appears to be fixed to the crust on the northeast side of the fault based on previous work. In this view, the topographic asymmetry of the SCM reflects the advection of material southwest of the right-lateral SAF through a zone of uplift centered on the restraining bend, while material northwest of the fault remains trapped in this zone. Northeast of the fault bend in the Sierra Azul block of the SCM, AFT ages adjacent to the SAF appear completely reset during the Pliocene, and show partial resetting at the periphery of the block. This suggests that total exhumation exceeded 3-4 km within the heart of the block and was SCM are near mass flux steady state over the timescales captured by the CRN (~1.5-6.5 ka). Nonetheless, the extent of topography in areas far from the bend suggests that there may be some component of regional fault-normal contraction and/or that this steady state has not been fully attained because of geomorphic lags and isostatic adjustments.
Function reconstruction from noisy local averages
International Nuclear Information System (INIS)
Chen Yu; Huang Jianguo; Han Weimin
2008-01-01
A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies
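A minimal sketch of this kind of reconstruction in one dimension, using Tikhonov regularization and an assumed two-sample averaging operator (not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 25
x = np.linspace(0, 1, n)
f_true = np.sin(2 * np.pi * x)

# Forward operator: each datum is the average of two adjacent samples.
A = np.zeros((m, n))
for i in range(m):
    A[i, 2 * i] = A[i, 2 * i + 1] = 0.5
b = A @ f_true + 0.01 * rng.standard_normal(m)   # noisy local averages

# Tikhonov regularization: minimize ||A f - b||^2 + alpha ||f||^2
# via the normal equations (A^T A + alpha I) f = A^T b.
alpha = 1e-3
f_rec = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
rel_err = np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)
```

Without the regularization term the normal equations are singular here (more unknowns than averages), which is exactly why a regularized formulation is needed.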
Average Distance Travelled To School by Primary and Secondary ...
African Journals Online (AJOL)
This study investigated the average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and its effect on attendance. These are among the top ten most densely populated and educationally advantaged States in Nigeria. Research evidence reports high dropout rates in ...
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other phenomena in the atmosphere. Extreme weather due to global warming can lead to drought, flood, hurricanes and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather, in particular a GIS-based mapping process providing the current weather status at the coordinates of each region with the capability to forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is measured by the mean square error (MSE): the error is 0.28 for minimum temperature and 0.15 for maximum temperature, while the error is 0.38 for minimum humidity and 0.04 for maximum humidity. The forecasting error for wind speed is 0.076. The lower the forecasting error rate, the better the accuracy.
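A much-simplified BMA-style combination can be sketched as follows; the unit-variance Gaussian-likelihood weighting and all the numbers are illustrative assumptions, not the authors' implementation:

```python
import math

def bma_weights(model_errors):
    """Simplified BMA-style weights: each model's weight is proportional to
    exp(-0.5 * SSE), i.e. its likelihood under a unit-variance Gaussian
    error model (a common textbook simplification)."""
    logliks = [-0.5 * sum(e * e for e in errs) for errs in model_errors]
    mx = max(logliks)                       # subtract max for stability
    ws = [math.exp(l - mx) for l in logliks]
    total = sum(ws)
    return [w_ / total for w_ in ws]

def bma_forecast(weights, forecasts):
    """Weighted average of the member forecasts."""
    return sum(w_ * f for w_, f in zip(weights, forecasts))

# Two hypothetical temperature models: one accurate, one biased.
errors = [[0.1, -0.2, 0.1], [1.0, 1.2, 0.9]]
w = bma_weights(errors)
combined = bma_forecast(w, [25.0, 27.0])
```

The accurate model dominates the posterior weight, so the combined forecast stays close to its prediction while still borrowing a little information from the biased one.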
International Nuclear Information System (INIS)
Badnell, N.R.; Pindzola, M.S.
1989-01-01
We have calculated dielectronic recombination cross sections and rate coefficients for the Ne-like ions P⁵⁺ and Cl⁷⁺ in configuration-average, LS-coupling, and intermediate-coupling approximations. Autoionization into excited states reduces the cross sections and rate coefficients by substantial amounts in all three methods. There is only rough agreement between the configuration-average cross-section results and the corresponding intermediate-coupling results. There is good agreement, however, between the LS-coupling cross-section results and the corresponding intermediate-coupling results. The LS-coupling and intermediate-coupling rate coefficients agree to better than 5%, while the configuration-average rate coefficients are about 30% higher than the other two coupling methods. External electric field effects, as calculated in the configuration-average approximation, are found to be relatively small for the cross sections and completely negligible for the rate coefficients. Finally, the general formula of Burgess was found to overestimate the rate coefficients by roughly a factor of 5, mainly due to the neglect of autoionization into excited states
Click rates and silences of sperm whales at Kaikoura, New Zealand
Douglas, Lesley A.; Dawson, Stephen M.; Jaquet, Nathalie
2005-07-01
Analysis of the usual click rates of sperm whales (Physeter macrocephalus) at Kaikoura, New Zealand, confirms the potential for assessing abundance via ``click counting.'' Usual click rates over three dive cycles each of three photographically identified whales showed that 5 min averages of usual click rate did not differ significantly within dives, among dives of the same whale or among whales. Over the nine dives (n=13 728 clicks) mean usual click rate was 1.272 clicks s⁻¹ (95% CI=0.151). On average, individual sperm whales at Kaikoura spent 60% of their time usual clicking in winter and in summer. There was no evidence that whale identity or stage of the dive recorded significantly affects the percentage of time spent usual clicking. Differences in vocal behavior among sperm whale populations worldwide indicate that estimates of abundance that are based on click rates need to be based on data from the population of interest, rather than from another population or some global average.
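The two statistics reported above (mean click rate and fraction of time spent clicking) are exactly what a click-counting abundance estimate needs. A sketch, using the Kaikoura values from the abstract and a hypothetical click count:

```python
def abundance_estimate(clicks_counted, duration_s,
                       click_rate=1.272, fraction_clicking=0.60):
    """Click-counting abundance sketch: the expected number of clicks per
    whale over the recording is click_rate * fraction_clicking * duration.
    The rate and fraction are the Kaikoura values reported above; the
    click count below is hypothetical."""
    clicks_per_whale = click_rate * fraction_clicking * duration_s
    return clicks_counted / clicks_per_whale

# e.g. 13,735 clicks detected in a 1-hour recording:
n_whales = abundance_estimate(13735, 3600)
```

A real estimate would also propagate the confidence interval on the click rate and account for detection range, but the core arithmetic is this simple.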
International Nuclear Information System (INIS)
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs
2012-02-14
...), US--Zeroing (Japan), US--Stainless Steel (Mexico), and US--Continued Zeroing (EC) found the denial of... comparisons between transaction-specific export prices and average normal values and does not offset the amount of dumping that is found with the results of comparisons for which the transaction-specific export...
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-06-04
We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
van Wee, B.; Rietveld, P.; Meurs, H.
2006-01-01
Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail
2015-01-01
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees
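A quick numerical check of how the proved value compares with the information-theoretic lower bound, since any comparison-based sorting decision tree for 8 elements needs at least log2(8!) comparisons on average:

```python
import math

min_avg_depth = 620160 / math.factorial(8)   # value proved in the paper
info_bound = math.log2(math.factorial(8))    # information-theoretic bound
```

The proved minimum (≈15.3810) sits only about 0.08 comparisons above the entropy bound (≈15.2999), so sorting 8 elements can be done remarkably close to the theoretical optimum on average.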
Economic Factors for Televison Programme Rating in Slovenia
Matjaz Dodic; Bojan Nastav
2011-01-01
Factors that influence television programme ratings can be divided into environmental (outer) factors and internal factors of television programmes. In this paper we apply regression analysis to study the influence of the number of unemployed, the inflation rate, the average salary, consumers' trust, households' financial status in the past 12 months and the economic state in Slovenia on the ratings of national, commercial and other television programmes in Slovenia in the 2000–2009 period. The results show ...
A Capital Mistake? The Neglected Effect of Immigration on Average Wages
Declan Trott
2011-01-01
Much recent literature on the wage effects of immigration assumes that the return to capital, and therefore the average wage, is unaffected in the long run. If immigration is modelled as a continuous flow rather than a one-off shock, this result does not necessarily hold. A simple calibration with pre-crisis US immigration rates gives a reduction in average wages of 5%, larger than most estimates of its effect on relative wages.
Data base of system-average dose rates at nuclear power plants: Final report
International Nuclear Information System (INIS)
Beal, S.K.; Britz, W.L.; Cohen, S.C.; Goldin, A.S.; Goldin, D.J.
1987-10-01
In this work, a data base is derived of area dose rates for systems and components listed in the Energy Economic Data Base (EEDB). The data base is derived from area surveys obtained during outages at four boiling water reactors (BWRs) at three stations and eight pressurized water reactors (PWRs) at four stations. Separate tables are given for BWRs and PWRs. These tables may be combined with estimates of labor hours to provide order-of-magnitude estimates of exposure for purposes of regulatory analysis. They are only valid for work involving entire systems or components. The estimates of labor hours used in conjunction with the dose rates to estimate exposure must be adjusted to account for in-field time. Finally, the dose rates given in the data base do not reflect ALARA considerations. 11 refs., 2 figs., 3 tabs
Gibbs equilibrium averages and Bogolyubov measure
International Nuclear Information System (INIS)
Sankovich, D.P.
2011-01-01
Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure
Plan averaging for multicriteria navigation of sliding window IMRT and VMAT
International Nuclear Information System (INIS)
Craft, David; Papp, Dávid; Unkelbach, Jan
2014-01-01
Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step
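For static-beam IMRT the dosimetric claim above follows from dose being linear in fluence: averaging fluence maps averages the dose distributions exactly. A numeric check with a random, purely illustrative dose-influence matrix (not a clinical model):

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.random((6, 4))                    # dose = D @ fluence (toy matrix)
f1, f2 = rng.random(4), rng.random(4)     # fluence maps of two Pareto plans

dose_of_average = D @ ((f1 + f2) / 2)     # deliver the averaged fluence
average_of_doses = (D @ f1 + D @ f2) / 2  # average the two dose maps
```

The paper's contribution is the leaf-trajectory averaging that produces a deliverable sliding window plan whose fluence map is this exact average; linearity then gives the dosimetric average for free.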
Abediseid, Walid
2012-12-21
The exact average complexity analysis of the basic sphere decoder for general space-time codes applied to the multiple-input multiple-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds a certain limit, this upper bound becomes dominated by the outage probability achieved by LAST coding and sphere decoding schemes. We then calculate the minimum average computational complexity that is required by the decoder to achieve near optimal performance in terms of the system parameters. Our results indicate that there exists a cut-off rate (multiplexing gain) for which the average complexity remains bounded. Copyright © 2012 John Wiley & Sons, Ltd.
Changing mortality and average cohort life expectancy
Directory of Open Access Journals (Sweden)
Robert Schoen
2005-10-01
Period life expectancy varies with changes in mortality and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of period life expectancy as the leading measure of survivorship. An aggregate measure of period mortality seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. The four aggregate measures of mortality are then calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
FDA Food Code recommendations: how do popular US baking shows measure up?
Directory of Open Access Journals (Sweden)
Valerie Cadorett
2018-05-01
The purpose of this study was to determine whether popular US baking shows follow the FDA Food Code recommendations and critical food safety principles. This cross-sectional study examined a convenience sample of 75 episodes from three popular baking shows: one about competitively baking cupcakes, one about competitively baking cakes, and one about baking in a popular local bakery. Twenty-five episodes from each show were viewed. Coding involved tallying how many times 17 FDA Food Code recommendations were or were not followed. On each show, bare hands frequently came in contact with ready-to-eat food; on a per-hour basis, this occurred 80, 155, and 176 times on shows 1-3, respectively. Hands were washed before cooking only three times across the three shows, and never for the recommended 20 seconds. On each show, many people touched food while wearing jewelry other than a plain wedding band, for an average of at least 7 people per hour. Shows 1-3 had high rates of long-haired bakers not wearing hair restraints (11.14, 6.57, and 14.06 instances per hour, respectively). Shows 1 and 2 had high rates of running among the bakers (22.29 and 10.57 instances per hour, respectively). These popular baking shows do not demonstrate the proper food safety techniques put forth by the FDA and do not contribute to the reduction of foodborne illnesses through proper food handling.
Wave function collapse implies divergence of average displacement
Marchewka, A.; Schuss, Z.
2005-01-01
We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, implies that the average displacement of the particle on the line does not exist. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.
Average of delta: a new quality control tool for clinical laboratories.
Jones, Graham R D
2016-01-01
Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternative approach, average of delta, which combines these concepts by using the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of the average of delta and average of normals functions, and the effect of assay bias for different values of analytical imprecision, within- and between-subject biological variation, and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings, and average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
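A minimal numerical sketch of the average-of-delta idea follows. The two-measurement patient model, simulation parameters, and window size are illustrative assumptions, not the paper's spreadsheet model; the point is only that a rolling average of sequential deltas reveals an assay bias introduced mid-run.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_deltas(n_patients, bias_after, bias, sd=1.0):
    """Delta = current minus previous result for the same patient.

    With a stable assay, deltas scatter around 0; an assay bias that
    appears between the two measurements shifts the deltas by `bias`.
    """
    first = rng.normal(100.0, 10.0, n_patients)        # baseline results
    second = first + rng.normal(0.0, sd, n_patients)   # repeat results
    second[bias_after:] += bias                        # bias introduced mid-run
    return second - first

deltas = simulate_deltas(2000, bias_after=1000, bias=3.0)

window = 20  # number of sequential deltas averaged (5-20 suggested above)
avg_delta = np.convolve(deltas, np.ones(window) / window, mode="valid")

before = avg_delta[: 1000 - window].mean()  # windows wholly before the bias
after = avg_delta[1000:].mean()             # windows wholly after the bias
print(round(before, 2), round(after, 2))    # the averaged delta tracks the +3 shift
```

Individual deltas are too noisy to flag a 3-unit bias reliably; the 20-sample average of delta separates the two regimes cleanly, which is the quality-control signal the paper proposes.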
Post-model selection inference and model averaging
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2011-07-01
Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
Bootstrapping pre-averaged realized volatility under market microstructure noise
DEFF Research Database (Denmark)
Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour
The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent, with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure…
Generation of µW level plateau harmonics at high repetition rate.
Hädrich, S; Krebs, M; Rothhardt, J; Carstens, H; Demmler, S; Limpert, J; Tünnermann, A
2011-09-26
The process of high harmonic generation allows for coherent transfer of infrared laser light to the extreme ultraviolet spectral range, opening up a variety of applications. The low conversion efficiency of this process calls for optimization and for intense ultrashort-pulse lasers with higher repetition rates. Here we present state-of-the-art fiber laser systems for the generation of high harmonics at up to 1 MHz repetition rate. We performed measurements of the average power with a calibrated spectrometer and achieved µW-level harmonics between 45 nm and 61 nm (H23-H17) at a repetition rate of 50 kHz. Additionally, we show the potential for few-cycle pulses at high average power and repetition rate that may enable water-window harmonics at unprecedented repetition rates. © 2011 Optical Society of America
Giri, Veda N.; Coups, Elliot J.; Ruth, Karen; Goplerud, Julia; Raysor, Susan; Kim, Taylor Y.; Bagden, Loretta; Mastalski, Kathleen; Zakrzewski, Debra; Leimkuhler, Suzanne; Watkins-Bruner, Deborah
2009-01-01
Purpose Men with a family history (FH) of prostate cancer (PCA) and African American (AA) men are at higher risk for PCA. Recruitment and retention of these high-risk men into early detection programs has been challenging. We report a comprehensive analysis of recruitment methods, show rates, and participant factors from the Prostate Cancer Risk Assessment Program (PRAP), which is a prospective, longitudinal PCA screening study. Materials and Methods Men 35–69 years old are eligible if they have a FH of PCA, are AA, or have a BRCA1/2 mutation. Recruitment methods were analyzed with respect to participant demographics and show rate at the first PRAP appointment using standard statistical methods. Results Out of 707 men recruited, 64.9% showed to the initial PRAP appointment. More individuals were recruited via radio than from referral or other methods (χ² = 298.13, p < .0001). Men recruited via radio were more likely to be AA (p<0.001), less educated (p=0.003), not married or partnered (p=0.007), and have no FH of PCA (p<0.001). Men recruited via referrals had higher incomes (p=0.007). Men recruited via referral were more likely to attend their initial PRAP visit than those recruited by radio or other methods (χ² = 27.08, p < .0001). Conclusions This comprehensive analysis finds that radio leads to higher recruitment of AA men with lower socioeconomic status. However, these are the high-risk men that have lower show rates for PCA screening. Targeted motivational measures need to be studied to improve show rates for PCA risk assessment for these high-risk men. PMID:19758657
Statistics on exponential averaging of periodograms
Energy Technology Data Exchange (ETDEWEB)
Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
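The recursion behind exponential averaging of periodograms can be sketched as follows. The white-noise test signal, segment length, and smoothing constant are illustrative assumptions; the point is only that the exponentially averaged PSD estimate has far less scatter than a single raw periodogram.

```python
import numpy as np

rng = np.random.default_rng(2)

def periodogram(x):
    """Raw periodogram of a real signal (one-sided, unnormalised)."""
    X = np.fft.rfft(x)
    return (np.abs(X) ** 2) / len(x)

# Exponential averaging of subsequent periodograms:
#   S_k = (1 - alpha) * S_{k-1} + alpha * P_k
alpha = 0.05          # small alpha <-> long averaging time constant
n, n_segments = 256, 400
S = periodogram(rng.standard_normal(n))
for _ in range(n_segments - 1):
    S = (1 - alpha) * S + alpha * periodogram(rng.standard_normal(n))

# For unit-variance white noise the true PSD is flat; compare the
# scatter of the averaged estimate with that of a single periodogram.
single = periodogram(rng.standard_normal(n))
print(S[1:-1].std(), single[1:-1].std())  # averaged estimate is far smoother
```

For an independent process each raw periodogram bin is roughly χ²-distributed with 2 degrees of freedom (relative scatter near 1); the recursion shrinks the scatter by roughly √(α/(2−α)), consistent with the Gaussian limit described above for long time constants.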
Statistics on exponential averaging of periodograms
International Nuclear Information System (INIS)
Peeters, T.T.J.M.; Ciftcioglu, Oe.
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS
International Nuclear Information System (INIS)
2005-01-01
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW-class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Unscrambling The "Average User" Of Habbo Hotel
Directory of Open Access Journals (Sweden)
Mikael Johnson
2007-01-01
The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding of categorization practices in design through a case study of the virtual community Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer’s disregard of marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
Asymptotic behaviour of time averages for non-ergodic Gaussian processes
Ślęzak, Jakub
2017-08-01
In this work, we study the behaviour of time averages for stationary (non-ageing), but ergodicity-breaking Gaussian processes using their representation in Fourier space. We provide explicit formulae for various time-averaged quantities, such as the mean square displacement and density, and analyse the behaviour of the time-averaged characteristic function, which gives insight into the rich memory structure of the studied processes. Moreover, we show applications of the ergodic criteria in Fourier space, determining the ergodicity of the generalised Langevin equation's solutions.
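The central object here, the time-averaged mean square displacement, is easiest to see on the simplest ergodic Gaussian example: ordinary Brownian motion, where the time average over one long trajectory should match the ensemble law 2Dt. This toy check is an illustrative assumption-laden sketch, not the paper's generalised Langevin analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

def time_averaged_msd(x, lag):
    """Time-averaged mean square displacement at a given lag:
    the average of (x(t+lag) - x(t))^2 over the whole trajectory."""
    disp = x[lag:] - x[:-lag]
    return float(np.mean(disp ** 2))

# Brownian motion with unit-variance steps (D = 1/2, dt = 1): for an
# ergodic process the time-averaged MSD approaches 2*D*lag = lag.
T = 200_000
x = np.cumsum(rng.standard_normal(T))
lags = [10, 100]
ta = [time_averaged_msd(x, k) for k in lags]
print([round(v / k, 2) for v, k in zip(ta, lags)])  # ratios near 1
```

For the ergodicity-breaking processes studied in the paper, this ratio would not converge to the ensemble prediction, which is exactly what the Fourier-space ergodic criteria diagnose.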
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the two common approaches can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
Measurement of the single and two phase flow using newly developed average bidirectional flow tube
International Nuclear Information System (INIS)
Yun, Byong Jo; Euh, Dong Jin; Kang, Kyung Ho; Song, Chul Hwa; Baek, Won Pil
2005-01-01
A new instrument, an average BDFT (Bidirectional Flow Tube), was proposed to measure the flow rate in single and two phase flows. Its working principle is similar to that of the Pitot tube, wherein the dynamic pressure is measured. In an average BDFT, the pressure measured at the front of the flow tube is equal to the total pressure, while that measured at the rear tube is slightly less than the static pressure of the flow field due to the suction effect downstream. The proposed instrument was tested in air/water vertical and horizontal test sections with an inner diameter of 0.08 m. The tests were performed primarily in single phase water and air flow conditions to obtain the amplification factor (k) of the flow tube in the vertical and horizontal test sections. Tests were also performed in air/water vertical two phase flow conditions in which the flow regimes were bubbly, slug, and churn turbulent flows. In order to calculate the phasic mass flow rates from the measured differential pressure, the Chexal drift-flux correlation and a momentum exchange factor between the two phases were introduced. The test results show that the proposed instrument, combined with the measured void fraction, the Chexal drift-flux correlation, and Bosio and Malnes' momentum exchange model, could predict the phasic mass flow rates within a 15% error. A new momentum exchange model was also proposed from the present data; its implementation provides a 5% improvement in the measured mass flow rate compared to the Bosio and Malnes model.
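For the single-phase case, the working principle reduces to the Pitot relation between dynamic pressure and velocity. A minimal sketch follows; the amplification factor k = 1, the 500 Pa reading, and the water density are placeholder assumptions — in practice k comes from the calibration tests described above.

```python
import math

def velocity_from_dp(dp_pa, rho, k=1.0):
    """Pitot-style estimate: dynamic pressure q = rho*v^2/2, so
    v = k * sqrt(2*dp/rho). k is the tube's amplification factor,
    obtained by calibration (k = 1.0 here is a placeholder)."""
    return k * math.sqrt(2.0 * dp_pa / rho)

def mass_flow(dp_pa, rho, area_m2, k=1.0):
    """Single-phase mass flow rate through a duct of known area."""
    return rho * area_m2 * velocity_from_dp(dp_pa, rho, k)

rho_water = 998.0               # kg/m^3, water near room temperature
area = math.pi * 0.04 ** 2      # flow area for the 0.08 m inner diameter
dp = 500.0                      # Pa, assumed differential-pressure reading
print(round(mass_flow(dp, rho_water, area), 3))  # kg/s
```

Extending this to two-phase flow is where the abstract's void fraction, drift-flux correlation, and momentum exchange factor enter: the single dynamic-pressure reading must then be apportioned between the phases.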
General and Local: Averaged k-Dependence Bayesian Classifiers
Directory of Open Access Journals (Sweden)
Limin Wang
2015-06-01
The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can construct classifiers at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, averaged k-dependence Bayesian (AKDB) classifiers, averages the output of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements that has minimum average depth (the number of such trees is approximately 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
Systematic literature review shows that appetite rating does not predict energy intake.
Holt, Guy M; Owen, Lauren J; Till, Sophie; Cheng, Yanying; Grant, Vicky A; Harden, Charlotte J; Corfe, Bernard M
2017-11-02
Ratings of appetite are commonly used to assess appetite modification following an intervention. Subjectively rated appetite is a widely employed proxy measure for energy intake (EI), measurement of which requires greater time and resources. However, the validity of appetite as a reliable predictor of EI has not yet been reviewed systematically. This literature search identified studies that quantified both appetite ratings and EI. Outcomes were predefined as: (1) agreement between self-reported appetite scores and EI; (2) no agreement between self-reported appetite scores and EI. The presence of a direct statistical comparison between the endpoints, the intervention type, and the study population were also recorded. 462 papers were included in this review. Appetite scores failed to correspond with EI in 51.3% of the studies. Only 6% of all studies evaluated here reported a direct statistical comparison between appetite scores and EI. χ² analysis demonstrated that any relationship between EI and appetite was independent of study type and of stratification by age, gender, or sample size. The very substantial corpus reviewed allows us to conclude that self-reported ratings of appetite do not reliably predict EI. Caution should be exercised when drawing conclusions from self-reported appetite scores in relation to prospective EI.
Camacho-Rodríguez, J; González-Céspedes, A M; Cerón-García, M C; Fernández-Sevilla, J M; Acién-Fernández, F G; Molina-Grima, E
2014-03-01
Different pilot-scale outdoor photobioreactors using medium recycling were operated in a greenhouse under different environmental conditions, and the growth rates obtained (0.1 to 0.5 day⁻¹) were evaluated in order to compare them with the traditional systems used in aquaculture. The annualized volumetric growth rate for Nannochloropsis gaditana was 0.26 g l⁻¹ day⁻¹ (peak 0.4 g l⁻¹ day⁻¹) at 0.4 day⁻¹ in a 5-cm wide flat-panel bioreactor (FP-PBR). The biomass productivity achieved in this reactor was 10-fold higher than in traditional reactors, reaching values of 28% and 45% dry weight (d.w.) of lipids and proteins, respectively, with a 4.3% (d.w.) content of eicosapentaenoic acid (EPA). A model for predicting EPA productivity from N. gaditana cultures that takes into account the existence of photolimitation and photoinhibition of growth under outdoor conditions is presented. The effect of temperature and average irradiance on EPA content is also studied. The maximum EPA productivity attained was 30 mg l⁻¹ day⁻¹.
The consequences of time averaging for measuring temporal species turnover in the fossil record
Tomašových, Adam; Kidwell, Susan
2010-05-01
Modeling time averaging effects with simple simulations allows us to evaluate the magnitude of change in temporal species turnover that is expected to occur in long (paleoecological) time series with fossil assemblages. Distinguishing different modes of metacommunity dynamics (such as neutral, density-dependent, or trade-off dynamics) with time-averaged fossil assemblages requires scaling up time-averaging effects, because the decrease in temporal resolution and the decrease in temporal inter-sample separation (i.e., the two main effects of time averaging) substantially increase community stability relative to assemblages without or with weak time averaging. Large changes in temporal scale that cover centuries to millennia can lead to unprecedented effects on the temporal rate of change in species composition. Temporal variation in species composition monotonically decreases with increasing duration of time averaging in simulated fossil assemblages. Time averaging is also associated with a reduction of species dominance owing to temporal switching in the identity of dominant species. High degrees of time averaging can cause community parameters of local fossil assemblages to converge to those of the metacommunity rather than to those of individual local non-averaged communities. We find that the low variation in species composition observed among mollusk and ostracod subfossil assemblages can be explained by time averaging alone; low temporal resolution and reduced temporal separation among assemblages in time series can thus explain a substantial part of the reduced variation in species composition relative to unscaled predictions of a neutral model (i.e., one in which species do not differ in birth, death, and immigration rates on a per capita basis). The structure of time-averaged assemblages can thus provide important insights into processes that act over larger temporal scales, such as evolution of niches and dispersal, range-limit dynamics, taxon cycles, and…
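The core effect — pooling consecutive assemblages damps apparent turnover relative to non-averaged samples at the same temporal spacing — can be sketched with a toy neutral-drift simulation. The species count, drift magnitude, window length, and dissimilarity metric below are illustrative assumptions, not the authors' simulation design.

```python
import numpy as np

rng = np.random.default_rng(4)

def turnover(series):
    """Mean compositional dissimilarity between consecutive assemblages."""
    diffs = []
    for a, b in zip(series[:-1], series[1:]):
        p, q = a / a.sum(), b / b.sum()
        diffs.append(0.5 * np.abs(p - q).sum())  # Bray-Curtis-style metric
    return float(np.mean(diffs))

# Neutral-like drift: log-abundances of 20 species follow random walks.
steps, n_species, window = 5000, 20, 100
logab = np.cumsum(rng.normal(0.0, 0.05, (steps, n_species)), axis=0)
snapshots = np.exp(logab)

# Non-averaged samples taken once every `window` steps ...
spaced = list(snapshots[::window])

# ... versus time-averaged samples pooling each whole window.
pooled = [snapshots[i:i + window].sum(axis=0)
          for i in range(0, steps, window)]

# Pooling damps apparent turnover at the same temporal spacing.
print(turnover(pooled), turnover(spaced))
```

Even in this minimal setup the pooled series shows lower consecutive dissimilarity than point samples at the same spacing, which is the inflation of apparent community stability the abstract warns must be accounted for before inferring metacommunity dynamics.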
Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function
Directory of Open Access Journals (Sweden)
Christofer Toumazou
2013-07-01
A novel noise filtering algorithm based on the averaging Intrinsic Mode Function (aIMF), a derivative of Empirical Mode Decomposition (EMD), is proposed to remove white Gaussian noise from foreign currency exchange rates, which are nonlinear nonstationary time series signals. Noise patterns with different amplitudes and frequencies were randomly mixed into the five exchange rates. A number of filters, namely the Extended Kalman Filter (EKF), Wavelet Transform (WT), Particle Filter (PF) and the averaging Intrinsic Mode Function (aIMF) algorithm, were used to compare filtering and smoothing performance. The aIMF algorithm demonstrated the highest noise reduction among these filters.
Averaging of nonlinearity-managed pulses
International Nuclear Information System (INIS)
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-01-01
We consider the nonlinear Schrödinger equation with nonlinearity management, which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons.
38 CFR 4.76a - Computation of average concentric contraction of visual fields.
2010-07-01
... concentric contraction of visual fields. 4.76a Section 4.76a Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS SCHEDULE FOR RATING DISABILITIES Disability Ratings The Organs of Special Sense § 4.76a Computation of average concentric contraction of visual fields. Table III—Normal Visual...
Iverson, Richard M.; George, David L.
2014-01-01
To simulate debris-flow behaviour from initiation to deposition, we derive a depth-averaged, two-phase model that combines concepts of critical-state soil mechanics, grain-flow mechanics and fluid mechanics. The model's balance equations describe coupled evolution of the solid volume fraction, m, basal pore-fluid pressure, flow thickness and two components of flow velocity. Basal friction is evaluated using a generalized Coulomb rule, and fluid motion is evaluated in a frame of reference that translates with the velocity of the granular phase, vs. Source terms in each of the depth-averaged balance equations account for the influence of the granular dilation rate, defined as the depth integral of ∇⋅vs. Calculation of the dilation rate involves the effects of an elastic compressibility and an inelastic dilatancy angle proportional to m−meq, where meq is the value of m in equilibrium with the ambient stress state and flow rate. Normalization of the model equations shows that predicted debris-flow behaviour depends principally on the initial value of m−meq and on the ratio of two fundamental timescales. One of these timescales governs downslope debris-flow motion, and the other governs pore-pressure relaxation that modifies Coulomb friction and regulates evolution of m. A companion paper presents a suite of model predictions and tests.
Predicting Atomic Decay Rates Using an Informational-Entropic Approach
Gleiser, Marcelo; Jiang, Nan
2018-06-01
We show that a newly proposed Shannon-like entropic measure of shape complexity applicable to spatially-localized or periodic mathematical functions known as configurational entropy (CE) can be used as a predictor of spontaneous decay rates for one-electron atoms. The CE is constructed from the Fourier transform of the atomic probability density. For the hydrogen atom with degenerate states labeled with the principal quantum number n, we obtain a scaling law relating the n-averaged decay rates to the respective CE. The scaling law allows us to predict the n-averaged decay rate without relying on the traditional computation of dipole matrix elements. We tested the predictive power of our approach up to n = 20, obtaining an accuracy better than 3.7% within our numerical precision, as compared to spontaneous decay tables listed in the literature.
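A discrete version of the CE computation can be sketched as follows. The 1-D toy densities and grid are illustrative assumptions — the paper works with the full three-dimensional atomic probability densities — but the construction (Fourier transform, modal fraction, normalisation by the dominant mode, entropy sum) is the same in spirit.

```python
import numpy as np

def configurational_entropy(density, dx):
    """Discrete configurational entropy (CE) of a localized density.

    Sketch of the Gleiser-Stamatopoulos construction: Fourier-transform
    the density, form the modal fraction, normalise by the dominant
    mode, and sum -f*ln(f) over modes.
    """
    F = np.fft.fft(density) * dx
    power = np.abs(F) ** 2
    f = power / power.sum()        # modal fraction
    f_tilde = f / f.max()          # normalised to the dominant mode
    nz = f_tilde > 0
    return float(-np.sum(f_tilde[nz] * np.log(f_tilde[nz])))

# Toy hydrogen-like 1s densities, |psi|^2 ~ exp(-2|x|/a): a more
# spread-out state concentrates its spectrum into fewer modes and
# therefore carries a lower CE.
x = np.linspace(-40.0, 40.0, 4096)
dx = x[1] - x[0]
ces = [configurational_entropy(np.exp(-2.0 * np.abs(x) / a), dx)
       for a in (1.0, 2.0)]
print([round(c, 3) for c in ces])
```

The scaling-law result quoted above amounts to the statement that such CE values, computed for the n-averaged hydrogen densities, track the n-averaged spontaneous decay rates without any dipole matrix elements.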
Predicting Atomic Decay Rates Using an Informational-Entropic Approach
Gleiser, Marcelo; Jiang, Nan
2018-02-01
We show that a newly proposed Shannon-like entropic measure of shape complexity applicable to spatially-localized or periodic mathematical functions known as configurational entropy (CE) can be used as a predictor of spontaneous decay rates for one-electron atoms. The CE is constructed from the Fourier transform of the atomic probability density. For the hydrogen atom with degenerate states labeled with the principal quantum number n, we obtain a scaling law relating the n-averaged decay rates to the respective CE. The scaling law allows us to predict the n-averaged decay rate without relying on the traditional computation of dipole matrix elements. We tested the predictive power of our approach up to n = 20, obtaining an accuracy better than 3.7% within our numerical precision, as compared to spontaneous decay tables listed in the literature.
Evaluation of radiation shielding rate of lead aprons in nuclear medicine
Energy Technology Data Exchange (ETDEWEB)
Han, Sang Hyun; Han, Beom Heui; Lee, Sang Ho [Dept. of Radiological Science, Seonam University, Asan (Korea, Republic of); Hong, Dong Heui [Dept. of Radiological Science, Far East University, Eumseong (Korea, Republic of); Kim, Gi Jin [Dept. of Nuclear Medicine, Konyang University Hospital, Daejeon (Korea, Republic of)
2017-03-15
Considering that the X-ray aprons used in the department of radiology are also used in the department of nuclear medicine, this study aimed to analyze the shielding rate of the apron according to the type of radioisotope, and thus γ-ray energy, to investigate the protective effects. The radioisotopes used in the experiment were the top 5 nuclides in usage statistics, ⁹⁹ᵐTc, ¹⁸F, ¹³¹I, ¹²³I, and ²⁰¹Tl, and the aprons were lead-equivalent 0.35 mmPb aprons currently in use in the department of nuclear medicine. As a result of the experiments, the average shielding rates of the aprons were ⁹⁹ᵐTc 31.59%, ²⁰¹Tl 68.42%, and ¹²³I 76.63%. When using an apron, ¹³¹I actually showed an average dose rate increase of 33.72%, and ¹⁸F showed an average shielding rate of -0.315%, indicating almost no shielding effect. As a result, the radioisotopes in descending order of apron shielding rate were ¹²³I, ²⁰¹Tl, ⁹⁹ᵐTc, ¹⁸F, ¹³¹I. Currently, the aprons used in the nuclear medicine laboratory are general X-ray aprons, which are not considered appropriate for a nuclear medicine environment that utilizes γ rays. Therefore, the development of aprons dedicated to nuclear medicine and suited to the characteristics of the radioisotopes is required, in consideration of effective radiation protection and the work efficiency of radiation workers.
Relationship between 18-month mating mass and average lifetime reproduction
African Journals Online (AJOL)
1976; Elliott, Rae & Wickham, 1979; Napier et al., 1980). Although being in general agreement with results in the literature, it is evident that the present phenotypic correlations between 18-month mating mass and average lifetime lambing and weaning rate tended to be equal to the highest comparable estimates in the ...
Evaporation of Liquid Droplet in Nano and Micro Scales from Statistical Rate Theory.
Duan, Fei; He, Bin; Wei, Tao
2015-04-01
The statistical rate theory (SRT) is applied to predict the average evaporation flux of a liquid droplet, after the approach is validated against sessile droplet experiments with water and heavy water. The steady-state experiments show a temperature discontinuity at the evaporating interface. The average evaporation flux is evaluated by individually varying the measured quantities at the liquid-vapor interface, including the interfacial liquid temperature, the interfacial vapor temperature, the vapor-phase pressure, and the droplet size. The parameter study shows that a higher temperature jump reduces the average evaporation flux. The average evaporation flux can be significantly influenced by the interfacial liquid temperature and the vapor-phase pressure; the variation can even switch evaporation into condensation. The evaporation flux is found to remain relatively constant if the droplet is larger than the micro scale, while smaller, nanoscale diameters produce a much higher evaporation flux. In addition, for the same liquid volume, smaller droplets present a larger total surface area. It is suggested that the evaporation rate increases dramatically as the droplet shrinks to nano size.
The difference between alternative averages
Directory of Open Access Journals (Sweden)
James Vaupel
2012-09-01
Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
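The stated identity can be checked numerically. A minimal sketch with synthetic data (all names illustrative): two averages of a variable x under weighting functions w1 and w2, compared against the covariance formula, with moments taken under w1:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1, 5, size=100)     # the variable (e.g. a rate by age)
w1 = rng.uniform(0.5, 2, size=100)  # first weighting function
w2 = rng.uniform(0.5, 2, size=100)  # second weighting function

avg1 = np.sum(w1 * x) / np.sum(w1)
avg2 = np.sum(w2 * x) / np.sum(w2)

# ratio of the weighting functions, and w1-weighted moments
r = w2 / w1
mean_r = np.sum(w1 * r) / np.sum(w1)
mean_xr = np.sum(w1 * x * r) / np.sum(w1)
cov_xr = mean_xr - avg1 * mean_r    # w1-weighted covariance of x and r

# the difference between the two averages equals cov(x, r) / mean(r)
assert abs((avg2 - avg1) - cov_xr / mean_r) < 1e-12
```

The identity follows directly by expanding the weighted moments, and it holds exactly for any choice of weights.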
Review of Research Shows, Overall, Acupuncture Did Not Increase Pregnancy Rates with IVF
... did seem to increase pregnancy success rates at IVF clinics with baseline pregnancy rates that were lower than 32 percent. This review, funded in part by NCCAM, was published online in the journal Human Reproduction Update. The review analyzed 16 randomized controlled clinical ...
High-average-power diode-pumped Yb:YAG lasers
International Nuclear Information System (INIS)
Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B
1999-01-01
A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High-beam-quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 ns pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large, aperture-filling, near-diffraction-limited modes; (2) compound laser rods with flanged nonabsorbing endcaps fabricated by diffusion bonding; (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished-barrel rods.
Model averaging, optimal inference and habit formation
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2014-06-01
Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
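A minimal sketch of the weighting step that Bayesian model averaging prescribes, assuming equal model priors and known log evidences (function names hypothetical, not the authors' implementation):

```python
import math

def bma_weights(log_evidences):
    """Posterior model probabilities from log model evidences
    (equal priors), computed stably via the log-sum-exp trick."""
    m = max(log_evidences)
    unnorm = [math.exp(le - m) for le in log_evidences]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def bma_predict(predictions, log_evidences):
    """Evidence-weighted average of per-model predictions."""
    w = bma_weights(log_evidences)
    return sum(wi * pi for wi, pi in zip(w, predictions))
```

Because the weights are a softmax over log evidences, models with slightly lower evidence still contribute, which is what distinguishes averaging from model selection.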
Han, Sheng; Xi, Shi-qiong; Geng, Wei-dong
2017-11-01
In order to solve the problem of low recognition rate of traditional feature extraction operators under low-resolution images, a novel algorithm of expression recognition is proposed, named central oblique average center-symmetric local binary pattern (CS-LBP) with adaptive threshold (ATCS-LBP). Firstly, the features of face images can be extracted by the proposed operator after pretreatment. Secondly, the obtained feature image is divided into blocks. Thirdly, the histogram of each block is computed independently and all histograms can be connected serially to create a final feature vector. Finally, expression classification is achieved by using support vector machine (SVM) classifier. Experimental results on Japanese female facial expression (JAFFE) database show that the proposed algorithm can achieve a recognition rate of 81.9% when the resolution is as low as 16×16, which is much better than that of the traditional feature extraction operators.
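A minimal sketch of the center-symmetric LBP code for a single 3×3 neighborhood, the building block of the proposed operator (the adaptive-threshold part is only hinted at in a comment; the details here are assumptions, not the paper's exact formulation):

```python
import numpy as np

def cs_lbp_code(patch, threshold=0.01):
    """Center-symmetric LBP code for one 3x3 patch: compare the four
    center-symmetric neighbor pairs, giving a 4-bit code (0..15).
    An adaptive-threshold variant (as in ATCS-LBP) would derive
    `threshold` from local statistics instead of fixing it."""
    # 8 neighbors in clockwise order starting from the top-left pixel
    n = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
         patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > threshold:
            code |= 1 << i
    return code
```

Histograms of these codes, computed per block and concatenated, would form the feature vector fed to the SVM.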
Impact of window decrement rate on TCP performance in an adhoc network
Suherman; Hutasuhut, Arief T. W.; Badra, Khaldun; Al-Akaidi, Marwan
2017-09-01
Transmission control protocol (TCP) is a reliable transport protocol handling end-to-end connections in the TCP/IP stack. It works well over copper or optical fibre links, but experiences increasing delay in wireless networks, and suffers multiple retransmissions due to the higher collision probability within a wireless network. The situation may worsen in an ad hoc network. This paper examines the impact of the window reduction rate after loss (normally a halving of the window) on overall TCP performance. Evaluation using the NS-2 simulator shows that a smaller window decrement rate yields a smaller end-to-end delay: delay is reduced by 17.05% on average when the window decrement rate decreases, and average jitter also decreases by 4.15%, while packet loss is not affected.
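A minimal sketch of the parameter being varied, the multiplicative-decrease step applied to the congestion window on loss (the name and the clamp to one segment are illustrative assumptions, not the paper's simulation code):

```python
def on_loss(cwnd, beta=0.5):
    """Multiplicative-decrease step on packet loss: standard TCP halves
    the congestion window (beta = 0.5); a smaller decrement rate
    (beta < 0.5) keeps more of the window after a loss, which the
    simulation above links to lower average delay and jitter in the
    ad hoc setting. The window never drops below one segment."""
    return max(1.0, cwnd * (1.0 - beta))
```

With beta = 0.125, a window of 10 segments shrinks to 8.75 instead of 5, so the sender recovers its sending rate sooner after sporadic wireless losses.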
Neonatal heart rate prediction.
Abdel-Rahman, Yumna; Jeremic, Aleksander; Tan, Kenneth
2009-01-01
Technological advances have caused a decrease in the number of infant deaths; pre-term infants now have a substantially increased chance of survival. One of the mechanisms vital to saving the lives of these infants is continuous monitoring and early diagnosis. Continuous monitoring collects huge amounts of data with much information embedded in them; statistical analysis can extract this information and use it to aid diagnosis and to understand development. In this study we have a large dataset containing over 180 pre-term infants whose heart rates were recorded over the length of their stay in the Neonatal Intensive Care Unit (NICU). We test two types of models, empirical Bayesian and autoregressive moving average, and then attempt to predict future values. The autoregressive moving average model showed better results but required more computation.
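As an illustration of the autoregressive part of such a model (the moving-average term and the empirical Bayesian model are omitted, so this is not the authors' implementation), an AR(p) fit by least squares with one-step prediction might look like:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model
    x[t] ~ c + a1*x[t-1] + ... + ap*x[t-p]; returns [c, a1, ..., ap]."""
    n = len(x)
    # design matrix: [1, x[t-1], ..., x[t-p]] for t = p..n-1
    X = np.column_stack([np.ones(n - p)] +
                        [x[p - k: n - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def predict_next(x, coef):
    """One-step-ahead prediction from the fitted coefficients."""
    p = len(coef) - 1
    lags = x[-1: -p - 1: -1]  # x[t-1], ..., x[t-p]
    return coef[0] + np.dot(coef[1:], lags)
```

On a noise-free AR(1) series the fit recovers the generating coefficient exactly, which is a useful sanity check before applying it to real heart-rate traces.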
Leion, Felicia; Hegbrant, Josefine; den Bakker, Emil; Jonsson, Magnus; Abrahamson, Magnus; Nyman, Ulf; Björk, Jonas; Lindström, Veronica; Larsson, Anders; Bökenkamp, Arend; Grubb, Anders
2017-09-01
Estimating glomerular filtration rate (GFR) in adults by using the average of values obtained by a cystatin C-based (eGFR(cystatin C)) and a creatinine-based (eGFR(creatinine)) equation shows at least the same diagnostic performance as GFR estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparison of eGFR(cystatin C) and eGFR(creatinine) plays a pivotal role in the diagnosis of Shrunken Pore Syndrome, where a low eGFR(cystatin C) compared to eGFR(creatinine) has been associated with higher mortality in adults. The present study was undertaken to elucidate whether this concept can also be applied in children. Using iohexol and inulin clearance as gold standard in 702 children, we studied the diagnostic performance of 10 creatinine-based, 5 cystatin C-based and 3 combined cystatin C-creatinine eGFR equations and compared them to the result of the average of 9 pairs of an eGFR(cystatin C) and an eGFR(creatinine) estimate. While creatinine-based GFR estimations are unsuitable in children unless calibrated in a pediatric or mixed pediatric-adult population, cystatin C-based estimations in general performed well in children. The average of a suitable creatinine-based and a cystatin C-based equation generally displayed a better diagnostic performance than estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparing eGFR(cystatin C) and eGFR(creatinine) may help identify pediatric patients with Shrunken Pore Syndrome.
De Luca, G.; Magnus, J.R.
2011-01-01
In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.
Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data
Gebert, Warren A.; Walker, John F.; Kennedy, James L.
2011-01-01
Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined, recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
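The recharge approximation described above reduces to a unit conversion; a minimal sketch, assuming base flow in cubic feet per second and drainage area in square miles (the customary USGS units) and a 365-day year:

```python
# unit-conversion constants
FT3_PER_CFS_YEAR = 86400.0 * 365.0  # ft^3 delivered by 1 cfs over a 365-day year
FT2_PER_MI2 = 5280.0 ** 2           # ft^2 in one square mile

def recharge_inches_per_year(base_flow_cfs, drainage_area_mi2):
    """Average annual recharge (inches/yr) approximated as average
    annual base flow divided by drainage area, as in the abstract."""
    depth_ft = (base_flow_cfs * FT3_PER_CFS_YEAR) / (drainage_area_mi2 * FT2_PER_MI2)
    return depth_ft * 12.0
```

A useful reference point: 1 cfs of base flow per square mile of drainage area works out to roughly 13.6 inches of recharge per year.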
2007-03-30
Fragment of a referral no-show report: the highest no-show rates included Physical Therapy, TMC (14.52%), Audiology, BAMC (14.19%), Ophth Peds, BAMC (12.50%), Occupational Therapy, BAMC (11.76%), and Allergy Clinic, BAMC. For this study, only the first referral was counted and not the follow-ups. Among the lowest no-show rate clinics were Pain Management (1.27%), Endocrinology (1.52%), General Surgery (1.83%), and Rheumatology (1.89%).
International Nuclear Information System (INIS)
Raaphorst, G. Peter; Ng, Cheng E.; Shahine, Bilal
1999-01-01
Purpose: Long-duration mild hyperthermia has been shown to be an effective radiosensitizer when given concurrently with low dose rate irradiation. Pulsed simulated low dose rate (PSLDR) is now being used clinically, and we have set out to determine whether concurrent mild hyperthermia can be an effective radiosensitizer for the PSLDR protocol. Materials and Methods: Human glioma cells (U-87MG) were grown to and treated in plateau phase in order to minimize cell cycle redistribution during protracted treatments. Low dose rate (LDR) irradiation and 41 deg. C hyperthermia were delivered by housing a radium irradiator inside a temperature-controlled incubator. PSLDR was given using a 150 kVp X-ray unit while maintaining the cells at 41 deg. C between irradiations. The duration of irradiation and concurrent heating depended on total dose and extended up to 48 h. Results: When 41 deg. C hyperthermia was given concurrently with LDR or PSLDR, the thermal enhancement ratios (TER) were about the same if the average dose rate for PSLDR was the same as for LDR. At higher average dose rates for PSLDR the TERs decreased. Conclusions: Our data show that concurrent mild hyperthermia can be an effective sensitizer for PSLDR. This sensitization can be as effective as for LDR if the same average dose rate is used, and the TER increases with decreasing dose rate. Thus mild hyperthermia combined with PSLDR may be an effective clinical protocol.
Minimal average consumption downlink base station power control strategy
Holtkamp H.; Auer G.; Haas H.
2011-01-01
We consider single cell multi-user OFDMA downlink resource allocation on a flat-fading channel such that average supply power is minimized while fulfilling a set of target rates. Available degrees of freedom are transmission power and duration. This paper extends our previous work on power optimal resource allocation in the mobile downlink by detailing the optimal power control strategy investigation and extracting fundamental characteristics of power optimal operation in cellular downlink. W...
Self-averaging correlation functions in the mean field theory of spin glasses
International Nuclear Information System (INIS)
Mezard, M.; Parisi, G.
1984-01-01
In the infinite-range spin glass model, we consider the staggered spin σ_λ associated with a given eigenvector of the interaction matrix. We show that the thermal average of σ_λ^2 is a self-averaging quantity and we compute it.
Effect of Heating Rate on Grain Structure and Superplasticity of 7B04 Aluminum Alloy Sheets
Directory of Open Access Journals (Sweden)
CHEN Min
2017-03-01
Full Text Available Fine-grained 7B04 aluminum alloy sheets were manufactured through thermo-mechanical treatment. The effects of the anneal heating rate on grain structure and superplasticity were investigated using electron back-scattering diffraction (EBSD) and high-temperature tensile tests. The results show that at a heating rate of 5.0×10^-3 K/s, the average grain sizes along the rolling direction (RD) and normal direction (ND) are 28.2 μm and 13.9 μm respectively, and the nucleation rate is 1/1000. With increasing heating rate, the average grain size decreases and the nucleation rate increases. When the heating rate increases to 30.0 K/s, the average grain sizes along the RD and ND decrease to 9.9 μm and 5.1 μm respectively, and the nucleation rate increases to 1/80. Besides, with increasing heating rate, the elongation of the sheets also increases; the elongation of the specimens increases from 100% to 730% under the deforming condition of 773 K/8×10^-4 s^-1.
Nongeostrophic theory of zonally averaged circulation. I - Formulation
Tung, Ka Kit
1986-01-01
A nongeostrophic theory of zonally averaged circulation is formulated using the nonlinear primitive equations (mass conservation, thermodynamics, and zonal momentum) on a sphere. The relationship between the mean meridional circulation and diabatic heating rate is studied. Differences between results of nongeostropic theory and the geostrophic formulation concerning the role of eddy forcing of the diabatic circulation and the nonlinear nearly inviscid limit versus the geostrophic limit are discussed. Consideration is given to the Eliassen-Palm flux divergence, the Eliassen-Palm pseudodivergence, the nonacceleration theorem, and the nonlinear nongeostrophic Taylor relationship.
The Prediction of Exchange Rates with the Use of Auto-Regressive Integrated Moving-Average Models
Directory of Open Access Journals (Sweden)
Daniela Spiesová
2014-10-01
Full Text Available The currency market is today the largest market in the world, and over its existence many theories have been proposed for predicting the development of exchange rates based on macroeconomic, microeconomic, statistical and other models. The aim of this paper is to identify an adequate model for the prediction of non-stationary time series of exchange rates and then use this model to predict the trend of development of European currencies against the Euro. The uniqueness of this paper lies in the fact that, while many expert studies deal with predicting the rates of currency pairs involving the American dollar, only a limited number of scientific studies are concerned with the long-term prediction of European currencies using integrated ARMA models, even though the development of exchange rates has a crucial impact on all levels of the economy and its prediction is an important indicator for individual countries, banks, companies and businessmen as well as for investors. The results of this study confirm that, to predict the conditional variance and then estimate the future values of exchange rates, it is adequate to use the ARIMA(1,1,1) model without constant, or the ARIMA[(1,7),1,(1,7)] model, where in the long term the square root of the conditional variance inclines towards a stable value.
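A rough sketch of the differencing-plus-autoregressive core of an ARIMA(1,1,1) forecast without constant (the MA(1) term is omitted for brevity, so this illustrates the model class, not the study's fitted model; the function name is hypothetical):

```python
import numpy as np

def forecast_arima_111_like(x, steps=1):
    """Sketch of the I and AR parts of an ARIMA(1,1,1) forecast:
    difference the series once, fit an AR(1) to the differences by
    least squares with no constant (matching the "without constant"
    specification), then cumulate forecast differences back onto the
    last observed level."""
    d = np.diff(x)
    phi = np.dot(d[1:], d[:-1]) / np.dot(d[:-1], d[:-1])
    level, diff = x[-1], d[-1]
    out = []
    for _ in range(steps):
        diff = phi * diff
        level = level + diff
        out.append(level)
    return np.array(out)
```

In practice one would use a full ARIMA implementation (e.g. a statistics library) that also estimates the moving-average term and the conditional variance.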
An Analysis of Total Lightning Flash Rates Over Florida
Mazzetti, Thomas O.; Fuelberg, Henry E.
2017-12-01
Although Florida is known as the "Sunshine State", it also contains the greatest lightning flash densities in the United States. Flash density has received considerable attention in the literature, but lightning flash rate has received much less attention. We use data from the Earth Networks Total Lightning Network (ENTLN) to produce a 5 year (2010-2014) set of statistics regarding total flash rates over Florida and adjacent regions. Instead of tracking individual storms, we superimpose a 0.2° × 0.2° grid over the study region and count both cloud-to-ground (CG) and in-cloud (IC) flashes over 5 min intervals. Results show that the distribution of total flash rates is highly skewed toward small values, whereas the greatest rate is 185 flashes min^-1. The greatest average annual flash rates (~3 flashes min^-1) are located near Orlando. The southernmost peninsula, North Florida, and the Florida Panhandle exhibit smaller average annual flash rates (~1.5 flashes min^-1). Large flash rates of > 100 flashes min^-1 can occur during any season, at any time during the 24 h period, and at any location within the domain. However, they are most likely during the afternoon and early evening in East Central Florida during the spring and summer months.
Averaging in spherically symmetric cosmology
International Nuclear Information System (INIS)
Coley, A. A.; Pelavas, N.
2007-01-01
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must take the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.
High-power pre-chirp managed amplification of femtosecond pulses at high repetition rates
International Nuclear Information System (INIS)
Liu, Yang; Li, Wenxue; Zhao, Jian; Bai, Dongbi; Luo, Daping; Zeng, Heping
2015-01-01
Femtosecond pulses at a 250 MHz repetition rate from a mode-locked fiber laser are amplified to high power in a pre-chirp managed amplifier. This experimental strategy offers a potential route towards high-power ultrashort laser pulses at high repetition rates. By investigating the laser pulse evolution in the amplification process, we show that self-similar evolution, finite gain bandwidth and mode instabilities determine pulse characteristics in different regimes. Further average power scaling is limited by the mode instabilities. Nevertheless, this laser system enables us to achieve sub-50 fs pulses with an average power of 93 W. (letter)
How to average logarithmic retrievals?
Directory of Open Access Journals (Sweden)
B. Funke
2012-04-01
Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, the biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps involved in averaging mixing ratios obtained from logarithmic retrievals.
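The basic bias mechanism is Jensen's inequality: the exponential of the mean logarithm never exceeds the linear mean, and the gap grows with variability. A minimal numerical illustration (synthetic lognormal abundances; all parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic trace gas abundances with large natural variability
true_vmr = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)

linear_mean = true_vmr.mean()                 # average of the abundances
log_mean = np.exp(np.log(true_vmr).mean())    # "average in log space"

# Jensen's inequality: averaging logarithms underestimates the true
# mean, and the bias grows with the local natural variability (sigma).
assert log_mean < linear_mean
```

For a lognormal distribution the ratio of the two means is exp(sigma^2/2), so with sigma = 0.8 the log-space average understates the linear mean by roughly a quarter, consistent with the "ten percent or more" biases quoted above.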
Racusin, J. L.; Oates, S. R.; De Pasquale, M.; Kocevski, D.
2016-01-01
We present a correlation between the average temporal decay (α_X,avg, at t > 200 s) and the early-time luminosity (L_X,200 s) of X-ray afterglows of gamma-ray bursts as observed by the Swift X-ray Telescope. Both quantities are measured relative to a rest-frame time of 200 s after the gamma-ray trigger. The luminosity-average decay correlation does not depend on specific temporal behavior and contains one scale-independent quantity, minimizing the role of selection effects. This is a complementary correlation to that discovered by Oates et al. in the optical light curves observed by the Swift Ultraviolet/Optical Telescope. The correlation indicates that, on average, more luminous X-ray afterglows decay faster than less luminous ones, indicating some relative mechanism for energy dissipation. The X-ray and optical correlations are entirely consistent once corrections are applied and contamination is removed. We explore the possible biases introduced by different light-curve morphologies and observational selection effects, and how either geometrical effects or intrinsic properties of the central engine and jet could explain the observed correlation.
Pritychenko, B.; Mughabghab, S. F.
2012-12-01
We present calculations of neutron thermal cross sections, Westcott factors, resonance integrals, Maxwellian-averaged cross sections and astrophysical reaction rates for 843 ENDF materials using data from the major evaluated nuclear libraries and European activation file. Extensive analysis of newly-evaluated neutron reaction cross sections, neutron covariances, and improvements in data processing techniques motivated us to calculate nuclear industry and neutron physics quantities, produce s-process Maxwellian-averaged cross sections and astrophysical reaction rates, systematically calculate uncertainties, and provide additional insights on currently available neutron-induced reaction data. Nuclear reaction calculations are discussed and new results are presented. Due to space limitations, the present paper contains only calculated Maxwellian-averaged cross sections and their uncertainties. The complete data sets for all results are published in the Brookhaven National Laboratory report.
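A Maxwellian-averaged cross section is conventionally defined as <σ> = (2/√π) ∫ σ(E) E exp(−E/kT) dE / (kT)²; a minimal quadrature sketch of that definition (this is the textbook form, not necessarily the exact processing used for the report):

```python
import numpy as np

def macs(sigma_of_E, kT, emax_factor=40.0, n=200_000):
    """Maxwellian-averaged cross section
        <sigma> = (2/sqrt(pi)) * Int sigma(E) E exp(-E/kT) dE / (kT)^2,
    evaluated by trapezoidal quadrature on a grid truncated at
    emax_factor * kT (the integrand is negligible beyond that)."""
    E = np.linspace(1e-12, emax_factor * kT, n)
    f = sigma_of_E(E) * E * np.exp(-E / kT)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))
    return (2.0 / np.sqrt(np.pi)) * integral / kT**2
```

A standard sanity check: for an energy-independent cross section the average is (2/√π)·σ ≈ 1.128 σ, since the energy integral then equals (kT)² exactly.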
Fluctuation of blood pressure and pulse rate during colostomy irrigation.
Sadahiro, S; Noto, T; Tajima, T; Mitomi, T; Miyazaki, T; Numata, M
1995-06-01
The aim of this study was to determine the effects of colostomy irrigation on the vital signs of patients with left colostomy. Twenty-two consecutive patients who underwent abdominoperineal resection for cancer of the lower rectum and had left lower quadrant end colostomy were included in this study. Subjective symptoms, blood pressure, and pulse rate during the first irrigation were investigated. Fluctuation of blood pressure during instillation was 8.0/8.5 mmHg (average) and 25.0/17.9 mmHg during evacuation. Fluctuation of pulse rate was 5.5 per minute (average) during instillation and 11.5 per minute during evacuation. The number of subjects who showed more than 20% fluctuation of systolic pressure was 12 (54.5 percent) and that of diastolic pressure was 14 (63.6 percent). One of 22 patients complained of illness during irrigation. Although colostomy irrigation showed no significant effects on vital signs in the majority of patients, it caused a significant reduction in both blood pressure and pulse rate in a small number of patients. Careful attention should be paid to vital signs considering the possibility of such effects, especially on the initial irrigation.
International Nuclear Information System (INIS)
Kobayashi, Katsuhei; Kobayashi, Tooru
1992-01-01
The 235U fission spectrum-averaged cross sections for 13 threshold reactions were measured with the fission plate (27 cm in diameter and 1.1 cm thick) at the heavy water thermal neutron facility of the Kyoto University Reactor. The Monte Carlo code MCNP was applied to check the deviation from the 235U fission neutron spectrum due to the room-scattered neutrons, and it was found that the resultant spectrum was close to that of 235U fission neutrons. Supplementally, the relations to derive the absorbed dose rates with the fission plate were also given using the calculated neutron spectra and the neutron kerma factors. Finally, the present values of the fission spectrum-averaged cross sections were employed to adjust the 235U fission neutron spectrum with the NEUPAC code. The adjusted spectrum showed a good agreement with the Watt-type fission neutron spectrum. (author)
Average Soil Water Retention Curves Measured by Neutron Radiography
Energy Technology Data Exchange (ETDEWEB)
Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD
2011-01-01
Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 × 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents, the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
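The per-pixel water-content step rests on Beer-Lambert attenuation; a minimal sketch (the attenuation coefficient value is illustrative, and the beam-hardening and geometric corrections mentioned above are omitted):

```python
import numpy as np

def water_thickness(I, I0, mu_w):
    """Per-pixel water thickness (cm) from neutron transmission via
    Beer-Lambert's law: I = I0 * exp(-mu_w * t)  =>  t = -ln(I/I0) / mu_w.
    I is the wet image, I0 the dry (reference) image."""
    return -np.log(I / I0) / mu_w

mu_w = 3.5  # effective attenuation coefficient of water, cm^-1 (illustrative)
I0 = np.full((2, 2), 1000.0)  # dry reference image
I = I0 * np.exp(-mu_w * np.array([[0.0, 0.1], [0.2, 0.3]]))  # synthetic wet image
t = water_thickness(I, I0, mu_w)  # recovers the imposed thicknesses
```

Dividing the recovered thickness by the pixel's sand-column depth would then give the volumetric water content.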
Bivariate copulas on the exponentially weighted moving average control chart
Directory of Open Access Journals (Sweden)
Sasigarn Kuvattana
2016-10-01
This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used, with dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
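A minimal Monte Carlo estimate of the ARL for an EWMA chart with exponential observations can be sketched as follows. This sketch uses independent observations and conventional normal-theory control limits; the paper's copula-based dependence structures and design constants are not reproduced, and all parameter values here are illustrative.

```python
import numpy as np

def ewma_arl(lam=0.1, L=2.7, mean0=1.0, shift=1.0, n_rep=500, max_len=2000, seed=1):
    """Monte Carlo estimate of the Average Run Length (ARL) of an EWMA chart
    monitoring exponential observations. Control limits use the textbook
    asymptotic EWMA variance with sigma = mean (exponential)."""
    rng = np.random.default_rng(seed)
    half_width = L * mean0 * np.sqrt(lam / (2.0 - lam))
    ucl, lcl = mean0 + half_width, mean0 - half_width
    run_lengths = []
    for _ in range(n_rep):
        z = mean0                          # EWMA statistic starts at the target
        for t in range(1, max_len + 1):
            x = rng.exponential(mean0 * shift)
            z = lam * x + (1.0 - lam) * z
            if z > ucl or z < lcl:         # out-of-control signal
                run_lengths.append(t)
                break
        else:
            run_lengths.append(max_len)    # censored run
    return float(np.mean(run_lengths))

arl_in = ewma_arl(shift=1.0)   # in-control ARL
arl_out = ewma_arl(shift=2.0)  # mean doubled: should signal much sooner
```

A shifted process should produce a much smaller ARL than the in-control one, which is the comparison the paper performs per copula.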
International Nuclear Information System (INIS)
Huang Mingxin; Rivera-Diaz-del-Castillo, Pedro E J; Zwaag, Sybrand van der; Bouaziz, Olivier
2009-01-01
Based on the theory of irreversible thermodynamics, the present work proposes a dislocation-based model to describe the plastic deformation of FCC metals over wide ranges of strain rates. The stress-strain behaviour and the evolution of the average dislocation density are derived. It is found that there is a transitional strain rate (~10^4 s^-1) above which phonon drag effects appear, resulting in a significant increase in the flow stress and the average dislocation density. The model is applied to pure Cu deformed at room temperature and at strain rates ranging from 10^-5 to 10^6 s^-1, showing good agreement with experimental results.
Topological quantization of ensemble averages
International Nuclear Information System (INIS)
Prodan, Emil
2009-01-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states.
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
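The "barycenter" estimate discussed above can be illustrated for unit quaternions. This is a generic sketch, not the author's code; the sign-fixing step and the test rotations are illustrative assumptions.

```python
import numpy as np

def quat_barycenter_mean(quats):
    """Barycenter ("naive") rotation average: arithmetic mean of unit
    quaternions, renormalized back onto the unit sphere. As the paper
    notes, this ignores the manifold structure of rotations, but it is a
    reasonable approximation when the rotations are tightly clustered."""
    q = np.asarray(quats, dtype=float)
    # Fix the sign ambiguity (q and -q encode the same rotation).
    q = q * np.sign(q @ q[0])[:, None]
    m = q.mean(axis=0)
    return m / np.linalg.norm(m)

def qz(theta):
    """Unit quaternion (w, x, y, z) for a rotation of `theta` rad about z."""
    return np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])

# Three small rotations about the z-axis: -10°, 0°, +10° (in radians).
mean_q = quat_barycenter_mean([qz(-0.175), qz(0.0), qz(0.175)])
```

For this symmetric cluster the barycenter mean recovers the identity rotation, as the Riemannian mean would; the two estimates diverge as the spread of the rotations grows.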
Variable Rate Characteristic Waveform Interpolation Speech Coder Based on Phonetic Classification
Institute of Scientific and Technical Information of China (English)
WANG Jing; KUANG Jing-ming; ZHAO Sheng-hui
2007-01-01
A variable-bit-rate characteristic waveform interpolation (VBR-CWI) speech codec with about 1.8 kbit/s average bit rate, which integrates phonetic classification into characteristic waveform (CW) decomposition, is proposed. Each input frame is classified into one of 4 phonetic classes. Non-speech frames are represented with a Bark-band noise model. The extracted CWs become rapidly evolving waveforms (REWs) or slowly evolving waveforms (SEWs) in the cases of unvoiced or stationary voiced frames respectively, while mixed voiced frames use the same CW decomposition as that in the conventional CWI. Experimental results show that the proposed codec can eliminate most buzzy and noisy artifacts existing in the fixed-bit-rate characteristic waveform interpolation (FBR-CWI) speech codec, the average bit rate can be much lower, and its reconstructed speech quality is much better than FS 1016 CELP at 4.8 kbit/s and similar to G.723.1 ACELP at 5.3 kbit/s.
Determination of radon exhalation rates from tiles using active and passive techniques
International Nuclear Information System (INIS)
Al-Jarallah, M.I.; Abu-Jarad, F.; Fazal-ur-Rehman
2001-01-01
Measurements of radon exhalation rates for selected samples of tiles used in Saudi Arabia were carried out using active and passive measuring techniques. These samples were granite, marble and ceramic. In the active method, a PC-based radon gas analyzer with emanation container was used, while, in the passive method, PM-355 nuclear track detectors with the 'can technique' were applied for 180 days. A comparison of the exhalation rates measured by the two techniques showed a good linear correlation coefficient of 0.7. The granite samples showed an average radon exhalation rate of 0.7 Bq m^-2 h^-1, which was higher than that of marble and ceramic by more than twofold. The radon exhalation rates measured by the 'can technique' showed a non-uniform exhalation from the surface of the same tile.
A spectral measurement method for determining white OLED average junction temperatures
Zhu, Yiting; Narendran, Nadarajah
2016-09-01
The objective of this study was to investigate an indirect method of measuring the average junction temperature of a white organic light-emitting diode (OLED) based on temperature-sensitivity differences in the radiant power emitted by the individual emitter materials (i.e., "blue," "green," and "red"). The measured spectral power distributions (SPDs) of the white OLED showed an amplitude decrease with increasing temperature in each of the red, green, and blue spectral bands. Analyzed data showed a good linear correlation between the integrated radiance for each spectral band and the OLED panel temperature, measured at a reference point on the back surface of the panel. The integrated radiance ratio of the green spectral band to the red, (G/R), correlates linearly with panel temperature. Assuming that the panel reference point temperature is proportional to the average junction temperature of the OLED panel, the G/R ratio can be used for estimating the average junction temperature of an OLED panel.
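Inverting the linear G/R-versus-temperature relationship into a junction-temperature estimate can be sketched with made-up calibration numbers; the paper reports the linearity, not these values.

```python
import numpy as np

# Hypothetical calibration data: panel reference temperature (deg C) and the
# measured integrated-radiance ratio of the green to the red spectral band.
# These numbers are invented for illustration only.
temps = np.array([25.0, 35.0, 45.0, 55.0, 65.0])
g_over_r = np.array([1.20, 1.16, 1.12, 1.08, 1.04])

# Fit the linear model G/R = a*T + b, then invert it to estimate the
# average junction temperature from a new G/R reading.
a, b = np.polyfit(temps, g_over_r, 1)

def estimate_junction_temp(ratio):
    """Estimate panel temperature (deg C) from a measured G/R ratio."""
    return (ratio - b) / a

t_est = estimate_junction_temp(1.10)   # should fall between 45 and 55 deg C
```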
Lagrangian averaging with geodesic mean.
Oliver, Marcel
2017-11-01
This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
Hossain, Md Jahangir
2010-03-01
Our contribution, in this paper, is two-fold. First, we analyze the performance of a hierarchical modulation-assisted two-best user opportunistic scheduling (TBS) scheme, which was proposed by the authors, in a fading environment where different users have different average link gains. Specifically, we present a new expression for the spectral efficiency (SE) of the users and using this expression, we compare the degrees of fairness (DOF) of the TBS scheme with that of classical single user opportunistic scheduling schemes, namely, absolute carrier-to-noise ratio (CNR) based single-best user scheduling (SBS) and normalized CNR based proportional fair scheduling (PFS) schemes. The second contribution is that we propose a new hybrid two-user opportunistic scheduling (HTS) scheme based on our earlier proposed TBS scheme. This HTS scheme selects the first user based on the largest absolute CNR value among all the users while the second user is selected based on the ratios of the absolute CNRs to the corresponding average CNRs of the remaining users. The total transmission rate, i.e., the constellation size, is selected according to the absolute CNR of the first best user. The total transmission rate is then allocated among these selected users by joint consideration of their absolute CNRs, and the allocated information bit(s) are transmitted to them using hierarchical modulations. Numerical results are presented for a fading environment where different users experience independent but non-identical (i.n.d.) channel fading. These selected numerical results show that the proposed HTS scheme can considerably increase the system's fairness without any degradation of the link spectral efficiency (LSE), i.e., the multiuser diversity gain, compared to the classical SBS scheme. These results also show that the proposed HTS scheme has a lower fairness in comparison to the PFS scheme, which suffers from a considerable degradation in LSE. © 2010 IEEE.
Hand hygiene compliance rates: Fact or fiction?
McLaws, Mary-Louise; Kwok, Yen Lee Angela
2018-05-16
The mandatory national hand hygiene program requires Australian public hospitals to use direct human auditing to establish compliance rates. To establish the magnitude of the Hawthorne effect, we compared direct human audit rates with concurrent automated surveillance rates. A large tertiary Australian teaching hospital previously trialed automated surveillance while simultaneously performing mandatory human audits for 20 minutes daily on a medical and a surgical ward. Subtracting automated surveillance rates from human audit rates provided differences in percentage points (PPs) for each of the 3 quarterly reporting periods for 2014 and 2015. Direct human audit rates for the medical ward were inflated by an average of 55 PPs in 2014 and 64 PPs in 2015, 2.8-3.1 times higher than automated surveillance rates. The rates for the surgical ward were inflated by an average of 32 PPs in 2014 and 31 PPs in 2015, 1.6 times higher than automated surveillance rates. Over the 6 mandatory reporting quarters, human audits collected an average of 255 opportunities, whereas automation collected 578 times more data, averaging 147,308 opportunities per quarter. The magnitude of the Hawthorne effect on direct human auditing was not trivial and produced highly inflated compliance rates. Mandatory compliance necessitates accuracy that only automated surveillance can achieve, whereas daily hand hygiene ambassadors or reminder technology could harness clinicians' ability to hyperrespond to produce habitual compliance. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
MCBS Highlights: Ownership and Average Premiums for Medicare Supplementary Insurance Policies
Chulis, George S.; Eppig, Franklin J.; Poisal, John A.
1995-01-01
This article describes private supplementary health insurance holdings and average premiums paid by Medicare enrollees. Data were collected as part of the 1992 Medicare Current Beneficiary Survey (MCBS). Data show the number of persons with insurance and average premiums paid by type of insurance held—individually purchased policies, employer-sponsored policies, or both. Distributions are shown for a variety of demographic, socioeconomic, and health status variables. Primary findings include: Seventy-eight percent of Medicare beneficiaries have private supplementary insurance; 25 percent of those with private insurance hold more than one policy. The average premium paid for private insurance in 1992 was $914. PMID:10153473
Waif goodbye! Average-size female models promote positive body image and appeal to consumers.
Diedrichs, Phillippa C; Lee, Christina
2011-10-01
Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery, by exploring the advertising effectiveness of average-size female fashion models, and their impact on the body image of both women and men. A sample of 171 women and 120 men were assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
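The trajectory averaging estimator itself is simple to illustrate on a toy Robbins-Monro recursion. This sketch is generic: it does not implement the SAMCMC algorithm or check the paper's conditions, and all parameter values are illustrative.

```python
import numpy as np

def trajectory_average(theta_path, burn_in=0):
    """Trajectory (Polyak-Ruppert style) averaging: estimate the target
    parameter by averaging the iterates of a stochastic approximation
    run after an optional burn-in."""
    path = np.asarray(theta_path, dtype=float)[burn_in:]
    return path.mean(axis=0)

# Toy Robbins-Monro recursion: find the root of h(theta) = target - theta
# from noisy observations, with gains a_t = 1/t.
rng = np.random.default_rng(42)
target = 3.0
theta, path = 0.0, []
for t in range(1, 5001):
    noisy_h = (target - theta) + rng.normal(scale=1.0)
    theta += noisy_h / t
    path.append(theta)

theta_bar = trajectory_average(path, burn_in=500)  # averaged estimate of 3.0
```

The averaged iterate `theta_bar` is typically a lower-variance estimate than the final iterate alone, which is the efficiency property the paper establishes in the SAMCMC setting.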
Combining rate-based and cap-and-trade emissions policies
International Nuclear Information System (INIS)
Fischer, Carolyn
2003-12-01
Rate-based emissions policies (like tradable performance standards, TPS) fix average emissions intensity, while cap-and-trade (CAT) policies fix total emissions. This paper shows that unfettered trade between rate-based and cap-and-trade programs always raises combined emissions, except when product markets are related in particular ways. Gains from trade are fully passed on to consumers in the rate-based sector, resulting in more output and greater emissions allocations. We consider several policy options to offset the expansion, including a tax, an 'exchange rate' to adjust for relative permit values, output-based allocation (OBA) for the rate-based sector, and tightening the cap.
Ultra-low noise miniaturized neural amplifier with hardware averaging.
Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M
2015-08-01
Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of realizing the noise improvement of hardware averaging for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
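The 1/√N noise reduction from hardware averaging can be checked with a quick simulation, assuming independent, identically distributed amplifier noise; the abstract notes the gain is smaller when source resistance contributes correlated noise, which this sketch ignores.

```python
import numpy as np

def averaged_noise_std(n_amps, noise_std=1.0, n_samples=200_000, seed=7):
    """Simulate N parallel amplifiers seeing the same (zero) signal with
    independent input-referred noise; averaging their outputs reduces the
    noise standard deviation by ~1/sqrt(N)."""
    rng = np.random.default_rng(seed)
    outputs = rng.normal(0.0, noise_std, size=(n_amps, n_samples))
    return float(outputs.mean(axis=0).std())

std1 = averaged_noise_std(1)
std8 = averaged_noise_std(8)
ratio = std1 / std8   # expect ~sqrt(8) ≈ 2.83 for independent noise
```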
Comparison of power pulses from homogeneous and time-average-equivalent models
International Nuclear Information System (INIS)
De, T.K.; Rouben, B.
1995-01-01
The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three dimensional power distribution as that generated by a time-average model. However it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time dependent calculations, simulations of the power pulse following a hypothetical large-loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show that there is a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed to the fact that voiding is not complete, but also to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs
Asymptotic Time Averages and Frequency Distributions
Directory of Open Access Journals (Sweden)
Muhammad El-Taha
2016-01-01
Consider an arbitrary nonnegative deterministic process (in a stochastic setting, {X(t), t≥0} is a fixed realization, i.e., a sample-path of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
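For a finite discrete-time sample path, the identity between the two averages holds by construction; the paper's contribution is the conditions under which it survives the long-run limit. A toy numerical check:

```python
import numpy as np

# One fixed sample path of a discrete-time process on three states.
rng = np.random.default_rng(3)
x = rng.choice([0.0, 1.0, 2.0], size=10_000, p=[0.2, 0.5, 0.3])
f = lambda v: v ** 2   # a measurable function of the process

# (1) Time average of f along the path.
time_avg = f(x).mean()

# (2) Expectation of f under the path's empirical frequency distribution.
vals, counts = np.unique(x, return_counts=True)
freq = counts / counts.sum()
freq_expect = (f(vals) * freq).sum()
```

The two quantities agree exactly here; the interesting question, addressed in the paper, is when they agree in the limit as the time horizon grows without uniform integrability assumptions.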
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
Griffin, Tyler J.; Hilton, John, III.; Plummer, Kenneth; Barret, Devynne
2014-01-01
One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations…
The classical correlation limits the ability of the measurement-induced average coherence
Zhang, Jun; Yang, Si-Ren; Zhang, Yang; Yu, Chang-Shui
2017-04-01
Coherence is the most fundamental quantum feature in quantum mechanics. For a bipartite quantum state, if a measurement is performed on one party, the other party, based on the measurement outcomes, will collapse to a corresponding state with some probability and hence gain the average coherence. It is shown that the average coherence is not less than the coherence of its reduced density matrix. In particular, it is very surprising that the extra average coherence (and the maximal extra average coherence with all the possible measurements taken into account) is upper bounded by the classical correlation of the bipartite state instead of the quantum correlation. We also find the sufficient and necessary condition for the null maximal extra average coherence. Some examples demonstrate the relation and, moreover, show that quantum correlation is neither sufficient nor necessary for the nonzero extra average coherence within a given measurement. In addition, similar conclusions are drawn for both the basis-dependent and the basis-free coherence measure.
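The headline inequality, that the average post-measurement coherence is at least the coherence of the reduced state, can be checked numerically for a Bell state measured in the |±⟩ basis, using the l1-norm coherence measure. This is an independent illustration, not the paper's example.

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm coherence: sum of absolute off-diagonal entries of rho."""
    return float(np.abs(rho).sum() - np.trace(np.abs(rho)).real)

# Bell state |Phi+> = (|00> + |11>)/sqrt(2), stored as psi[a, b].
psi = np.zeros((2, 2))
psi[0, 0] = psi[1, 1] = 1 / np.sqrt(2)

# Coherence of B's reduced state (trace out A): maximally mixed, so zero.
rho_b = np.einsum('ab,ac->bc', psi, psi.conj())
c_reduced = l1_coherence(rho_b)

# Measure A in the |+>, |-> basis; B collapses to a pure state each time.
avg_c = 0.0
for sign in (+1.0, -1.0):
    phi_b = (psi[0] + sign * psi[1]) / np.sqrt(2)  # unnormalized <±|_A psi
    p = float(np.vdot(phi_b, phi_b).real)          # outcome probability
    phi_b = phi_b / np.sqrt(p)
    avg_c += p * l1_coherence(np.outer(phi_b, phi_b.conj()))
```

Here the reduced state carries zero coherence while the average post-measurement coherence is maximal, consistent with the inequality stated in the abstract.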
Directory of Open Access Journals (Sweden)
Rodrigo Silva Vidotto
2009-04-01
The increase in the number of investors at Bovespa since 2000 is due to stabilized inflation and falling interest rates. The use of tools that assist investors in selling and buying stocks is very important in a competitive and risky market. The technical analysis of stocks is used to search for trends in the movements of share prices and therefore indicate a suitable moment to buy or sell stocks. Among these technical indicators is the Moving Average Convergence-Divergence [MACD], which uses the concept of moving average in its equation and is considered by financial analysts a simple tool to operate and analyze. This article aims to assess the effectiveness of the use of the MACD to indicate the moment to purchase and sell stocks in five companies, selected at random from the ninety companies in the Bovespa New Market, and to analyze the profitability gained during 2006, taking as a reference the appreciation of the Ibovespa index in that year. The results show that the cumulative average return of the five companies was 26.7%, against a cumulative average return of 0.90% for the Ibovespa.
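A standard MACD computation can be sketched as follows. The usual 12/26/9 EMA parameters are assumed, since the article does not state its settings, and the price series is synthetic.

```python
import numpy as np

def ema(prices, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty(len(prices))
    out[0] = prices[0]
    for i in range(1, len(prices)):
        out[i] = alpha * prices[i] + (1 - alpha) * out[i - 1]
    return out

def macd(prices, fast=12, slow=26, signal=9):
    """Standard MACD: fast EMA minus slow EMA, plus a signal line. A buy
    is typically flagged when the MACD line crosses above the signal
    line, a sell when it crosses below."""
    macd_line = ema(prices, fast) - ema(prices, slow)
    signal_line = ema(macd_line, signal)
    return macd_line, signal_line

# On a steadily rising price series the fast EMA stays above the slow
# one, so the MACD line ends positive.
prices = np.linspace(100.0, 130.0, 60)
macd_line, signal_line = macd(prices)
```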
The true bladder dose: on average thrice higher than the ICRU reference
International Nuclear Information System (INIS)
Barillot, I.; Horiot, J.C.; Maingon, P.; Bone-Lepinoy, M.C.; D'Hombres, A.; Comte, J.; Delignette, A.; Feutray, S.; Vaillant, D.
1996-01-01
The aim of this study is to compare the ICRU dose to doses at the bladder base located from ultrasonography measurements. Since 1990, the dose delivered to the bladder during utero-vaginal brachytherapy was systematically calculated at 3 or 4 points representative of the bladder base determined with ultrasonography. The ICRU Reference Dose (IRD) from films, the Maximum Dose (Dmax), the Mean Dose (Dmean) representative of the dose received by a large area of bladder mucosa, the Reference Dose Rate (RDR) and the Mean Dose Rate (MDR) were recorded. Material: from 1990 to 1994, 198 measurements were performed in 152 patients; 98 patients were treated for cervix carcinomas, 54 for endometrial carcinomas. Methods: bladder complications were classified using the French-Italian Syllabus. The influence of doses and dose rates on complications was tested using a non-parametric t test. Results: on average IRD is 21 Gy +/- 12 Gy, Dmax is 51 Gy +/- 21 Gy, Dmean is 40 Gy +/- 16 Gy. On average Dmax is thrice higher than IRD and Dmean twice higher than IRD. The same results are obtained for cervix and endometrium. Comparisons of dose rates were also performed: MDR is on average twice higher than RDR (RDR 48 cGy/h vs MDR 88 cGy/h). The five observed complications consist of incontinence only (3 G1, 1 G2, 1 G3). They are only statistically correlated with RDR, p=0.01 (46 cGy/h in patients without complications vs 74 cGy/h in patients with complications). However the full responsibility of RT remains doubtful and should be shared with surgery in all cases. In summary: bladder mucosa seems to tolerate much higher doses than previously recorded without increased risk of severe sequelae. However this finding is probably explained by our efforts to spare most of the bladder mucosa by (1) customised external irradiation therapy (4 fields, full bladder) and (2) reproduction of physiologic bladder filling during brachytherapy by intermittent clamping of the Foley catheter.
Stochastic Averaging Principle for Spatial Birth-and-Death Evolutions in the Continuum
Friesen, Martin; Kondratiev, Yuri
2018-06-01
We study a spatial birth-and-death process on the phase space of locally finite configurations Γ^+ × Γ^- over R^d. Dynamics is described by a non-equilibrium evolution of states obtained from the Fokker-Planck equation and associated with the Markov operator L^+(γ^-) + (1/ɛ)L^-, ɛ > 0. Here L^- describes the environment process on Γ^- and L^+(γ^-) describes the system process on Γ^+, where γ^- indicates that the corresponding birth-and-death rates depend on another locally finite configuration γ^- ∈ Γ^-. We prove that, for a certain class of birth-and-death rates, the corresponding Fokker-Planck equation is well-posed, i.e. there exists a unique evolution of states μ_t^ɛ on Γ^+ × Γ^-. Moreover, we give a sufficient condition such that the environment is ergodic with exponential rate. Let μ_inv be the invariant measure for the environment process on Γ^-. In the main part of this work we establish the stochastic averaging principle, i.e. we prove that the marginal of μ_t^ɛ onto Γ^+ converges weakly to an evolution of states on Γ^+ associated with the averaged Markov birth-and-death operator L̄ = ∫_{Γ^-} L^+(γ^-) dμ_inv(γ^-).
Kumaraswamy autoregressive moving average models for double bounded environmental data
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distribution is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
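A toy simulation in the spirit of the KARMA class, with a logit-linked ARMA-type recursion for the conditional median and Kumaraswamy draws, might look as follows. The median reparameterization of the Kumaraswamy(a, b) distribution is standard, but the recursion details and all parameter values here are illustrative rather than the paper's exact specification.

```python
import numpy as np

def simulate_karma(n, alpha=0.0, phi=0.5, theta_ma=0.3, precision=5.0, seed=10):
    """Simulate a KARMA-like series on (0, 1): the conditional median
    follows a logit-linked ARMA(1,1)-type recursion, and each observation
    is a Kumaraswamy draw whose median equals the current mu."""
    rng = np.random.default_rng(seed)
    logit = lambda m: np.log(m / (1 - m))
    y = np.empty(n)
    eta = alpha
    for t in range(n):
        mu = 1.0 / (1.0 + np.exp(-eta))           # conditional median in (0, 1)
        a = precision
        b = np.log(0.5) / np.log(1.0 - mu ** a)   # chosen so median(Y_t) = mu
        u = rng.uniform()
        y[t] = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)  # inverse-CDF draw
        err = logit(y[t]) - eta                   # innovation on the link scale
        eta = alpha + phi * eta + theta_ma * err  # ARMA-type recursion
    return y

y = simulate_karma(500)
inside = bool(np.all((y > 0) & (y < 1)))   # series respects the (0, 1) bounds
```

Fitting the model would replace this forward simulation with the conditional maximum likelihood machinery the paper develops.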
Evaluation of IOM personal sampler at different flow rates.
Zhou, Yue; Cheng, Yung-Sung
2010-02-01
The Institute of Occupational Medicine (IOM) personal sampler is usually operated at a flow rate of 2.0 L/min, the rate at which it was designed and calibrated, for sampling the inhalable mass fraction of airborne particles in occupational environments. In an environment of low aerosol concentrations only small amounts of material are collected, and that may not be sufficient for analysis. Recently, a new sampling pump with a flow rate up to 15 L/min became available for personal samplers, with the potential of operating at higher flow rates. The flow rate of a Leland Legacy sampling pump, which operates at high flow rates, was evaluated and calibrated, and its maximum flow was found to be 10.6 L/min. IOM samplers were placed on a mannequin, and sampling was conducted in a large aerosol wind tunnel at wind speeds of 0.56 and 2.22 m/s. Monodisperse aerosols of oleic acid tagged with sodium fluorescein in the size range of 2 to 100 μm were used in the test. The IOM samplers were operated at flow rates of 2.0 and 10.6 L/min. Results showed that the IOM samplers mounted on the front of the mannequin had a higher sampling efficiency than those mounted at the side and back, regardless of the wind speed and flow rate. For the wind speed of 0.56 m/s, the direction-averaged (the average value over all orientations relative to the wind direction) sampling efficiency of the samplers operated at 2.0 L/min was slightly higher than that at 10.6 L/min. For the wind speed of 2.22 m/s, the sampling efficiencies at both flow rates were similar for particles < 60 μm. The results also show that the IOM's sampling efficiency at these two different flow rates follows the inhalable mass curve for particles in the size range of 2 to 20 μm. The test results indicate that the IOM sampler can be used at higher flow rates.
The Effect of Exchange Rate Volatility on Iran’s Raisin Export
Directory of Open Access Journals (Sweden)
2014-10-01
Exchange rate volatility is one of the influential yet ambiguous factors in agricultural product export. Given the importance of agricultural trade in avoiding a single-product economy, the main aim of this study was to investigate the impact of exchange rate volatility on Iran's raisin exports during the years 1959-2011. For this purpose, an exchange rate volatility index was estimated using the Moving Average Standard Deviation (MASD). Then, the impact of exchange rate volatility on the value of raisin exports was examined using the Johansen-Juselius cointegration method and a Vector Error Correction Model (VECM). The results showed that in both the long term and the short term there is a significant relationship between raisin exports and its main explanatory variables (weighted average gross income of importers, wholesale prices, the real exchange rate, and the value added of the agricultural sector); consistent with theory, the relationship with exchange rate volatility is negative. The error correction term ECM(-1) was significant and, as expected, negative. Its value of -0.20 indicates that about 20 percent of the deviation of raisin exports from their long-run value is corrected in each period.
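A MASD volatility index of the kind described above is, in its simplest variant, a rolling standard deviation of the exchange rate series. The window length and the exact variant below are assumptions, since the abstract does not specify them:

```python
from statistics import pstdev

def masd(series, window):
    """Moving Average Standard Deviation (MASD) volatility index:
    the rolling (population) standard deviation of the series over a
    fixed window, one value per fully covered window position."""
    return [pstdev(series[i - window + 1:i + 1])
            for i in range(window - 1, len(series))]
```

A constant exchange rate yields an all-zero index, while successively larger jumps in the series yield successively larger index values.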
Children’s Attitudes and Stereotype Content Toward Thin, Average-Weight, and Overweight Peers
Directory of Open Access Journals (Sweden)
Federica Durante
2014-05-01
Six- to 11-year-old children’s attitudes toward thin, average-weight, and overweight targets were investigated with associated warmth and competence stereotypes. The results showed positive attitudes toward average-weight targets and negative attitudes toward overweight peers: Both attitudes decreased as a function of children’s age. Thin targets were perceived more positively than overweight ones but less positively than average-weight targets. Notably, social desirability concerns predicted the decline of anti-fat bias in older children. Finally, the results showed ambivalent stereotypes toward thin and overweight targets—particularly among older children—mirroring the stereotypes observed in adults. This result suggests that by the end of elementary school, children manage the two fundamental dimensions of social judgment similar to adults.
Pharmaceutical R&D performance by firm size: approval success rates and economic returns.
DiMasi, Joseph A
2014-01-01
The R&D productivity of pharmaceutical firms has become an increasingly significant concern of industry, regulators, and policymakers. To address an important aspect of R&D performance, public and private data sources were used to estimate clinical phase transition and clinical approval probabilities for the pipelines of the 50 largest pharmaceutical firms (by sales) by 3 firm size groups (top 10 firms, top 11-20 firms, and top 21-50 firms). For self-originated compounds, the clinical approval success rates were 14.3%, 16.4%, and 18.4% for top 10 firms, top 11-20 firms, and top 21-50 firms, respectively. The results showing higher success rates for smaller firms were largely driven by outcomes for the small-molecule drugs. Adjustments for the relatively small differences in therapeutic class distributions across the firm size groups showed that the success rate for small-molecule self-originated drugs was 6% below average for top 10 firms and 17% above average for top 21-50 firms. Although success rates for small firms were higher, this advantage was offset to some degree by lower returns on approved drugs, suggesting different strategic objectives with regard to risk and reward by firm size.
Face averages enhance user recognition for smartphone security.
Robertson, David J; Kramer, Robin S S; Burton, A Mike
2015-01-01
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
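At its core, a 'face-average' is a pixel-wise mean of aligned images of the same person. The sketch below illustrates only that averaging step on toy grayscale matrices; a real enrollment pipeline (landmark alignment, embeddings, the Galaxy's actual verification API) is well beyond this illustration:

```python
def face_average(images):
    """Pixel-wise mean of equally sized, pre-aligned grayscale images,
    a simple stand-in for the 'face-average' representation: idiosyncrasies
    of any single photo (lighting, expression) are averaged out."""
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(cols)]
            for r in range(rows)]
```

Enrolling the averaged matrix rather than any single photo is what the experiments above found to make verification more robust across viewing conditions.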
What factors drive interest rate spread of commercial banks? Empirical evidence from Kenya
Directory of Open Access Journals (Sweden)
Maureen Were
2014-12-01
The paper empirically investigates the determinants of interest rate spread in Kenya's banking sector based on panel data analysis. The findings show that bank-specific factors play a significant role in the determination of interest rate spreads. These include bank size, credit risk as measured by non-performing loans to total loans ratio, return on average assets and operating costs, all of which positively influence interest rate spreads. On the other hand, higher bank liquidity ratio has a negative effect on the spreads. On average, big banks have higher spreads compared to small banks. The impact of macroeconomic factors such as real economic growth is insignificant. The effect of the monetary policy rate is positive but not highly significant. The results largely reflect the structure of the banking industry, in which a few big banks control a significant share of the market.
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been many traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic data at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic data. The model results showed that the crash contributing factors found by the different models were comparable but not identical. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; the real-time model was also used at 5 min intervals. The results showed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was also able to provide hourly crash frequency estimates. Copyright © 2017 Elsevier Ltd. All rights reserved.
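A defining feature of the Poisson-lognormal models used above is that the lognormal random effect inflates the marginal mean relative to a plain Poisson regression. A small sketch of that standard relationship (this is textbook algebra, not the paper's fitted model or coefficients):

```python
from math import exp

def poisson_lognormal_mean(linear_predictor, sigma):
    """Marginal mean of a Poisson-lognormal count model:
    y ~ Poisson(lam), log(lam) = linear_predictor + eps, eps ~ N(0, sigma^2).
    Integrating out the lognormal error multiplies the Poisson mean
    by exp(sigma^2 / 2), so ignoring the random effect underestimates
    the expected crash frequency."""
    return exp(linear_predictor + sigma ** 2 / 2.0)
```

With sigma = 0 the model collapses to an ordinary Poisson regression mean.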
Isotopic incorporation rates and discrimination factors in mantis shrimp crustaceans.
Directory of Open Access Journals (Sweden)
Maya S deVries
Stable isotope analysis has provided insights into the trophic ecology of a wide diversity of animals. Knowledge about isotopic incorporation rates and isotopic discrimination between a consumer and its diet for different tissue types is essential for interpreting stable isotope data, but these parameters remain understudied in many animal taxa, particularly in aquatic invertebrates. We performed a 292-day diet shift experiment on 92 individuals of the predatory mantis shrimp, Neogonodactylus bredini, to quantify carbon and nitrogen incorporation rates and isotope discrimination factors in muscle and hemolymph tissues. Average isotopic discrimination factors between mantis shrimp muscle and the new diet were 3.0 ± 0.6 ‰ and 0.9 ± 0.3 ‰ for carbon and nitrogen, respectively, which is contrary to what is seen in many other animals (e.g. C and N discrimination is generally 0-1 ‰ and 3-4 ‰, respectively). Surprisingly, the average residence time of nitrogen in hemolymph (28.9 ± 8.3 days) was over 8 times longer than that of carbon (3.4 ± 1.4 days). In muscle, the average residence times of carbon and nitrogen were of the same magnitude (89.3 ± 44.4 and 72.8 ± 18.8 days, respectively). We compared the mantis shrimps' incorporation rates, along with rates from four other invertebrate taxa from the literature, to those predicted by an allometric equation relating carbon incorporation rate to body mass that was developed for teleost fishes and sharks. The rate of carbon incorporation into muscle was consistent with rates predicted by this equation. Our findings provide new insight into isotopic discrimination factors and incorporation rates in invertebrates, with the former showing a different trend than what is commonly observed in other animals.
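Residence times like those reported above are typically obtained by fitting a one-compartment exponential model to the diet-shift time series. A sketch of that standard model (the parameter values in the test are illustrative, not the study's estimates):

```python
from math import exp, log

def isotope_value(t, delta_old, delta_new, tau):
    """One-compartment diet-shift model: the tissue isotope value relaxes
    exponentially from the old-diet equilibrium toward the new-diet
    equilibrium. tau is the average residence time (1/turnover rate)."""
    return delta_new + (delta_old - delta_new) * exp(-t / tau)

def half_life(tau):
    """Time for the tissue to complete half of the isotopic turnover."""
    return log(2) * tau
```

At t = 0 the function returns the old-diet value, and after a few multiples of tau it is indistinguishable from the new-diet value, which is how incorporation rates are read off experimental curves.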
A Robust Interpretation of Teaching Evaluation Ratings
Bi, Henry H.
2018-01-01
There are no absolute standards regarding what teaching evaluation ratings are satisfactory. It is also problematic to compare teaching evaluation ratings with the average or with a cutoff number to determine whether they are adequate. In this paper, we use average and standard deviation charts (X̄-S charts), which are based on the theory…
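X̄-S control chart limits follow from the subgroup size through the unbiasing constant c4. A sketch of the standard 3-sigma limit formulas (generic control-chart theory, not tied to this paper's rating data):

```python
from math import gamma, sqrt

def c4(n):
    """Unbiasing constant for the sample standard deviation of a
    normal sample of size n."""
    return sqrt(2.0 / (n - 1)) * gamma(n / 2.0) / gamma((n - 1) / 2.0)

def xbar_s_limits(xbar_bar, s_bar, n):
    """3-sigma control limits for the X-bar and S charts, computed from
    the grand mean, the average subgroup standard deviation, and the
    subgroup size n (equivalent to the tabulated A3, B3, B4 factors)."""
    a3 = 3.0 / (c4(n) * sqrt(n))
    b = 3.0 * sqrt(1.0 - c4(n) ** 2) / c4(n)
    b3, b4 = max(0.0, 1.0 - b), 1.0 + b
    return {"xbar": (xbar_bar - a3 * s_bar, xbar_bar + a3 * s_bar),
            "s": (b3 * s_bar, b4 * s_bar)}
```

Ratings falling inside these limits are consistent with common-cause variation, which is the chart-based alternative to comparing against a raw average or arbitrary cutoff.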
McLennan, Stuart; Strech, Daniel; Reimann, Swantje
2017-08-25
Physician rating websites (PRWs) have been developed to allow all patients to rate, comment, and discuss physicians' quality online as a source of information for others searching for a physician. At the beginning of 2010, a sample of 298 randomly selected physicians from the physician associations in Hamburg and Thuringia was searched for on 6 German PRWs to examine the frequency of ratings and evaluation tendencies. The objective of this study was to examine (1) the number of identifiable physicians on German PRWs; (2) the number of rated physicians on German PRWs; (3) the average and maximum number of ratings per physician on German PRWs; (4) the average rating on German PRWs; (5) the website visitor ranking positions of German PRWs; and (6) how these data compare with 2010 results. A random stratified sample of 298 selected physicians from the physician associations in Hamburg and Thuringia was generated. Every selected physician was searched for on the 6 PRWs (Jameda, Imedo, Docinsider, Esando, Topmedic, and Medführer) used in the 2010 study and a PRW, Arztnavigator, launched by Allgemeine Ortskrankenkasse (AOK). The results were as follows: (1) Between 65.1% (194/298) on Imedo and 94.6% (282/298) on AOK-Arztnavigator of the physicians were identified on the selected PRWs. (2) Between 16.4% (49/298) on Esando and 83.2% (248/298) on Jameda of the sample had been rated at least once. (3) The average number of ratings per physician ranged from 1.2 (Esando) to 7.5 (AOK-Arztnavigator). The maximum number of ratings per physician ranged from 3 (Esando) to 115 (Docinsider), indicating an increase compared with the ratings of 2 to 27 in the 2010 study sample. (4) The average converted standardized rating (1=positive, 2=neutral, and 3=negative) ranged from 1.0 (Medführer) to 1.2 (Jameda and Topmedic). (5) Only Jameda (position 317) and Medführer (position 9796) were placed among the top 10,000 visited websites in Germany. Whereas there has been an overall increase in
Effect of daily noise exposure monitoring on annual rates of hearing loss in industrial workers.
Rabinowitz, Peter M; Galusha, Deron; Kirsche, Sharon R; Cullen, Mark R; Slade, Martin D; Dixon-Ernst, Christine
2011-06-01
Occupational noise-induced hearing loss (NIHL) is prevalent, yet evidence on the effectiveness of preventive interventions is lacking. The effectiveness of a new technology allowing workers to monitor daily at-ear noise exposure was analysed. Workers in the hearing conservation program of an aluminium smelter were recruited because of accelerated rates of hearing loss. The intervention consisted of daily monitoring of at-ear noise exposure and regular feedback on exposures from supervisors. The annual rate of change in high frequency hearing average at 2, 3 and 4 kHz before intervention (2000-2004) and 4 years after intervention (2006-2009) was determined. Annual rates of loss were compared between 78 intervention subjects and 234 controls in other company smelters matched for age, gender and high frequency hearing threshold level in 2005. Individuals monitoring daily noise exposure experienced on average no further worsening of high frequency hearing (average rate of hearing change at 2, 3 and 4 kHz = -0.5 dB/year). Matched controls also showed decelerating hearing loss, the difference in rates between the two groups being significant. A secondary comparison of hearing loss showed a similar trend, but the difference was not statistically significant (p = 0.06). Monitoring daily occupational noise exposure inside hearing protection with ongoing administrative feedback apparently reduces the risk of occupational NIHL in industrial workers. Longer follow-up of these workers will help determine the significance of the intervention effect. Intervention studies for the prevention of NIHL need to include appropriate control groups.
Are average and symmetric faces attractive to infants? Discrimination and looking preferences.
Rhodes, Gillian; Geddes, Keren; Jeffery, Linda; Dziurawiec, Suzanne; Clark, Alison
2002-01-01
Young infants prefer to look at faces that adults find attractive, suggesting a biological basis for some face preferences. However, the basis for infant preferences is not known. Adults find average and symmetric faces attractive. We examined whether 5-8-month-old infants discriminate between different levels of averageness and symmetry in faces, and whether they prefer to look at faces with higher levels of these traits. Each infant saw 24 pairs of female faces. Each pair consisted of two versions of the same face differing either in averageness (12 pairs) or symmetry (12 pairs). Data from the mothers confirmed that adults preferred the more average and more symmetric versions in each pair. The infants were sensitive to differences in both averageness and symmetry, but showed no looking preference for the more average or more symmetric versions. On the contrary, longest looks were significantly longer for the less average versions, and both longest looks and first looks were marginally longer for the less symmetric versions. Mean looking times were also longer for the less average and less symmetric versions, but those differences were not significant. We suggest that the infant looking behaviour may reflect a novelty preference rather than an aesthetic preference.
The economic production lot size model extended to include more than one production rate
DEFF Research Database (Denmark)
Larsen, Christian
2001-01-01
between the demand rate and the production rate which minimizes unit production costs, and should be used in an increasing order. Then, given the production rates, we derive closed-form expressions for all optimal runtimes as well as the minimum average cost. This analysis reveals that it is the size...... of the setup cost that determines the need for being able to use several production rates. Finally, we show how to derive a near-optimal solution of the general problem....
An Experimental Study Related to Planning Abilities of Gifted and Average Students
Directory of Open Access Journals (Sweden)
Marilena Z. Leana-Taşcılar
2016-02-01
Gifted students differ from their average peers in psychological, social, emotional and cognitive development. One of these differences in the cognitive domain is related to executive functions. One of the most important executive functions is planning and organization ability. The aim of this study was to compare the planning abilities of gifted students with those of their average peers and to test the effectiveness of a training program on the planning abilities of both gifted and average students. First, students' intelligence and planning abilities were measured, and students were then assigned to either an experimental or a control group. The groups were matched by intelligence and planning ability (experimental: 13 gifted and 8 average; control: 14 gifted and 8 average). In total, 182 students (79 gifted and 103 average) participated in the study. Then, a training program was implemented in the experimental group to find out whether it improved students' planning ability. Results showed that boys had better planning abilities than girls, and gifted students had better planning abilities than their average peers. Significant results were obtained in favor of the experimental group in the posttest scores.
Time-dependent angularly averaged inverse transport
International Nuclear Information System (INIS)
Bal, Guillaume; Jollivet, Alexandre
2009-01-01
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain
Ergodic averages for monotone functions using upper and lower dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2007-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our...... methods are studied in detail for three models using Markov chain Monte Carlo methods and we also discuss various types of other models for which our methods apply....
A note on moving average models for Gaussian random fields
DEFF Research Database (Denmark)
Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.
The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...... basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
The average crossing number of equilateral random polygons
International Nuclear Information System (INIS)
Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A
2003-01-01
In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n-n0)ln(n-n0) + b(n-n0) + c, where a, b and c are constants depending on K and n0 is the minimal number of segments required to form K. The ⟨ACN⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the ⟨ACN⟩ profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length ne(K) at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type K') does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.
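The equilibrium length described above is the crossing point of two ⟨ACN⟩ profiles, which can be located numerically once the fitted constants are known. The sketch below uses hypothetical values of a, b, c and n0 (the paper's fitted constants are not given in the abstract) purely to illustrate the computation:

```python
from math import log

def acn_all_walks(n):
    """Leading-order mean ACN of all closed equilateral walks of length n."""
    return (3.0 / 16.0) * n * log(n)

def acn_knot(n, a, b, c, n0):
    """Fitted mean ACN profile for a fixed knot type K; a, b, c, n0 are
    knot-dependent fit constants (hypothetical values used in the test)."""
    return a * (n - n0) * log(n - n0) + b * (n - n0) + c

def equilibrium_length(f, g, lo, hi, tol=1e-6):
    """Bisection for the crossing point of two profiles on [lo, hi]."""
    d = lambda n: f(n) - g(n)
    assert d(lo) * d(hi) < 0, "bracket must straddle the crossing"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if d(lo) * d(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the all-walks profile grows like (3/16)n ln n, any knot profile with a < 3/16 is eventually overtaken, guaranteeing a crossing for suitable constants.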
Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates
Directory of Open Access Journals (Sweden)
Piotr Białowolski
2012-03-01
The aim of this paper is to construct a forecasting model oriented toward predicting basic macroeconomic variables, namely the GDP growth rate, the unemployment rate, and consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimates (BACE) is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, the survey-based indicators are included with a lag that makes it possible to forecast the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimates is a method allowing for a full and controlled overview of all econometric models that can be obtained from a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.
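BACE-style averaging weights candidate models by an information-criterion approximation to their posterior probability. A minimal sketch of the weighting and coefficient-averaging step, assuming BIC-proportional weights (w_i ∝ exp(-BIC_i/2)); this is a generic illustration, not the authors' exact estimator:

```python
from math import exp

def bace_weights(bics):
    """Approximate posterior model weights from BIC values,
    w_i proportional to exp(-BIC_i / 2)."""
    m = min(bics)  # shift by the minimum for numerical stability
    raw = [exp(-(b - m) / 2.0) for b in bics]
    s = sum(raw)
    return [r / s for r in raw]

def averaged_coefficient(coefs, bics):
    """Model-averaged coefficient: each model contributes its estimate
    (zero if the regressor is excluded) weighted by its model weight."""
    return sum(w * c for w, c in zip(bace_weights(bics), coefs))
```

Models with lower BIC dominate the average, which is how the procedure performs a controlled overview of the whole model family rather than committing to a single specification.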
Medium term municipal solid waste generation prediction by autoregressive integrated moving average
International Nuclear Information System (INIS)
Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan
2014-01-01
Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems, insufficient data, and a lack of strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern lifestyle. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even where data are scarce and will help the municipality properly establish its annual service plan. The results show that an ARIMA(6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952, and the model's forecast residuals lie within the accepted 95% confidence interval.
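Two building blocks of an ARIMA(6,1,0) workflow are easy to show without a full estimation routine: the d = 1 differencing that makes the series stationary (and its inverse for turning differenced forecasts back into waste quantities), and the RMSE metric used to report accuracy. A stdlib-only sketch of just those pieces, not a reimplementation of the paper's fitted model:

```python
from math import sqrt

def difference(series, d=1):
    """Apply d rounds of first differencing (the 'I' in ARIMA(p, d, q))."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def undifference(diffs, first_value):
    """Invert one round of differencing given the first original value,
    recovering the series by cumulative summation."""
    out = [first_value]
    for delta in diffs:
        out.append(out[-1] + delta)
    return out

def rmse(actual, predicted):
    """Root mean square error between observed and forecast values."""
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
```

In practice the AR(6) part would be fitted on the differenced series and its forecasts passed through `undifference` before computing the RMSE against held-out months.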
Medium term municipal solid waste generation prediction by autoregressive integrated moving average
Energy Technology Data Exchange (ETDEWEB)
Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan [Department of Civil and Structural Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)
2014-09-12
Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems, insufficient data, and a lack of strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern lifestyle. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even where data are scarce and will help the municipality properly establish its annual service plan. The results show that an ARIMA(6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952, and the model's forecast residuals lie within the accepted 95% confidence interval.
Effects of a Staff Training Intervention on Seclusion Rates on an Adult Inpatient Psychiatric Unit.
Newman, Julie; Paun, Olimpia; Fogg, Louis
2018-06-01
The current article presents the effects of a 90-minute staff training intervention aimed at reducing inpatient psychiatric seclusion rates through strengthened staff commitment to seclusion alternatives and improved de-escalation skills. The intervention occurred at an 18-bed adult inpatient psychiatric unit whose seclusion rates in 2015 were seven times the national average. Although the project's primary outcome compared patient seclusion rates before and after the intervention, anonymous staff surveys measured several secondary outcomes. Seclusion rates were reduced from a 6-month pre-intervention average of 2.95 seclusion hours per 1,000 patient hours to a 6-month post-intervention average of 0.29 seclusion hours per 1,000 patient hours, a 90.2% reduction. Completed staff surveys showed significant staff knowledge gains, non-significant changes in staff attitudes about seclusion, non-significant changes in staff de-escalation skill confidence, and use of the new resource sheet by only 17% of staff. The key study implication is that time-limited, focused staff training interventions can have a measurable impact on reducing inpatient seclusion rates. [Journal of Psychosocial Nursing and Mental Health Services, 56(6), 23-30.]. Copyright 2018, SLACK Incorporated.
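The outcome metric above, seclusion hours per 1,000 patient hours, and the reported 90.2% reduction are simple rate computations that can be reproduced directly (the 1,000-patient-hour value in the rate test is an illustrative denominator):

```python
def seclusion_rate(seclusion_hours, patient_hours):
    """Seclusion hours normalized per 1,000 patient hours, the standard
    inpatient benchmarking denominator used in the study."""
    return 1000.0 * seclusion_hours / patient_hours

def percent_reduction(pre, post):
    """Percent reduction from a pre-intervention to a post-intervention rate."""
    return 100.0 * (pre - post) / pre
```

Plugging in the study's 6-month averages (2.95 pre, 0.29 post) reproduces the reported 90.2% reduction.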
Adaptive discrete rate and power transmission for spectrum sharing systems
Abdallah, Mohamed M.
2012-04-01
In this paper we develop a framework for optimizing the performance of the secondary link in terms of the average spectral efficiency assuming quantized channel state information (CSI) of the secondary and the secondary-to-primary interference channels available at the secondary transmitter. We consider the problem under the constraints of maximum average interference power levels at the primary receiver. We develop a sub-optimal computationally efficient iterative algorithm for finding the optimal CSI quantizers as well as the discrete power and rate employed at the cognitive transmitter for each quantized CSI level so as to maximize the average spectral efficiency. We show via analysis and simulations that the proposed algorithm converges for Rayleigh fading channels. Our numerical results give the number of bits required to sufficiently represent the CSI to achieve almost the maximum average spectral efficiency attained using full knowledge of the CSI. © 2012 IEEE.
The economic production lot size model extended to include more than one production rate
DEFF Research Database (Denmark)
Larsen, Christian
2005-01-01
production rates should be chosen in the interval between the demand rate and the production rate which minimizes unit production costs, and should be used in an increasing order. Then, given the production rates, we derive closed-form expressions for all optimal runtimes as well as the minimum average cost....... This analysis reveals that it is the size of the setup cost that determines the need for being able to use several production rates. We also show how to derive a near-optimal solution of the general problem....
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
Limit cycles from a cubic reversible system via the third-order averaging method
Directory of Open Access Journals (Sweden)
Linping Peng
2015-04-01
This article concerns the bifurcation of limit cycles from a cubic integrable and non-Hamiltonian system. By using the averaging theory of the first and second orders, we show that under any small cubic homogeneous perturbation, at most two limit cycles bifurcate from the period annulus of the unperturbed system, and this upper bound is sharp. By using the averaging theory of the third order, we show that two is also the maximal number of limit cycles emerging from the period annulus of the unperturbed system.
Evaluations of average level spacings
International Nuclear Information System (INIS)
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, it is extremely difficult, if not totally impossible, to detect a complete sequence of levels without mixing in levels of other parities. Most methods derive the average level spacing by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution of reduced neutron widths. A method that tests both the distribution of level widths and that of level positions is discussed extensively with an example based on ¹⁶⁸Er data. 19 figures, 2 tables
Role of spatial averaging in multicellular gradient sensing.
Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew
2016-05-20
Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
The average inter-crossing number of equilateral random walks and polygons
International Nuclear Information System (INIS)
Diao, Y; Dobay, A; Stasiak, A
2005-01-01
In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ICN between two equilateral random walks of the same length n is approximately linear in n, and we were able to determine the prefactor of the linear term, which is a = 3 ln 2/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ICN is also linear in n, but the prefactor of the linear term differs from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance ρ apart and ρ is small compared to n. We propose a fitting model intended to capture the theoretical asymptotic behaviour of the mean average ICN for large values of ρ. Our simulation results show that the model in fact works very well for the entire range of ρ. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) still approaches infinity as the length of the other random walk (polygon) approaches infinity. The data provided by our simulations match our theoretical predictions very well.
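The reported linear scaling can be sketched numerically. The snippet below is illustrative only: the prefactor is taken directly from the abstract (a = 3 ln 2/8), not derived independently, and the helper name is made up here.

```python
import math

# Prefactor of the linear term reported for two equilateral random
# walks of equal length n (valid when the separation rho is small
# compared to n): mean ICN ~ a * n, with a = 3 ln 2 / 8.
A_WALK = 3 * math.log(2) / 8

def predicted_mean_icn(n: int, prefactor: float = A_WALK) -> float:
    """Leading-order prediction of the mean average inter-crossing
    number between two equilateral random walks of length n."""
    return prefactor * n

print(round(A_WALK, 4))  # 0.2599
```

For random polygons the same linear form holds with a different prefactor, so only the constant would change.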
Strain rate sensitivity studies on bulk nanocrystalline aluminium by nanoindentation
Energy Technology Data Exchange (ETDEWEB)
Varam, Sreedevi; Rajulapati, Koteswararao V., E-mail: kvrse@uohyd.ernet.in; Bhanu Sankara Rao, K.
2014-02-05
Nanocrystalline aluminium powder synthesized by a high-energy ball-milling process was characterized by X-ray diffraction (XRD) and transmission electron microscopy (TEM). The studies indicated an average grain size of ∼42 nm. The powder was consolidated by high-pressure compaction in a uniaxial press at room temperature, applying a pressure of 1.5 GPa. The cold-compacted bulk sample, with a density of ∼98%, was subjected to nanoindentation, which yielded average hardness and elastic modulus values of 1.67 ± 0.09 GPa and 83 ± 8 GPa, respectively, at a peak force of 8000 μN and a strain rate of 10⁻² s⁻¹. Achieving good strength along with good ductility is challenging in nanocrystalline metals. When samples large enough to measure ductility and other mechanical properties per ASTM standards are not available, as is the case with nanocrystalline materials, nanoindentation is a very promising technique for evaluating strain rate sensitivity. Strain rate sensitivity is a good measure of ductility, and in the present work it is measured by performing indentation at various loads with varying loading rates. Strain rate sensitivity values of 0.024–0.054 are obtained for nanocrystalline Al, which are high compared to conventional coarse-grained Al. In addition, a scanning probe microscopy (SPM) image of the indent shows some plastically flowed region around the indent, suggesting that this nanocrystalline aluminium is ductile.
Evaluation of average glandular dose in digital and conventional systems of the mammography
International Nuclear Information System (INIS)
Xavier, Aline C.S.; Barros, Vinicius S.M.; Khoury, Hellen J.
2014-01-01
Mammography is currently the most effective method for the diagnosis and detection of breast pathologies. The main interest in this kind of exam comes from the high incidence rate of breast cancer and the need for high-quality images for accurate diagnosis. Digital mammography systems have several advantages compared to conventional systems; however, the use of digital imaging systems is not always integrated into an image acquisition protocol. It is therefore questionable whether digital systems truly reduce the dose received by the patient, because they are often introduced into clinics without optimization of the image acquisition protocols. The aim of this study is to estimate the incident air kerma and average glandular dose (AGD) in patients undergoing conventional and digital mammography in Recife. This study was conducted with 650 patients in three hospitals. The incident air kerma was estimated from the measured tube output of the equipment and the irradiation parameters used for each patient. From these results, and using the methodology proposed by Dance et al., the average glandular dose was calculated. The results show that the lowest AGD value was found with the conventional screen-film system, indicating that the image acquisition parameters of the digital systems are not optimized. It was also observed that the institutions with digital systems use lower breast compression values than the conventional ones. (author)
Chromospheric oscillations observed with OSO 8. III. Average phase spectra for Si II
International Nuclear Information System (INIS)
White, O.R.; Athay, R.G.
1979-01-01
Time series of intensity and Doppler-shift fluctuations in the Si II emission lines λ816.93 and λ817.45 are Fourier analyzed to determine the frequency variation of the phase differences between intensity and velocity, and between these two lines, which are formed 300 km apart in the middle chromosphere. Average phase spectra show that oscillations between 2 and 9 mHz in the two lines have time delays of 35 to 40 s, which is consistent with the upward propagation of sound waves at 8.6–7.5 km s⁻¹. In this same frequency band, near 3 mHz, maximum brightness leads maximum blueshift by 60°. At frequencies above 11 mHz, where the power spectrum is flat, the phase differences are uncertain, but approximately 65% of the cases indicate upward propagation. At these higher frequencies, the phase lead between intensity and blue Doppler shift ranges from 0° to 180°, with an average value of 90°. However, the phase estimates in this upper band are corrupted by both aliasing and randomness inherent to the measured signals. Phase differences in the two narrow spectral features seen at 10.5 and 27 mHz in the power spectra are shown to be consistent with the properties expected for aliases of the rotation rate of the spacecraft wheel section.
International Nuclear Information System (INIS)
Ichiguchi, Katsuji
1998-01-01
A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)
A study on gamma dose rate in Seoul (I)
International Nuclear Information System (INIS)
Kim, You Hyun; Kim, Chang Kyun; Choi, Jong Hak; Kim, Jeong Min
2001-01-01
This study was conducted to determine the gamma dose rate in Seoul from January to December 2000, with the following results: the annual gamma dose rate in Seoul averaged 17.24 μR/hr, and in the Seoul subway it averaged 14.96 μR/hr. The highest annual gamma dose rate was found in Dong-daemon ku. The annual gamma dose rate in Seoul was higher in autumn than in winter.
Effects of ocean acidification on the dissolution rates of reef-coral skeletons
Directory of Open Access Journals (Sweden)
Robert van Woesik
2013-11-01
Ocean acidification threatens the foundation of tropical coral reefs. This study investigated three aspects of ocean acidification: (i) the rates at which perforate and imperforate coral-colony skeletons passively dissolve when pH is 7.8, which is predicted to occur globally by 2100, (ii) the rates of passive dissolution of corals with respect to coral-colony surface areas, and (iii) the comparative rates of a vertical reef-growth model, incorporating passive dissolution rates, and predicted sea-level rise. By 2100, when the ocean pH is expected to be 7.8, perforate Montipora coral skeletons will lose on average 15 kg CaCO₃ m⁻² y⁻¹, which corresponds to a vertical reduction of the reef framework of approximately 10.5 mm per year. This rate of passive dissolution is higher than the average rate of reef growth over the last several millennia and suggests that reefs composed of perforate Montipora coral skeletons will have trouble keeping up with sea-level rise under ocean acidification. Reefs composed primarily of imperforate coral skeletons will not likely dissolve as rapidly, but our model shows they will also have trouble keeping up with sea-level rise by 2050.
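The link between the two figures quoted in the abstract (15 kg CaCO3 m⁻² y⁻¹ and roughly 10.5 mm y⁻¹) is a simple mass-to-thickness conversion. The effective bulk density below is an assumed value chosen to be consistent with those numbers; the paper's actual density figure is not given here.

```python
# Convert an areal dissolution rate (kg CaCO3 per m^2 per year) into a
# vertical framework loss (mm per year). The effective framework bulk
# density is an ASSUMED value (~1430 kg/m^3) that reproduces the
# abstract's 15 kg m^-2 y^-1 -> ~10.5 mm/y correspondence.
EFFECTIVE_DENSITY_KG_M3 = 1430.0

def vertical_loss_mm_per_year(mass_loss_kg_m2_y: float,
                              density_kg_m3: float = EFFECTIVE_DENSITY_KG_M3) -> float:
    """Thickness lost per year = mass lost per area / bulk density."""
    return mass_loss_kg_m2_y / density_kg_m3 * 1000.0

print(round(vertical_loss_mm_per_year(15.0), 1))  # 10.5
```

A denser (imperforate) framework would give a smaller vertical loss for the same mass flux, which matches the abstract's contrast between perforate and imperforate skeletons.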
Erfanian, A.; Fomenko, L.; Wang, G.
2016-12-01
The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for drawing conclusions in major coordinated studies, e.g. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially prohibitive for regional climate modeling, since model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This gives the new method a theoretical advantage in addition to its reduced computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
Averaging for solitons with nonlinearity management
International Nuclear Information System (INIS)
Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.
2003-01-01
We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations
Canada’s 2010 Tax Competitiveness Ranking: Moving to the Average but Biased Against Services
Directory of Open Access Journals (Sweden)
Duanjie Chen
2011-02-01
For the first time since 1975 (the year Canada's marginal effective tax rates were first measured), Canada has become the most tax-competitive country among G-7 states with respect to taxation of capital investment. Even more remarkably, Canada accomplished this feat within a mere six years, having previously been the least tax-competitive G-7 member. Even in comparison to strongly growing emerging economies, Canada's 2010 marginal effective tax rate on capital is still above average. The planned reductions in federal and provincial corporate taxes by 2013 will reduce Canada's effective tax rate on new investments to 18.4 percent, below the Organisation for Economic Co-operation and Development (OECD) 2010 average and close to the average of the 50 non-OECD countries studied. This remarkable change in Canada's tax competitiveness must be maintained in the coming years, as countries are continually reducing their business taxation despite the recent fiscal pressures arising from the 2008-9 downturn in the world economy. Many countries have forged ahead with significant reforms designed to increase tax competitiveness and improve tax neutrality, including Greece, Israel, Japan, New Zealand, Taiwan and the United Kingdom. The continuing bias in Canada's corporate income tax structure favouring manufacturing and processing business warrants close scrutiny. Measured by the difference in the marginal effective tax rate on capital between manufacturing and the broad range of service sectors, Canada has the greatest gap in tax burdens between manufacturing and services among OECD countries. Surprisingly, preferential tax treatment (such as fast write-offs and investment tax credits) favouring only manufacturing and processing activities has become the norm in Canada, although it does not exist in most developed economies.
Boets, Pieter; Goethals, Peter L. M.
2016-01-01
Growing travel and trade threaten biodiversity by increasing the rate of biological invasions globally, whether through accidental or intentional introduction. Avoiding these impacts by forecasting invasions and impeding further spread is therefore of utmost importance. In this study, three forecasting approaches were tested and combined to predict the invasive behaviour of the alien macrophyte Lemna minuta in comparison with the native Lemna minor: the functional response (FR) and relative growth rate (RGR), supplemented with a combined biomass-based nutrient removal (BBNR). Based on the idea that widespread invasive species are more successful competitors than local, native species, a higher FR and RGR were expected for the invasive than for the native species. Five different nutrient concentrations were tested, ranging from low (4 mg N L⁻¹ and 1 mg P L⁻¹) to high (70 mg N L⁻¹ and 21 mg P L⁻¹). After four days, a significant amount of nutrients was removed by both Lemna spp., though significant differences between L. minor and L. minuta were only observed at the lower nutrient concentrations (below 17 mg N L⁻¹ and 6 mg P L⁻¹), with higher nutrient removal exerted by L. minor. The derived FR did not show a clear dominance of the invasive L. minuta, contradicting field observations. Similarly, the RGR ranged from 0.4 to 0.6 d⁻¹ but did not show a biomass-based dominance of L. minuta (0.5 ± 0.1 d⁻¹ versus 0.63 ± 0.09 d⁻¹ for L. minor). BBNR showed results similar to the FR. Contrary to our expectations, all three approaches yielded higher values for L. minor. Consequently, based on our results, FR is sensitive to differences, though it contradicted the expectations, while RGR and BBNR do not provide sufficient power to differentiate between a native and an invasive alien macrophyte, and should be supplemented with additional ecosystem-based experiments to determine the invasion impact. PMID:27861603
Measurement of 89Y(n,2n) spectral averaged cross section in LR-0 special core reactor spectrum
Košťál, Michal; Losa, Evžen; Baroň, Petr; Šolc, Jaroslav; Švadlenková, Marie; Koleška, Michal; Mareček, Martin; Uhlíř, Jan
2017-12-01
The present paper describes the measurement of the 89Y(n,2n)88Y reaction rate in the well-defined reactor spectrum of a special core assembled in the LR-0 reactor, and compares this value with simulation results. The reaction rate is derived from the activity of 88Y, measured by gamma-ray spectrometry of an irradiated Y2O3 sample. The resulting cross section averaged over the special core spectrum is 43.9 ± 1.5 μb, while averaged over the 235U fission spectrum it is 0.172 ± 0.006 mb. This cross section is important because it is used as a high-energy neutron monitor and is therefore included in the International Reactor Dosimetry and Fusion File. Reaction rates were calculated with the MCNP6 code using the ENDF/B-VII.0, JEFF-3.1, JEFF-3.2, JENDL-3.3, JENDL-4, ROSFOND-2010, CENDL-3.1 and IRDFF nuclear data libraries. The agreement with the uranium description in the CIELO library is very good, while with the ENDF/B-VII.0 description of uranium an underprediction of about 10% on average can be observed.
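The quantity being reported is the cross section weighted by the neutron spectrum, ⟨σ⟩ = ∫σ(E)φ(E)dE / ∫φ(E)dE. The sketch below shows that average computed on tabulated data with the trapezoidal rule; the energy grid, spectrum shape, and cross-section values are made up for illustration and are not the LR-0 data.

```python
import numpy as np

def spectrum_averaged_xs(energy, sigma, flux):
    """Spectrum-averaged cross section on a tabulated grid:
    <sigma> = integral(sigma(E) * phi(E) dE) / integral(phi(E) dE),
    evaluated with the trapezoidal rule."""
    e = np.asarray(energy, dtype=float)
    s = np.asarray(sigma, dtype=float)
    f = np.asarray(flux, dtype=float)
    de = np.diff(e)
    num = np.sum((s[1:] * f[1:] + s[:-1] * f[:-1]) * de) / 2.0
    den = np.sum((f[1:] + f[:-1]) * de) / 2.0
    return num / den

# Sanity check: a constant cross section averages to itself,
# whatever the spectrum shape (toy grid in MeV, toy spectrum).
e = np.linspace(10.0, 20.0, 101)
phi = np.exp(-e / 1.29)
print(round(spectrum_averaged_xs(e, np.full_like(e, 0.044), phi), 3))  # 0.044
```

With an energy-dependent σ(E), the result depends on how much of the flux lies above the (n,2n) threshold, which is why this reaction works as a high-energy monitor.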
The combined bowling rate as a measure of bowling performance in ...
African Journals Online (AJOL)
A single measure that can be used to assess the performance of bowlers in cricket is defined. This study shows how it can be used to rank bowlers. The performance of bowlers is generally measured by using three different criteria, i.e. the average number of runs conceded per wicket taken (A), the economy rate (E), which ...
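The abstract is truncated, but in the cricket-statistics literature the combined bowling rate (CBR) is commonly defined as the harmonic mean of the three criteria it lists (average A, economy rate E, strike rate S). The sketch below assumes that definition; the bowler's figures are hypothetical.

```python
def combined_bowling_rate(average: float, economy: float, strike_rate: float) -> float:
    """Harmonic mean of bowling average (A), economy rate (E) and
    strike rate (S) -- the usual definition of the combined bowling
    rate (CBR); assumed here, since the abstract is truncated."""
    return 3.0 / (1.0 / average + 1.0 / economy + 1.0 / strike_rate)

# Hypothetical bowler: A = 20 runs/wicket, E = 4 runs/over, S = 30 balls/wicket
print(round(combined_bowling_rate(20.0, 4.0, 30.0), 6))  # 9.0
```

Lower values are better on all three criteria, so a lower CBR ranks a bowler higher; the harmonic mean keeps any single excellent component from dominating the score.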
Directory of Open Access Journals (Sweden)
Raluca Necula
2016-08-01
Romania's economic growth is a target that can be achieved only if all economic sectors are aligned with the Europe 2020 Strategy. As provided in the Convergence Programme 2014-2020, this objective entails a series of steps that Romania must rigorously follow in order to ensure a real convergence process towards the level of the developed European Union (EU) countries of the Euro Area. This paper presents an overview of the economy synthesized in its major result, namely the dynamics of total Gross Domestic Product (GDP) per capita and agricultural GDP per capita, and compares them with the EU 28 average and the Euro Area average. GDP trends are calculated using linear and quadratic functions, and the convergence equation, applied with annual growth rates, is used to calculate the number of years that separate Romania from the level of other countries. The results show a strong economic boost for Romania, with high annual growth rates both for GDP per capita (US$) and for agricultural GDP per capita (US$), but also a fairly large gap between its development level and the EU 28 and Euro Area averages.
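A convergence equation of the kind described is typically solved for time from compound growth, Y₀(1+g)ᵗ = Y_target, giving t = ln(Y_target/Y₀)/ln(1+g). The figures below are hypothetical, not the paper's data.

```python
import math

def years_to_converge(gdp_current: float, gdp_target: float, growth_rate: float) -> float:
    """Years needed for gdp_current to reach gdp_target at a constant
    annual growth rate g, from gdp_current * (1 + g)**t = gdp_target."""
    return math.log(gdp_target / gdp_current) / math.log(1.0 + growth_rate)

# Hypothetical: GDP/capita of 10,000 US$ converging to 20,000 US$ at 5%/year
print(round(years_to_converge(10_000, 20_000, 0.05), 1))  # 14.2
```

If the target economy is itself growing, the same formula applies with g replaced by the growth-rate differential between the two economies.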
A Divergence Median-based Geometric Detector with A Weighted Averaging Filter
Hua, Xiaoqiang; Cheng, Yongqiang; Li, Yubo; Wang, Hongqiang; Qin, Yuliang
2018-01-01
To overcome the performance degradation of the classical fast Fourier transform (FFT)-based constant false alarm rate detector with limited sample data, a divergence median-based geometric detector on the Riemannian manifold of Hermitian positive definite matrices is proposed in this paper. In particular, an autocorrelation matrix is used to model the correlation of the sample data. This modeling avoids the poor Doppler resolution as well as the energy spread of the Doppler filter banks that result from the FFT. Moreover, a weighted averaging filter, conceived from the philosophy of bilateral filtering in image denoising, is proposed and combined with the geometric detection framework. As the weighted averaging filter acts as clutter suppression, the performance of the geometric detector is improved. Numerical experiments are given to validate the effectiveness of the proposed method.
2010-07-01
... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
SEASONAL AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN
Directory of Open Access Journals (Sweden)
VIGH MELINDA
2015-03-01
The Râul Negru hydrographic basin is a well-individualised physical-geographical unit inside the Braşov Depression. The flow is controlled by six hydrometric stations placed on the main river and on two important tributaries. The database for the seasonal flow analysis contains the discharges from 1950-2012. The results of the data analysis show significant space-time differences between the multiannual seasonal averages. Some interesting conclusions can be obtained by comparing abundant and scarce periods. The flow analysis was made using seasonal charts Q = f(T). The similarities come from the basin's relative homogeneity, and the differences from the flow's evolution and trend. Flow variation is analysed using the variation coefficient, and in some cases significant differences in Cv values appear. The trends of the Cv values are also analysed according to the basins' average altitude.
Improved averaging for non-null interferometry
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
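The reject-then-average idea in this abstract can be sketched in a few lines. Everything below is a minimal illustration, not the authors' algorithm: the NaN-as-void convention, the single void-fraction threshold, and the function name are assumptions, and the real method additionally detects unwrapping artifacts and removes alignment drift.

```python
import numpy as np

def robust_phase_average(phase_maps, max_void_fraction=0.2):
    """Pixel-wise mean and standard deviation over a stack of phase
    maps, with NaN marking void or unreliable pixels.

    Maps whose void fraction exceeds max_void_fraction are rejected
    outright (a stand-in for large-area defect detection); remaining
    voids are excluded pixel-wise so small defects do not bias the
    variability estimate.
    """
    stack = np.asarray(phase_maps, dtype=float)
    void_fraction = np.isnan(stack).mean(axis=(1, 2))
    kept = stack[void_fraction <= max_void_fraction]
    mean_map = np.nanmean(kept, axis=0)           # per-pixel average
    std_map = np.nanstd(kept, axis=0, ddof=1)     # per-pixel variability
    return mean_map, std_map, len(kept)
```

A stack of three 4x4 maps where one map is entirely void would yield `len(kept) == 2`, and the mean and standard-deviation maps would be computed from the two surviving maps only.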
Phosphasalen indium complexes showing high rates and isoselectivities in rac-lactide polymerizations
Energy Technology Data Exchange (ETDEWEB)
Myers, Dominic; White, Andrew J.P. [Department of Chemistry, Imperial College London (United Kingdom); Forsyth, Craig M. [School of Chemistry, Monash University, Clayton, VIC (Australia); Bown, Mark [CSIRO Manufacturing, Bayview Avenue, Clayton, VIC (Australia); Williams, Charlotte K. [Department of Chemistry, Oxford University (United Kingdom)
2017-05-02
Polylactide (PLA) is the leading bioderived polymer produced commercially by the metal-catalyzed ring-opening polymerization of lactide. Control over tacticity to produce stereoblock PLA from rac-lactide improves thermal properties but is an outstanding challenge. Here, phosphasalen indium catalysts feature high rates (30±3 M⁻¹ min⁻¹, THF, 298 K), high control, low loadings (0.2 mol %), and isoselectivity (Pᵢ = 0.92, THF, 258 K). Furthermore, the phosphasalen indium catalysts do not require any chiral additives. (copyright 2017 Wiley-VCH Verlag GmbH and Co. KGaA, Weinheim)
DEFF Research Database (Denmark)
Fatum, Rasmus; Pedersen, Jesper; Sørensen, Peter Norman
This paper investigates the intraday effects of unannounced foreign exchange intervention on bid-ask exchange rate spreads using official intraday intervention data provided by the Danish central bank. Our starting point is a simple theoretical model of the bid-ask spread which we use to formulate … exert a significant influence on the exchange rate spread, but in opposite directions: intervention purchases of the smaller currency, on average, reduce the spread, while intervention sales, on average, increase the spread. We also show that intervention only affects the exchange rate spread when … the state of the market is not abnormally volatile. Our results are consistent with the notion that illiquidity arises when traders fear speculative pressure against the smaller currency and confirm the asymmetry hypothesis of our theoretical model. …
Roelfs, David J; Shor, Eran; Blank, Aharon; Schwartz, Joseph E
2015-05-01
Individual-level unemployment has been consistently linked to poor health and higher mortality, but some scholars have suggested that the negative effect of job loss may be lower during times and in places where aggregate unemployment rates are high. We review three logics associated with this moderation hypothesis: health selection, social isolation, and unemployment stigma. We then test whether aggregate unemployment rates moderate the individual-level association between unemployment and all-cause mortality. We use six meta-regression models (each using a different measure of the aggregate unemployment rate) based on 62 relative all-cause mortality risk estimates from 36 studies (from 15 nations). We find that the magnitude of the individual-level unemployment-mortality association is approximately the same during periods of high and low aggregate-level unemployment. Model coefficients (exponentiated) were 1.01 for the crude unemployment rate (P = .27), 0.94 for the change in unemployment rate from the previous year (P = .46), 1.01 for the deviation of the unemployment rate from the 5-year running average (P = .87), 1.01 for the deviation of the unemployment rate from the 10-year running average (P = .73), 1.01 for the deviation of the unemployment rate from the overall average (measured as a continuous variable; P = .61), and showed no variation across unemployment levels when the deviation of the unemployment rate from the overall average was measured categorically. Heterogeneity between studies was significant (P …) … unemployment experiences change when macroeconomic conditions change. Efforts to ameliorate the negative social and economic consequences of unemployment should continue to focus on the individual and should be maintained regardless of periodic changes in macroeconomic conditions. Copyright © 2015 Elsevier Inc. All rights reserved.
Walker, K A; Mellish, J E; Weary, D M
2011-10-01
This study assessed the heart rate, breathing rate and behavioural responses of 12 juvenile Steller sea lions during hot-iron branding under isoflurane anaesthesia. Physiological and behavioural measures were recorded in four periods: baseline (five minutes), sham branding (one minute), branding (approximately 2.7 minutes) and postbranding (five minutes). No difference in heart rate was noted from baseline to sham branding, but heart rate increased from a mean (sem) of 78.3 (2.4) bpm in the baseline period to 85.6 (2.5) bpm in the branding period. Heart rate remained elevated in the postbranding period, averaging 84.7 (2.5) bpm. Breathing rate averaged 2.5 (1.0) breaths/minute in the baseline and sham branding periods, increased to 8.9 (1.0) breaths/minute during branding, but returned to baseline by the postbranding period. Behaviourally, half of the sea lions exhibited trembling and head and shoulder movements during branding.
A time-averaged cosmic ray propagation theory
International Nuclear Information System (INIS)
Klimas, A.J.
1975-01-01
An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de
International Nuclear Information System (INIS)
Wang Jianqing; Fujiwara, Osamu; Kodera, Sachiko; Watanabe, Soichi
2006-01-01
Due to the difficulty of measuring the specific absorption rate (SAR) in an actual human body under electromagnetic radio-frequency (RF) exposure, various compliance assessment procedures use the incident electric field or power density as a reference level, which should never yield a larger whole-body average SAR than the basic safety limit. The relationship between the reference level and the whole-body average SAR, however, was established mainly from numerical calculations for highly simplified human models dozens of years ago, and its validity is being questioned by the latest calculation results. In verifying the validity of the reference level with respect to the basic SAR limit for RF exposure, it is essential to have highly accurate human models and numerical code. In this study, we made a detailed error analysis of the whole-body average SAR calculation for the finite-difference time-domain (FDTD) method in conjunction with perfectly matched layer (PML) absorbing boundaries. We derived a basic rule for the PML employment based on a dielectric sphere and the Mie theory solution. We then attempted to clarify to what extent the whole-body average SAR may reach using an anatomically based Japanese adult model and a scaled child model. The results show that the whole-body average SAR under the ICNIRP reference level exceeds the basic safety limit by nearly 30% for the child model, both at the resonance frequency and in the 2 GHz band
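The quantity under test, the whole-body average SAR, is total absorbed RF power divided by total body mass. A voxel-based sketch of that bookkeeping for FDTD output follows; the function and argument names and the uniform-voxel assumption are illustrative, not the authors' code.

```python
import numpy as np

def whole_body_average_sar(e_rms, conductivity, density, voxel_volume):
    """Whole-body average SAR (W/kg) from voxelized FDTD output.

    e_rms        : RMS electric-field magnitude per tissue voxel (V/m)
    conductivity : tissue conductivity per voxel (S/m)
    density      : tissue mass density per voxel (kg/m^3)
    voxel_volume : volume of a single voxel (m^3), assumed uniform

    Local absorbed power density is sigma * E_rms^2 (no factor 1/2,
    which applies only when E is a peak amplitude).
    """
    absorbed_power = np.sum(conductivity * np.square(e_rms)) * voxel_volume  # W
    total_mass = np.sum(density) * voxel_volume                              # kg
    return absorbed_power / total_mass
```

For a uniform toy body with σ = 0.5 S/m, E_rms = 10 V/m and ρ = 1000 kg/m³, every voxel absorbs 50 W/m³ and the whole-body average SAR is 0.05 W/kg, independent of body size; in a real model the heterogeneous tissue maps drive the result.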
Improving consensus structure by eliminating averaging artifacts
Directory of Open Access Journals (Sweden)
KC Dukka B
2009-03-01
Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed; structure averaging is also commonly performed in RNA secondary structure prediction [2].
Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter, so it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
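For contrast with the flexible method proposed above, conventional TDA can be sketched in a few lines: cut the signal into segments of one integer-sample period and average them, which attenuates asynchronous noise by roughly 1/√N. The period, signal and noise level below are invented for illustration; PCE arises precisely when, unlike here, the true period is not an integer number of samples.

```python
import math
import random

def time_domain_average(signal, period):
    """Conventional TDA: average an integer number of whole periods.
    The truncation to whole integer-sample periods is where period
    cutting error (PCE) enters for non-integer true periods."""
    n_periods = len(signal) // period
    return [
        sum(signal[k * period + i] for k in range(n_periods)) / n_periods
        for i in range(period)
    ]

random.seed(0)  # deterministic noise for the demonstration
period = 50
clean = [math.sin(2 * math.pi * i / period) for i in range(period)]
noisy = [clean[i % period] + random.gauss(0, 0.5) for i in range(period * 200)]
avg = time_domain_average(noisy, period)
# Averaging 200 periods shrinks the noise standard deviation by ~1/sqrt(200).
```

One averaged period of length 50 comes back close to the clean sinusoid even though each raw sample carries heavy Gaussian noise.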
Averaged head phantoms from magnetic resonance images of Korean children and young adults
Han, Miran; Lee, Ae-Kyoung; Choi, Hyung-Do; Jung, Yong Wook; Park, Jin Seo
2018-02-01
Increased use of mobile phones raises concerns about the health risks of electromagnetic radiation. Phantom heads are routinely used for radiofrequency dosimetry simulations, and the purpose of this study was to construct averaged phantom heads for children and young adults. Using magnetic resonance images (MRI), sectioned cadaver images, and a hybrid approach, we initially built template phantoms representing 6-, 9-, 12-, 15-year-old children and young adults. Our subsequent approach revised the template phantoms using 29 averaged items that were identified by averaging the MRI data from 500 children and young adults. In females, the brain size and cranium thickness peaked in the early teens and then decreased. This is contrary to what was observed in males, where brain size and cranium thicknesses either plateaued or grew continuously. The overall shape of brains was spherical in children and became ellipsoidal by adulthood. In this study, we devised a method to build averaged phantom heads by constructing surface and voxel models. The surface model could be used for phantom manipulation, whereas the voxel model could be used for compliance test of specific absorption rate (SAR) for users of mobile phones or other electronic devices.
The rating reliability calculator
Directory of Open Access Journals (Sweden)
Solomon David J
2004-04-01
Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to obtain complete rating data. I would welcome other researchers revising and enhancing the program.
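The Spearman-Brown prophecy formula mentioned in the Results is simple enough to state directly: the predicted reliability of the average of k ratings is k·r/(1 + (k−1)·r), where r is the reliability of a single rating. A minimal sketch (the example reliabilities are invented, not from the paper):

```python
def spearman_brown(r_single, k):
    """Spearman-Brown prophecy formula: predicted reliability of the
    average of k ratings, given reliability r_single of one rating."""
    return k * r_single / (1 + (k - 1) * r_single)

# One judge with reliability 0.40; what does averaging 4 judges buy?
r4 = spearman_brown(0.40, 4)
```

With a single-rating reliability of 0.40, averaging four judges lifts the predicted reliability to about 0.73; the formula approaches 1 as k grows, but with diminishing returns.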
International Nuclear Information System (INIS)
Sesnic, S.S.; Bitter, M.; Hill, K.W.; Hiroe, S.; Hulse, R.; Shimada, M.; Stratton, B.; von Goeler, S.
1986-05-01
The soft x-ray continuum radiation in TFTR low density neutral beam discharges can be much lower than its theoretical value obtained by assuming a corona equilibrium. This reduced continuum radiation is caused by an ionization equilibrium shift toward lower states, which strongly changes the value of the average recombination coefficient of metallic impurities, γ̄, even for only slight changes in the average charge, Z̄. The primary agent for this shift is the charge exchange between the highly ionized impurity ions and the neutral hydrogen, rather than impurity transport, because the central density of the neutral hydrogen is strongly enhanced at lower plasma densities with intense beam injection. In the extreme case of low density, high neutral beam power TFTR operation (energetic ion mode) the reduction in γ̄ can be as much as one-half to two-thirds. We calculate the parametric dependence of γ̄ and Z̄ for Ti, Cr, Fe, and Ni impurities on neutral density (equivalent to beam power), electron temperature, and electron density. These values are obtained by using either a one-dimensional impurity transport code (MIST) or a zero-dimensional code with a finite particle confinement time. As an example, we show the variation of γ̄ and Z̄ in different TFTR discharges
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
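The variance segregation described above rests on the standard BMA decomposition: total predictive variance equals the probability-weighted within-model variance plus the between-model variance of the model means. A minimal one-level sketch (the posterior probabilities and moments are invented; the HBMA tree applies this decomposition recursively across uncertain model components):

```python
def bma_moments(probs, means, variances):
    """One level of Bayesian model averaging.
    Returns (averaged mean, within-model variance, between-model variance);
    total predictive variance = within + between."""
    mean = sum(p * m for p, m in zip(probs, means))
    within = sum(p * v for p, v in zip(probs, variances))
    between = sum(p * (m - mean) ** 2 for p, m in zip(probs, means))
    return mean, within, between

probs = [0.5, 0.3, 0.2]       # posterior model probabilities (sum to 1)
means = [10.0, 12.0, 9.0]     # each model's prediction
variances = [1.0, 2.0, 1.5]   # each model's own (within-model) variance
mean, within, between = bma_moments(probs, means, variances)
total = within + between
```

Comparing `within` and `between` shows whether the dominant uncertainty comes from within the candidate models or from disagreement between them, which is the prioritization the abstract describes.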
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average (ARFIMA) model, and its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.
Susi, Louis; Reader, Al; Nusstein, John; Beck, Mike; Weaver, Joel; Drum, Melissa
2008-01-01
The authors, using a crossover design, randomly administered, in a single-blind manner, 3 primary intraosseous injections to 61 subjects using: the Wand local anesthetic system at a deposition rate of 45 seconds (fast injection); the Wand local anesthetic system at a deposition rate of 4 minutes and 45 seconds (slow injection); a conventional syringe injection at a deposition rate of 4 minutes and 45 seconds (slow injection), in 3 separate appointments spaced at least 3 weeks apart. A pulse oximeter measured heart rate (pulse). The results demonstrated the mean maximum heart rate was statistically higher with the fast intraosseous injection (average 21 to 28 beats/min increase) than either of the 2 slow intraosseous injections (average 10 to 12 beats/min increase). There was no statistically significant difference between the 2 slow injections. We concluded that an intraosseous injection of 1.4 mL of 2% lidocaine with 1 : 100,000 epinephrine with the Wand at a 45-second rate of anesthetic deposition resulted in a significantly higher heart rate when compared with a 4-minute and 45-second anesthetic solution deposition using either the Wand or traditional syringe.
Weiss, Shennan A; Orosz, Iren; Salamon, Noriko; Moy, Stephanie; Wei, Linqing; Van ’t Klooster, Maryse A; Knight, Robert T; Harper, Ronald M; Bragin, Anatol; Fried, Itzhak; Engel, Jerome; Staba, Richard J
2016-01-01
Objective Ripples (80–150 Hz) recorded from clinical macroelectrodes have been shown to be an accurate biomarker of epileptogenic brain tissue. We investigated coupling between epileptiform spike phase and ripple amplitude to better understand the mechanisms that generate this type of pathological ripple (pRipple) event. Methods We quantified phase amplitude coupling (PAC) between epileptiform EEG spike phase and ripple amplitude recorded from intracranial depth macroelectrodes during episodes of sleep in 12 patients with mesial temporal lobe epilepsy. PAC was determined by 1) a phasor transform that corresponds to the strength and rate of ripples coupled with spikes, and 2) a ripple-triggered average to measure the strength, morphology, and spectral frequency of the modulating and modulated signals. Coupling strength was evaluated in relation to recording sites within and outside the seizure onset zone (SOZ). Results Both the phasor transform and ripple-triggered averaging methods showed ripple amplitude was often robustly coupled with epileptiform EEG spike phase. Coupling was more regularly found inside than outside the SOZ, and coupling strength correlated with the likelihood that a macroelectrode's location was within the SOZ. The ratio of rates of ripples coupled with EEG spikes inside the SOZ to rates of coupled ripples outside the SOZ was greater than the corresponding ratio for ripples on spikes detected irrespective of coupling. The changes in excitability reflected as epileptiform spikes may also cause clusters of pathologically interconnected bursting neurons to grow and synchronize into aberrantly large neuronal assemblies. PMID:27723936
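A simplified stand-in for the coupling-strength idea (not the paper's exact phasor transform): collect the spike phase at which each ripple occurs and take the mean resultant vector length, which is 1 for perfect phase locking and near 0 for uniformly spread phases. The phase lists below are invented for illustration:

```python
import cmath
import math

def coupling_strength(phases):
    """Mean resultant vector length of the spike phases (radians) at which
    ripples occur: 1 = perfect phase locking, ~0 = no coupling."""
    z = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(z)

# Ripples clustered near spike phase 0 -> strong coupling.
locked = [0.1, -0.05, 0.2, 0.0, -0.1]
# Ripples at evenly spread phases -> the unit vectors cancel.
uniform = [2 * math.pi * k / 8 for k in range(8)]
```

Comparing this statistic for channels inside versus outside a region of interest mirrors, in miniature, the SOZ comparison described in the Results.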
Error rates of a full-duplex system over EGK fading channels subject to laplacian interference
Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim
2017-01-01
modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function
Simple relations between mean passage times and Kramers' stationary rate
International Nuclear Information System (INIS)
Boilley, David; Jurado, Beatriz; Schmitt, Christelle
2004-01-01
The classical problem of the escape time from a metastable potential well in a thermal environment is generally studied through various quantities such as Kramers' stationary escape rate, the mean first passage time, the nonlinear relaxation time, or the mean last passage time. In addition, numerical simulations lead to the definition of other quantities such as the long-time limit escape rate and the transient time. In this paper, we propose some simple analytical relations between all these quantities. In particular, we point out the hypotheses used to evaluate these various times in order to clarify their comparison and applicability, and show how average times include the transient time and the long-time limit of the escape rate.
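One of the relations at issue can be illustrated with the textbook overdamped Kramers formula: in the high-barrier limit, the mean first passage time is approximately the inverse of the stationary escape rate. The sketch below uses that standard formula with invented well parameters; it is not the paper's specific set of relations, which also account for the transient time.

```python
import math

def kramers_rate_overdamped(omega_a, omega_b, gamma, barrier, kT):
    """Kramers' stationary escape rate in the overdamped, high-barrier limit:
    k = (omega_a * omega_b) / (2 * pi * gamma) * exp(-E_b / kT),
    with well/barrier curvature frequencies omega_a, omega_b and friction gamma."""
    return omega_a * omega_b / (2 * math.pi * gamma) * math.exp(-barrier / kT)

# Invented parameters: barrier of 8 kT, moderate friction.
k = kramers_rate_overdamped(omega_a=1.0, omega_b=1.2, gamma=5.0, barrier=8.0, kT=1.0)
tau_mfpt = 1.0 / k  # mean first passage time ~ inverse stationary rate (high barrier)
```

The exponential dependence on the barrier height is the dominant effect: raising the barrier from 8 kT to 10 kT cuts the rate by a factor of e² before any prefactor changes.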
40 CFR 76.11 - Emissions averaging.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...
Shen, Hong; Liu, Wen-xing; Zhou, Xue-yun; Zhou, Li-ling; Yu, Long-Kun
2018-02-01
In order to thoroughly understand the characteristics of the aperture-averaging effect of atmospheric scintillation in terrestrial optical wireless communication, and to provide references for engineering design and performance evaluation of optical systems employed in the atmosphere, we have theoretically deduced a general analytic expression for the aperture-averaging factor of atmospheric scintillation, and numerically investigated the characteristics of the aperture-averaging factor under different propagation conditions. The limitations of the currently used approximate calculation formula for the aperture-averaging factor are discussed, and the results show that the current formula is not applicable for small receiving apertures under a non-uniform turbulence link. Numerical calculation shows that the aperture-averaging factor of atmospheric scintillation follows an exponential decline model for small receiving apertures under a non-uniform turbulent link, and the general expression of the model is given. This model has certain guiding significance for evaluating the aperture-averaging effect in terrestrial optical wireless communication.
Choosing the Discount Rate for Defense Decisionmaking.
1976-07-01
a weighted average of the after-personal-income-tax rate of return to savers and the pre-corporate-income-tax cost of capital. Stockfisch calculates... occurs between the corporate and noncorporate sectors. Many economists assume 100 percent shifting of the corporate income tax, so if the corporate... capital is a weighted average of the after-personal-income-tax rate of return to savers and the pre-corporate-income-tax cost of capital. Stockfisch
Designing container shipping network under changing demand and freight rates
Directory of Open Access Journals (Sweden)
C. Chen
2010-03-01
Full Text Available This paper focuses on the optimization of a container shipping network and its operations under changing cargo demand and freight rates. The problem is formulated as a mixed integer non-linear programming problem (MINP) with the objective of maximizing the average unit ship-slot profit at three stages, using an analytical methodology. Issues such as empty container repositioning, ship-slot allocation, ship sizing, and container configuration are simultaneously considered based on a series of demand matrices for a year. To solve the model, a bi-level genetic algorithm based method is proposed. Finally, numerical experiments are provided to illustrate the validity of the proposed model and algorithms. The obtained results show that the suggested model can provide a more realistic solution to these issues under changing demand and freight rates, and offers a more effective approach to the optimization of container shipping network structures and operations than does the model based on average demand.
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
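One of the families the book covers, ordered weighted averaging (OWA), is easy to sketch: the weights are applied to the inputs after sorting them in descending order, so a single definition spans the maximum, the minimum, the arithmetic mean and trimmed means. The data values below are invented for illustration:

```python
def owa(weights, values):
    """Ordered weighted averaging: weights apply to the values sorted in
    descending order, not to particular inputs. Weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    return sum(w, ) if False else sum(
        w * x for w, x in zip(weights, sorted(values, reverse=True))
    )

data = [3.0, 1.0, 4.0, 2.0]
mean_like = owa([0.25, 0.25, 0.25, 0.25], data)  # arithmetic mean
max_like = owa([1.0, 0.0, 0.0, 0.0], data)       # maximum
min_like = owa([0.0, 0.0, 0.0, 1.0], data)       # minimum
```

Because the weights attach to ranks rather than arguments, OWA is symmetric in its inputs, which is what makes it an averaging (rather than weighted-mean) aggregation function in the book's taxonomy.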
Estimates of Annual Soil Loss Rates in the State of São Paulo, Brazil
Directory of Open Access Journals (Sweden)
Grasiela de Oliveira Rodrigues Medeiros
Full Text Available ABSTRACT: Soil is a natural resource that has been affected by human pressures beyond its renewal capacity. For this reason, large agricultural areas that were once productive have been abandoned due to soil degradation, mainly caused by erosion. The objective of this study was to apply the Universal Soil Loss Equation to generate more recent estimates of soil loss rates for the state of São Paulo using a database with information at medium resolution (30 m). The results showed that many areas of the state have high (critical) levels of soil degradation due to the predominance of consolidated human activities, especially sugarcane cultivation and pasture use. The average estimated rate of soil loss is 30 Mg ha-1 yr-1, and 59 % of the area of the state (except for water bodies and urban areas) had estimated rates above 12 Mg ha-1 yr-1, considered the average tolerance limit in the literature. The average rates of soil loss in areas with annual agricultural crops, semi-perennial agricultural crops (sugarcane), and permanent agricultural crops were 118, 78, and 38 Mg ha-1 yr-1, respectively. The state of São Paulo requires attention to conservation of soil resources, since most soils showed estimates beyond the tolerance limit.
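The Universal Soil Loss Equation applied in this study is the factor product A = R · K · LS · C · P (rainfall erosivity, soil erodibility, slope length-steepness, cover management, support practice). A minimal sketch with invented factor values, compared against the ~12 Mg ha-1 yr-1 tolerance limit cited above:

```python
def usle_soil_loss(r, k, ls, c, p):
    """Universal Soil Loss Equation: A = R * K * LS * C * P.
    Annual soil loss A; units depend on the factor system in use
    (Mg ha-1 yr-1 in the metric convention of this study)."""
    return r * k * ls * c * p

TOLERANCE = 12.0  # Mg ha-1 yr-1, the average tolerance limit cited above

# Illustrative factor values for a hypothetical sugarcane plot (not from the study):
a = usle_soil_loss(r=6000.0, k=0.025, ls=1.2, c=0.15, p=1.0)
within_tolerance = a <= TOLERANCE
```

Because the equation is a pure product, halving any single factor (e.g. the cover-management factor C through better residue cover) halves the estimated loss, which is why the factors map directly onto conservation levers.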
7 CFR 51.2561 - Average moisture content.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...
Lugo, Stefano; Croce, Annalisa; Faff, Robert
2015-01-01
This article examines how credit rating agencies (CRAs) react to rating decisions on mortgage-backed securities by rival agencies in the aftermath of the subprime crisis. While Fitch is on average the first mover, Moody’s and S&P perform more timely downgrades given a downgrade or a more severe
International Nuclear Information System (INIS)
Liu, Zhouyu; Collins, Benjamin; Kochunas, Brendan; Downar, Thomas; Xu, Yunlin; Wu, Hongchun
2015-01-01
Highlights: • The CDP combines the benefits of the CPM's efficiency and the MOC's flexibility. • Boundary averaging reduces the computational effort while losing only minor accuracy. • An analysis model is used to justify the choice of the optimal averaging strategy. • Numerical results show the performance and accuracy. - Abstract: The method of characteristic direction probabilities (CDP) combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC) for the solution of the integral form of the Boltzmann Transport Equation. By coupling only the fine regions traversed by the characteristic rays in a particular direction, the computational effort required to calculate the probability matrices and to solve the matrix system is considerably reduced compared to the CPM. Furthermore, boundary averaging is performed to reduce the storage and computation, but the capability of dealing with complicated geometries is preserved since the same ray tracing information is used as in MOC. An analysis model for the outgoing angular flux is used to analyze a variety of outgoing angular flux averaging methods for the boundary and to justify the choice of the optimal averaging strategy. The boundary average CDP method was then implemented in the Michigan PArallel Characteristic based Transport (MPACT) code to perform 2-D and 3-D transport calculations. The numerical results are given for different cases to show the effect of averaging on the outgoing angular flux, region scalar flux and the eigenvalue. Comparison of the results with the case with no averaging demonstrates that an angular dependent averaging strategy is possible for the CDP to improve its computational performance without compromising the achievable accuracy.
Computation of the bounce-average code
International Nuclear Information System (INIS)
Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.
1977-01-01
The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
Peak-to-average power ratio reduction in interleaved OFDMA systems
Al-Shuhail, Shamael; Ali, Anum; Al-Naffouri, Tareq Y.
2015-01-01
Orthogonal frequency division multiple access (OFDMA) systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and to keep up with the capacity/rate demands. One of these impairments is high peak-to-average power ratio (PAPR) and clipping is the simplest peak reduction scheme. However, in general, when multiple users are subjected to clipping, frequency domain clipping distortions spread over the spectrum of all users. This results in compromised performance and hence clipping distortions need to be mitigated at the receiver. Mitigating these distortions in multiuser case is not simple and requires complex clipping mitigation procedures at the receiver. However, it was observed that interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions (i.e., the distortions of a particular user do not interfere with other users). In this work, we prove analytically that distortions do not spread over multiple users (while utilizing interleaved carrier assignment in OFDMA) and construct a compressed sensing system that utilizes the sparsity of the clipping distortions and recovers it on each user. We provide numerical results that validate our analysis and show promising performance for the proposed clipping recovery scheme.
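The PAPR metric and the clipping scheme discussed above are straightforward to sketch: PAPR is the peak instantaneous power over the mean power, and clipping rescales any sample whose magnitude exceeds a threshold back onto that threshold. The toy 8-subcarrier OFDM symbol below is invented for illustration:

```python
import cmath
import math

def papr_db(samples):
    """Peak-to-average power ratio of a (complex) baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def clip(samples, a_max):
    """Amplitude clipping, the simplest PAPR-reduction scheme: pull any
    sample with |s| > a_max back to magnitude a_max, keeping its phase."""
    return [s if abs(s) <= a_max else s / abs(s) * a_max for s in samples]

def idft(bins):
    """Toy inverse DFT: OFDM maps frequency-domain symbols to time samples."""
    n = len(bins)
    return [sum(b * cmath.exp(2j * math.pi * k * t / n) for k, b in enumerate(bins)) / n
            for t in range(n)]

# Two occupied subcarriers out of eight add coherently at t = 0 -> a 3 dB peak.
x = idft([1, 1, 0, 0, 0, 0, 0, 0])
```

Clipping `x` lowers its PAPR but, as the abstract stresses, the corresponding frequency-domain distortion then has to be mitigated at the receiver; in the interleaved-OFDMA case studied here that distortion stays confined to the clipped user's own subcarriers.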
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shaped statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
Effect of solution cooling rate on the γ' precipitation behaviors of a Ni-base P/M superalloy
Institute of Scientific and Technical Information of China (English)
(no author listed)
2008-01-01
The effect of cooling rate on the cooling γ' precipitation behaviors was investigated in a Ni-base powder metallurgy (P/M) superalloy (FGH4096). Empirical equations were established relating the cooling rate to the average sizes of secondary and tertiary γ' precipitates within grains and of tertiary γ' precipitates at grain boundaries, as well as to the apparent width of grain boundaries. The results show that the average sizes of secondary and tertiary γ' precipitates are inversely correlated with the cooling rate. The shape of secondary γ' precipitates within grains changes from butterfly-like to spherical with increasing cooling rate, but all the tertiary γ' precipitates formed are spherical in shape. It is also found that tertiary γ' may be precipitated in the latter part of the cooling cycle only if the cooling rate is not faster than 4.3 °C/s, and that the apparent width of grain boundaries decreases linearly with increasing cooling rate.
Oregon's On-Time High School Graduation Rate Shows Strong Growth in 2014-15. Research Brief
Oregon Department of Education, 2016
2016-01-01
Oregon continues to make gains in its on-time high school graduation rate. The rate increased to 74% for the 2014-15 school year--up from 72% the year before. The graduation rate for almost all student groups rose, led by Hispanic students (2.4 percentage points) and Black students (2.4 percentage points). The rate for economically disadvantaged…
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Loan rates. 1435.101 Section 1435.101 Agriculture... AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Sugar Loan Program § 1435.101 Loan rates. (a) The national average loan rate for raw cane sugar produced from domestically grown sugarcane is: 18...
MACROECONOMIC AND MARKET DETERMINANTS OF INTEREST RATE SPREAD: EVIDENCE FROM ALBANIA
Directory of Open Access Journals (Sweden)
Brunilda NELI
2015-12-01
Full Text Available The banking system, as the most important component of the financial system in Albania, plays a crucial role in economic development. Measuring the efficiency of the intermediation system requires special attention because of its implications on the level of investments, savings, resource allocation, etc. The most common indicator of the efficiency of the banking system is the cost of intermediation, measured by the spread of interest rates (the difference between the average lending rate and the average deposit rate). The study aims to analyze the trend of the interest rate spread (IRS) in Albania for the period 2005-2014 based on a comparative analysis with other countries and to identify the factors with significant impact on the level of IRS in the local currency. It is based on an empirical analysis of several macroeconomic and market factors that determine IRS, used in previous studies in this field, but also incorporates other elements associated with the characteristics of the Albanian system. Albania has experienced a high IRS during the last decade, with large fluctuations, especially in the local currency. The results of the study, based on quarterly panel data for the period 2005-2014, show that IRS in Albania is negatively affected by the level of development of the banking sector and the discount rate, while inflation, the deficit rate and the money supply put positive pressure on this indicator.
Averaging and sampling for magnetic-observatory hourly data
Directory of Open Access Journals (Sweden)
J. J. Love
2010-11-01
Full Text Available A time- and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
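The spot-versus-boxcar trade-off described in this abstract can be sketched numerically. The synthetic signal below (a slow daily wave plus a 35-min wave above the hourly Nyquist frequency) is an illustrative stand-in, not the observatory data used in the study:

```python
import numpy as np

# Synthetic 1-min "continuous" signal: a slow daily variation plus a
# 35-min component lying above the hourly Nyquist frequency.
t = np.arange(10 * 1440)                              # ten days of 1-min values
signal = 30.0 * np.sin(2 * np.pi * t / 1440) + 5.0 * np.sin(2 * np.pi * t / 35)

hours = signal.reshape(-1, 60)                        # one row per hour
spot = hours[:, 0]                                    # instantaneous "spot" value
boxcar = hours.mean(axis=1)                           # simple 1-h "boxcar" average

# Spot sampling preserves the amplitude range (but aliases the 35-min wave);
# boxcar averaging attenuates it (amplitude distortion).
spot_range = np.ptp(spot)
boxcar_range = np.ptp(boxcar)
true_range = np.ptp(signal)
```

The spot series keeps nearly the full amplitude range of the underlying signal, while the boxcar series shows the reduced range characteristic of amplitude distortion.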
The average size of ordered binary subgraphs
van Leeuwen, J.; Hartel, Pieter H.
To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a
Hayashi, Risa; Nakai, Kenji; Fukushima, Akimune; Itoh, Manabu; Sugiyama, Toru
2009-03-01
Although ultrasonic diagnostic imaging and fetal heart monitors have undergone great technological improvements, the development and use of fetal electrocardiograms to evaluate fetal arrhythmias and autonomic nervous activity have not been fully established. We verified the clinical significance of the novel signal-averaged vector-projected high amplification ECG (SAVP-ECG) method in fetuses from 48 gravidas at 32-41 weeks of gestation and in 34 neonates. SAVP-ECGs from fetuses and newborns were recorded using a modified XYZ-leads system. Once noise and maternal QRS waves were removed, the P, QRS, and T wave intervals were measured from the signal-averaged fetal ECGs. We also compared fetal and neonatal heart rates (HRs), coefficients of variation of heart rate variability (CV) as a parasympathetic nervous activity, and the ratio of low to high frequency (LF/HF ratio) as a sympathetic nervous activity. The rate of detection of a fetal ECG by SAVP-ECG was 72.9%, and the fetal and neonatal QRS and QTc intervals were not significantly different. The neonatal CVs and LF/HF ratios were significantly increased compared with those in the fetus. In conclusion, we have developed a fetal ECG recording method using the SAVP-ECG system, which we used to evaluate autonomic nervous system development.
Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages
Directory of Open Access Journals (Sweden)
Maureen Fontaine
2017-07-01
Full Text Available The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices, both trained-to-familiar (Experiment 1) and famous (Experiment 2), are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In Experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several "speaker averages," created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In Experiment 2, the same participants performed the same task on voice samples produced by familiar speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of averages in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.
Fitting a function to time-dependent ensemble averaged data.
Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias
2018-05-03
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function-fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the two general-purpose function-fitting methods available, the correlated chi-square method and the weighted least-squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error-estimation formula for weighted least-squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous-time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
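The effect the abstract describes can be sketched with a toy Brownian ensemble: a plain weighted least-squares fit of the ensemble-averaged squared displacement, with the parameter error computed two ways, once from the diagonal weights only and once from a sandwich formula using the full covariance matrix (a generic WLS construction in the spirit of, but not identical to, the paper's WLS-ICE formula):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble: M Brownian trajectories, so the ensemble-averaged squared
# displacement is msd(t) = 2*D*t and fluctuations are temporally correlated.
M, T, D = 200, 50, 0.5
t = np.arange(1, T + 1, dtype=float)
traj = np.cumsum(rng.normal(0.0, np.sqrt(2 * D), size=(M, T)), axis=1)
sq = traj ** 2                        # squared displacement, one row per trajectory
msd = sq.mean(axis=0)                 # time-dependent ensemble average

C = np.cov(sq, rowvar=False) / M      # full covariance of the averaged data
X = t[:, None]                        # one-parameter linear model: msd = theta * t
W = np.diag(1.0 / np.diag(C))         # standard WLS weights (correlations ignored)

A = np.linalg.inv(X.T @ W @ X)
theta = (A @ X.T @ W @ msd).item()    # WLS estimate of 2*D (true value: 1.0)
naive_var = A[0, 0]                                    # error neglecting correlations
sandwich_var = (A @ X.T @ W @ C @ W @ X @ A)[0, 0]     # error with full covariance
```

Because displacements at different lag times are strongly positively correlated, the full-covariance error is much larger than the naive one, illustrating why neglecting temporal correlations yields overconfident error bars.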
Calculating ensemble averaged descriptions of protein rigidity without sampling.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2012-01-01
Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean-field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by its ensemble-average value. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble-averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative for understanding the mechanical role that chemical interactions play in maintaining protein stability.
Scalzitti, Nicholas; Brennan, Joseph; Bothwell, Nici; Brigger, Matthew; Ramsey, Mitchell; Gallagher, Thomas; Maturo, Stephen
2014-05-01
During the wars in Iraq and Afghanistan, the US military has continued to train medical residents despite concern that postgraduate medical education at military training facilities has suffered. This study compares the experience of otolaryngology residents at military programs with the experience of their civilian counterparts. Retrospective review. Academic military medical centers. Resident caseload data and board examination passing rates were requested from each of the 6 Department of Defense otolaryngology residency programs for 2001 to 2010. The American Board of Otolaryngology and the Accreditation Council for Graduate Medical Education provided the national averages for resident caseload. National board passing rates from 2004 to 2010 were also obtained. Two-sample t tests were used to compare the pooled caseloads from the military programs with the national averages. Board passing rates were compared with a test of proportions. Data were available for all but one military program. Regarding total cases, only 2001 and 2003 showed a significant difference (P < .05), with military residents completing more cases in those years. For individual case categories, the military averages were higher in Otology (299.6 vs 261.2, P = .033) and Plastics/Reconstruction (248.1 vs 149.2, P = .003). Only the Head & Neck category significantly favored the national average over the military (278.3 and 226.0, P = .039). The first-time board passing rates were identical between the groups (93%). Our results suggest that the military otolaryngology residency programs are equal in terms of caseload and board passing rates compared with civilian programs over this time period.
What Hides Behind an Unemployment Rate: Comparing Portuguese and U.S. Unemployment
Olivier Blanchard; Pedro Portugal
1998-01-01
Over the last 15 years, Portugal and the United States have had the same average unemployment rate, about 6.5%. But behind these similar rates hide two very different labor markets. Unemployment duration in Portugal is more than three times that of the United States. Symmetrically, the flow of workers into unemployment in Portugal is, in proportion to the labor force, less than a third of what it is in the United States. Relying on evidence from Portuguese and U.S. micro data sets, we show th...
Mahur, Ajay Kumar; Sharma, Anil; Sonkawade, R. G.; Sengupta, D.; Sharma, A. C.; Prasad, Rajendra
Natural radioactivity is widespread in the earth's environment and exists in various geological formations such as soils, rocks, water and sand. The measurement of the activities of the naturally occurring radionuclides 226Ra, 232Th and 40K is important for the estimation of radiation risk and has been a subject of interest for research scientists all over the world. Building construction materials and the soil beneath a house are the main sources of radon inside dwellings. Radon exhalation from building materials such as cement, sand and concrete is a major source of radiation to the inhabitants. In the present study, radon exhalation rates in sand samples collected from the Gopalpur and Rushikulya beach placer deposits in Orissa were measured using the "sealed-can technique" with LR-115 type II nuclear track detectors. Samples from Rushikulya beach show radon activities varying from 389 ± 24 to 997 ± 38 Bq m-3, with an average value of 549 ± 28 Bq m-3. Surface exhalation rates in these samples vary from 140 ± 9 to 359 ± 14 mBq m-2 h-1, with an average value of 197 ± 10 mBq m-2 h-1, whereas mass exhalation rates vary from 5 ± 0.3 to 14 ± 0.5 mBq kg-1 h-1, with an average value of 8 ± 0.4 mBq kg-1 h-1. In samples from Gopalpur, radon activities vary from 371 ± 23 to 800 ± 34 Bq m-3, with an average value of 549 ± 28 Bq m-3. Surface exhalation rates in these samples vary from 133 ± 8 to 288 ± 12 mBq m-2 h-1, with an average value of 197 ± 10 mBq m-2 h-1, whereas mass exhalation rates vary from 5 ± 0.3 to 11 ± 1 mBq kg-1 h-1, with an average value of 8 ± 0.4 mBq kg-1 h-1.
Average subentropy, coherence and entanglement of random mixed quantum states
Energy Technology Data Exchange (ETDEWEB)
Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)
2017-02-15
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states by invoking the concentration-of-measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and the distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
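The induced-measure construction mentioned in this abstract is easy to sketch numerically: partial-trace an equal-sized ancilla out of a Haar-random bipartite pure state and compute the relative entropy of coherence C(ρ) = S(ρ_diag) − S(ρ). The dimension and sample count below are illustrative, and the concentration is only shown empirically, not proved:

```python
import numpy as np

rng = np.random.default_rng(2)

def entropy_bits(p):
    # von Neumann / Shannon entropy in bits of a probability vector.
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def random_mixed_state(d, rng):
    # Induced measure: partial-trace an equal-sized ancilla out of a
    # Haar-random bipartite pure state (Ginibre construction).
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    psi = g / np.linalg.norm(g)          # normalized d x d amplitude matrix
    return psi @ psi.conj().T            # reduced density matrix, unit trace

d, n = 16, 200
coh = np.empty(n)
for i in range(n):
    rho = random_mixed_state(d, rng)
    s_rho = entropy_bits(np.linalg.eigvalsh(rho))
    s_diag = entropy_bits(np.real(np.diag(rho)))
    coh[i] = s_diag - s_rho              # relative entropy of coherence C(rho)
```

The sample mean of `coh` sits well below log2(d), consistent with the coherence of random mixed states being uniformly bounded, and the small spread across samples illustrates the typicality (concentration) claim.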
International Nuclear Information System (INIS)
Neff, D.
2003-11-01
… This corrosion form, constituted among others by a siderite layer, is due to a particular environment: waterlogged soil containing wood. On the whole, analyses conducted in the TM show that it is composed of goethite that is badly crystallized in comparison with that of the DPL. Moreover, in this zone, the average elemental iron content decreases progressively from the metal to the soil, in which it stabilizes. In order to understand the behaviour of the identified phases in soil water, thermodynamic data were used to calculate their solubility as a function of pH, potential and various water compositions. The first conclusion is that the composition and structure of the material have little influence on the corrosion behaviour. From the results, several hypotheses have been formulated on the long-term corrosion mechanisms of hypo-eutectoid steels in the considered environment. The role of the cracks formed in the DPL during burial was evidenced. Moreover, these corrosion products undergo dissolution in the soil water and reprecipitation, explaining the progressive decrease of the iron content in the TM. Lastly, average corrosion rates have been measured with the help of the analytical and thermodynamic results: they do not exceed 4 μm/year. (author)
Natural and anthropogenic rates of soil erosion
Directory of Open Access Journals (Sweden)
Mark A. Nearing
2017-06-01
Full Text Available Regions of land that are brought into crop production from native vegetation typically undergo a period of soil erosion instability, and long term erosion rates are greater than for natural lands as long as the land continues being used for crop production. Average rates of soil erosion under natural, non-cropped conditions have been documented to be less than 2 Mg ha−1 yr−1. On-site rates of erosion of lands under cultivation over large cropland areas, such as in the United States, have been documented to be on the order of 6 Mg ha−1 yr−1 or more. In northeastern China, lands that were brought into production during the last century are thought to have average rates of erosion over this large area of 15 Mg ha−1 yr−1 or more. Broadly applied soil conservation practices, and in particular conservation tillage and no-till cropping, have been found to be effective in reducing rates of erosion, as was seen in the United States when the average rates of erosion on cropped lands decreased from on the order of 9 Mg ha−1 yr−1 to 6 or 7 Mg ha−1 yr−1 between 1982 and 2002, coincident with the widespread adoption of new conservation tillage and residue management practices. Taking cropped lands out of production and restoring them to perennial plant cover, as was done in areas of the United States under the Conservation Reserve Program, is thought to reduce average erosion rates to approximately 1 Mg ha−1 yr−1 or less on those lands.
To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space
International Nuclear Information System (INIS)
Khrennikov, Andrei
2007-01-01
We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
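The coupling described in this abstract can be sketched in finite dimension, where the second-order expansion reads E[f(x)] ≈ f(0) + (α/2)·Tr(B f''(0)) for x drawn from a Gaussian measure with small covariance αB. The functional, covariance and scaling below are illustrative choices, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Classical average of an analytic functional against a Gaussian measure with
# small covariance alpha*B, compared with the second-order "trace" term:
#   E[f(x)] ~= f(0) + (alpha/2) * Tr(B f''(0)) + O(alpha^2)
B = np.array([[0.5, 0.2], [0.2, 0.4]])

def f(x):
    return np.cos(x[..., 0]) * np.exp(-x[..., 1] ** 2)

# Hessian of f at 0, computed by hand: diag(-1, -2), zero off-diagonal.
hess = np.array([[-1.0, 0.0], [0.0, -2.0]])

alpha = 0.01
samples = rng.multivariate_normal(np.zeros(2), alpha * B, size=1_000_000)
classical_avg = f(samples).mean()                       # Monte Carlo classical average
trace_term = f(np.zeros(2)) + 0.5 * alpha * np.trace(B @ hess)
```

The Monte Carlo classical average agrees with the trace expression up to O(α²), which is the finite-dimensional shadow of the "dequantization" correspondence between Gaussian integrals and the von Neumann trace formula.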
Glycogen with short average chain length enhances bacterial durability
Wang, Liang; Wise, Michael J.
2011-09-01
Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability, that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call that a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology will be assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.
Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns
DEFF Research Database (Denmark)
Gonçalves, Sílvia; Hounyo, Ulrich; Meddahi, Nour
The main contribution of this paper is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias......-overlapping nature of the pre-averaged returns implies that these are asymptotically independent, but possibly heteroskedastic. This motivates the application of the wild bootstrap in this context. We provide a proof of the first order asymptotic validity of this method for percentile and percentile-t intervals. Our...... Monte Carlo simulations show that the wild bootstrap can improve the finite sample properties of the existing first order asymptotic theory provided we choose the external random variable appropriately. We use empirical work to illustrate its use in practice....
Averaging principle for second-order approximation of heterogeneous models with homogeneous models.
Fibich, Gadi; Gavious, Arieh; Solan, Eilon
2012-11-27
Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).
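The O(ε²) equivalence is easy to verify on a toy outcome that is symmetric and differentiable in the heterogeneous parameters. The harmonic-mean outcome below is an illustrative stand-in (not one of the paper's applications); halving ε should reduce the heterogeneous-homogeneous gap by a factor of about 4:

```python
import numpy as np

# Symmetric, differentiable outcome of an n-component model: here the harmonic
# mean of individual service rates (a toy stand-in for a queueing quantity).
def outcome(rates):
    return len(rates) / np.sum(1.0 / rates)

n, base = 8, 2.0
delta = np.linspace(-1.0, 1.0, n)            # fixed zero-mean heterogeneity profile

gaps = []
for eps in (0.2, 0.1, 0.05):
    heterogeneous = outcome(base + eps * delta)
    homogeneous = outcome(np.full(n, base))  # heterogeneity replaced by its average
    gaps.append(abs(heterogeneous - homogeneous))

# Successive ratios approach 4 when the gap scales as O(eps^2).
ratios = [gaps[0] / gaps[1], gaps[1] / gaps[2]]
```

Each halving of ε shrinks the gap roughly fourfold, the signature of second-order (O(ε²)) agreement between the heterogeneous model and its homogeneous average.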
Visual Perception Based Rate Control Algorithm for HEVC
Feng, Zeqi; Liu, PengYu; Jia, Kebin
2018-01-01
For HEVC, rate control is an indispensable video coding technology for alleviating the contradiction between video quality and limited encoding resources during video communication. However, the HEVC rate control benchmark algorithm ignores subjective visual perception: for key focus regions, LCU bit allocation is not ideal and subjective quality is unsatisfactory. In this paper, a visual-perception-based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then, λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bitrate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality under various video applications.
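The general idea of perceptually weighted LCU bit allocation can be sketched as follows. The feature values and the linear weight formula are assumptions for illustration only; the paper's actual weight model, λ and QP adjustments are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical per-LCU features for one frame: normalized luminance contrast
# and motion magnitude (both assumed, not the paper's measured features).
n_lcu = 8
luma_contrast = rng.uniform(0.1, 1.0, n_lcu)
motion = rng.uniform(0.0, 1.0, n_lcu)

# Assumed perceptual importance: favor high-motion, high-contrast regions.
w = 0.6 * motion + 0.4 * luma_contrast
w = w / w.sum()                          # normalized bit-allocation weights

frame_bits = 100_000                     # frame-level bit budget
lcu_bits = np.round(frame_bits * w).astype(int)
```

Perceptually salient LCUs receive proportionally more of the frame budget, which is the mechanism by which subjective quality in key focus regions is improved.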
Thermal motion in proteins: Large effects on the time-averaged interaction energies
International Nuclear Information System (INIS)
Goethe, Martin; Rubi, J. Miguel; Fita, Ignacio
2016-01-01
As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e. the pair potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair potentials at the average distances. More precisely, time-averaged interaction energies typically vary more smoothly with the average distance than the corresponding pair potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair potential for globular proteins at ambient conditions using x-ray diffraction and simulation data for a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependence of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased considerably by introducing environment-specific Lennard-Jones parameters, accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
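The thermal smoothing effect can be sketched directly: average a Lennard-Jones potential over Gaussian distance fluctuations and compare with the potential at the mean distance. All parameter values (ε, σ, mean distance, fluctuation width) are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(4)

def lj(r, eps=0.2, sigma=3.5):
    # Lennard-Jones pair potential; eps in kcal/mol, sigma and r in angstroms
    # (illustrative parameter values).
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

r_avg, r_std = 4.0, 0.3                  # assumed average distance and thermal spread
r = rng.normal(r_avg, r_std, size=1_000_000)
r = r[r > 3.0]                           # discard unphysically close approaches

time_averaged = lj(r).mean()             # <V(r(t))> over the fluctuating distance
at_average = lj(r_avg)                   # V(<r>), the potential at the average distance
smoothing = time_averaged - at_average   # the thermal smoothing shift
```

With these assumed parameters the shift comes out to a few tens of cal/mol, the same order of magnitude the abstract reports, because the steep repulsive branch makes the potential locally convex around its minimum.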
Direct measurement of fast transients by using boot-strapped waveform averaging
Olsson, Mattias; Edman, Fredrik; Karki, Khadga Jung
2018-03-01
An approximation to coherent sampling, also known as boot-strapped waveform averaging, is presented. The method uses digital cavities to determine the condition for coherent sampling. It can be used to increase the effective sampling rate of a repetitive signal and the signal-to-noise ratio simultaneously. The method is demonstrated by using it to directly measure the fluorescence lifetime of Rhodamine 6G by digitizing the signal from a fast avalanche photodiode. The obtained lifetime of 4.0 ns is in agreement with known values.
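The core idea can be sketched with simple phase folding (a stand-in for the paper's digital-cavity condition): when the digitizer rate and the repetition period are incommensurate, sample phases drift through the period, so folding them modulo the period boosts the effective sampling rate while averaging suppresses noise. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Repetitive fluorescence-like decay (lifetime tau, repetition period T),
# digitized at 250 MS/s, i.e. 4 ns between samples (illustrative values).
tau, T, fs, n = 4.0e-9, 101.3e-9, 250.0e6, 200_000
t = np.arange(n) / fs
trace = np.exp(-(t % T) / tau) + rng.normal(0.0, 0.2, n)

# Boot-strapped averaging: fs*T is deliberately non-integer, so folding the
# sample times modulo T spreads them densely over one period.
bins = 500
idx = np.minimum((t % T / T * bins).astype(int), bins - 1)
counts = np.bincount(idx, minlength=bins)
avg = np.bincount(idx, weights=trace, minlength=bins) / counts
centers = (np.arange(bins) + 0.5) * (T / bins)      # ~0.2 ns effective spacing

# Recover the lifetime from the averaged, up-sampled decay.
mask = (centers > 0.2e-9) & (centers < 12e-9)
slope = np.polyfit(centers[mask], np.log(avg[mask]), 1)[0]
tau_est = -1.0 / slope
```

The folded average has an effective sample spacing of about 0.2 ns, twenty times finer than the 4 ns digitizer spacing, and a log-linear fit recovers the 4 ns lifetime from data whose raw noise is 20% of the peak amplitude.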
Complication rate in unprotected carotid artery stenting with closed-cell stents
International Nuclear Information System (INIS)
Tietke, Marc W.K.; Kerby, Tina; Alfke, Karsten; Riedel, Christian; Rohr, Axel; Jensen, Ulf; Jansen, Olaf; Zimmermann, Phillip; Stingele, Robert
2010-01-01
The discussion on the use of protection devices (PDs) in carotid artery stenting (CAS) is gaining an increasing role in lowering periprocedural complication rates. While many reviews and reports with retrospective data analysis promote the use of PDs, the most recent multi-centre trials show advantages for unprotected CAS combined with closed-cell stent designs. We retrospectively analysed 358 unprotected CAS procedures performed from January 2003 to June 2009 in our clinic. The male/female ratio was 2.68/1, and the average age was 69.3 years. Seventy-three percent (261/358) of patients showed initial neurological symptoms. All patients were treated on a standardised interventional protocol. A stent with a closed, small-sized cell design was implanted in most cases (85.2%). One hundred seventy-one patients (47.8%) were followed by Doppler ultrasonography, usually at first at 3-month and later at 6-month intervals. The peri-interventional and 30-day mortality/stroke rate was 4.19% (15/358). These events included three deaths, five hyperperfusion syndromes (comprising one death from a secondary fatal intracranial haemorrhage), one subarachnoid haemorrhage and seven ischaemic strokes. Only 20% (3/15) of all complications occurred directly peri-interventionally; the overall peri-interventional complication rate was 0.8% (3/358). Most complications occurred in initially symptomatic patients (5.36%). The rate of in-stent restenosis of more than 70% was 7% (12/171), detected at an average of 9.8 months. Our clinical outcome demonstrates that unprotected CAS with small-cell-design stents results in a very low procedural complication rate, which makes the use of a protection device dispensable. (orig.)
Gao, Peng
2018-04-01
This work concerns the averaging principle for a higher-order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a system of multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. More precisely, under suitable conditions we prove that there is a limit process in which the fast-varying process is averaged out; the limit process, which takes the form of the higher-order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast-varying process. Finally, using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher-order nonlinear Schrödinger equation with a modified coefficient.
Self-similarity of higher-order moving averages
Arianos, Sergio; Carbone, Anna; Türk, Christian
2011-10-01
In this work, higher-order moving average polynomials are defined by a straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. These asymptotic values are compared with simulation results. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window; trends at different time scales can thus be obtained from data sets of the same size. These polynomials could be of interest for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
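The first-order (standard) case underlying this construction can be sketched as follows: detrend a series with a simple moving average and examine how the residual variance scales with the window size (for a Brownian series, roughly linearly, i.e. as n^{2H} with H = 0.5). Window sizes and series length are illustrative.

```python
import numpy as np

def dma_variance(series, window):
    """Variance of the series around its trailing simple moving average.

    First-order detrending moving average; the abstract's higher-order
    polynomials generalize this same construction.
    """
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="valid")
    detrended = series[window - 1:] - trend
    return float(np.var(detrended))

# Brownian motion (H = 0.5): the detrended variance grows roughly
# in proportion to the window size (~ n^{2H} = n).
rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=100_000))
v_small = dma_variance(walk, 10)
v_large = dma_variance(walk, 100)
```

The ratio v_large / v_small comes out near the 10x predicted by the n^{2H} scaling for H = 0.5.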
Survival Rate of Limb Replantation in Different Age Groups.
Tatebe, Masahiro; Urata, Shiro; Tanaka, Kenji; Kurahashi, Toshikazu; Takeda, Shinsuke; Hirata, Hitoshi
2017-08-01
Revascularization of damaged limbs/digits is technically feasible, but indications for surgical replantation remain controversial. The authors analyzed the survival rate of upper limb amputations and the associated factors in different age groups. They grouped 371 limb/digit amputees (average age, 44 years; range, 2-85 years) treated in their hospital during the past 10 years into three groups based on age (young, ≤ 15 years, n = 12; adult, 16-64 years, n = 302; elderly, ≥ 65 years, n = 57) and analyzed their injury type (extent of injury and stump status), operation method, presence of medical complications (Charlson comorbidity index), and survival rate. There were 168 replantations, and the overall replantation survival rate was 93%. The Charlson comorbidity index of the replantation patients was 0 in 124 cases; 1 in 32; 2 in 9; and 3 in 3, but it did not show any significant difference in survival rate after replantation. Eight elderly patients (14%) did not opt for replantation. Younger patients tended to undergo replantation, but they had lower success rates due to their severe injury status. The results of this study show that the survival rate of replantation in elderly patients is equal to that in adults. Stump evaluation is important for survival, but the presence of medical complications is not associated with the overall survival rate.
Determinants of College Grade Point Averages
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can cost a student eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
A constant travel time budget? In search for explanations for an increase in average travel time
Rietveld, P.; Wee, van B.
2002-01-01
Recent research suggests that during the past decades the average travel time of the Dutch population has probably increased. However, different data sources show different levels of increase. Possible causes of the increase in average travel time are presented here. Increased incomes have
Rotational averaging of multiphoton absorption cross sections
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
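For orientation, the lowest-order case of such a rotational average, the isotropic average of a rank-2 tensor, is a standard result (not the higher-rank expressions derived in the paper): only the trace survives orientational averaging.

```latex
% Isotropic (rotational) average of a second-rank tensor T:
\left\langle T_{ij} \right\rangle_{\mathrm{rot}}
  \;=\; \frac{1}{3}\,\delta_{ij} \sum_{a=x,y,z} T_{aa}
```

The multiphoton cross sections in the paper are even-rank generalizations of this, with correspondingly more index contractions.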
The inner state differences of preterm birth rates in Brazil: a time series study
Directory of Open Access Journals (Sweden)
Rosana Rosseto de Oliveira
2016-05-01
Abstract Background Preterm birth is a serious public health problem, as it is linked to high rates of neonatal and child morbidity and mortality. The prevalence of premature birth has increased worldwide, with regional differences. The objective of this study was to analyze the trend of preterm births in the state of Paraná, Brazil, according to Macro-regional and Regional Health Offices (RHOs). Methods This is an ecological time series study using preterm birth records from the national live birth registry system of Brazil's National Health Service - Live Birth Information System (Sinasc) - for residents of the state of Paraná, Brazil, between 2000 and 2013. The preterm birth rate was calculated on a yearly basis and grouped into three-year periods (2000–2002, 2003–2005, 2006–2008, 2009–2011) and one two-year period (2012–2013), according to gestational age and the mother's Regional Health Office of residence. A polynomial regression model was used for trend analysis. Results The preterm birth rate increased from 6.8 % in 2000 to 10.5 % in 2013, an average increase of 0.20 % per year (r2 = 0.89), with a greater share of moderate preterm births (32 to <37 weeks), which increased from 5.8 % to 9 %. The same pattern was observed for all Macro-regional Health Offices, notably the Northern Macro-Regional Office, which showed the highest average rate of prematurity and the highest average annual growth during that period (7.55 % and 0.35 %, respectively). The trend analysis of preterm birth rates according to RHO showed a growing trend for almost all RHOs, except for the 7th RHO, where a declining trend was observed (−0.95 a year), and the 20th, 21st and 22nd RHOs, which remained unchanged. In the last three years of the study period (2011–2013), no RHO showed a preterm birth rate below 7.3 % or a prevalence of moderate preterm birth below 9.4 %. Conclusions The results show an increase in preterm births
Too-connected versus too-big-to-fail: banks’ network centrality and overnight interest rates.
Gabrieli, S.
2012-01-01
What influences banks’ borrowing costs in the unsecured money market? The objective of this paper is to test whether measures of centrality, quantifying network effects due to interactions among banks in the market, can help explain heterogeneous patterns in the interest rates paid to borrow unsecured funds once bank size and other bank and market factors that affect the overnight segment are controlled for. Preliminary evidence shows that large banks borrow on average at better rates compare...
Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan
2016-12-01
In this paper, we consider event-triggered distributed average consensus of discrete-time first-order multiagent systems with a limited communication data rate and a general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange finite-bit binary symbolic data sequences with its neighboring agents at each time step, due to digital communication channels with energy constraints. Novel event-triggered dynamic encoders and decoders for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which none of the quantizers in the network ever saturates. The convergence rate of consensus is explicitly characterized and is related to the scale of the network, the maximum node degree, the network structure, the scaling function, the quantization interval, the initial states of the agents, the control gain and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, distributed average consensus can always be achieved with an exponential convergence rate for any directed digital network containing a spanning tree, based on merely one-bit information exchange between each pair of adjacent agents at each time step. Two simulation examples illustrate the feasibility of the presented protocol and the correctness of the theoretical results.
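The baseline that the paper's quantized, event-triggered protocol refines is the classical synchronous average-consensus iteration, sketched below with plain real-valued state exchange on a small undirected ring (graph, gains, and initial states are illustrative).

```python
import numpy as np

def consensus_step(x, neighbors, eps):
    """One synchronous average-consensus update:
    x_i <- x_i + eps * sum_j (x_j - x_i) over the neighbors j of i.
    """
    return np.array([
        x[i] + eps * sum(x[j] - x[i] for j in neighbors[i])
        for i in range(len(x))
    ])

# Undirected ring of 4 agents; for a balanced graph the update
# preserves the state sum, so agents converge to the exact average.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = np.array([1.0, 5.0, 3.0, 7.0])
target = x.mean()                      # 4.0
for _ in range(200):
    x = consensus_step(x, neighbors, eps=0.25)
```

With eps = 0.25 and maximum degree 2, the update matrix is a contraction on the disagreement subspace, so after 200 steps all states sit at the initial average to machine precision.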
Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging
International Nuclear Information System (INIS)
Soltanian-Zadeh, H.; Windham, J.P.; Yagle, A.E.
1993-01-01
Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in the field of magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene. It is also proved that correction of partial volume averaging effects can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images are shown. They show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images
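The Gram-Schmidt step mentioned above can be sketched as follows: orthonormalize the interfering-feature signatures so the desired feature can be projected onto the component orthogonal to all interferers. The signature vectors are hypothetical 3-point examples, not MRI data.

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize the rows of `vectors`,
    skipping (numerically) linearly dependent rows.
    """
    basis = []
    for v in np.asarray(vectors, dtype=float):
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-12:
            basis.append(w / norm)
    return np.array(basis)

# Hypothetical signature vectors (illustrative numbers only)
interferer = np.array([[1.0, 1.0, 0.0]])
desired = np.array([0.0, 1.0, 1.0])
B = gram_schmidt(np.vstack([interferer, desired]))
# B[1] is the part of `desired` orthogonal to the interferer,
# normalized; projecting data onto it suppresses the interferer.
```

In the paper's setting the rows would be tissue signature vectors across MRI acquisitions, and the resulting linear transformation both segments the desired feature and preserves partial-volume information.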
Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations
Stefanski, Philip L.
2014-01-01
A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
Directory of Open Access Journals (Sweden)
Liu Yang
2017-01-01
We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can obtain the global solution while avoiding the introduction of new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
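The SAA idea itself is simple to sketch: replace the expectation E[f(x, ξ)] by the mean of f over N sampled scenarios and optimize that surrogate. The newsvendor-style objective, prices, and demand distribution below are hypothetical stand-ins for the paper's two-stage perishable-product model, and a crude grid search stands in for its smoothing algorithm with SSD constraints.

```python
import numpy as np

def saa_objective(order_qty, demand_samples, unit_cost=1.0,
                  price=2.5, salvage=0.2):
    """Sample average approximation of an expected cost.

    E[f(x, xi)] is replaced by the mean of f over N demand draws; as N
    grows this converges to the true expectation.
    """
    sold = np.minimum(order_qty, demand_samples)
    leftover = order_qty - sold
    profit = price * sold + salvage * leftover - unit_cost * order_qty
    return -float(profit.mean())   # cost = negative expected profit

rng = np.random.default_rng(2)
demand = rng.uniform(50, 150, size=10_000)   # N sampled scenarios

# Grid search over the SAA surrogate (a sketch only)
grid = np.arange(50, 151)
best = min(grid, key=lambda q: saa_objective(q, demand))
```

For this setup the true optimum is the critical-fractile quantile, about 115 units; with 10,000 samples the SAA minimizer lands within a unit or two of it, illustrating the convergence the paper quantifies.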
METHODS OF CONTROLLING THE AVERAGE DIAMETER OF THE THREAD WITH ASYMMETRICAL PROFILE
Directory of Open Access Journals (Sweden)
L. M. Aliomarov
2015-01-01
To machine threaded holes in hard materials used in marine machinery operating at high temperatures, under heavy loads and in aggressive environments, the authors have developed a combined core drill-tap tool with a special cutting scheme and an asymmetric thread profile on the tap section. To control the average diameter of the thread on the tap section, the three-wire method was used, which allows continuous measurement of the average diameter along the entire profile. Deviation of the average diameter from the sample is registered by an inductive sensor and recorded. Control schemes for the average diameter of threads with symmetrical and asymmetrical profiles are developed and presented. On the basis of these schemes, formulas are derived for calculating the theoretical position of the wires in the thread profile when measuring the average diameter. Comprehensive research and the introduction of the combined core drill-tap tool in the production of marine engineering, shipbuilding and ship-repair power-plant products made of hard materials showed the high efficiency of the proposed technology for machining high-quality small-diameter threaded holes that meet modern requirements.
Comparative rates of violence in chimpanzees and humans.
Wrangham, Richard W; Wilson, Michael L; Muller, Martin N
2006-01-01
This paper tests the proposal that chimpanzees (Pan troglodytes) and humans have similar rates of death from intraspecific aggression, whereas chimpanzees have higher rates of non-lethal physical attack (Boehm 1999, Hierarchy in the forest: the evolution of egalitarian behavior. Harvard University Press). First, we assembled data on lethal aggression from long-term studies of nine communities of chimpanzees living in five populations. We calculated rates of death from intraspecific aggression both within and between communities. Variation among communities in mortality rates from aggression was high, and rates of death from intercommunity and intracommunity aggression were not correlated. Estimates for average rates of lethal violence for chimpanzees proved to be similar to average rates for subsistence societies of hunter-gatherers and farmers. Second, we compared rates of non-lethal physical aggression for two populations of chimpanzees and one population of recently settled hunter-gatherers. Chimpanzees had rates of aggression between two and three orders of magnitude higher than humans. These preliminary data support Boehm's hypothesis.
International Nuclear Information System (INIS)
Eimerl, D.
1985-01-01
High-average-power frequency conversion using solid state nonlinear materials is discussed. Recent laboratory experience and new developments in design concepts show that current technology, a few tens of watts, may be extended by several orders of magnitude. For example, using KD*P, efficient doubling (>70%) of Nd:YAG at average powers approaching 100 KW is possible; and for doubling to the blue or ultraviolet regions, the average power may approach 1 MW. Configurations using segmented apertures permit essentially unlimited scaling of average power. High average power is achieved by configuring the nonlinear material as a set of thin plates with a large ratio of surface area to volume and by cooling the exposed surfaces with a flowing gas. The design and material fabrication of such a harmonic generator are well within current technology
Maddix, Danielle C.; Sampaio, Luiz; Gerritsen, Margot
2018-05-01
The degenerate parabolic Generalized Porous Medium Equation (GPME) poses numerical challenges due to self-sharpening and its sharp corner solutions. For these problems, we show results for two subclasses of the GPME with differentiable k(p) with respect to p, namely the Porous Medium Equation (PME) and the superslow diffusion equation. Spurious temporal oscillations, and nonphysical locking and lagging have been reported in the literature. These issues have been attributed to harmonic averaging of the coefficient k(p) for small p, and arithmetic averaging has been suggested as an alternative. We show that harmonic averaging is not solely responsible and that an improved discretization can mitigate these issues. Here, we investigate the causes of these numerical artifacts using modified equation analysis. The modified equation framework can be used for any type of discretization. We show results for the second order finite volume method. The observed problems with harmonic averaging can be traced to two leading error terms in its modified equation. This is also illustrated numerically through a Modified Harmonic Method (MHM) that can locally modify the critical terms to remove the aforementioned numerical artifacts.
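The harmonic-versus-arithmetic distinction at the heart of this abstract can be sketched for a 1-D finite-volume discretization of div(k grad p): the harmonic face coefficient vanishes whenever one adjacent cell has k = 0 (the degenerate case), blocking flux across the front, while the arithmetic face stays open. Cell values below are illustrative.

```python
import numpy as np

def face_coefficients(k, mode="harmonic"):
    """Interface coefficients between adjacent cells for a 1-D
    finite-volume discretization of div(k grad p).
    """
    kl, kr = k[:-1], k[1:]
    if mode == "harmonic":
        # Harmonic mean is zero when either side is zero.
        with np.errstate(divide="ignore", invalid="ignore"):
            kf = np.where(kl + kr > 0, 2.0 * kl * kr / (kl + kr), 0.0)
    else:
        kf = 0.5 * (kl + kr)
    return kf

k = np.array([0.0, 0.0, 1.0, 4.0])   # degenerate region on the left
harm = face_coefficients(k, "harmonic")     # [0.0, 0.0, 1.6]
arit = face_coefficients(k, "arithmetic")   # [0.0, 0.5, 2.5]
```

The face between k = 0 and k = 1 carries no flux under harmonic averaging but a coefficient of 0.5 under arithmetic averaging, which is the mechanism behind the locking/lagging behavior the paper analyzes.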
Hybrid empirical mode decomposition- ARIMA for forecasting exchange rates
Abadan, Siti Sarah; Shabri, Ani; Ismail, Shuhaida
2015-02-01
This paper studies the forecasting of monthly Malaysian Ringgit (MYR)/United States Dollar (USD) exchange rates using a hybrid of two methods: empirical mode decomposition (EMD) and the autoregressive integrated moving average (ARIMA). The MYR was pegged to the USD during the Asian financial crisis, with the exchange rate fixed at 3.800 from 2 September 1998 until 21 July 2005. Thus, the data chosen for this paper are post-July 2005, from August 2005 to July 2010. A comparative study using root mean square error (RMSE) and mean absolute error (MAE) showed that EMD-ARIMA outperformed both the single ARIMA and the random walk benchmark model.
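The two error metrics used for the comparison are standard and easy to state in code; the toy series below is illustrative, not the MYR/USD data.

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean square error: penalizes large errors quadratically."""
    a, f = np.asarray(actual), np.asarray(forecast)
    return float(np.sqrt(np.mean((a - f) ** 2)))

def mae(actual, forecast):
    """Mean absolute error: average magnitude of the errors."""
    a, f = np.asarray(actual), np.asarray(forecast)
    return float(np.mean(np.abs(a - f)))

# Toy exchange-rate series (illustrative numbers only)
actual = [3.50, 3.48, 3.52, 3.55]
forecast = [3.52, 3.47, 3.50, 3.58]
err_rmse = rmse(actual, forecast)   # ~0.0212
err_mae = mae(actual, forecast)     # ~0.02
```

A model that wins on both RMSE and MAE, as EMD-ARIMA does here, improves both typical and worst-case monthly errors.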
International Nuclear Information System (INIS)
Louie, K.; Wiegand, R.D.; Anderson, R.E.
1988-01-01
The authors have studied the de novo synthesis and subsequent turnover of major docosahexaenoate-containing molecular species in frog rod outer segment (ROS) phospholipids following intravitreal injection of [2-³H]glycerol. On selected days after injection, ROS were prepared and phospholipids extracted. Phosphatidylcholine (PC), phosphatidylethanolamine (PE), and phosphatidylserine (PS) were isolated and converted to diradylglycerols with phospholipase C. Diradylglycerols were derivatized with benzoic anhydride and resolved into diacylglycerobenzoates and ether-linked glycerobenzoates. The diacylglycerobenzoates were fractionated into molecular species by HPLC, quantitated, and counted for radioactivity. Label was incorporated into ROS phospholipids by day 1 and was followed through the eighth day. The dipolyenoic species 22:6-22:6 from PC showed a 3-5 times higher radiospecific activity than the same species from either PE or PS. The rate of decline was determined by calculating the half-life of each molecular species, which was used as a measure of its turnover. The percent distribution of radioactivity among the molecular species of PC and PE was quite different from the relative mass distribution at day 1; however, percent dpm approached the mole percent by 31 days. In PS, percent dpm and mole percent were the same at all time points. These results indicate that the molecular species composition of PC and PE in frog retinal ROS is determined by a combination of factors, including rate of synthesis, rate of degradation, and selective interconversions. In contrast, PS composition appears to be determined at the time of synthesis.
Calculating ensemble averaged descriptions of protein rigidity without sampling.
Directory of Open Access Journals (Sweden)
Luis C González
Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean-field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble-averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative for understanding the mechanical role that chemical interactions play in maintaining protein stability.
Oliveira, Ryan D.; Mousel, Michelle R.; Pabilonia, Kristy L.; Highland, Margaret A.; Taylor, J. Bret; Knowles, Donald P.
2017-01-01
Coxiella burnetii is a globally distributed zoonotic bacterial pathogen that causes abortions in ruminant livestock. In humans, an influenza-like illness results with the potential for hospitalization, chronic infection, abortion, and fatal endocarditis. Ruminant livestock, particularly small ruminants, are hypothesized to be the primary transmission source to humans. A recent Netherlands outbreak from 2007–2010 traced to dairy goats resulted in over 4,100 human cases with estimated costs of more than 300 million euros. Smaller human Q fever outbreaks of small ruminant origin have occurred in the United States, and characterizing shedding is important to understand the risk of future outbreaks. In this study, we assessed bacterial shedding and seroprevalence in 100 sheep from an Idaho location associated with a 1984 human Q fever outbreak. We observed 5% seropositivity, which was not significantly different from the national average of 2.7% for the U.S. (P>0.05). Furthermore, C. burnetii was not detected by quantitative PCR from placentas, vaginal swabs, or fecal samples. Specifically, a three-target quantitative PCR of placenta identified 0.0% shedding (exact 95% confidence interval: 0.0%-2.9%). While presence of seropositive individuals demonstrates some historical C. burnetii exposure, the placental sample confidence interval suggests 2016 shedding events were rare or absent. The location maintained the flock with little or no depopulation in 1984 and without C. burnetii vaccination during or since 1984. It is not clear how a zero-shedding rate was achieved in these sheep beyond natural immunity, and more work is required to discover and assess possible factors that may contribute towards achieving zero-shedding status. We provide the first U.S. sheep placental C. burnetii shedding update in over 60 years and demonstrate potential for C. burnetii shedding to reach undetectable levels after an outbreak event even in the absence of targeted interventions, such
Multiphase averaging of periodic soliton equations
International Nuclear Information System (INIS)
Forest, M.G.
1979-01-01
The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations
Tendon surveillance requirements - average tendon force
International Nuclear Information System (INIS)
Fulton, J.F.
1982-01-01
Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower-bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
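The normalize-then-average step described above can be sketched as follows; the force values and correction factors are purely illustrative (the actual correction factors are what the paper derives).

```python
def group_average_force(liftoff_forces, correction_factors):
    """Average the normalized lift-off forces for a tendon group.

    Each measured lift-off force is multiplied by its correction
    (normalization) factor before the sample average is formed, so
    that the result can be compared against the required minimum
    average tendon force for the group.
    """
    corrected = [f * c for f, c in zip(liftoff_forces, correction_factors)]
    return sum(corrected) / len(corrected)

# Hypothetical sample of three tendons from one group (forces in kips)
avg_force = group_average_force([1520.0, 1485.0, 1502.0],
                                [1.01, 0.99, 1.00])
```

The group passes the additional acceptance criterion when avg_force meets or exceeds the minimum average tendon force from the prestress design requirements.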
A spatially-averaged mathematical model of kidney branching morphogenesis
Zubkov, V.S.; Combes, A.N.; Short, K.M.; Lefevre, J.; Hamilton, N.A.; Smyth, I.M.; Little, M.H.; Byrne, H.M.
2015-08-01
© 2015 Published by Elsevier Ltd. Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially-averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependent ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rate of tip cells, mesenchyme cells, or component cell population exit rate). Our results predict that the developing kidney responds differently to loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.
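The model structure described above (exponential growth of two populations, threshold-triggered symmetric branching, cessation when mesenchyme is depleted) can be sketched with a forward-Euler loop. All parameter values, and the specific mesenchyme-consumption term, are hypothetical illustrations, not the paper's fitted model.

```python
def simulate_branching(tip0=100.0, mes0=10_000.0, r_tip=0.05,
                       r_mes=0.03, branch_threshold=200.0,
                       mes_stop=500.0, dt=0.1, t_max=500.0):
    """Forward-Euler sketch of a spatially-averaged branching model:
    tip and mesenchyme populations grow exponentially, tips double
    (a symmetric branching event) whenever cells-per-tip reaches a
    threshold, and the simulation stops once mesenchyme falls below
    a critical value. Parameters are hypothetical.
    """
    tips, mes, n_tips, branches = tip0, mes0, 1, 0
    for _ in range(int(t_max / dt)):
        tips += dt * r_tip * tips
        mes += dt * (r_mes * mes - 0.5 * tips)  # growth minus exit/use
        if mes < mes_stop:
            break                                # cessation of branching
        if tips / n_tips >= branch_threshold:
            n_tips *= 2
            branches += 1
    return branches

# The paper's qualitative prediction: branch number is sensitive to
# the mesenchymal growth rate.
few = simulate_branching(r_mes=0.01)
many = simulate_branching(r_mes=0.03)
```

Raising the mesenchymal growth rate delays depletion below the critical value, allowing more threshold crossings and hence more branching events, consistent with the paper's sensitivity conclusion.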
Rate-distortion in Closed-Loop LTI Systems
DEFF Research Database (Denmark)
Silva, Eduardo; Derpich, Milan; Østergaard, Jan
2013-01-01
We consider a networked LTI system subject to an average data-rate constraint in the feedback path. We provide upper bounds to the minimal source coding rate required to achieve mean square stability and a desired level of performance. In the quadratic Gaussian case, an almost complete rate...
Xie, Ting-Ting; Su, Pei-Xi; Gao, Song
2010-06-01
The measurement system of Li-8100 carbon flux and the modified assimilation chamber were used to study the photosynthetic characteristics of cotton (Gossypium hirsutum L.) canopy in the oasis edge region in the middle reach of the Heihe River Basin, mid Hexi Corridor of Gansu. At the experimental site, soil respiration and evaporation rates were significantly higher in late June than in early August, and the diurnal variation of canopy photosynthetic rate showed a single-peak type. The photosynthetic rate was significantly higher (P < 0.05) in late June than in early August. The canopy transpiration rate also presented a single-peak type, with the daily average value in late June and early August being (3.10 +/- 0.34) mmol H2O x m(-2) x s(-1) and (1.60 +/- 0.26) mmol H2O x m(-2) x s(-1), respectively, and differed significantly (P < 0.05). The water use efficiency in late June and early August was (15.67 +/- 1.77) mmol CO2 x mol(-1) H2O and (23.08 +/- 5.54) mmol CO2 x mol(-1) H2O, respectively, but the difference was not significant (P > 0.05). Both in late June and in early August, the canopy photosynthetic rate was positively correlated with air temperature, PAR, and soil moisture content, suggesting that there was no midday depression of photosynthesis in the two periods. In August, the canopy photosynthetic rate and transpiration rate decreased significantly, because of the lower soil moisture content and leaf senescence, but the canopy water use efficiency had no significant decrease.
Increasing average period lengths by switching of robust chaos maps in finite precision
Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.
2008-12-01
Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ∼ ε^(−d/2). In this work, we are concerned with increasing the average period length which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yield higher average length of periodic orbits as compared to simple sequential switching or absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps which is found to successfully pass stringent statistical tests of randomness.
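The switching idea can be illustrated with a minimal finite-precision experiment. The choice of maps (logistic and tent), the decimal rounding precision, and the repeat-detection criterion below are assumptions for illustration; the paper itself switches between Robust Chaos generalizations of the Logistic map:

```python
import random

PREC = 4  # digits of finite precision; deliberately coarse so repeats occur quickly

def logistic(x):
    return round(4.0 * x * (1.0 - x), PREC)

def tent(x):
    return round(2.0 * x if x < 0.5 else 2.0 * (1.0 - x), PREC)

def steps_to_repeat(x0, switch=False, seed=1, max_iter=50000):
    """Iterations before the finite-precision trajectory first revisits a value."""
    rng = random.Random(seed)
    seen = set()
    x = round(x0, PREC)
    for i in range(max_iter):
        if x in seen:
            return i
        seen.add(x)
        x = (rng.choice((logistic, tent)) if switch else logistic)(x)
    return max_iter

print(steps_to_repeat(0.123), steps_to_repeat(0.123, switch=True))
```

Averaging `steps_to_repeat` over many seeds and initial conditions gives a crude analogue of the average period length statistic the paper studies; single runs vary widely, so no ordering between the two modes should be read off one call.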
Investigating the reasons for Spain's falling birth rate.
Bosch, X
1998-09-12
On August 25, 1998, the Spanish National Institute of Statistics announced that Spain, which has had the most accelerated decrease in fecundity of all European countries during the last 25 years, had the lowest birth rate in Europe. Spain's average birth rate was 2.86 in 1970, 2.21 in 1980, and 1.21 in 1994. According to Eurostat, Spain's average birth rate in 1995 was 1.18, while the European Community's was 1.43. Although all the countries of the European Community have birth rates below 2.1, Spain's is 44% below this minimum rate needed to achieve generation replacement. In 1994 and 1997, in 5 northern communities, including the Basque country and Galicia, the birth rate was less than 1.0. The lowest birth rate (0.76 in 1997) was in the northern region of Asturias. Although southern autonomous regions have higher birth rates (between 1.21 and 1.44 for 1997) than northern ones, these are also decreasing (from 3.36 in 1970 to 1.29 in 1997 in Andalusia). Credit for the rapid decrease is given to improved quality of life and education, increased contraceptive usage, and social change. Employment of women has increased, and unemployed sons are remaining at home for longer periods. The most important reasons are 1) the increased number of single people and 2) the increased average age of women having their first child. The latter increase began in 1988. Most Spanish women now have their first child between the ages of 30 and 39 years. The average age was 28 years in 1975; in 1995, it was 30 years. Women from the northern autonomous regions have the highest average age at first birth (Basque women, 31.2 years in 1995). The pattern of fecundity in Spain is different from other countries in Europe. In Spain, the decrease started in the late 1960s and early 1970s. Until the 1980s, Spain had one of the highest birth rates in Europe. This was followed by a decrease in the 1990s. However, in 1997, there were 3000 more births than in 1996. The National Institute of Demography
Real-time heart rate measurement for multi-people using compressive tracking
Liu, Lingling; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Dong, Liquan; Ma, Feilong; Pang, Zongguang; Cai, Zhi; Zhang, Yachu; Hua, Peng; Yuan, Ruifeng
2017-09-01
The rise of the aging population has created a demand for inexpensive, unobtrusive, automated health care solutions. Image PhotoPlethysmoGraphy (IPPG) aids in the development of these solutions by allowing for the extraction of physiological signals from video data. However, the main deficiencies of recent IPPG methods are that they are non-automated, non-real-time and susceptible to motion artifacts (MA). In this paper, a real-time heart rate (HR) detection method for multiple subjects simultaneously was proposed and realized using the open computer vision (OpenCV) library. It consists of getting multiple subjects' facial video automatically through a Webcam, detecting the region of interest (ROI) in the video, reducing the false detection rate by our improved Adaboost algorithm, reducing the MA by our improved compressive tracking (CT) algorithm, a wavelet noise-suppression algorithm for denoising, and multi-threading for higher detection speed. For comparison, HR was measured simultaneously using a medical pulse oximetry device for every subject during all sessions. Experimental results on a data set of 30 subjects show that the maximum average absolute error of heart rate estimation is less than 8 beats per minute (BPM), and the processing of every frame has almost reached real-time: experiments with video recordings of ten subjects at a pixel resolution of 600 × 800 pixels show that the average HR detection speed for the 10 subjects was about 17 frames per second (fps).
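The final HR-extraction step common to IPPG pipelines like this one can be sketched as a spectral-peak search on the per-frame ROI mean intensity. This is a generic post-processing stage under assumed band limits (0.7-3 Hz); the paper's Adaboost ROI detection, compressive tracking and wavelet denoising stages are not reproduced here:

```python
import numpy as np

def heart_rate_bpm(signal, fps):
    """Estimate heart rate from a per-frame ROI mean-intensity trace via the
    dominant FFT peak in the 0.7-3 Hz band (42-180 BPM)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                          # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic check: a 1.2 Hz (72 BPM) pulse sampled at 30 fps for 10 s
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1.0 / 30)
sig = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
print(heart_rate_bpm(sig, fps=30))  # ≈ 72 BPM
```

With 300 samples at 30 fps the FFT bin spacing is 0.1 Hz, so the 1.2 Hz test tone falls exactly on a bin and the estimate lands on 72 BPM despite the added noise.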
Rates and statistics of work absenteeism: the case of Universidad Nacional (Costa Rica)
Directory of Open Access Journals (Sweden)
Ingrid Berrocal
2012-12-01
The objective of this research was to determine rates and statistics of work absenteeism due to medical conditions at Universidad Nacional in Heredia, Costa Rica, from November 2005 to October 2007. A descriptive-quantitative study was conducted to analyze the variables for the above period. A total of 4,345 sick leave forms were generated, which represented 24,551 work days lost. The average absenteeism rate was 2.46%, while the frequency rate showed a cyclic behavior during the months with more and less incidence of sick leave forms. The severity rate determined that the most affected units were the Graduate Studies Department and the Academic Affairs Commission, and the average sick leave duration was 5.66 days. The occupational groups with more sick leave permits were identified, and the top possible cause for work absenteeism in those groups was respiratory conditions. University authorities should reflect deeply on these findings to achieve short-term, comprehensive, efficient and productive safety and health policies for Universidad Nacional employees. This research should be a starting point for future studies that will complement and include work absenteeism in an integral way as part of Talent Management in the institution.
States with low non-fatal injury rates have high fatality rates and vice-versa.
Mendeloff, John; Burns, Rachel
2013-05-01
State-level injury rates or fatality rates are sometimes used in studies of the impact of various safety programs or other state policies. How much does the metric used affect the view of relative occupational risks among U.S. states? This paper uses a measure of severe injuries (fatalities) and of less severe injuries (non-fatal injuries with days away from work, restricted work, or job transfer-DART) to examine that issue. We looked at the correlation between the average DART injury rate (from the BLS Survey of Occupational Injuries and Illnesses) and an adjusted average fatality rate (from the BLS Census of Fatal Occupational Injuries) in the construction sector for states for 2003-2005 and for 2006-2008. The RAND Human Subjects Protection Committee determined that this study was exempt from review. The correlations between the fatal and non-fatal injury rates were between -0.30 and -0.70 for all construction and for the subsector of special trade contractors. The negative correlation was much smaller between the rate of fatal falls from heights and the rate of non-fatal falls from heights. Adjusting for differences in the industry composition of the construction sector across states had minor effects on these results. Although some have suggested that fatal and non-fatal injury rates should not necessarily be positively correlated, no one has suggested that the correlation is negative, which is what we find. We know that reported non-fatal rates are influenced by workers' compensation benefits and other factors. Fatality rates appear to be a more valid measure of risk. Efforts to explain the variations that we find should be undertaken. Copyright © 2012 Wiley Periodicals, Inc.
Grain, milling, and head rice yields as affected by nitrogen rate and bio-fertilizer application
Directory of Open Access Journals (Sweden)
Saeed FIROUZI
2015-11-01
To evaluate the effects of nitrogen rate and bio-fertilizer application on grain, milling, and head rice yields, a field experiment was conducted at the Rice Research Station of Tonekabon, Iran, in 2013. The experimental design was a factorial treatment arrangement in a randomized complete block with three replicates. Factors were three N rates (0, 75, and 150 kg ha-1) and two bio-fertilizer applications (inoculation and uninoculation with Nitroxin, a liquid bio-fertilizer containing Azospirillum spp. and Azotobacter spp. bacteria). Analysis of variance showed that rice grain yield, panicle number per m2, grain number per panicle, flag leaf area, biological yield, grain N concentration and uptake, grain protein concentration, and head rice yield were significantly affected by N rate, while bio-fertilizer application had a significant effect on rice grain yield, grain number per panicle, flag leaf area, biological yield, harvest index, grain N concentration and uptake, and grain protein concentration. Results showed that regardless of bio-fertilizer application, rice grain and biological yields were significantly increased as the N application rate increased from 0 to 75 kg ha-1, but did not significantly increase at the higher N rate (150 kg ha-1). Grain yield was significantly increased following bio-fertilizer application when averaged across N rates. Grain N concentration and uptake were significantly increased as the N rate increased up to 75 kg ha-1, but further increases in N rate had no significant effect on these traits. Bio-fertilizer application significantly increased grain N concentration and uptake when averaged across N rates. Regardless of bio-fertilizer application, head rice yield was significantly increased from 56 % to 60 % when the N rate increased from 0 to 150 kg ha-1. Therefore, this experiment illustrated that rice grain and head yields increased with increasing N rate, while bio-fertilizer application increased only rice grain
Average and local structure of α-CuI by configurational averaging
International Nuclear Information System (INIS)
Mohn, Chris E; Stoelen, Svein
2007-01-01
Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion pathways is expected due to the large variation in the local motifs.
Experimental demonstration of squeezed-state quantum averaging
DEFF Research Database (Denmark)
Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...
Fourier analysis of spherically averaged momentum densities for some gaseous molecules
International Nuclear Information System (INIS)
Tossel, J.A.; Moore, J.H.
1981-01-01
The spherically averaged autocorrelation function, B(r), of the position-space wavefunction, ψ(r), is calculated by numerical Fourier transformation from spherically averaged momentum densities, ρ(p), obtained from either theoretical wavefunctions or (e,2e) electron-impact ionization experiments. Inspection of B(r) for the π molecular orbitals of C4H6 established that autocorrelation function differences, ΔB(r), can be qualitatively related to bond lengths and numbers of bonding interactions. Differences between B(r) functions obtained from different approximate wavefunctions for a given orbital can be qualitatively understood in terms of wavefunction difference, Δψ(r), maps for these orbitals. Comparison of the B(r) function for the 1a_u orbital of C4H6 obtained from (e,2e) momentum densities with that obtained from an ab initio SCF MO wavefunction shows differences consistent with expected correlation effects. Thus, B(r) appears to be a useful quantity for relating spherically averaged momentum distributions to position-space wavefunction differences. (orig.)
20 CFR 404.220 - Average-monthly-wage method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...
Statistical approaches to forecast gamma dose rates by using measurements from the atmosphere
International Nuclear Information System (INIS)
Jeong, H.J.; Hwang, W. T.; Kim, E.H.; Han, M.H.
2008-01-01
In this paper, the results obtained by inter-comparing several statistical techniques for estimating gamma dose rates, such as an exponential moving average model, a seasonal exponential smoothing model and an artificial neural networks model, are reported. Seven years of gamma dose rate data measured in Daejeon City, Korea, were divided into two parts to develop the models and validate the effectiveness of the predictions generated by the techniques mentioned above. The artificial neural networks model shows the best forecasting capability among the three statistical models. The reason why the artificial neural networks model provides a superior prediction to the other models is likely its ability to approximate non-linear behaviour. To replace the gamma dose rates when missing data occur in an environmental monitoring system, the moving average model and the seasonal exponential smoothing model can be better choices because they are faster and easier to apply than the artificial neural networks model. These kinds of statistical approaches will be helpful for real-time control of radioactive emissions or for an environmental quality assessment. (authors)
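The simplest of the compared techniques, the exponential moving average, can be sketched in a few lines. The smoothing factor and the dose-rate values below are made-up illustrations, not the study's data:

```python
def ema_forecast(series, alpha=0.3):
    """One-step-ahead exponential moving average forecast.

    alpha (assumed here as 0.3) weights the newest observation; the
    recursion is level = alpha * x + (1 - alpha) * level.
    """
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

doses = [0.11, 0.12, 0.11, 0.13, 0.12, 0.14]  # hypothetical dose rates, uSv/h
print(ema_forecast(doses))
```

This is exactly the kind of cheap, gap-filling predictor the abstract recommends for replacing missing monitoring values; the seasonal smoothing and neural-network variants add a periodic component and non-linearity on top of the same idea.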
Design of Monitoring Tool Heartbeat Rate and Human Body Temperature Based on WEB
Directory of Open Access Journals (Sweden)
Jalinas
2018-01-01
The heart is one of the most important organs in the human body. One way to assess heart health is to measure the number of heartbeats per minute; body temperature is another indicator of health. Many heart rate and body temperature devices exist, but they can only be accessed offline. This research aims to design a detector of heart rate and human body temperature whose measurement results can be accessed via web pages anywhere and anytime. The device can be used by many users by entering different ID numbers. The design consists of input blocks: pulse sensor, DS18B20 sensor and 3x4 keypad button. Process blocks: Arduino Mega 2560 Microcontroller, Ethernet Shield, router and USB modem. And output block: 16x2 LCD and mobile phone or PC to access the web page. Based on the test results, this tool successfully measures the heart rate with an average error percentage of 2.702 % when compared with an oximeter. On the measurement of body temperature, the average error percentage is 2.18 %.
Light-cone averaging in cosmology: formalism and applications
International Nuclear Information System (INIS)
Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.
2011-01-01
We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted "geodesic light-cone" coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called "redshift drift" in a generic inhomogeneous Universe.
Maróti, Ágnes; Wraight, Colin A; Maróti, Péter
2015-02-01
The second electron transfer in the reaction center of the photosynthetic bacterium Rhodobacter sphaeroides is a two-step process in which protonation of QB(-) precedes interquinone electron transfer. The thermal activation and pH dependence of the overall rate constants of different RC variants were measured and compared in solvents of water (H2O) and heavy water (D2O). The electron transfer variants, where the electron transfer is rate-limiting (wild type and the M17DN, L210DN and H173EQ mutants), do not show a solvent isotope effect, and the significant decrease of the rate constant of the second electron transfer in these mutants is due to lowering the operational pKa of QB(-)/QBH: 4.5 (native), 3.9 (L210DN), 3.7 (M17DN) and 3.1 (H173EQ) at pH 7. On the other hand, the proton transfer variants, where the proton transfer is rate-limiting, demonstrate a solvent isotope effect of pH-independent moderate magnitude (2.11±0.26 (WT+Ni(2+)), 2.16±0.35 (WT+Cd(2+)) and 2.34±0.44 (L210DN/M17DN)) or pH-dependent large magnitude (5.7 at pH 4 (L213DN)). Upon deuteration, the free energy and the enthalpy of activation increase in all proton transfer variants by about 1 kcal/mol, and the entropy of activation becomes negligible in the L210DN/M17DN mutant. The results are interpreted as manifestations of equilibrium and kinetic solvent isotope effects, and the structural, energetic and kinetic possibility of alternate proton delivery pathways is discussed. Copyright © 2014 Elsevier B.V. All rights reserved.
Using Time Clusters for Following Users’ Shifts in Rating Practices
Directory of Open Access Journals (Sweden)
Dionisis Margaris
2017-12-01
Users that enter ratings for items follow different rating practices, in the sense that, when rating items, some users are more lenient, while others are stricter. This aspect is taken into account by the most widely used similarity metric in user-user collaborative filtering, namely the Pearson Correlation, which adjusts each individual user rating by the mean value of the ratings entered by the specific user when computing similarities. However, a user's rating practices change over time, i.e., a user could start as strict and subsequently become lenient or vice versa. In that sense, the practice of using a single mean value for adjusting users' ratings is inadequate, since it fails to follow such shifts in users' rating practices, leading to decreased rating prediction accuracy. In this work, we address this issue by using the concept of dynamic averages introduced earlier, and we extend earlier work by (1) introducing the concept of rating time clusters and (2) presenting a novel algorithm for calculating dynamic user averages and exploiting them in user-user collaborative filtering implementations. The proposed algorithm incorporates the aforementioned concepts and is able to follow more successfully shifts in users' rating practices. It has been evaluated using numerous datasets, and has been found to introduce significant gains in rating prediction accuracy, while outperforming the dynamic average computation approaches presented earlier.
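The core idea (per-cluster user averages instead of one global mean) can be sketched as follows. The gap-based clustering rule and the threshold are assumptions for illustration, not the paper's algorithm:

```python
def cluster_means(ratings, gap=30.0):
    """Dynamic user averages via time clusters (illustrative sketch).

    ratings is a list of (timestamp, value) pairs; a new time cluster starts
    whenever the gap between consecutive ratings exceeds `gap` (assumed
    threshold). Returns a list of (cluster_ratings, mean) pairs, so each
    rating can be mean-adjusted by its own cluster's average before a
    Pearson-style similarity computation.
    """
    ratings = sorted(ratings)
    clusters, current = [], [ratings[0]]
    for prev, cur in zip(ratings, ratings[1:]):
        if cur[0] - prev[0] > gap:
            clusters.append(current)
            current = []
        current.append(cur)
    clusters.append(current)
    return [(c, sum(v for _, v in c) / len(c)) for c in clusters]

# A strict-then-lenient user: early low ratings, later high ones
hist = [(1, 2.0), (3, 2.5), (5, 2.0), (200, 4.5), (205, 5.0)]
for c, m in cluster_means(hist):
    print(len(c), m)
```

A single global mean for this user (3.2) would mislabel every rating as moderately lenient or strict; the two cluster means (about 2.17 and 4.75) follow the shift in rating practice.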
On various validity criteria for the configuration average in collisional-radiative codes
Energy Technology Data Exchange (ETDEWEB)
Poirier, M [Commissariat à l'Énergie Atomique, Service 'Photons, Atomes et Molécules', Centre d'Études de Saclay, F-91191 Gif-sur-Yvette Cedex (France)]
2008-01-28
The characterization of out-of-local-thermal-equilibrium plasmas requires the use of collisional-radiative kinetic equations. This leads to the solution of large linear systems, for which statistical treatments such as configuration average may bring considerable simplification. In order to check the validity of this procedure, a criterion based on the comparison between a partial-rate system and the Saha-Boltzmann solution is discussed in detail here. Several forms of this criterion are discussed. The interest of these variants is that they involve each type of relevant transition (collisional or radiative), which allows one to check separately the influence of each of these processes on the configuration-average validity. The method is illustrated by a charge-distribution analysis in carbon and neon plasmas. Finally, it is demonstrated that when the energy dispersion of every populated configuration is smaller than the electron thermal energy, the proposed criterion is fulfilled in each of its forms.
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
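The entropy lower bound stated above can be computed directly. For a uniform distribution over 8 outcomes and binary (k = 2) attributes, the bound equals 3 queries:

```python
import math

def entropy_lower_bound(probs, k=2):
    """Entropy lower bound on the minimum average depth of a decision tree
    for a diagnostic problem over a k-valued information system:
    H(p) / log2(k), as stated in the chapter."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(k)

# Uniform distribution over 8 outcomes, binary attributes
print(entropy_lower_bound([1 / 8] * 8))  # → 3.0
```

For such a complete-attribute problem (here, an optimal prefix code over 8 equiprobable symbols) the chapter notes the true minimum average depth exceeds this bound by at most one.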
Lateral dispersion coefficients as functions of averaging time
International Nuclear Information System (INIS)
Sheih, C.M.
1980-01-01
Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times < 250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not intended to derive a set of specific criteria but to demonstrate the need to discriminate between the various processes in studies of plume dispersion.
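The Turner power-law formula referred to above has the generic form C_t = C_ref (t_ref / t)^p. The exponent value used below is an assumed typical one from dispersion practice, not taken from this paper:

```python
def turner_concentration(c_ref, t_minutes, t_ref=15.0, p=0.17):
    """Adjust a pollutant concentration from a reference averaging time to
    another averaging time: C_t = C_ref * (t_ref / t)^p.
    The exponent p = 0.17 is an assumed typical value; it varies by source."""
    return c_ref * (t_ref / t_minutes) ** p

# Scale a 15-min average concentration of 100 (arbitrary units) to a 60-min average
print(turner_concentration(100.0, 60.0))
```

Longer averaging times smooth out concentration peaks, so the 60-min average comes out below the 15-min value, consistent with the direction of the correction.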
Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H A; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T M; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid
2016-01-01
Background: Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival
Energy Technology Data Exchange (ETDEWEB)
Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)
2011-04-07
The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak (Ū_P), the average (Ū), the effective (U_eff) or the maximum peak (U_P) tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average (Ū) or the average peak (Ū_P) voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak (k_PPV,kVp) and the average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated (according to the proposed method) PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
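The conversion chain the record describes (reading → calibration coefficient → conversion factor → PPV) is a simple product. The numeric values below are purely illustrative; in practice the calibration coefficient comes from the meter's certificate and the conversion factor from the paper's regression at the measured tube voltage and ripple:

```python
def ppv_from_reading(reading_kv, calib_coeff, k_conv):
    """Convert a kV-meter reading (average or average-peak voltage) to the
    practical peak voltage: PPV = reading * N * k, where N is the meter's
    calibration coefficient and k is the applicable conversion factor
    (k_PPV,Uav or k_PPV,kVp). All numbers below are illustrative only."""
    return reading_kv * calib_coeff * k_conv

# Hypothetical: 80 kV average-peak reading, N = 1.01, k_PPV,kVp = 0.98
print(ppv_from_reading(80.0, 1.01, 0.98))
```

The record's point is that once k is tabulated against voltage and ripple, any meter, whatever quantity it natively measures, can report an IEC-compliant PPV this way.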
Averaging operations on matrices
Indian Academy of Sciences (India)
2014-07-03
Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices.
Majak, W; Hall, J W; Rode, L M; Kalnin, C M
1986-06-01
Ruminal chlorophyll and rates of passage of two water-soluble markers were simultaneously determined in cattle with different susceptibilities to alfalfa bloat. The markers showed a slower rate of passage from the rumens of the more susceptible cattle, where the average half-lives for cobalt-ethylenediaminetetraacetic acid and chromium-ethylenediaminetetraacetic acid were 12 to 17 h. The average half-life of the markers was 8 h in the rumens of the less susceptible animals. In agreement, chloroplast particles in the liquid phase of rumen contents showed greater accumulation in animals susceptible to bloat, but many more observations were required to detect differences in chlorophyll among animals. This was partly due to the inhomogeneous dispersion of chloroplast fragments in the reticulorumen compared with the uniform distribution of the inert markers. Differences in rumen volumes (estimated from the quantity of marker administered and its initial concentration) were detected among animals, but these did not show a relationship to bloat susceptibility. In vitro studies indicated that alfalfa chloroplast particles were not readily degraded by rumen microorganisms. Our results support earlier conclusions of slower rates of salivation for cattle that bloat compared with those that do not.
Multi-Decadal Averages of Basal Melt for Ross Ice Shelf, Antarctica Using Airborne Observations
Das, I.; Bell, R. E.; Tinto, K. J.; Frearson, N.; Kingslake, J.; Padman, L.; Siddoway, C. S.; Fricker, H. A.
2017-12-01
Changes in ice shelf mass balance are key to the long term stability of the Antarctic Ice Sheet. Although the most extensive ice shelf mass loss currently is occurring in the Amundsen Sea sector of West Antarctica, many other ice shelves experience changes in thickness on time scales from annual to ice age cycles. Here, we focus on the Ross Ice Shelf. An 18-year record (1994-2012) of satellite radar altimetry shows substantial variability in Ross Ice Shelf height on interannual time scales, complicating detection of potential long-term climate-change signals in the mass budget of this ice shelf. Variability of radar signal penetration into the ice-shelf surface snow and firn layers further complicates assessment of mass changes. We investigate Ross Ice Shelf mass balance using aerogeophysical data from the ROSETTA-Ice surveys using IcePod. We use two ice-penetrating radars; a 2 GHz unit that images fine-structure in the upper 400 m of the ice surface and a 360 MHz radar to identify the ice shelf base. We have identified internal layers that are continuous along flow from the grounding line to the ice shelf front. Based on layer continuity, we conclude that these layers must be the horizons between the continental ice of the outlet glaciers and snow accumulation once the ice is afloat. We use the Lagrangian change in thickness of these layers, after correcting for strain rates derived using modern day InSAR velocities, to estimate multidecadal averaged basal melt rates. This method provides a novel way to quantify basal melt, avoiding the confounding impacts of spatial and short-timescale variability in surface accumulation and firn densification processes. Our estimates show elevated basal melt rates (> -1m/yr) around Byrd and Mullock glaciers within 100 km from the ice shelf front. We also compare modern InSAR velocity derived strain rates with estimates from the comprehensive ground-based RIGGS observations during 1973-1978 to estimate the potential magnitude of
20 CFR 404.221 - Computing your average monthly wage.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...
Estimating the exceedance probability of rain rate by logistic regression
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
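A minimal sketch of the idea in this abstract (not the authors' code): fit a logistic model for the conditional probability that the rain rate at a pixel exceeds a fixed threshold given a covariate, then estimate the fractional rainy area as the mean predicted exceedance probability. The covariate, data-generating model, and fitting routine below are assumptions for illustration; the paper uses partial likelihood to handle dependent data, which this toy gradient-descent fit does not.

```python
# Toy logistic-regression sketch for exceedance-probability estimation.
import math, random

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit P(y=1|x) = sigmoid(w*x + b) by batch gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

random.seed(0)
# Hypothetical covariate (e.g. a radiometer channel value per pixel) and
# labels: 1 if the pixel's rain rate exceeds the threshold, else 0.
xs = [random.gauss(0, 1) for _ in range(500)]
ys = [1 if x + random.gauss(0, 0.5) > 0 else 0 for x in xs]
w, b = fit_logistic(xs, ys)
# Fractional rainy area = area-mean of the predicted exceedance probability.
frac_rainy = sum(1.0 / (1.0 + math.exp(-(w * x + b))) for x in xs) / len(xs)
print(w, frac_rainy)
```

The area-averaged rain rate can then be estimated from `frac_rainy` via the threshold-fraction correlation the abstract cites.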
Directory of Open Access Journals (Sweden)
Rouhollah Dehghani
2013-09-01
Background: Radon is a radioactive gas and the second leading cause of death due to lung cancer after smoking. Ramsar is known for having the highest levels of natural background radiation on earth. Materials and Methods: In this research study, 50 stations in high-radioactivity areas of Ramsar were selected during the warm season of the year. Gamma dose and radon exhalation rate were then measured. Results: The gamma dose and radon exhalation rate were in the ranges of 51-7100 nSv/hr and 9-15370 mBq/m²s, respectively. Conclusion: Compared to the worldwide average of 16 mBq/m²s, the estimated average radon exhalation rate in the study area is very high.
Analysis of spatial dose rates during dental panoramic radiography
Energy Technology Data Exchange (ETDEWEB)
Ko, Jong Kyung [Dept. of Radiation Safety Management Commission, Daegu Health College, Daegu (Korea, Republic of); Park, Myeong Hwan [Dept. of Radiologic Technology, Daegu Health College, Daegu (Korea, Republic of); Kim, Yong Min [Dept. of Radiological Science, Catholic University of Daegu, Daegu (Korea, Republic of)
2016-12-15
Dental panoramic radiography, which usually involves low-level X-rays, is subject to the Nuclear Safety Act when the equipment is installed for educational purposes. This paper measures the radiation dose and spatial dose rate during use, and thereby aims to verify the effectiveness of radiation safety equipment and provide basic information for the radiation safety of radiation workers and students. Glass dosimeters (GD-352M) were attached to a direct exposure area (the teeth) and indirect exposure areas (the eye lens and the thyroid) on a dental radiography head phantom, and the doses at these areas were measured. The horizontal plane was then divided into 45° intervals, giving seven directions, each with measurements at distances of 30, 60, 90 and 120 cm. The results show that the spatial dose rate is highest at 30 cm and declines as the distance increases. At 30 cm, the spatial dose rate near the starting position of the rotation is 3,840 μSv/h, about four times the lowest value of 778 μSv/h. At the 60 cm distance, where radiation workers can be located, the spatial dose rate was 408 μSv/h on average. From a conservative point of view, needless exposure for educational purposes can be avoided; however, radiation safety education is still necessary in case an unintended exposure occurs within a radiation controlled area. Under the current Medical Service Act, medical institutions are not obliged to install such equipment, whereas the Nuclear Safety Act requires equipment such as interlocks for educational installations; considering the low spatial dose rate of the educational dental panoramic radiography room, this appears to be excessive regulation.
Student Ratings of Instruction in Turkish Higher Education
Directory of Open Access Journals (Sweden)
Nehir Sert
2013-05-01
End-of-term student evaluations have a twofold purpose: to provide information for administrators to make personnel decisions, and to help instructors improve the quality of their teaching. The aim of this study is to investigate the 'utility' of Student Ratings of Instruction (SRI). To that end, the concerns of administrators, instructors and students regarding the use of the SRI in formative and summative evaluations are examined. This study also investigates possible variables associated with the SRI: (1) what are the differences in ratings among below-average, average and above-average students? and (2) what is the correlation between students' grades and their ratings? The participants consisted of 5 administrators, 17 instructors and 292 students from the faculty of education of a foundation university in Ankara. A triangulation of quantitative and qualitative methods was adopted. In the first phase, causal-comparative and correlational research methods were implemented. In the second phase, qualitative data were collected through semi-structured interviews. The results revealed no significant difference in the SRI among the below-average, average and above-average students. The correlation between student grades and the SRI was significant but low. The SRI were reportedly utilised to make teaching more effective and to make decisions only when employing part-time personnel; permanent personnel were not affected by the SRI. Suggestions are put forward to improve the usefulness of the SRI.
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Watson, Jane; Chick, Helen
2012-01-01
This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…
Free Recall Shows Similar Reactivation Behavior as Recognition & Cued Recall
Tarnow, Eugen
2016-01-01
I find that the total retrieval time in word free recall increases linearly with the total number of items recalled. Measured slopes, the time to retrieve an additional item, vary from 1.4 to 4.5 seconds per item depending upon presentation rate, subject age and whether there is a delay after list presentation or not. These times to retrieve an additional item obey a second linear relationship as a function of the recall probability averaged over the experiment, explicitly independent of subject...
The choice of food consumption rates for radiation dose assessments
International Nuclear Information System (INIS)
Simmonds, J.R.; Webb, G.A.M.
1981-01-01
The practical problem in estimating radiation doses due to radioactive contamination of food is the choice of the appropriate food intakes. To ensure compliance or to compare with dose equivalent limits, higher than average intake rates appropriate to critical groups should be used. However for realistic estimates of health detriment in the whole exposed population, average intake rates are more appropriate. (U.K.)
Sex differences in obesity associated with total fertility rate.
Directory of Open Access Journals (Sweden)
Robert Brooks
The identification of biological and ecological factors that contribute to obesity may help in combating the spreading obesity crisis. Sex differences in obesity rates are particularly poorly understood. Here we show that the strong female bias in obesity in many countries is associated with high total fertility rate, which is well known to be correlated with factors such as low average income, infant mortality and female education. We also document effects of reduced access to contraception and increased inequality of income among households on obesity rates. These results are consistent with studies that implicate reproduction as a risk factor for obesity in women, and that suggest the effects of reproduction interact with socioeconomic and educational factors. We discuss our results in the light of recent research in dietary ecology and the suggestion that insulin resistance during pregnancy is a historic adaptation to protect the developing foetus during famine. Increased access to contraception and education in countries with high total fertility rates might have the additional benefit of reducing rates of obesity in women.
On averaging the Kubo-Hall conductivity of magnetic Bloch bands leading to Chern numbers
International Nuclear Information System (INIS)
Riess, J.
1997-01-01
The authors re-examine the topological approach to the integer quantum Hall effect in its original form, where an average of the Kubo-Hall conductivity of a magnetic Bloch band is considered. For the precise definition of this average it is crucial to make a sharp distinction between the discrete Bloch wave numbers k_1, k_2 and the two continuous integration parameters α_1, α_2. The average over the parameter domain 0 ≤ α_j ≤ 2π/n_j (j = 1, 2, with n_j the number of unit cells in the j-direction) has to be performed for fixed k_1, k_2. They show how this can be transformed into a single integral over the continuous magnetic Brillouin zone, keeping k_1, k_2 fixed. This average prescription for the Hall conductivity of a magnetic Bloch band is exactly the same as the one used for a many-body system in the presence of disorder.
When good = better than average
Directory of Open Access Journals (Sweden)
Don A. Moore
2007-10-01
People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations: people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage, during which people attempt to disambiguate subjective response scales in order to choose an answer; consistent with this, conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.
Survey of environmental radiation dose rates in Kyoto and Shiga prefectures, Japan
International Nuclear Information System (INIS)
Minamia, Kazuyuki; Shimo, Michikuni; Oka, Mitsuaki; Ejiri, Kazutaka; Sugino, Masato; Minato, Susumu; Hosoda, Masahiro; Yamada, Junya; Fukushi, Masahiro
2008-01-01
We have measured environmental radiation dose rates in several prefectures in central Japan, such as Aichi, Gifu and Mie. Recently, we measured the environmental radiation dose rates in Kyoto and Shiga Prefectures, also located in central Japan, with a car-borne survey system. At the time of measurement, Kyoto Prefecture (area: 4,613 km²) had a total of 36 districts, and Shiga Prefecture (area: 3,387 km²) a total of 26. Terrestrial gamma-ray dose rates and secondary cosmic-ray dose rates were measured with a 2 in. dia. × 2 in. NaI(Tl) scintillation counter and a handy-type altimeter (GPS eTrex Legend by Garmin), respectively. The following factors were taken into consideration: the shielding effect of the car body, the effect of the road pavement, radon progeny borne by precipitation, and increases in tunnels and near walls. Terrestrial gamma-ray dose rates in Kyoto and Shiga Prefectures were estimated to be 51.7 ± 6.0 nGy/h (district average: 52.4 ± 4.7 nGy/h) and 52.2 ± 10.5 nGy/h (district average: 51.9 ± 8.1 nGy/h), respectively. Secondary cosmic-ray dose rates in Kyoto and Shiga Prefectures were 30.0 ± 0.6 nGy/h (district average: 29.9 ± 0.3 nGy/h) and 30.1 ± 0.3 nGy/h (district average: 30.0 ± 0.2 nGy/h), respectively. The total environmental radiation dose rates (terrestrial gamma ray plus secondary cosmic ray) in Kyoto and Shiga Prefectures were 81.7 ± 6.2 nGy/h (district average: 82.3 ± 4.8 nGy/h) and 82.3 ± 10.6 nGy/h (district average: 82.0 ± 8.1 nGy/h), respectively. We confirmed that the environmental radiation dose rates in Kyoto and Shiga Prefectures depended mainly on the variation of the terrestrial gamma-ray dose rates, since the secondary cosmic-ray dose rates changed little. Accordingly, dose-rate maps of the terrestrial gamma rays as well as maps of the total environmental radiation dose rate were drawn. (author)
DNA fork displacement rates in human cells
International Nuclear Information System (INIS)
Kapp, L.N.; Painter, R.B.
1981-01-01
DNA fork displacement rates were measured in 20 human cell lines by a bromodeoxyuridine-313 nm photolysis technique. Cell lines included representatives of normal diploid, Fanconi's anemia, ataxia telangiectasia, xeroderma pigmentosum, trisomy-21 and several transformed lines. The average value for all the cell lines was 0.53 ± 0.08 μm/min. The average value for individual cell lines, however, displayed a 30% variation. Less than 10% of the variation in the fork displacement rate appears to be due to the experimental technique; the remainder is probably due to true variation among the cell types and to culture conditions. (Auth.)
Lyapunov Exponent and Out-of-Time-Ordered Correlator's Growth Rate in a Chaotic System.
Rozenbaum, Efim B; Ganeshan, Sriram; Galitski, Victor
2017-02-24
It was proposed recently that the out-of-time-ordered four-point correlator (OTOC) may serve as a useful characteristic of quantum-chaotic behavior because, in the semiclassical limit ℏ→0, its rate of exponential growth resembles the classical Lyapunov exponent. Here, we calculate the four-point correlator C(t) for the classical and quantum kicked rotor, a textbook driven chaotic system, and compare its growth rate at initial times with the standard definition of the classical Lyapunov exponent. Using both quantum and classical arguments, we show that the OTOC's growth rate and the Lyapunov exponent are, in general, distinct quantities, corresponding to the logarithm of the phase-space-averaged divergence rate of classical trajectories and to the phase-space average of the logarithm, respectively. The difference is more pronounced in the regime of low kicking strength K, where no classical chaos exists globally. In this case, the Lyapunov exponent quickly decreases as K→0, while the OTOC's growth rate may decrease much more slowly, showing a higher sensitivity to small chaotic islands in the phase space. We also show that the quantum correlator as a function of time exhibits a clear singularity at the Ehrenfest time t_E: t^{-1} ln C(t) transitions from a time-independent value at t < t_E to a monotonic decrease with time at t > t_E. We note that the underlying physics here is the same as in the theory of weak (dynamical) localization [Aleiner and Larkin, Phys. Rev. B 54, 14423 (1996); Tian, Kamenev, and Larkin, Phys. Rev. Lett. 93, 124101 (2004)] and is due to a delay in the onset of quantum interference effects, which occur sharply at a time of the order of the Ehrenfest time.
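The central distinction in this abstract, the log of a phase-space average versus the phase-space average of the log, can be illustrated numerically. The divergence-rate distribution below is an invented toy (mostly regular phase space plus a small fast "chaotic island"), not data from the kicked rotor; it shows, via Jensen's inequality, why the OTOC-like growth rate can greatly exceed the Lyapunov exponent.

```python
# Toy illustration: log of the average vs. average of the log.
import math

# Assumed local divergence rates: 90% regular (rate 0), 10% chaotic (rate 2).
rates = [0.0] * 90 + [2.0] * 10
t = 5.0

# Lyapunov-exponent analogue: phase-space average of the logarithmic rate.
lyapunov = sum(rates) / len(rates)

# OTOC-growth analogue: (1/t) * log of the phase-space-averaged divergence
# e^{lambda * t}. The few fast trajectories dominate this average.
otoc_rate = math.log(sum(math.exp(l * t) for l in rates) / len(rates)) / t

print(lyapunov, otoc_rate)
```

Here `lyapunov` is 0.2 while `otoc_rate` is far larger, mirroring the abstract's point that the OTOC's growth rate is more sensitive to small chaotic islands.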
Average Intelligence Predicts Atheism Rates across 137 Nations
Lynn, Richard; Harvey, John; Nyborg, Helmuth
2009-01-01
Evidence is reviewed pointing to a negative relationship between intelligence and religious belief in the United States and Europe. It is shown that intelligence measured as psychometric "g" is negatively related to religious belief. We also examine whether this negative relationship between intelligence and religious belief is present between…
Energy Technology Data Exchange (ETDEWEB)
Reimold, M.; Mueller-Schauenburg, W.; Dohmen, B.M.; Bares, R. [Department of Nuclear Medicine, University of Tuebingen, Otfried-Mueller-Strasse 14, 72076, Tuebingen (Germany); Becker, G.A. [Nuclear Medicine, University of Leipzig, Leipzig (Germany); Reischl, G. [Radiopharmacy, University of Tuebingen, Tuebingen (Germany)
2004-04-01
Due to the stochastic nature of radioactive decay, any measurement of radioactivity concentration requires spatial averaging. In pharmacokinetic analysis of time-activity curves (TACs), such averaging over heterogeneous tissues may introduce a systematic error (heterogeneity error) but may also improve the accuracy and precision of parameter estimation. In addition to spatial averaging (inevitable due to limited scanner resolution and intended in ROI analysis), interindividual averaging may theoretically be beneficial, too. The aim of this study was to investigate the effect of such averaging on the binding potential (BP) calculated with Logan's non-invasive graphical analysis and the ''simplified reference tissue method'' (SRTM) proposed by Lammertsma and Hume, on the basis of simulated and measured positron emission tomography data: [11C]d-threo-methylphenidate (dMP) and [11C]raclopride (RAC) PET. dMP was not quantified with SRTM since the low k_2 (washout rate constant from the first tissue compartment) introduced a high noise sensitivity. Even for considerably different shapes of TAC (dMP PET in parkinsonian patients and healthy controls, [11C]raclopride in patients with and without haloperidol medication) and a high variance in the rate constants (e.g. simulated standard deviation of K_1 = 25%), the BP obtained from the average TAC was close to the mean BP (<5%). However, unfavourably distributed parameters, especially a correlated large variance in two or more parameters, may lead to larger errors. In Monte Carlo simulations, interindividual averaging before quantification reduced the variance of the SRTM (beyond a critical signal-to-noise ratio) and the bias in Logan's method. Interindividual averaging may further increase accuracy when there is an error term in the reference tissue assumption E = DV_2 - DV' (DV_2 = distribution volume of the first tissue compartment, DV'
Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin
2014-01-01
Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow or fast moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow or fast moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average-speed people relative to slow or fast moving people. PMID:24421882
Approximative Krieger-Nelkin orientation averaging and anisotropy of water molecules vibrations
International Nuclear Information System (INIS)
Markovic, M.I.
1974-01-01
A quantum-mechanical treatment of water molecule dynamics should be taken into account for precise theoretical calculations of differential neutron scattering cross sections. Krieger and Nelkin proposed an approximate method for averaging over molecular orientations with respect to the directions of the incoming and scattered neutron. This paper shows that their approach can be successfully applied for a general form of the vibrational anisotropy of the water molecule.
SPATIAL DISTRIBUTION OF THE AVERAGE RUNOFF IN THE IZA AND VIȘEU WATERSHEDS
Directory of Open Access Journals (Sweden)
HORVÁTH CS.
2015-03-01
The average runoff is the main parameter with which one can best evaluate an area's water resources, and it is an important characteristic in all river runoff research. In this paper we adopt a GIS methodology for assessing the spatial evolution of the average runoff; using validity curves we identify three validity areas in which the runoff changes differently with altitude. The three curves were charted using the average runoff values of 16 hydrometric stations in the area, eight in the Vișeu and eight in the Iza river catchment. Identifying the validity areas of the obtained correlation curves (between specific average runoff and catchment mean altitude) allowed the assessment of potential runoff at catchment level and over altitudinal intervals. By integrating the curve functions into GIS we created an average runoff map for the area, from which one can easily extract runoff data using GIS spatial analyst functions. The study shows that of the three areas the highest runoff corresponds to the third zone, but because of its small area the associated water volume is minor. It is also shown that, with the created runoff map, accurate runoff values can be computed relatively quickly for areas without hydrologic control.
Evaluation of kerma rate in radioactive waste disposal
International Nuclear Information System (INIS)
Rosa, Rodolfo O.; Silva, Joao C.P.; Santos, Joao R. dos
2014-01-01
This study aims to assess the evolution of air-kerma rate levels due to the increased collection and storage of radioactive waste in the new (expanded) building of the radioactive waste disposal facility (RWD) of the Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Brazil. This assessment is carried out every six months at IEN with lithium fluoride thermoluminescent dosimeters, LiF:Mg,Cu,P (TLD-100H). The average values of kerma rate for the period 2008-2012 are presented. In this context, the methodology used for the selection of the detectors used in the dosimeters is described: the detectors were chosen using criteria of batch homogeneity, standardization factor and coefficient of variation (CV%). The monitoring points and the exposure times of the detectors were chosen considering various factors, including the occupancy rate and positions inside and outside the RWD. These evaluations showed that the contribution of the new waste disposal building to the kerma rate at IEN is insignificant; that is, the presence of the RWD does not increase the environmental kerma rate in the region around the installation.
The calculation of average error probability in a digital fibre optical communication system
Rugemalira, R. A. M.
1980-03-01
This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity
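The Chernoff-bound technique named in this abstract can be shown in a toy form. The Gaussian decision statistic below is an assumed stand-in, not the paper's shot-noise receiver model; it illustrates how the bound min_s e^{-sa} E[e^{sX}] overestimates (bounds) the exact tail probability that sets the error rate.

```python
# Toy Chernoff bound on a Gaussian tail (assumed decision statistic,
# not the paper's receiver model).
import math

mu, sigma, a = 0.0, 1.0, 3.0   # mean, std. dev., decision threshold

# For X ~ N(mu, sigma^2) the moment generating function is
# E[e^{sX}] = exp(mu*s + sigma^2 s^2 / 2); minimizing e^{-sa} E[e^{sX}]
# over s gives s* = (a - mu) / sigma^2.
s = (a - mu) / sigma**2
chernoff = math.exp(-s * a + mu * s + 0.5 * sigma**2 * s**2)

# Exact tail probability P(X >= a) from the complementary error function.
exact = 0.5 * math.erfc((a - mu) / (sigma * math.sqrt(2.0)))

print(exact, chernoff)
```

The bound is loose but cheap, which is why the abstract compares it against the Gram-Charlier expansion and the characteristic-function technique, the latter being the most sensitive of the three.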
Krekelberg, William P; Siderius, Daniel W; Shen, Vincent K; Truskett, Thomas M; Errington, Jeffrey R
2017-12-12
Using molecular simulations, we investigate the relationship between the pore-averaged and position-dependent self-diffusivity of a fluid adsorbed in a strongly attractive pore as a function of loading. Previous work (Krekelberg, W. P.; Siderius, D. W.; Shen, V. K.; Truskett, T. M.; Errington, J. R. Connection between thermodynamics and dynamics of simple fluids in highly attractive pores. Langmuir 2013, 29, 14527-14535, doi: 10.1021/la4037327) established that pore-averaged self-diffusivity in the multilayer adsorption regime, where the fluid exhibits a dense film at the pore surface and a lower density interior pore region, is nearly constant as a function of loading. Here we show that this puzzling behavior can be understood in terms of how loading affects the fraction of particles that reside in the film and interior pore regions as well as their distinct dynamics. Specifically, the insensitivity of pore-averaged diffusivity to loading arises from the approximate cancellation of two factors: an increase in the fraction of particles in the higher diffusivity interior pore region with loading and a corresponding decrease in the particle diffusivity in that region. We also find that the position-dependent self-diffusivities scale with the position-dependent density. We present a model for predicting the pore-average self-diffusivity based on the position-dependent self-diffusivity, which captures the unusual characteristics of pore-averaged self-diffusivity in strongly attractive pores over several orders of magnitude.
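The cancellation mechanism described in this abstract can be made concrete with a two-region weighted average. The numbers below are invented for illustration, not simulation results: as loading increases, the fraction of particles in the fast interior region rises while the diffusivity in that region falls, leaving the pore-averaged self-diffusivity nearly unchanged.

```python
# Illustrative two-region model of pore-averaged self-diffusivity
# (assumed numbers, not the paper's simulation data).
def pore_average(f_interior, d_interior, d_film=0.1):
    """Pore-averaged diffusivity as a population-weighted mean of the
    slow surface film and the fast interior region."""
    return f_interior * d_interior + (1.0 - f_interior) * d_film

low_loading  = pore_average(f_interior=0.2, d_interior=1.0)
high_loading = pore_average(f_interior=0.4, d_interior=0.55)

print(low_loading, high_loading)
```

Both loadings give essentially the same pore average, even though the interior fraction doubled, mimicking the insensitivity to loading that the abstract explains.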
No-shows, drop-outs and completers in psychotherapeutic treatment
DEFF Research Database (Denmark)
Fenger, Morten Munthe; Mortensen, Erik Lykke; Poulsen, Stig Bernt
2011-01-01
A primary challenge in mental health services is a high rate of non-attendance (i.e. no-show and drop-out) for patients referred to treatment for psychiatric disorders.
47 CFR 80.759 - Average terrain elevation.
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...
Rate of ice accumulation during ice storms
Energy Technology Data Exchange (ETDEWEB)
Feknous, N. [SNC-Lavalin, Montreal, PQ (Canada); Chouinard, L. [McGill Univ., Montreal, PQ (Canada); Sabourin, G. [Hydro-Quebec, Montreal, PQ (Canada)
2005-07-01
The rate of glaze ice accumulation is the result of a complex process dependent on numerous meteorological and physical factors. The aim of this paper was to estimate the distribution of glaze ice accumulation rates on conductors in southern Quebec for use in the design of mechanical and electrical de-icing devices. The analysis was based on direct observations of ice accumulation collected on passive ice meters. The historical database of Hydro-Quebec, which contains observations at over 140 stations over a period of 25 years, was used to compute accumulation rates. Data were processed so that each glaze ice event was numbered in chronological sequence. Each event consisted of the time series of ice accumulations on each of the 8 cylinders of the ice meters, as well as on 5 of its surfaces. Observed rates were converted to represent the average ice on a 30 mm diameter conductor at 30 m above ground with a span of 300 m. Observations were corrected to account for the water content of the glaze ice as evidenced by the presence of icicles. Results indicated that despite significant spatial variations in the expected severity of ice storms as a function of location, the distribution function for rates of accumulation was fairly similar and could be assumed to be independent of location. It was concluded that observations from several sites could be combined in order to obtain better estimates of the distribution of hourly rates of ice accumulation. However, the rates were highly variable. For de-icing strategies, it was suggested that average accumulation rates over 12-hour periods were preferable, and that analyses should be performed for other time intervals to account for the variability in ice accumulation rates over time. In addition, maximum hourly accumulation rates did not appear to be highly correlated with average wind speed. 3 refs., 2 tabs., 10 figs.
Shah, Ajit
2009-07-01
Suicides may be misclassified as accidental deaths in countries with strict legal definitions of suicide, where cultural and religious factors lead to poor registration of suicide and stigma attached to suicide. The concordance between four different definitions of suicide was evaluated by examining the relationship between pure suicide and accidental death rates, gender differences, age-associated trends and potential distal risk and protective factors, through secondary analysis of the latest World Health Organisation data on elderly death rates. The four definitions of suicide were: (i) one-year pure suicide rates; (ii) one-year combined suicide rates (pure suicide rates combined with accidental death rates); (iii) five-year average pure suicide rates; and (iv) five-year average combined suicide rates (pure suicide rates combined with accidental death rates). The predicted negative correlation between pure suicide and accidental death rates was not observed. Gender differences were similar for all four definitions of suicide. There was a highly significant concordance in the findings of age-associated trends between one-year pure and combined suicide rates, between one-year and five-year average pure suicide rates, and between five-year average pure and combined suicide rates. There was poor concordance between pure and combined suicide rates for both one-year and five-year average data for the 14 potential distal risk and protective factors, but the concordance between one-year and five-year average pure suicide rates was highly significant. The use of one-year pure suicide rates in cross-national ecological studies examining gender differences, age-associated trends and potential distal risk and protective factors is likely to be practical, pragmatic and resource-efficient.
Hartmann, Manuela; Grob, Carolina; Scanlan, David J; Martin, Adrian P; Burkill, Peter H; Zubkov, Mikhail V
2011-11-01
The smallest phototrophic protists (protists meet their inorganic nutrient requirements, we compared the phosphate uptake rates of plastidic and aplastidic protists in the phosphate-depleted subtropical and tropical North Atlantic (4-29°N) using a combination of radiotracers and flow cytometric sorting on two Atlantic Meridional Transect cruises. Plastidic protists were divided into two groups according to their size (protists showed higher phosphate uptake rates per cell than the aplastidic protists. Although the phosphate uptake rates of protist cells were on average seven times (Pprotists were one fourth to one twentieth of an average bacterioplankton cell. The unsustainably low biomass-specific phosphate uptake by both plastidic and aplastidic protists suggests the existence of a common alternative means of phosphorus acquisition - predation on phosphorus-rich bacterioplankton cells. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
International Nuclear Information System (INIS)
Lee, Kyoung-Soo; Glikman, Eilat; Dey, Arjun; Reddy, Naveen; Jannuzi, Buell T.; Brown, Michael J. I.; Gonzalez, Anthony H.; Cooper, Michael C.; Fan Xiaohui; Bian Fuyan; Stern, Daniel; Brodwin, Mark; Cooray, Asantha
2011-01-01
We investigate the average physical properties and star formation histories (SFHs) of the most UV-luminous star-forming galaxies at z ∼ 3.7. Our results are based on the average spectral energy distributions (SEDs), constructed from stacked optical-to-infrared photometry, of a sample of the 1913 most UV-luminous star-forming galaxies found in 5.3 deg² of the NOAO Deep Wide-Field Survey. We find that the shape of the average SED in the rest optical and infrared is fairly constant with UV luminosity, i.e., more UV-luminous galaxies are, on average, also more luminous at longer wavelengths. In the rest UV, however, the spectral slope β (≡ d log F_λ / d log λ; measured at 0.13 μm rest UV and thus star formation rates (SFRs) scale closely with stellar mass such that more UV-luminous galaxies are also more massive, (2) the median ages indicate that the stellar populations are relatively young (200-400 Myr) and show little correlation with UV luminosity, and (3) more UV-luminous galaxies are dustier than their less-luminous counterparts, such that L ∼ 4-5L* galaxies are extincted up to A(1600) = 2 mag while L ∼ L* galaxies have A(1600) = 0.7-1.5 mag. We argue that the average SFHs of UV-luminous galaxies are better described by models in which SFR increases with time in order to simultaneously reproduce the tight correlation between the UV-derived SFR and stellar mass and their universally young ages. We demonstrate the potential of measurements of the SFR-M* relation at multiple redshifts to discriminate between simple models of SFHs. Finally, we discuss the fate of these UV-brightest galaxies in the next 1-2 Gyr and their possible connection to the most massive galaxies at z ∼ 2.
PRE-ELECTIONAL DECREASE OF THE UNEMPLOYMENT RATE
Directory of Open Access Journals (Sweden)
Damjan Miličević
2013-02-01
Opportunistic business cycle models test whether the incumbent government has the ability to reduce unemployment in the pre-election period. The first opportunistic business cycle models estimated regressions using the unemployment rate as the dependent variable, with the unemployment rate in the previous two periods and a political dummy variable (defined as unity for several quarters prior to an election and zero elsewhere) as explanatory variables. Such models did not find evidence of an opportunistic cycle for unemployment. Haynes and Stone, in their model, estimated regressions using unemployment as the dependent variable and sixteen dummy variables as explanatory variables (one for each quarter of the Presidential electoral term). Results showed that unemployment follows a roughly sinusoidal sixteen-quarter cycle, with unemployment troughing, on average, in the quarter of the election. These models are tested with data for the United States for the period from 1948 to 2011, and the regression results coincide with the models mentioned in the article.
Annual Greenland Accumulation Rates (2009-2012) from Airborne Snow Radar
Koenig, Lora S.; Ivanoff, Alvaro; Alexander, Patrick M.; MacGregor, Joseph A.; Fettweis, Xavier; Panzer, Ben; Paden, John D.; Forster, Richard R.; Das, Indrani; McConnell, Joseph R.;
2016-01-01
Contemporary climate warming over the Arctic is accelerating mass loss from the Greenland Ice Sheet through increasing surface melt, emphasizing the need to closely monitor its surface mass balance in order to improve sea-level rise predictions. Snow accumulation is the largest component of the ice sheet's surface mass balance, but in situ observations thereof are inherently sparse and models are difficult to evaluate at large scales. Here, we quantify recent Greenland accumulation rates using ultra-wideband (2-6.5 gigahertz) airborne snow radar data collected as part of NASA's Operation IceBridge between 2009 and 2012. We use a semi-automated method to trace the observed radiostratigraphy and then derive annual net accumulation rates for 2009-2012. The uncertainty in these radar-derived accumulation rates is on average 14 percent. A comparison of the radar-derived accumulation rates with contemporaneous ice cores shows that snow radar captures both the annual and long-term mean accumulation rate accurately. A comparison with outputs from a regional climate model (MAR - Modele Atmospherique Regional for Greenland and vicinity) shows that this model matches radar-derived accumulation rates in the ice sheet interior but produces higher values over southeastern Greenland. Our results demonstrate that snow radar can efficiently and accurately map patterns of snow accumulation across an ice sheet and that it is valuable for evaluating the accuracy of surface mass balance models.
Effects of Temperature and Strain Rate on Tensile Deformation Behavior of 9Cr-0.5Mo-1.8W-VNb Ferritic Heat-Resistant Steel
Guo, Xiaofeng; Weng, Xiaoxiang; Jiang, Yong; Gong, Jianming
2017-09-01
A series of uniaxial tensile tests was carried out at different strain rates and temperatures to investigate the effects of temperature and strain rate on the tensile deformation behavior of P92 steel. In the temperature range of 30-700 °C, the variations of flow stress, average work-hardening rate, tensile strength and ductility with temperature all show three temperature regimes. At intermediate temperatures, the material exhibited serrated flow; the peak in flow stress, the maximum in average work-hardening rate, and the anomalous variations in tensile strength and ductility indicate the occurrence of dynamic strain aging (DSA), whereas the sharp decrease in flow stress, average work-hardening rate and strength values, together with the remarkable increase in ductility with increasing temperature from 450 to 700 °C, implies that dynamic recovery plays a dominant role in this regime. Additionally, for temperatures ranging from 550 to 650 °C, a significant decrease in flow stress values is observed with decreasing strain rate, suggesting that the strain rate has a strong influence on flow stress. Based on these experimental results, an Arrhenius-type constitutive equation is proposed to predict the flow stress.
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and a plane wave in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
The average number of partons per clan in rapidity intervals in parton showers
Energy Technology Data Exchange (ETDEWEB)
Giovannini, A. [Turin Univ. (Italy). Ist. di Fisica Teorica; Lupia, S. [Max-Planck-Institut fuer Physik, Muenchen (Germany). Werner-Heisenberg-Institut; Ugoccioni, R. [Lund Univ. (Sweden). Dept. of Theoretical Physics
1996-04-01
The dependence of the average number of partons per clan on virtuality and rapidity variables is analytically predicted in the framework of the Generalized Simplified Parton Shower model, based on the idea that clans are genuine elementary subprocesses. The obtained results are found to be qualitatively consistent with experimental trends. This study extends previous results on the behavior of the average number of clans in virtuality and rapidity and shows how important physical quantities can be calculated analytically in a model based on essentials of QCD allowing local violations of the energy-momentum conservation law, still requiring its global validity. (orig.)
Evaluation of subject contrast and normalized average glandular dose by semi-analytical models
International Nuclear Information System (INIS)
Tomal, A.; Poletti, M.E.; Caldas, L.V.E.
2010-01-01
In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of parameters such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits for nodules were also determined. Our results are in good agreement with those reported by other authors who used Monte Carlo simulation, showing the robustness of our semi-analytical method.
Grassmann Averages for Scalable Robust PCA
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can … to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements …
Strain rate orientations near the Coso Geothermal Field
Ogasa, N. T.; Kaven, J. O.; Barbour, A. J.; von Huene, R.
2016-12-01
Many geothermal reservoirs derive their sustained capacity for heat exchange in large part from continuous deformation of preexisting faults and fractures, which allows permeability to be maintained. Similarly, enhanced geothermal systems rely on the creation of suitable permeability from fracture and fault networks to be viable. Stress measurements from boreholes or earthquake source mechanisms are commonly used to infer the tectonic conditions that drive deformation, but here we show that geodetic data can also be used. Specifically, we quantify variations in the horizontal strain rate tensor in the area surrounding the Coso Geothermal Field (CGF) by analyzing more than two decades of high-accuracy differential GPS data from a network of 14 stations from the University of Nevada Reno Geodetic Laboratory. To handle offsets in the data from equipment changes and coseismic deformation, we segment the data, perform a piecewise linear fit, and take the average of each segment's strain rate to determine secular velocities at each station. With respect to North America, all stations travel northwest at velocities ranging from 1 to 10 mm/yr. The station nearest the CGF shows anomalous motion compared to regional stations, which otherwise show a coherent increase in network velocity from the northeast to the southwest. Because of the small area of our network, we determine strain rates via a linear approximation using GPS velocities in a Cartesian reference frame. Principal strain rate components derived from this inversion show that maximum extensional strain rates of 30 nanostrain/a occur at N87W, with compressional strain rates of 37 nanostrain/a at N3E. These results generally align with previous stress measurements from borehole breakouts, which indicate that the least compressive horizontal principal stress is oriented east-west, indicative of the Basin and Range tectonic setting. Our results suggest that the CGF represents an anomaly in the crustal deformation field, which
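The uniform-gradient least-squares step described above, fitting v = v0 + L·x to station velocities and taking the symmetric part of L as the strain-rate tensor, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the station layout is random and the imposed gradient is synthetic, chosen only to echo the reported principal rates of roughly 30 and −37 nanostrain/a.

```python
import numpy as np

def horizontal_strain_rate(xy, vel):
    """Least-squares fit of a uniform velocity gradient v = v0 + L @ x
    to station positions xy (n, 2; metres) and velocities vel (n, 2; m/yr).
    Returns the symmetric strain-rate tensor (1/yr)."""
    n = xy.shape[0]
    # design matrix: columns [1, x, y], fit each velocity component separately
    G = np.column_stack([np.ones(n), xy[:, 0], xy[:, 1]])
    coef_x, *_ = np.linalg.lstsq(G, vel[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(G, vel[:, 1], rcond=None)
    L = np.array([[coef_x[1], coef_x[2]],
                  [coef_y[1], coef_y[2]]])   # velocity gradient tensor
    return 0.5 * (L + L.T)                   # symmetric part = strain rate

# synthetic network: 14 stations in a ~100 km box, noise-free velocities
rng = np.random.default_rng(0)
xy = rng.uniform(-50e3, 50e3, (14, 2))
true_L = np.array([[30e-9, 0.0],             # 30 nanostrain/a extension (x)
                   [0.0, -37e-9]])           # 37 nanostrain/a shortening (y)
vel = xy @ true_L.T
E = horizontal_strain_rate(xy, vel)
evals, evecs = np.linalg.eigh(E)             # principal strain rates
```

With noise-free synthetic velocities the fit recovers the imposed gradient exactly; with real GPS velocities one would also propagate the velocity uncertainties into the strain-rate estimate.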
Scale-invariant Green-Kubo relation for time-averaged diffusivity
Meyer, Philipp; Barkai, Eli; Kantz, Holger
2017-12-01
In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacements are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between the time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by ⟨δ²¯⟩ ∼ 2 D_ν t^β Δ^(ν−β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements, ⟨x²⟩ ∼ t^ν, while β ≥ −1 marks the growth or decline of the kinetic energy, ⟨v²⟩ ∼ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of ⟨δ²¯⟩ and ⟨x²⟩ are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
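The time-averaged mean-squared displacement that appears in the scaling relation above is the standard single-trajectory estimator; a minimal numerical sketch (ordinary Brownian motion, i.e. ν = 1 and β = 0, where time and ensemble averages coincide) illustrates it. The trajectory length and lags are illustrative choices, not values from the paper.

```python
import numpy as np

def ta_msd(x, lag):
    """Time-averaged MSD at lag Δ: mean of (x[t+Δ] - x[t])² over the trajectory."""
    d = x[lag:] - x[:-lag]
    return np.mean(d**2)

# Brownian trajectory built from unit-variance steps
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0.0, 1.0, 100_000))

# for ν = 1 the TA-MSD grows linearly in the lag, so doubling the lag
# should roughly double the TA-MSD
r = ta_msd(x, 200) / ta_msd(x, 100)
```

For anomalous processes one would instead observe r ≈ 2^(ν−β) in the lag dependence, with the β-dependence entering through the measurement time t.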
The Effect of Organic Loading Rate on Milk Wastewater Treatment Using a Sequencing Batch Reactor (SBR)
Directory of Open Access Journals (Sweden)
Hooman Hajiabadi
2009-09-01
In this study, four aerobic sequencing batch reactors (SBRs) were operated under the same conditions for the treatment of milk wastewater at different organic loading rates (OLRs). Cylindrical Plexiglas reactors were run for 56 days (including 21 days of acclimatization and 35 days of data gathering). The effective volume, influent wastewater flowrate, and sludge retention time (SRT) of the reactors were 5.5 L, 3.5 L/d, and 10 d, respectively. The average COD removal efficiencies for reactors R1, R2, R3, and R4, with influent OLRave values of 633, 929, 1915, and 3261 g COD/m³·d, were 95, 96, 95, and 82 percent, respectively. The average effluent suspended solids (SS) for all reactors was lower than 44 mg/L. Also, except for R4, with an average effluent turbidity of 270 NTU, the reactors met the Iranian wastewater emission standard (50 NTU). In addition, the average sludge volume index of reactors R1 to R3 was found to be lower than 67 mL/g. According to the results, the overall variation of COD removal efficiency versus influent OLR shows a decreasing trend with a correlation factor (R²) of 0.8.
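The organic loading rate used to characterize each reactor follows directly from the influent flow, the influent COD, and the reactor volume. A minimal sketch, with the caveat that the influent COD value below is back-calculated from the reported OLR of R1 and is an assumption, not a measured value from the study:

```python
def organic_loading_rate(flow_L_per_d, cod_mg_per_L, volume_L):
    """OLR in g COD per m³ of reactor volume per day."""
    g_cod_per_day = flow_L_per_d * cod_mg_per_L / 1000.0  # mg -> g
    volume_m3 = volume_L / 1000.0                          # L -> m³
    return g_cod_per_day / volume_m3

# reproduce R1's reported loading (~633 g COD/m³·d): a 5.5 L reactor fed
# 3.5 L/d implies an influent COD near 995 mg/L (back-calculated)
olr = organic_loading_rate(3.5, 995.0, 5.5)
```

The same relation, rearranged, is how one would set the influent COD dilutions needed to realize the four target OLRs.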
45 CFR 98.100 - Error Rate Report.
2010-10-01
... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... the total dollar amount of payments made in the sample); the average amount of improper payment; and... not received. (e) Costs of Preparing the Error Rate Report—Provided the error rate calculations and...
Relationships between feeding behavior and average daily gain in cattle
Directory of Open Access Journals (Sweden)
Bruno Fagundes Cunha Lage
2013-12-01
Several studies have reported relationships between eating behavior and performance in feedlot cattle. The evaluation of behavior traits demands a high degree of work and trained manpower; therefore, in recent years an automated feed intake measurement system (GrowSafe System®) that identifies and records individual feeding patterns has been used. The aim of this study was to evaluate the relationship between feeding behavior traits and average daily gain (ADG) in Nellore calves undergoing a feed efficiency test. Data from 85 Nellore males were recorded during the feed efficiency test performed in 2012 at Centro APTA Bovinos de Corte, Instituto de Zootecnia, São Paulo State. The behavioral traits analyzed were: time at feeder (TF), head-down duration (HD), representing the time when the animal is actually eating, frequency of visits (FV), and feed rate (FR), calculated as the amount of dry matter (DM) consumed per unit time at the feeder (g·min⁻¹). The ADG was calculated by linear regression of individual weights on days in test. ADG classes were obtained considering the average ADG and standard deviation (SD): high ADG (> mean + 1.0 SD), medium ADG (± 1.0 SD from the mean) and low ADG (
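The two derived quantities above, ADG as the slope of a weight-on-day regression and FR as dry matter intake per minute at the feeder, can be sketched as follows. The animal, weights, and intake figures are made up for illustration; they are not data from the study.

```python
import numpy as np

def average_daily_gain(days, weights):
    """ADG (kg/day) as the slope of a linear regression of weight on day."""
    slope, _intercept = np.polyfit(days, weights, 1)
    return slope

def feed_rate(dm_intake_g, minutes_at_feeder):
    """Feed rate FR in g of dry matter per minute spent at the feeder."""
    return dm_intake_g / minutes_at_feeder

# hypothetical animal weighed every 14 days, gaining 1.2 kg/day
days = np.arange(0, 71, 14)
weights = 250.0 + 1.2 * days
adg = average_daily_gain(days, weights)

# hypothetical daily totals: 8 kg DM consumed over 120 min at the feeder
fr = feed_rate(8000.0, 120.0)
```

Classifying animals into high/medium/low ADG is then just a comparison of each slope against mean ± 1.0 SD of the slopes.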
2010-07-01
... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach
Directory of Open Access Journals (Sweden)
Petrović Predrag
2014-01-01
This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants, using the Jackknife Model Averaging approach, 48 different models have been estimated, where 1254 equations needed to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates, where we should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of the oil trade balance results in worsening of other components (probably the non-oil trade balance) of the CA, and (iii) that the positive influence of terms of trade reveals functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in the case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth that, most likely, reveals high citizens' future income growth expectations, which has a negative impact on the CA.
Eliminating the Effect of Rating Bias on Reputation Systems
Directory of Open Access Journals (Sweden)
Leilei Wu
2018-01-01
The ongoing rapid development of e-commerce and interest-based websites makes it more pressing to evaluate objects' accurate quality before recommendation. An object's quality is often calculated based on its historical information, such as selection records or rating scores. Usually, high quality products obtain higher average ratings than low quality products regardless of rating biases or errors. However, many empirical cases demonstrate that consumers may be misled by rating scores added by unreliable users or by deliberate tampering. In this case, users' reputation, that is, the ability to rate reliably and precisely, makes a big difference during the evaluation process. Thus, one of the main challenges in designing reputation systems is eliminating the effects of users' rating bias. To give an objective evaluation of each user's reputation and uncover an object's intrinsic quality, we propose an iterative balance (IB) method to correct users' rating biases. Experiments on two datasets show that the IB method is a highly self-consistent and robust algorithm and that it can accurately quantify movies' actual quality and users' rating stability. Compared with existing methods, the IB method has a higher ability to find the "dark horses," that is, not-so-popular yet good movies, in the Academy Awards.
Choosing the best index for the average score intraclass correlation coefficient.
Shieh, Gwowen
2016-09-01
The intraclass correlation coefficient ICC(2) index from a one-way random effects model is widely used to describe the reliability of mean ratings in behavioral, educational, and psychological research. Despite its apparent utility, the essential property of ICC(2) as a point estimator of the average score intraclass correlation coefficient is seldom examined. This article considers several potential measures and compares their performance with ICC(2). Analytical derivations and numerical examinations are presented to assess the bias and mean square error of the alternative estimators. The results suggest that more advantageous indices can be recommended over ICC(2) for their theoretical implications and computational ease.
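One standard form of the average-score ICC from a one-way random-effects ANOVA is (MSB − MSW)/MSB, the reliability of the mean of k ratings per target. The sketch below assumes that definition (the article itself compares several estimators) and uses simulated ratings, so the specific variances are illustrative.

```python
import numpy as np

def icc2_average_score(scores):
    """Average-score ICC from a one-way random-effects ANOVA:
    (MSB - MSW) / MSB, for `scores` shaped (targets, k ratings)."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)              # between targets
    msw = np.sum((scores - row_means[:, None]) ** 2) / (n * (k - 1))  # within targets
    return (msb - msw) / msb

# simulate 200 targets with SD-2 target effects, each rated 4 times with SD-1 noise;
# the population average-score ICC is 4 / (4 + 1/4) ≈ 0.94
rng = np.random.default_rng(2)
true_scores = rng.normal(0.0, 2.0, (200, 1))
ratings = true_scores + rng.normal(0.0, 1.0, (200, 4))
icc = icc2_average_score(ratings)
```

Averaging over k ratings shrinks the error variance by a factor of k, which is why the average-score ICC exceeds its single-rating counterpart (0.8 here).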
Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.
2017-12-01
Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R) - the flux of plant-respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature, as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. (2016) using MMRT to show this equivalence and thus provide an explanation, based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT when compared to the exponential/polynomial model, and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from MMRT analysis: the temperature at which the rate of respiration is maximum (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf) and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system, ΔC_P‡). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition-state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0±1.2 °C and 41.4±0.7 °C across global sites. The average curvature (average
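The exponential/polynomial form to which MMRT is shown to be equivalent is ln R = a + b·T + c·T², so fitting it and extracting Topt = −b/(2c) is a one-line polynomial regression. The sketch below uses a synthetic respiration curve whose peak is placed at 67 °C (the mean Topt reported above); it illustrates the fitting step only, not the thermodynamic MMRT parameterization itself.

```python
import numpy as np

def fit_log_quadratic(temps_C, rates):
    """Fit ln(rate) = c0 + c1*T + c2*T² and return the coefficients
    (c0, c1, c2) together with Topt = -c1 / (2*c2), the temperature of
    maximum rate (valid when c2 < 0)."""
    c2, c1, c0 = np.polyfit(temps_C, np.log(rates), 2)
    t_opt = -c1 / (2.0 * c2)
    return (c0, c1, c2), t_opt

# synthetic leaf-respiration curve measured from 5 to 60 °C, peaking at 67 °C
T = np.linspace(5.0, 60.0, 40)
rates = np.exp(-0.0005 * (T - 67.0) ** 2)
coefs, t_opt = fit_log_quadratic(T, rates)
```

Note that Topt can lie above the measured temperature range, as here: the quadratic extrapolates the curvature seen in the data, which is exactly how a 67 °C optimum can be inferred from sub-60 °C measurements.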
Paengchit, Phacharadit; Saikaew, Charnnarong
2018-02-01
This work aims to investigate the effects of feed rate on surface roughness (Ra) and tool wear (VB), and to obtain the optimal feed rate, in dry hard turning of AISI 4140 chromium molybdenum steel for automotive industry applications using TiN+AlCrN coated inserts. AISI 4140 steel bars were employed to carry out the dry hard turning experiments by varying the feed rate over 0.06, 0.08 and 0.1 mm/rev, based on an experimental design technique analyzed by analysis of variance (ANOVA). In addition, the cutting tool inserts were examined after the machining experiments by SEM to evaluate the effect of the turning operations on tool wear. The results showed that average Ra and VB were significantly affected by the feed rate at the 0.05 level of significance. Average Ra and VB values were lowest at the feed rate of 0.06 mm/rev compared to the feed rates of 0.08 and 0.1 mm/rev, based on the main effect plot.
Litter Decomposition Rate of Karst Ecosystem at Gunung Cibodas, Ciampea Bogor Indonesia
Directory of Open Access Journals (Sweden)
Sethyo Vieni Sari
2016-05-01
The study aims to determine litter productivity and the litter decomposition rate in a karst ecosystem. The study was conducted at three altitudes, 200 meters above sea level (masl), 250 masl and 300 masl, in the karst ecosystem at Gunung Cibodas, Ciampea, Bogor. Litter productivity was measured using the litter-trap method, and the litter-bag method was used to determine the rate of decomposition. The measurements showed that total litter productivity was highest at 200 masl (90.452 tons/ha/year) and lowest at 300 masl (25.440 tons/ha/year). The litter productivity of leaves (81.425 tons/ha/year) was higher than that of twigs (16.839 tons/ha/year) and of flowers and fruits (27.839 tons/ha/year). The rate of decomposition was influenced by rainfall. The decomposition rate and the decrease of litter dry weight at 250 masl were faster than at 200 masl and 300 masl. Dry weight was positively correlated with the rate of decomposition: the lower the dry weight, the slower the rate of decomposition. The average litter C/N ratios ranged from 28.024% to 28.716% and were categorized as moderate (>25). The findings indicate that the rate of decomposition in the karst ecosystem at Gunung Cibodas was slow, and the C/N ratio of the litter showed that the mineralization process was also slow.
Time-averaged molluscan death assemblages: Palimpsests of richness, snapshots of abundance
Kidwell, Susan M.
2002-09-01
Field tests that compare living communities to associated dead remains are the primary means of estimating the reliability of biological information in the fossil record; such tests also provide insights into the dynamics of skeletal accumulation. Contrary to expectations, molluscan death assemblages capture a strong signal of living species' rank-order abundances. This finding, combined with independent evidence for exponential postmortem destruction of dead cohorts, argues that, although the species richness of a death assemblage may be a time-averaged palimpsest of the habitat (molluscan death assemblages contain, on average, ∼25% more species than any single census of the local live community, after sample-size standardization), species' relative-abundance data from the same assemblage probably constitute a much higher acuity record dominated by the most recent dead cohorts (e.g., from the past few hundred years or so, rather than the several thousand years recorded by the total assemblage and usually taken as the acuity of species-richness information). The pervasive excess species richness of molluscan death assemblages requires further analysis and modeling to discriminate among possible sources. However, time averaging alone cannot be responsible unless rare species (species with low rates of dead-shell production) are collectively more durable (have longer taphonomic half-lives) than abundant species. Species richness and abundance data thus appear to present fundamentally different taphonomic qualities for paleobiological analysis. Relative-abundance information is more snapshot-like and thus taphonomically more straightforward than expected, especially compared to the complex origins of dead-species richness.
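The "sample-size standardization" behind the ∼25% excess-richness comparison is typically done by rarefaction. A minimal sketch under Hurlbert's formulation, with made-up abundance vectors standing in for a live census and a death assemblage (the abstract does not give its raw counts):

```python
from math import comb

def rarefied_richness(counts, m):
    """Expected species richness in a random subsample of m individuals
    (Hurlbert rarefaction) from an abundance vector `counts`."""
    n = sum(counts)
    # each species contributes the probability it appears at least once
    return sum(1.0 - comb(n - ni, m) / comb(n, m) for ni in counts)

# hypothetical counts: the death assemblage carries extra rare species
live = [50, 30, 10, 5, 5]
dead = [60, 25, 20, 10, 5, 3, 2, 2, 2, 1]
e_live = rarefied_richness(live, 50)   # both standardized to 50 individuals
e_dead = rarefied_richness(dead, 50)
```

Standardizing both assemblages to the same number of individuals is what makes the richness excess attributable to time averaging rather than to the larger dead sample.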
Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.
Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano
2008-07-01
Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed form solution for either the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty less than 0.5 dB, in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that is it possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.
Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder
Baurle, R. A.
2016-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.
The ugliness-in-averageness effect: Tempering the warm glow of familiarity.
Carr, Evan W; Huber, David E; Pecher, Diane; Zeelenberg, Rene; Halberstadt, Jamin; Winkielman, Piotr
2017-06-01
Mere exposure (i.e., stimulus repetition) and blending (i.e., stimulus averaging) are classic ways to increase social preferences, including facial attractiveness. In both effects, increases in preference involve enhanced familiarity. Prominent memory theories assume that familiarity depends on a match between the target and similar items in memory. These theories predict that when individual items are weakly learned, their blends (morphs) should be relatively familiar, and thus liked: a beauty-in-averageness effect (BiA). However, when individual items are strongly learned, they are also more distinguishable. This "differentiation" hypothesis predicts that with strongly encoded items, familiarity (and thus, preference) for the blend will be relatively lower than for the individual items: an ugliness-in-averageness effect (UiA). We tested this novel theoretical prediction in 5 experiments. Experiment 1 showed that with weak learning, facial morphs were more attractive than contributing individuals (BiA effect). Experiments 2A and 2B demonstrated that when participants first strongly learned a subset of individual faces (either in a face-name memory task or perceptual-tracking task), morphs of trained individuals were less attractive than the trained individuals (UiA effect). Experiment 3 showed that changes in familiarity for the trained morph (rather than interstimulus conflict) drove the UiA effect. Using a within-subjects design, Experiment 4 mapped out the transition from BiA to UiA solely as a function of memory training. Finally, computational modeling using a well-known memory framework (REM) illustrated the familiarity transition observed in Experiment 4. Overall, these results highlight how memory processes illuminate classic and modern social preference phenomena. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Average System Cost Methodology: Administrator's Record of Decision.
Energy Technology Data Exchange (ETDEWEB)
United States. Bonneville Power Administration.
1984-06-01
Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, where retail rate orders of regulatory agencies provide primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of separation procedures for subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)
Martin, W.J.J.M.; Heymans, M.W.; Skorpil, N.E.; Forouzanfar, T.
2012-01-01
This study describes the comparison of multiple and single pain ratings in patients after surgical removal of the third molar. Correlation and agreement analyses were performed between the average pain intensity, measured three times a day over a period of 7 days, and one single pain rating.
Kamminga, Tjerko; Slagman, Simen Jan; Bijlsma, Jetta J.E.; Martins dos Santos, Vitor A.P.; Suarez-Diez, Maria; Schaap, Peter J.
2017-01-01
Mycoplasma hyopneumoniae is cultured on large-scale to produce antigen for inactivated whole-cell vaccines against respiratory disease in pigs. However, the fastidious nutrient requirements of this minimal bacterium and the low growth rate make it challenging to reach sufficient biomass yield for
53 W average power few-cycle fiber laser system generating soft x rays up to the water window.
Rothhardt, Jan; Hädrich, Steffen; Klenke, Arno; Demmler, Stefan; Hoffmann, Armin; Gotschall, Thomas; Eidam, Tino; Krebs, Manuel; Limpert, Jens; Tünnermann, Andreas
2014-09-01
We report on a few-cycle laser system delivering sub-8-fs pulses with 353 μJ pulse energy and 25 GW of peak power at up to 150 kHz repetition rate. The corresponding average output power is as high as 53 W, which represents the highest average power obtained from any few-cycle laser architecture so far. The combination of both high average and high peak power provides unique opportunities for applications. We demonstrate high harmonic generation up to the water window and record-high photon flux in the soft x-ray spectral region. This tabletop source of high-photon flux soft x rays will, for example, enable coherent diffractive imaging with sub-10-nm resolution in the near future.
Avoiding dangerous missense: thermophiles display especially low mutation rates.
Directory of Open Access Journals (Sweden)
John W Drake
2009-06-01
Full Text Available Rates of spontaneous mutation have been estimated under optimal growth conditions for a variety of DNA-based microbes, including viruses, bacteria, and eukaryotes. When expressed as genomic mutation rates, most of the values were in the vicinity of 0.003-0.004 with a range of less than two-fold. Because the genome sizes varied by roughly 10^4-fold, the mutation rates per average base pair varied inversely by a similar factor. Even though the commonality of the observed genomic rates remains unexplained, it implies that mutation rates in unstressed microbes reach values that can be finely tuned by evolution. An insight originating in the 1920s and maturing in the 1960s proposed that the genomic mutation rate would reflect a balance between the deleterious effect of the average mutation and the cost of further reducing the mutation rate. If this view is correct, then increasing the deleterious impact of the average mutation should be countered by reducing the genomic mutation rate. It is a common observation that many neutral or nearly neutral mutations become strongly deleterious at higher temperatures, in which case they are called temperature-sensitive mutations. Recently, the kinds and rates of spontaneous mutations were described for two microbial thermophiles, a bacterium and an archaeon. Using an updated method to extrapolate from mutation-reporter genes to whole genomes reveals that the rate of base substitutions is substantially lower in these two thermophiles than in mesophiles. This result provides the first experimental support for the concept of an evolved balance between the total genomic impact of mutations and the cost of further reducing the basal mutation rate.
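The inverse scaling between genome size and per-base-pair mutation rate follows directly from the near-constant genomic rate; a back-of-the-envelope sketch with round, hypothetical genome sizes:

```python
genomic_rate = 0.003  # mutations per genome per replication (typical reported value)

# Hypothetical round genome sizes spanning a 10^4-fold range
genome_sizes = {"small viral genome": 5.0e3, "large microbial genome": 5.0e7}

per_base_rate = {name: genomic_rate / size for name, size in genome_sizes.items()}
# A 10^4-fold larger genome implies a 10^4-fold lower per-base-pair rate.
```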
Natural and anthropogenic rates of soil erosion
Regions of land that are brought into crop production from native vegetation typically undergo a period of soil erosion instability, and long term erosion rates are greater than for natural lands as long as the land continues being used for crop production. Average rates of soil erosion under natur...
A new mathematical process for the calculation of average forms of teeth.
Mehl, A; Blanz, V; Hickel, R
2005-12-01
Qualitative visual inspections and linear metric measurements have been predominant methods for describing the morphology of teeth. No quantitative formulation exists for the description of dental features. The aim of this study was to determine and validate a mathematical process for calculation of the average form of first maxillary molars, including the general occlusal features. Stone replicas of 174 caries-free first maxillary molar crowns from young patients ranging from 6 to 9 years of age were measured 3-dimensionally with a laser scanning system at a resolution of approximately 100,000 points. Then, the average tooth was computed, which captured the common features of the molar's surface quantitatively. This new method adapts algorithms both from computer science and neuroscience to detect and associate the same features and same surface points (correspondences) between 1 reference tooth and all other teeth. In this study, the method was tested for 7 different reference teeth. The algorithm does not involve any prior knowledge about teeth and their features. Irrespective of the reference tooth used, the procedure yielded average teeth that showed nearly no differences (less than ±30 µm). This approach provides a valid quantitative process for calculating 3-dimensional (3D) averages of occlusal surfaces of teeth even in the event of a high number of digitized surface points. Additionally, because this process detects and assigns point-wise feature correspondences between all library teeth, it may also serve as a basis for a more substantiated principal component analysis evaluating the main natural shape deviations from the 3D average.
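Once point-wise correspondences are established, the averaging step itself reduces to a per-correspondence mean of 3-D coordinates. A minimal numpy sketch with synthetic data (the surface, the number of teeth, and the noise level are invented for illustration; the hard part, finding the correspondences, is assumed already done):

```python
import numpy as np

rng = np.random.default_rng(42)
base = rng.uniform(-1.0, 1.0, size=(100, 3))    # a "true" occlusal surface
# Five hypothetical teeth: the same 100 corresponding points, each tooth
# individually perturbed to mimic natural shape variation.
teeth = np.stack([base + rng.normal(0.0, 0.03, size=base.shape)
                  for _ in range(5)])
average_tooth = teeth.mean(axis=0)              # point-wise 3-D average
```

Because independent shape noise averages out, the computed mean lies closer to the underlying common form than any individual specimen.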
Inverse methods for estimating primary input signals from time-averaged isotope profiles
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
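The linear-algebra core of the method, the minimum-length solution of Am = d, can be sketched with numpy's pseudoinverse (`np.linalg.pinv` returns exactly the minimum-norm least-squares solution). The moving-average forward model below is a toy stand-in for the real amelogenesis averaging matrix, and the sinusoidal input is an invented "seasonal" signal:

```python
import numpy as np

def averaging_matrix(n_in, w):
    """Toy forward model A: each measured sample is the mean of w consecutive
    input values (a crude stand-in for averaging during enamel formation)."""
    n_out = n_in - w + 1
    A = np.zeros((n_out, n_in))
    for i in range(n_out):
        A[i, i:i + w] = 1.0 / w
    return A

m_true = np.sin(np.linspace(0.0, 2.0 * np.pi, 24))  # hypothetical seasonal input
A = averaging_matrix(24, 5)
d = A @ m_true                    # the time-averaged "measured" profile
m_est = np.linalg.pinv(A) @ d     # minimum-length solution of A m = d
```

Since the system is underdetermined, many inputs reproduce d exactly; the pseudoinverse picks the one with the smallest norm, which is the "minimum length" criterion named in the abstract.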
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
Our empirical results show that GDP growth rate can be predicted more accurately in continents with fewer large economies than in smaller economies such as Malaysia. This difficulty is very likely positively correlated with subsidy or social security policies. The stage of economic development and the level of competitiveness also appear to have interactive effects on this forecast stability. These results are generally independent of the forecasting procedures. For countries with high stability in their economic growth, forecasting by model selection is better than model averaging. Overall, forecast weight averaging (FWA) is a better forecasting procedure in most countries. FWA also outperforms simple model averaging (SMA) and has the same forecasting ability as Bayesian model averaging (BMA) in almost all countries.
Worldwide trends show oropharyngeal cancer rates increasing
NCI scientists report that the incidence of oropharyngeal cancer significantly increased during the period 1983-2002 among people in countries that are economically developed. Oropharyngeal cancer occurs primarily in the middle part of the throat behind t
True mean rate measuring system
International Nuclear Information System (INIS)
Eichenlaub, D.P.
1980-01-01
A digital radiation-monitoring system for nuclear power plants uses digital and microprocessor circuitry to enable rapid processing of pulse information from remote radiation monitors. The pulse rates are analyzed to determine whether new pulse-rate information is statistically the same as that previously received and to determine the best possible averaging time, which can be changed so that the statistical error remains below a specified level while the system response time remains short. Several data modules each process the pulse-rate information from several remote radiation monitors. Each data module accepts pulse data from each radiation monitor and measures the true average or mean pulse rate of events occurring with a Poisson distribution to determine the radiation level. They then develop digital output signals which indicate the respective radiation levels and which can be transmitted via multiplexer circuits for additional processing and display. The data modules can accept signals from remote control stations or computer stations via the multiplexer circuit to change operating thresholds and alarm levels in their memories. A check module scans the various data modules to determine whether the output signals are valid. It also acts as a redundant data module and will automatically replace an inoperative unit. (DN)
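The adaptive averaging time described above has a simple closed form for Poisson counting: with mean rate r, the relative statistical error after counting for time T is 1/sqrt(r·T), so holding the error below a target ε needs T ≥ 1/(r·ε²). A small sketch (the rates and error target are illustrative only):

```python
import math

def averaging_time(rate_cps, rel_error_target):
    """Shortest counting time T for a Poisson pulse stream such that the
    relative statistical error, 1/sqrt(rate*T), stays below the target."""
    return 1.0 / (rate_cps * rel_error_target ** 2)

def achieved_rel_error(rate_cps, t_seconds):
    """Relative error actually achieved after averaging for t_seconds."""
    return 1.0 / math.sqrt(rate_cps * t_seconds)

# 100 counts/s monitored to 1% relative error needs ~100 s of averaging;
# at 10,000 counts/s the same precision needs only ~1 s, so the response
# time can shrink automatically as the radiation level rises.
t_low = averaging_time(100.0, 0.01)
t_high = averaging_time(10000.0, 0.01)
```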
Averaging processes in granular flows driven by gravity
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate corresponds to local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
Average-case analysis of numerical problems
2000-01-01
The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.
Average Case Analysis of Java 7's Dual Pivot Quicksort
Wild, Sebastian; Nebel, Markus E.
2013-01-01
Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting method for Oracle's Java 7 runtime library. The decision for the change was based on empirical studies showing that on average, the new algorithm is faster than the formerly used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot approach, an idea that was considered not promising by several theoretical studies in the past. In this paper, we identify the reason for this unexpe...
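Yaroslavskiy's partitioning scheme is easy to sketch: two pivots p ≤ q split each range into three parts (< p, between p and q, > q), which are then sorted recursively. The version below is a plain Python rendering of the common textbook presentation, not Oracle's tuned Java implementation (which adds insertion-sort cutoffs and pivot sampling):

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """In-place dual-pivot quicksort (Yaroslavskiy-style partitioning)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]           # two pivots, p <= q
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:              # goes to the left part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
            i += 1
        elif a[i] > q:            # goes to the right part
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1               # re-examine the swapped-in element next pass
        else:                     # stays in the middle part
            i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]   # place pivot p
    a[hi], a[gt] = a[gt], a[hi]   # place pivot q
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)
    return a
```

The empirical advantage discussed in the paper comes from the three-way split touching each element with fewer scanned-element and cache-miss costs on average, not from fewer comparisons alone.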
Detection of small traumatic hemorrhages using a computer-generated average human brain CT.
Afzali-Hashemi, Liza; Hazewinkel, Marieke; Tjepkema-Cloostermans, Marleen C; van Putten, Michel J A M; Slump, Cornelis H
2018-04-01
Computed tomography is a standard diagnostic imaging technique for patients with traumatic brain injury (TBI). A limitation is the poor-to-moderate sensitivity for small traumatic hemorrhages. A pilot study using an automatic method to detect hemorrhages [Formula: see text] in diameter in patients with TBI is presented. We have created an average image from 30 normal noncontrast CT scans that were automatically aligned using deformable image registration as implemented in Elastix software. Subsequently, the average image was aligned to the scans of TBI patients, and the hemorrhages were detected by a voxelwise subtraction of the average image from the CT scans of nine TBI patients. An experienced neuroradiologist and a radiologist in training assessed the presence of hemorrhages in the final images and determined the false positives and false negatives. The 9 CT scans contained 67 small hemorrhages, of which 97% were correctly detected by our system. The neuroradiologist detected three false positives, and the radiologist in training found two false positives. For one patient, our method showed a hemorrhagic contusion that was originally missed. Comparing individual CT scans with a computed average may assist physicians in detecting small traumatic hemorrhages in patients with TBI.
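After registration brings the patient scan and the computed average onto the same voxel grid, the detection step is essentially a voxelwise subtraction followed by a threshold. A toy numpy sketch (the grid size, noise levels, lesion contrast, and the 15 HU threshold are all invented for illustration, not the study's parameters):

```python
import numpy as np

# Toy stand-in for the registered volumes: after deformable registration,
# the patient scan and the 30-scan average live on the same voxel grid.
rng = np.random.default_rng(1)
average = rng.normal(35.0, 2.0, size=(16, 16, 16))    # ~brain-tissue HU values
patient = average + rng.normal(0.0, 1.0, size=average.shape)
patient[4:6, 4:6, 4:6] += 30.0                        # small hyperdense lesion

difference = patient - average                        # voxelwise subtraction
candidates = difference > 15.0                        # assumed HU threshold
```

In practice the thresholded mask would be cleaned up (connected components, size filters) before being shown to a reader; the sketch stops at the subtraction idea itself.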
The inner state differences of preterm birth rates in Brazil: a time series study.
de Oliveira, Rosana Rosseto; Melo, Emiliana Cristina; Fujimori, Elizabeth; Mathias, Thais Aidar de Freitas
2016-05-17
Preterm birth is a serious public health problem, as it is linked to high rates of neonatal and child morbidity and mortality. The prevalence of premature births has increased worldwide, with regional differences. The objective of this study was to analyze the trend of preterm births in the state of Paraná, Brazil, according to Macro-regional and Regional Health Offices (RHOs). This is an ecological time series study using preterm birth records from the national live birth registry system of Brazil's National Health Service - Live Birth Information System (Sinasc) - for residents of the state of Paraná, Brazil, between 2000 and 2013. The preterm birth rate was calculated on a yearly basis and grouped into three-year periods (2000-2002, 2003-2005, 2006-2008, 2009-2011) and one two-year period (2012-2013), according to gestational age and mother's Regional Health Office of residence. The polynomial regression model was used for trend analysis. The preterm birth rate increased from 6.8 % in 2000 to 10.5 % in 2013, with an average increase of 0.20 % per year (r² = 0.89), and a greater share of moderate preterm births (32 to 36 weeks of gestation), which showed the highest rate of prematurity and average annual growth during that period (7.55 % and 0.35 %, respectively). The trend analysis of preterm birth rates according to RHO showed a growing trend for almost all RHOs - except for the 7th RHO, where a declining trend was observed (-0.95 a year), and the 20th, 21st, and 22nd RHOs, which remained unchanged. In the last three years of the study period (2011-2013), no RHO showed preterm birth rates below 7.3 % or a prevalence of moderate preterm birth below 9.4 %. The results show an increase in preterm births with differences among Macro-regional offices and RHOs, which indicates the need to improve actions during the prenatal period according to the specificities of each region.
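The trend analysis used here reduces to fitting a polynomial to the yearly rate series; a minimal numpy sketch with a synthetic series built to mimic the reported figures (6.8 % in 2000 rising by ~0.20 percentage points per year; the noise-free data are illustrative only, not the study's records):

```python
import numpy as np

years = np.arange(2000, 2014)
# Synthetic, noise-free preterm-rate series with the reported slope
rates = 6.8 + 0.20 * (years - 2000).astype(float)

slope, intercept = np.polyfit(years, rates, 1)   # first-degree polynomial trend
```

With real data the residual scatter around the fitted line is what r² summarizes; here the fit is exact because the synthetic series is noiseless.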
Radon and Thoron Exhalation Rates from Surface Soil of Bangka - Belitung Islands, Indonesia
Directory of Open Access Journals (Sweden)
Syarbaini Syarbaini
2015-03-01
Full Text Available DOI:10.17014/ijog.2.1.35-42 The radon and thoron exhalation rate from soil is one of the most important factors that can influence the radioactivity level in the environment. Radon and thoron gases are produced by the decay of the radioactive elements radium and thorium in the soil, where their concentration depends on the soil conditions and the local geological background. In this paper, the results of radon and thoron exhalation rate measurements from the surface soil of the Bangka Belitung Islands at thirty-six measurement sites are presented. Exhalation rates of radon and thoron were measured by using an accumulation chamber equipped with a solid-state alpha particle detector. Furthermore, the correlations between radon and thoron exhalation rates and their parent nuclide (226Ra and 232Th) concentrations in soil samples collected from the same locations were also evaluated. The measurement results show that the distribution of radon and thoron mostly follows that of 226Ra and 232Th, even though there was not a good correlation between the radon and thoron exhalation rates and their parent activity concentrations (226Ra and 232Th), owing to the environmental factors that can influence radon and thoron mobility in the soil. The Bangka Belitung Islands have 222Rn and 220Rn exhalation rates higher than the world average value for regions with normal background radiation.
International Nuclear Information System (INIS)
Markovic, M. I.; Radunovic, J. B.
1976-01-01
Determination of the spatial distribution of neutron flux in water, the most frequently used moderator in thermal reactors, requires the dependence of microscopic scattering kernels on the cosine of the thermal-neutron scattering angle when solving the Boltzmann equation. Since the spatial orientation of water molecules influences this dependence, it is necessary to perform orientation averaging of the rotation-vibrational intermediate scattering function for water molecules. The calculations described in this paper and the obtained results showed that methods of orientation averaging do not influence the anisotropy of thermal-neutron scattering on water molecules, but do influence the inelastic scattering.
Cooperation schemes for rate enhancement in detect-and-forward relay channels
Benjillali, Mustapha
2010-05-01
To improve the spectral efficiency of "Detect-and-Forward" (DetF) half-duplex relaying in fading channels, we propose a cooperation scheme where the relay uses a modulation whose order is higher than the one at the source. In a new common framework, we show that the proposed scheme offers considerable gains - in terms of achievable information rates - compared to the conventional DetF relaying schemes for both orthogonal and non-orthogonal source/relay cooperation. This allows us to propose an adaptive cooperation scheme based on the maximization of the information rate at the destination which needs to observe only the average signal-to-noise ratios of direct and relaying links. ©2010 IEEE.
Average L-shell fluorescence, Auger, and electron yields
International Nuclear Information System (INIS)
Krause, M.O.
1980-01-01
The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 ≤ Z ≤ 103; the average yields approximate the subshell yields in most cases of inner-shell ionization.
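The dependence on the initial vacancy distribution discussed here is just a weighted mean over subshells: ⟨ω⟩ = Σ N_i·ω_i / Σ N_i. A small sketch (the subshell yields and vacancy distributions below are made-up numbers for illustration, not Krause's evaluated values):

```python
def average_yield(vacancies, yields):
    """Average fluorescence (or Auger) yield for a given initial vacancy
    distribution: sum(N_i * w_i) / sum(N_i)."""
    total = sum(vacancies)
    return sum(n * w for n, w in zip(vacancies, yields)) / total

# Hypothetical L1, L2, L3 subshell yields and two vacancy distributions
w = [0.1, 0.3, 0.3]
flat = average_yield([1, 1, 1], w)       # equal population of all subshells
l3_heavy = average_yield([0, 1, 9], w)   # ionization concentrated in L3
```

Comparing the two averages shows how strongly (or weakly) a given yield depends on where the initial vacancies sit.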
On determining dose rate constants spectroscopically
International Nuclear Information System (INIS)
Rodriguez, M.; Rogers, D. W. O.
2013-01-01
Purpose: To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of 125I and 103Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089–6104 (2010)], including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated, 125I and 103Pd sources. Methods: Spectra generated by 14 125I and 6 103Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm³ voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council of Radiation Protection and Measurements) Report 58 for the 125I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for 103Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ⩽0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. Results: The ratio of the intensity of the 31 keV line relative to that of the main peak in 125I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum vs that calculated with the TG-43U1 initial spectrum. The 103Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. The measured values from three different
Average cross sections calculated in various neutron fields
International Nuclear Information System (INIS)
Shibata, Keiichi
2002-01-01
Average cross sections have been calculated for the reactions contained in the dosimetry files JENDL/D-99, IRDF-90V2, and RRDF-98 in order to select the best data for the new library IRDF-2002. The neutron spectra used in the calculations are as follows: 1) 252Cf spontaneous fission spectrum (NBS evaluation), 2) 235U thermal fission spectrum (NBS evaluation), 3) Intermediate-energy Standard Neutron Field (ISNF), 4) Coupled Fast Reactivity Measurement Facility (CFRMF), 5) Coupled thermal/fast uranium and boron carbide spherical assembly (ΣΣ), 6) Fast neutron source reactor (YAYOI), 7) Experimental fast reactor (JOYO), 8) Japan Material Testing Reactor (JMTR), 9) d-Li neutron spectrum with a 2-MeV deuteron beam. Items 3)-7) represent fast neutron spectra, while JMTR is a light water reactor. The Q-value for the d-Li reaction mentioned above is 15.02 MeV. Therefore, neutrons with energies up to 17 MeV can be produced in the d-Li reaction. The calculated average cross sections were compared with the measurements. Figures 1-9 show the ratios of the calculations to the experimental data. It is found from these figures that the 58Fe(n,γ) cross section in JENDL/D-99 reproduces the measurements in the thermal and fast reactor spectra better than that in IRDF-90V2. (author)
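The quantity being tabulated is the spectrum-averaged cross section, ⟨σ⟩ = ∫σ(E)φ(E)dE / ∫φ(E)dE. A hedged numerical sketch with a made-up 1/E cross section and a made-up falling spectrum (the trapezoidal rule is written out explicitly to stay version-agnostic):

```python
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal rule, written out for clarity."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def spectrum_averaged_xs(energy, sigma, flux):
    """<sigma> = integral(sigma * flux) / integral(flux) over the spectrum."""
    return trapezoid(sigma * flux, energy) / trapezoid(flux, energy)

E = np.linspace(0.1, 10.0, 2000)   # toy energy grid (MeV)
sigma = 1.0 / E                    # hypothetical 1/E cross section (barns)
flux = np.exp(-E)                  # hypothetical falling spectrum
avg = spectrum_averaged_xs(E, sigma, flux)
```

Comparing such averages computed from different evaluated σ(E) files against measured reaction rates is exactly the file-selection exercise described in the abstract.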
Energy Technology Data Exchange (ETDEWEB)
Hellaby, Charles, E-mail: Charles.Hellaby@uct.ac.za [Dept. of Maths. and Applied Maths, University of Cape Town, Rondebosch, 7701 (South Africa)
2012-01-01
A new method for constructing exact inhomogeneous universes is presented, that allows variation in 3 dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via swiss cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal, that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.
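The claim that faster-expanding components come to dominate the volume-weighted average can be checked with a two-component toy model (constant expansion rates in arbitrary units; all numbers are illustrative, not taken from the paper's construction):

```python
import numpy as np

# Two comoving regions with different constant expansion rates H_i.
# Each proper volume grows as V_i(t) = V_i(0) * exp(3 * H_i * t), so the
# volume-weighted average rate <H> = sum(V_i * H_i) / sum(V_i) drifts
# toward the fastest-expanding (void-like) component.
H = np.array([0.2, 1.0])     # bound-like and void-like rates (toy units)
V0 = np.array([0.9, 0.1])    # the void starts as the minority volume

def h_avg(t):
    V = V0 * np.exp(3.0 * H * t)
    return float(np.sum(V * H) / np.sum(V))

early, late = h_avg(0.0), h_avg(10.0)
```

Even though the void starts at 10% of the volume, its exponential growth makes ⟨H⟩ climb from the volume-weighted initial value toward the void's rate, i.e., the average expansion rate increases with time.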
Coyle, Thomas R; Rindermann, Heiner; Hancock, Dale
2016-10-01
Cognitive ability stimulates economic productivity. However, the effects of cognitive ability may be stronger in free and open economies, where competition rewards merit and achievement. To test this hypothesis, ability levels of intellectual classes (top 5%) and average classes (country averages) were estimated using international student assessments (Programme for International Student Assessment; Trends in International Mathematics and Science Study; and Progress in International Reading Literacy Study) (N = 99 countries). The ability levels were correlated with indicators of economic freedom (Fraser Institute), scientific achievement (patent rates), innovation (Global Innovation Index), competitiveness (Global Competitiveness Index), and wealth (gross domestic product). Ability levels of intellectual and average classes strongly predicted all economic criteria. In addition, economic freedom moderated the effects of cognitive ability (for both classes), with stronger effects at higher levels of freedom. Effects were particularly robust for scientific achievements when the full range of freedom was analyzed. The results support cognitive capitalism theory: cognitive ability stimulates economic productivity, and its effects are enhanced by economic freedom. © The Author(s) 2016.
Nygren, D J; Ukeritis, M D
1992-11-01
As the healthcare crisis mounts, healthcare organizations must be managed by especially competent leaders. It is important for executives to assess and develop the competencies necessary to become "outstanding" leaders. In our study of leadership competencies among leaders of religious orders, we found that outstanding and average leaders appear to share characteristics such as the ability to articulate their group's mission, the ability to act efficiently, and the tendency to avoid impulsive behavior or excessive emotional expression. Outstanding leaders, however, differed from average leaders in seemingly small but significant ways. For instance, nearly three times as often as average leaders, outstanding leaders expressed a desire to perform tasks well, or better than they had been performed in the past. The study also assessed how members of religious orders perceived their leaders. In general, members tended to rate the leaders of their religious institutes as transformational leaders: leaders who welcomed doing things in new ways and inspired their staffs to search out new ways to provide services.
Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo
2016-09-01
High-throughput ultrashort pulse laser machining is investigated on various industrial grade metals (aluminum, copper, and stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high-average-power picosecond laser in conjunction with a unique, in-house developed polygon mirror-based biaxial scanning system. To this end, different concepts of polygon scanners are engineered and tested to find the best architecture for high-speed and precision laser beam scanning. In order to identify the optimum conditions for efficient processing when using high average laser powers, the depths of cavities made in the samples by varying the processing parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. For overlapping pulses of optimum fluence, the removal rate is as high as 27.8 mm3/min for aluminum, 21.4 mm3/min for copper, 15.3 mm3/min for stainless steel, and 129.1 mm3/min for Al2O3 when the samples are irradiated with a laser beam of 187 W average power. On stainless steel, it is demonstrated that the removal rate increases to 23.3 mm3/min when the laser beam is moved very fast. This is due to the low pulse overlap achieved at a beam deflection speed of 800 m/s; thus, laser beam shielding can be avoided even when irradiating with high-repetition-rate 20-MHz pulses.
Vortex reconnection rate, and loop birth rate, for a random wavefield
Hannay, J. H.
2017-04-01
A time dependent, complex scalar wavefield in three dimensions contains curved zero lines, wave ‘vortices’, that move around. From time to time pairs of these lines contact each other and ‘reconnect’ in a well studied manner, and at other times tiny loops of new line appear from nowhere (births) and grow, or the reverse, existing loops shrink and disappear (deaths). These three types are known to be the only generic events. Here the average rate of their occurrences per unit volume is calculated exactly for a Gaussian random wavefield that has isotropic, stationary statistics, arising from a superposition of an infinity of plane waves in different directions. A simplifying ‘axis fixing’ technique is introduced to achieve this. The resulting formulas are proportional to the standard deviation of angular frequencies, and depend in a simple way on the second and fourth moments of the power spectrum of the plane waves. Reconnections turn out to be more common than births and deaths combined. As an expository preliminary, the case of two dimensions, where the vortices are points, is studied and the average rate of pair creation (and likewise destruction) per unit area is calculated.
Simultaneous inference for model averaging of derived parameters
DEFF Research Database (Denmark)
Jensen, Signe Marie; Ritz, Christian
2015-01-01
Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
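To make the model-averaging idea above concrete, here is a minimal sketch of one common way to average per-model estimates of a derived parameter, using AIC-based Akaike weights. This is not the paper's method (its contribution is asymptotically correct standard errors and simultaneous intervals, which are not reproduced here), and all numbers are hypothetical.

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i proportional to exp(-dAIC_i / 2), normalized to sum to 1."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def model_averaged_estimate(estimates, aics):
    """Weighted average of per-model estimates of the same derived parameter."""
    w = akaike_weights(aics)
    return sum(wi * ei for wi, ei in zip(w, estimates))

# Hypothetical example: three candidate models, each yielding an estimate
# of the same derived parameter computed in an after-fitting step.
aics = [100.0, 101.2, 104.5]
estimates = [2.10, 2.35, 1.90]
theta_avg = model_averaged_estimate(estimates, aics)
```

The averaged estimate is pulled toward the best-supported model (lowest AIC) while still reflecting the others.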
Huete-Stauffer, Tamara M.
2016-05-23
Organism size reduction with increasing temperature has been suggested as a universal response to global warming. Since genome size is usually correlated to cell size, reduction of genome size in unicells could be a parallel outcome of warming at ecological and evolutionary time scales. In this study, the short-term response of cell size and nucleic acid content of coastal marine prokaryotic communities to temperature was studied over a full annual cycle at a NE Atlantic temperate site. We used flow cytometry and experimental warming incubations, spanning a 6°C range, to analyze the hypothesized reduction with temperature in the size of the widespread flow cytometric bacterial groups of high and low nucleic acid content (HNA and LNA bacteria, respectively). Our results showed decreases in size in response to experimental warming, which were more marked in the 0.8 μm pre-filtered treatments than in the whole community treatments, thus excluding a role of protistan grazers in our findings. Interestingly, a significant effect of temperature on reducing the average nucleic acid content (NAC) of prokaryotic cells in the communities was also observed. Cell size and nucleic acid decrease with temperature were correlated, showing a common mean decrease of 0.4% per °C. The usually larger HNA bacteria consistently showed a greater reduction in cell size and NAC compared with their LNA counterparts, especially during the spring phytoplankton bloom period associated with maximum bacterial growth rates in response to nutrient availability. Our results show that the already smallest planktonic microbes, yet with key roles in global biogeochemical cycling, are likely undergoing important structural shrinkage in response to rising temperatures.
DEFF Research Database (Denmark)
Hansen, Ernst Albin; Jørgensen, Lars Vincents; Sjøgaard, Gisela
2004-01-01
metabolic variables and to perform a physiological evaluation of five different kinematic models for calculating IP in cycling. Results showed that IP was statistically different between the kinematic models applied. IP based on metabolic variables (IP(met)) was 15, 41, and 91 W at 61, 88, and 115 rpm...... significantly with pedal rate - leg movements accounting for the largest fraction. Further, external power (EP) affected IP significantly such that IP was larger at moderate than at low EP at the majority of the pedal rates applied but on average this difference was only 8%....
Brennan, Julia M; Bednarczyk, Robert A; Richards, Jennifer L; Allen, Kristen E; Warraich, Gohar J; Omer, Saad B
2017-01-01
To evaluate trends in rates of personal belief exemptions (PBEs) to immunization requirements for private kindergartens in California that practice alternative educational methods. We used California Department of Public Health data on kindergarten PBE rates from 2000 to 2014 to compare annual average increases in PBE rates between schools. Alternative schools had an average PBE rate of 8.7%, compared with 2.1% among public schools. Waldorf schools had the highest average PBE rate of 45.1%, which was 19 times higher than in public schools (incidence rate ratio = 19.1; 95% confidence interval = 16.4, 22.2). Montessori and holistic schools had the highest average annual increases in PBE rates, slightly higher than Waldorf schools (Montessori: 8.8%; holistic: 7.1%; Waldorf: 3.6%). Waldorf schools had exceptionally high average PBE rates, and Montessori and holistic schools had higher annual increases in PBE rates. Children in these schools may be at higher risk for spreading vaccine-preventable diseases if trends are not reversed.
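The incidence rate ratio reported above can be illustrated with a crude (unadjusted) calculation from the summary rates; the published IRR of 19.1 is model-adjusted, so the crude ratio comes out slightly higher. A minimal sketch:

```python
# Average PBE rates from the study summary (percent of kindergartners
# with a personal belief exemption).
waldorf_rate = 45.1
public_rate = 2.1

# Crude incidence rate ratio: Waldorf schools relative to public schools.
# (A confidence interval would require the underlying counts, which the
# summary does not give.)
crude_irr = waldorf_rate / public_rate
```

The crude ratio of about 21.5 is in the same range as the adjusted estimate of 19.1 (95% CI 16.4, 22.2).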
Kontosic, I; Vukelić, M; Pancić, M; Kunisek, J
1994-12-01
Physical work load was estimated in a female conveyor-belt worker in a bottling plant. Estimation was based on continuous measurement and on calculation of average heart rate values in three-minute and one-hour periods and during the total measuring period. The thermal component of the heart rate was calculated by means of the corrected effective temperature, for the one-hour periods. The average heart rate at rest was also determined. The work component of the heart rate was calculated by subtraction of the resting heart rate and the heart rate measured at 50 W, using a regression equation. The average estimated gross energy expenditure during the work was 9.6 +/- 1.3 kJ/min corresponding to the category of light industrial work. The average estimated oxygen uptake was 0.42 +/- 0.06 L/min. The average performed mechanical work was 12.2 +/- 4.2 W, i.e. the energy expenditure was 8.3 +/- 1.5%.
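The subtraction-and-regression step described above can be sketched as follows, assuming (as the abstract implies) a linear rise of heart rate with workload between rest and the 50 W calibration point. The numeric values are hypothetical, chosen only to land near the reported 12.2 W average.

```python
def work_component(hr_measured, hr_rest):
    """Work-related pulse: measured heart rate minus resting heart rate (bpm)."""
    return hr_measured - hr_rest

def estimated_workload(hr_measured, hr_rest, hr_at_50w):
    """Estimate mechanical workload (W), assuming heart rate rises
    linearly from rest (0 W) to the rate measured at 50 W."""
    beats_per_watt = (hr_at_50w - hr_rest) / 50.0
    return work_component(hr_measured, hr_rest) / beats_per_watt

# Hypothetical numbers: 70 bpm at rest, 95 bpm at 50 W on the ergometer,
# 76 bpm averaged over a work period on the conveyor belt.
load = estimated_workload(hr_measured=76, hr_rest=70, hr_at_50w=95)
```

With these figures the worker's estimated mechanical output is 12 W, comparable to the 12.2 ± 4.2 W reported.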
Analytical expressions for conditional averages: A numerical test
DEFF Research Database (Denmark)
Pécseli, H.L.; Trulsen, J.
1991-01-01
Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...
Mental health care and average happiness: strong effect in developed nations.
Touburg, Giorgio; Veenhoven, Ruut
2015-07-01
Mental disorder is a main cause of unhappiness in modern society and investment in mental health care is therefore likely to add to average happiness. This prediction was checked in a comparison of 143 nations around 2005. Absolute investment in mental health care was measured using the per capita number of psychiatrists and psychologists working in mental health care. Relative investment was measured using the share of mental health care in the total health budget. Average happiness in nations was measured with responses to survey questions about life-satisfaction. Average happiness appeared to be higher in countries that invest more in mental health care, both absolutely and relative to investment in somatic medicine. A data split by level of development shows that this difference exists only among developed nations. Among these nations the link between mental health care and happiness is quite strong, both in an absolute sense and compared to other known societal determinants of happiness. The correlation between happiness and share of mental health care in the total health budget is twice as strong as the correlation between happiness and size of the health budget. A causal effect is likely, but cannot be proved in this cross-sectional analysis.
Mutation rate estimation for 15 autosomal STR loci in a large population from Mainland China.
Zhao, Zhuo; Zhang, Jie; Wang, Hua; Liu, Zhi-Peng; Liu, Ming; Zhang, Yuan; Sun, Li; Zhang, Hui
2015-09-01
STRs (short tandem repeats) are well known as powerful genetic markers and are widely used in studying human population genetics. Compared with conventional genetic markers, the mutation rate of STRs is higher. Additionally, mutations at STR loci do not lead to genetic inconsistencies between the genotypes of parents and children; therefore, the analysis of STR mutation is well suited to assessing population mutation. In this study, we focused on 15 autosomal STR loci. DNA samples from a total of 42,416 unrelated healthy individuals (19,037 trios) from the population of Mainland China, collected between Jan 2012 and May 2014, were successfully investigated. In our study, the allele frequencies, paternal mutation rates, maternal mutation rates and average mutation rates were determined. Furthermore, we also investigated the relationship between paternal age, maternal age, area, time of pregnancy and the average mutation rate. We found that the paternal mutation rate was higher than the maternal mutation rate, and that the paternal, maternal and average mutation rates were positively correlated with paternal age, maternal age and time of pregnancy, respectively. Additionally, the average mutation rate of coastal areas was higher than that of inland areas.
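The per-locus rates described above follow from a simple count over meioses: each trio contributes one paternal and one maternal meiosis at a given locus. A minimal sketch with a hypothetical mutation count (the study does not give per-locus counts in this summary):

```python
def locus_mutation_rate(n_mutations, n_trios):
    """Overall mutation rate per locus per meiosis.
    Each trio contributes two observed meioses (one paternal, one maternal)."""
    return n_mutations / (2.0 * n_trios)

def parental_rate(n_mutations, n_trios):
    """Paternal-only (or maternal-only) rate: one meiosis of that type per trio."""
    return n_mutations / float(n_trios)

# Hypothetical counts for one STR locus across the 19,037 trios:
rate = locus_mutation_rate(n_mutations=55, n_trios=19037)
```

A count of 55 mutations in 19,037 trios gives a rate of roughly 1.4 × 10⁻³ per meiosis, a typical order of magnitude for autosomal STR loci.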
Influence of Yoga and Ayurveda on self-rated sleep in a geriatric population.
Manjunath, N K; Telles, Shirley
2005-05-01
Sleep in older persons is characterized by decreased ability to stay asleep, resulting in fragmented sleep and reduced daytime alertness. Pharmacological treatment of insomnia in older persons is associated with hazardous side effects. Hence, the present study was designed to compare the effects of Yoga and Ayurveda on self-rated sleep in a geriatric population. Of the 120 residents from a home for the aged, 69 were stratified based on age (five year intervals) and randomly allocated to three groups, i.e., Yoga (physical postures, relaxation techniques, voluntarily regulated breathing and lectures on yoga philosophy), Ayurveda (a herbal preparation), and Wait-list control (no intervention). The groups were evaluated for self-assessment of sleep over a one week period at baseline, and after three and six months of the respective interventions. The Yoga group showed a significant decrease in the time taken to fall asleep (approximate group average decrease: 10 min, P<0.05), an increase in the total number of hours slept (approximate group average increase: 60 min, P<0.05) and in the feeling of being rested in the morning based on a rating scale (P<0.05) after six months. The other groups showed no significant change. Yoga practice improved different aspects of sleep in a geriatric population.
Nonequilibrium statistical averages and thermo field dynamics
International Nuclear Information System (INIS)
Marinaro, A.; Scarpetta, Q.
1984-01-01
An extension of thermo field dynamics is proposed which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.
A Constant Rate of Spontaneous Mutation in DNA-Based Microbes
Drake, John W.
1991-08-01
In terms of evolution and fitness, the most significant spontaneous mutation rate is likely to be that for the entire genome (or its nonfrivolous fraction). Information is now available to calculate this rate for several DNA-based haploid microbes, including bacteriophages with single- or double-stranded DNA, a bacterium, a yeast, and a filamentous fungus. Their genome sizes vary by ≈6500-fold. Their average mutation rates per base pair vary by ≈16,000-fold, whereas their mutation rates per genome vary by only ≈2.5-fold, apparently randomly, around a mean value of 0.0033 per DNA replication. The average mutation rate per base pair is inversely proportional to genome size. Therefore, a nearly invariant microbial mutation rate appears to have evolved. Because this rate is uniform in such diverse organisms, it is likely to be determined by deep general forces, perhaps by a balance between the usually deleterious effects of mutation and the physiological costs of further reducing mutation rates.
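Drake's observation above implies a simple relation: if the per-genome rate is roughly constant, the per-base-pair rate must scale inversely with genome size. A minimal numeric sketch (genome sizes are approximate, for scale only):

```python
# Mean spontaneous mutation rate per genome per DNA replication,
# as reported for DNA-based microbes (Drake 1991).
MU_GENOME = 0.0033

def per_bp_rate(genome_size_bp):
    """Per-base-pair mutation rate implied by a constant per-genome rate."""
    return MU_GENOME / genome_size_bp

# Illustrative genome sizes in base pairs (approximate):
phage_lambda = 4.9e4
e_coli = 4.6e6
yeast = 1.2e7

# The smaller the genome, the higher the per-bp rate, in direct proportion.
ratio = per_bp_rate(phage_lambda) / per_bp_rate(yeast)
```

Across the ~6500-fold range of genome sizes considered, the per-bp rate varies by the same factor when the per-genome rate is held fixed, matching the inverse proportionality the abstract reports.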
Time average vibration fringe analysis using Hilbert transformation
International Nuclear Information System (INIS)
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-01-01
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
Directory of Open Access Journals (Sweden)
Kou Ulla
2002-09-01
Full Text Available Abstract Background Even though the annual incidence rate of measles has dramatically decreased in industrialised countries since the implementation of universal immunisation programmes, cases continue to occur in countries where endemic measles transmission has been interrupted and in countries where adequate levels of immunisation coverage have not been maintained. The objective of this study is to develop a model to estimate the average cost per measles case and per adverse event following measles immunisation, using the Netherlands (NL), the United Kingdom (UK) and Canada as examples. Methods Parameter estimates were based on a review of the published literature. A decision tree was built to represent the complications associated with measles cases and adverse events following immunisation. Monte Carlo simulation techniques were used to account for uncertainty. Results From the perspective of society, we estimated the average cost per measles case to be US$276, US$307 and US$254 for the NL, the UK and Canada, respectively, and the average cost of adverse events following immunisation per vaccinee to be US$1.43, US$1.93 and US$1.51 for the NL, UK and Canada, respectively. Conclusions These average cost estimates could be combined with incidence estimates and the costs of immunisation programmes to provide estimates of the cost of measles to industrialised countries. Such estimates could be used as a basis to estimate the potential economic gains of global measles eradication.
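The decision-tree-plus-Monte-Carlo approach described above can be sketched in miniature: draw a branch of the tree at random for each simulated case and average the costs. The branch probabilities and costs below are hypothetical stand-ins, not the study's literature-derived parameters.

```python
import random

random.seed(1)

# Hypothetical outcome branches for one measles case: (label, probability, cost US$).
branches = [
    ("uncomplicated, home care", 0.70,   80.0),
    ("GP visit / outpatient",    0.25,  400.0),
    ("hospitalisation",          0.05, 3000.0),
]

def draw_cost():
    """Sample one case's cost by walking the cumulative branch probabilities."""
    u = random.random()
    cum = 0.0
    for _, p, cost in branches:
        cum += p
        if u < cum:
            return cost
    return branches[-1][2]

n = 50_000
mc_mean = sum(draw_cost() for _ in range(n)) / n

# The analytic expectation the simulation should converge to:
exact = sum(p * c for _, p, c in branches)
```

In the real model each branch's probability and cost would themselves be drawn from distributions, so the simulation yields an uncertainty interval around the average cost, not just a point estimate.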
Safety Impact of Average Speed Control in the UK
DEFF Research Database (Denmark)
Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert
2016-01-01
of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....
Experimental study of average void fraction in low-flow subcooled boiling
International Nuclear Information System (INIS)
Sun Qi; Wang Xiaojun; Xi Zhao; Zhao Hua; Yang Ruichang
2005-01-01
The low-flow subcooled void fraction at medium pressure was investigated in this paper using a high-temperature, high-pressure single-sensor optical probe. The average void fraction was then obtained through integration of the local void fraction over the cross-section. The experimental data were compared with a void fraction model proposed previously. The results show that the predictions of this model agree with the data quite well. Comparisons of the Saha and Levy models with the low-flow subcooled data show that the Saha model distinctly overestimates the experimental data, and the Levy model also yields relatively high predictions, although it performs better than the Saha model. (author)
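The cross-section integration step mentioned above amounts to area-averaging the local void fraction. For a radially symmetric profile in a circular channel, the average is (2/R²)∫₀ᴿ α(r) r dr. A minimal sketch with a hypothetical wall-peaked profile (the paper's measured profiles are not reproduced here):

```python
def area_averaged_void_fraction(alpha_local, radius, n=10_000):
    """Area-average a radially symmetric local void fraction over a circular
    cross-section:  <alpha> = (2/R^2) * integral_0^R alpha(r) * r dr,
    evaluated with the midpoint rule."""
    dr = radius / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr          # midpoint of the i-th radial strip
        total += alpha_local(r) * r * dr
    return 2.0 * total / radius**2

# Hypothetical wall-peaked profile, typical of subcooled boiling:
R = 0.01  # channel radius, m
profile = lambda r: 0.05 + 0.10 * (r / R) ** 4

avg = area_averaged_void_fraction(profile, R)
```

For this profile the exact average is 0.05 + 0.10·(2/6) ≈ 0.0833, and the midpoint rule reproduces it to high accuracy; in practice the integrand would be the probe's local measurements rather than an analytic function.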
Proverbio, Alice M; Manfrin, Luigi; Arcari, Laura A; De Benedetto, Francesco; Gazzola, Martina; Guardamagna, Matteo; Lozano Nasi, Valentina; Zani, Alberto
2015-01-01
Previous studies suggested that listening to different types of music may modulate differently psychological mood and physiological responses associated with the induced emotions. In this study the effect of listening to instrumental classical vs. atonal contemporary music was examined in a group of 50 non-expert listeners. The subjects' heart rate and diastolic and systolic blood pressure values were measured while they listened to music of different style and emotional typologies. Pieces were selected by asking a group of composers and conservatory professors to suggest a list of the most emotional music pieces (from the Renaissance to the present time). A total of 214 suggestions from 20 respondents were received. They were then asked to identify which pieces best induced feelings of agitation, joy or pathos in the listener, and the number of suggested pieces per style was computed. Atonal pieces were more frequently indicated as agitating, and tonal pieces as joyful. The presence/absence of tonality in a musical piece did not affect the affective dimension of pathos (being touching). From the most frequently cited, six pieces were selected that were comparable in structure and style, to represent each emotion and style. They were equally evaluated as unfamiliar by an independent group of 10 students of the same cohort and were then used as stimuli for the experimental session in which autonomic parameters were recorded. Overall, listening to atonal music (independent of the pieces' emotional characteristics) was associated with a reduced heart rate (fear bradycardia) and increased blood pressure (both diastolic and systolic), possibly reflecting an increase in alertness and attention, psychological tension, and anxiety. This evidence fits with the results of the aesthetic assessment showing how, overall, atonal music is perceived as more agitating and less joyful than tonal music.
The drug target genes show higher evolutionary conservation than non-target genes.
Lv, Wenhua; Xu, Yongdeng; Guo, Yiying; Yu, Ziqi; Feng, Guanglong; Liu, Panpan; Luan, Meiwei; Zhu, Hongjie; Liu, Guiyou; Zhang, Mingming; Lv, Hongchao; Duan, Lian; Shang, Zhenwei; Li, Jin; Jiang, Yongshuai; Zhang, Ruijie
2016-01-26
Although evidence indicates that drug target genes share some common evolutionary features, there have been few studies analyzing evolutionary features of drug targets at an overall level. Therefore, we conducted an analysis aimed at investigating the evolutionary characteristics of drug target genes. We compared the evolutionary conservation between human drug target genes and non-target genes by combining evolutionary features and network topological properties in the human protein-protein interaction network. The evolutionary rate, conservation score and the percentage of orthologous genes across 21 species were included in our study. Meanwhile, four topological features, including the average shortest path length, betweenness centrality, clustering coefficient and degree, were considered for the comparison analysis. We obtained four results: compared with non-drug target genes, drug target genes had (1) lower evolutionary rates; (2) higher conservation scores; (3) higher percentages of orthologous genes; and (4) a tighter network structure, including higher degrees, betweenness centrality and clustering coefficients, and lower average shortest path lengths. These results demonstrate that drug target genes are more evolutionarily conserved than non-drug target genes. We hope that our study will provide valuable information for other researchers who are interested in the evolutionary conservation of drug targets.
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
2010-01-01
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and nume...
Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.
Shinzato, Takashi
2015-01-01
In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.
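The distinction the abstract draws, between the minimum of the expected risk and the expectation of the per-scenario minimal risk, can be illustrated with a toy two-asset mean-variance model. For uncorrelated assets with variances v1, v2 and weights summing to 1, the minimal portfolio variance is v1·v2/(v1+v2); since that function is concave, averaging over scenarios before minimising (the operations-research route) overstates the typical minimal risk. This sketch uses a hypothetical scenario distribution, not the paper's replica-based self-averaging analysis.

```python
import random

random.seed(7)

def min_risk(v1, v2):
    """Minimal variance of a two-asset portfolio of uncorrelated assets with
    weights summing to 1: risk(w) = w^2*v1 + (1-w)^2*v2 is minimised at
    w* = v2/(v1+v2), giving v1*v2/(v1+v2)."""
    return v1 * v2 / (v1 + v2)

# Random scenarios for the two asset variances (hypothetical distribution).
scen = [(random.uniform(0.5, 2.0), random.uniform(0.5, 2.0))
        for _ in range(20_000)]

# Operations-research route: minimise the *expected* risk, i.e. plug the
# scenario-averaged variances into the minimisation.
ev1 = sum(v1 for v1, _ in scen) / len(scen)
ev2 = sum(v2 for _, v2 in scen) / len(scen)
min_expected_risk = min_risk(ev1, ev2)

# Scenario-wise route: minimise the risk in each scenario, then average.
expected_min_risk = sum(min_risk(v1, v2) for v1, v2 in scen) / len(scen)
```

By Jensen's inequality the expected minimal risk comes out strictly below the minimum expected risk, which is the gap between the two quantities the abstract compares.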
International Nuclear Information System (INIS)
Benth, Fred Espen; Taib, Che Mohd Imran Che
2013-01-01
We extend the concept of half life of an Ornstein–Uhlenbeck process to Lévy-driven continuous-time autoregressive moving average processes with stochastic volatility. The half life becomes state dependent, and we analyze its properties in terms of the characteristics of the process. An empirical example based on daily temperatures observed in Petaling Jaya, Malaysia, is presented, where the proposed model is estimated and the distribution of the half life is simulated. The stationarity of the dynamics yields futures prices that asymptotically tend to a constant at an exponential rate as time to maturity goes to infinity. The rate is characterized by the eigenvalues of the dynamics. An alternative description of this convergence can be given in terms of our concept of half life. - Highlights: • The concept of half life is extended to Lévy-driven continuous-time autoregressive moving average processes. • The dynamics of Malaysian temperatures are modeled using a continuous-time autoregressive model with stochastic volatility. • Forward prices on temperature become constant as time to maturity tends to infinity. • Convergence in time to maturity is at an exponential rate given by the eigenvalues of the temperature model.
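For the classical (Gaussian) Ornstein–Uhlenbeck special case, the half life is constant, ln 2 / λ, which is the baseline that the state-dependent generalization above extends. A hedged sketch (the parameter values are hypothetical, not fitted to the Petaling Jaya data):

```python
import numpy as np

# Classical OU: dX_t = -lam * (X_t - mu) dt + sigma dW_t.
# The expected deviation from the mean decays as exp(-lam * t),
# so the half life (time for the expected deviation to halve) is ln(2)/lam.
lam, mu, sigma = 0.2, 25.0, 1.5        # hypothetical daily-temperature values
half_life = np.log(2) / lam
print(f"analytic half life: {half_life:.2f} days")

# Monte Carlo check: start many paths at mu + 10 and record when the
# mean deviation first drops below 5 (half the initial deviation).
dt, n_steps, n_paths = 0.01, 4000, 2000
rng = np.random.default_rng(1)
x = np.full(n_paths, mu + 10.0)
t_half = None
for i in range(1, n_steps + 1):
    x += -lam * (x - mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    if t_half is None and (x.mean() - mu) <= 5.0:
        t_half = i * dt
print(f"simulated half life: {t_half:.2f} days")
```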
A Constructed Freshwater Wetland Shows Signs of Declining Net Ecosystem Exchange
Anderson, F. E.; Bergamaschi, B. A.; Windham-Myers, L.; Byrd, K. B.; Drexler, J. Z.; Fujii, R.
2014-12-01
The USGS constructed a freshwater-wetland complex on Twitchell Island in the Sacramento-San Joaquin Delta (Delta), California, USA, in 1997 and maintained it until 2012 to investigate strategies for biomass accretion and reduction of oxidative soil loss. We studied an area of the wetland complex covered mainly by dense patches of hardstem bulrush (Schoenoplectus acutus) and cattails (Typha spp.), with smaller areas of floating and submerged vegetation, that was maintained at an average depth of 55 cm. Using eddy covariance measurements of carbon and energy fluxes, we found that the combination of water management and the region's Mediterranean climate created conditions where peak growing season daily means of net ecosystem exchange (NEE) reached -45 gCO2 m-2 d-1 and averaged around -30 gCO2 m-2 d-1 from 2002 through 2004. However, when measurements resumed in 2010, NEE rates were a fraction of the rates previously measured, approximately -6 gCO2 m-2 d-1. Interestingly, NEE rates in 2011 doubled compared to 2010 (-13 gCO2 m-2 d-1). Methane fluxes, collected in 2010 to assess a complete atmospheric carbon budget, were positive throughout the year, with daily mean flux values ranging from 50 to 300 mg CH4 m-2 d-1. As a result, methane flux reduced NEE values by approximately one-third, and when the global warming potential was considered, the wetland became a net global warming potential source. We found that carbon cycling in a constructed wetland is complex and can change over annual and decadal timescales. We investigated possible reasons for differences between flux measurements from 2002 to 2004 and those from 2010 and 2011: (1) changes in methodology, (2) differences in weather conditions, (3) differences in gross primary productivity relative to respiration rates, and (4) the amount of living plant tissue relative to brown accumulations of senesced plant litter. We hypothesize that large mats of senesced material within the flux footprint could have
7 CFR 1437.11 - Average market price and payment factors.
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Average market price and payment factors. 1437.11... ASSISTANCE PROGRAM General Provisions § 1437.11 Average market price and payment factors. (a) An average... average market price by the applicable payment factor (i.e., harvested, unharvested, or prevented planting...
Anomalous behavior of q-averages in nonextensive statistical mechanics
International Nuclear Information System (INIS)
Abe, Sumiyoshi
2009-01-01
A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, however, it has been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in any of these cases.
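The q-average referred to above is the escort-distribution expectation, ⟨A⟩_q = Σ_i p_i^q A_i / Σ_j p_j^q, which reduces to the ordinary average at q = 1. A minimal sketch of its heightened sensitivity to a small deformation of the distribution (the two-state example is purely illustrative; the instability analysis in the paper concerns more general deformations):

```python
import numpy as np

def q_average(p, A, q):
    """Escort (q-)average of observable A under distribution p,
    as used in nonextensive statistical mechanics."""
    w = p ** q
    return np.sum(w * A) / np.sum(w)

# Deform a symmetric two-state distribution by eps. At q = 1 the average
# shifts by eps; for q != 1 the shift is amplified (by a factor q here).
A = np.array([0.0, 1.0])
p = np.array([0.5, 0.5])
eps = 1e-3
p_deformed = np.array([0.5 + eps, 0.5 - eps])

for q in (1.0, 2.0):
    print(q, q_average(p, A, q), q_average(p_deformed, A, q))
```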
Average Gait Differential Image Based Human Recognition
Directory of Open Access Journals (Sweden)
Jinyan Chen
2014-01-01
The difference between adjacent frames of human walking contains useful information for gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method for gait-based recognition and consumes less memory.
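The AGDI construction described above can be sketched in a few lines: take absolute silhouette differences between adjacent frames and average them over the sequence. A toy example (the synthetic moving-pixel sequence is purely illustrative):

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """AGDI: average of absolute differences between adjacent binary
    silhouette frames.

    silhouettes: array of shape (T, H, W) with values in {0, 1}.
    """
    diffs = np.abs(np.diff(silhouettes.astype(np.float64), axis=0))
    return diffs.mean(axis=0)

# Toy sequence: a single "limb" pixel moving across a 4x4 frame.
T, H, W = 5, 4, 4
seq = np.zeros((T, H, W))
for t in range(T):
    seq[t, 2, t % W] = 1.0

agdi = average_gait_differential_image(seq)
print(agdi.shape)
```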
Asynchronous Gossip for Averaging and Spectral Ranking
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
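The classical gossip baseline that the first variant above modifies can be sketched as follows: each update replaces a random pair of node values with their mean, which preserves the global average and drives all nodes toward it (the complete-graph topology and round count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def pairwise_gossip(x, n_rounds=2000):
    """Classical pairwise gossip: at each step a random pair of nodes
    replaces both values with their mean. The global average is invariant,
    and all values converge to it."""
    x = x.astype(np.float64).copy()
    n = len(x)
    for _ in range(n_rounds):
        i, j = rng.choice(n, size=2, replace=False)
        m = 0.5 * (x[i] + x[j])
        x[i] = x[j] = m
    return x

x0 = rng.uniform(0.0, 100.0, size=20)
x = pairwise_gossip(x0)
print(x0.mean(), x.std())   # mean preserved, spread collapses
```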
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
High average power supercontinuum sources
Indian Academy of Sciences (India)
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.
Edmonds, Grant W.; Goldberg, Lewis R.; Hampson, Sarah E.; Barckley, Maureen
2013-01-01
We report on the longitudinal stability of personality traits across an average of 40 years in the Hawaii Personality and Health Cohort, relating childhood teacher assessments of personality to adult self- and observer-reports. Stabilities based on self-ratings in adulthood were compared to those measured by the Structured Interview for the Five-Factor Model (SIFFM; Trull & Widiger, 1997) and trait ratings completed by interviewers. Although convergence between self-reports and observer-ratings was modest, childhood traits demonstrated similar levels of stability across methods in adulthood. Extraversion and Conscientiousness generally showed higher stabilities, whereas Neuroticism showed none. For Agreeableness and Intellect/Openness, stability was highest when assessed with observer-ratings. These findings are discussed in terms of differences in trait evaluativeness and observability across measurement methods. PMID:24039315
Directory of Open Access Journals (Sweden)
Erwin Nofyan
2016-11-01
Research on the effect of adding the insecticide carbofuran to cow feces on the consumption rate and absorption efficiency of the earthworm Pheretima javanica Gates was conducted from June to August 2016 at the Animal Physiology Laboratory, Biology Department, Faculty of Mathematics and Science, Sriwijaya University, Indralaya, Ogan Ilir, South Sumatera. The purpose of this research was to study the effect of carbofuran on the consumption rate and absorption efficiency of the earthworm P. javanica. This research provides farmers with information on the effect of carbofuran on non-target animals, especially the earthworm P. javanica. A completely randomized design with 6 treatments and 5 replications was used. The treatments were carbofuran concentrations of 0% (control), 0.1%, 0.2%, 0.3%, 0.4%, and 0.5%. Data were analyzed using analysis of variance; where significant differences were found, the analysis continued with Duncan's test at the 95% confidence level. The results show that carbofuran concentration had a significant effect on the average consumption rate and the absorption efficiency. The lowest average consumption rate of P. javanica occurred at a concentration of 0.5% (0.23 ± 0.02 mg/g day) and the highest at 0% (control) (2.53 ± 0.05 mg/g day). The lowest average absorption efficiency occurred at 0% (control) (40.78 ± 2.56%) and the highest at 0.5% (70.76 ± 3.67%). Keywords: carbofuran, consumption rate, absorption efficiency, Pheretima javanica Gates.
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at a spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation, and potential exoatmospheric solar radiation under clear-sky conditions were used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance was estimated on a monthly and annual scale for each 1 km cell as the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
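The Hargreaves step described above combines clear-sky exoatmospheric radiation with the temperature range to estimate evaporative demand, and the water balance is then precipitation minus that demand. A hedged sketch using the common Hargreaves-Samani form (the coefficients and all input values are illustrative, not the study's calibration):

```python
import numpy as np

def hargreaves_pet(ra, tmean, tmax, tmin):
    """Reference evapotranspiration (mm/day), Hargreaves-Samani form.

    ra: extraterrestrial (clear-sky exoatmospheric) radiation, expressed
    in mm/day of equivalent evaporation. Coefficients follow the commonly
    cited form and are an illustrative assumption here.
    """
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(np.maximum(tmax - tmin, 0.0))

# Climatic water balance for one cell: precipitation minus evaporative demand.
ra, tmax, tmin, precip = 15.0, 24.0, 10.0, 3.2   # hypothetical monthly means
tmean = 0.5 * (tmax + tmin)
pet = hargreaves_pet(ra, tmean, tmax, tmin)
balance = precip - pet
print(pet, balance)
```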