Averaging and sampling for magnetic-observatory hourly data
Directory of Open Access Journals (Sweden)
J. J. Love
2010-11-01
Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium- and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium- and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
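The spot-versus-boxcar distinction in this abstract is easy to reproduce numerically. Below is a minimal Python sketch using a synthetic 1-min series (a smooth daily sine plus noise standing in for real observatory data; all names and values are illustrative, not the authors'):

```python
import numpy as np

# Synthetic 1-min "geomagnetic" record for one day: smooth daily variation
# plus minute-scale noise (a stand-in for real observatory data).
rng = np.random.default_rng(0)
t = np.arange(1440)                       # minutes since midnight
field = 50.0 * np.sin(2 * np.pi * t / 1440) + rng.normal(0.0, 3.0, t.size)

minutes = field.reshape(24, 60)           # one row per hour

spot = minutes[:, 0]                      # instantaneous "spot" value on the hour
boxcar = minutes.mean(axis=1)             # simple 1-h "boxcar" average

print(spot.shape, boxcar.shape)           # → (24,) (24,)
```

The boxcar value suppresses minute-scale noise (and with it some aliasing) at the cost of amplitude distortion of sub-hourly variation, which is the trade-off the paper quantifies.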
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was also able to provide hourly crash frequencies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wintoft, Peter; Wik, Magnus; Matzka, Jürgen; Shprits, Yuri
2017-11-01
We have developed neural network models that predict Kp from upstream solar wind data. We study the importance of various input parameters, starting with the magnetic component Bz, particle density n, and velocity V, and then adding the total field B and the By component. As we also notice a seasonal and UT variation in average Kp, we include functions of day-of-year and UT. Finally, as Kp is a global representation of the maximum range of geomagnetic variation over 3-hour UT intervals, we conclude that sudden changes in the solar wind can have a big effect on Kp, even though it is a 3-hour value. Therefore, 3-hour solar wind averages will not always appropriately represent the solar wind condition, and we introduce 3-hour maxima and minima values to address this problem to some degree. We find that introducing the total field B and the 3-hour maxima and minima, derived from 1-minute solar wind data, has a great influence on the performance. Due to the low number of samples for high Kp values, there can be considerable variation in predicted Kp for different networks with similar validation errors. We address this issue by using an ensemble of networks from which we use the median predicted Kp. The models (ensembles of networks) provide prediction lead times in the range of 20-90 min, given by the time it takes a solar wind structure to travel from L1 to Earth. Two models are implemented that can be run with real-time data: (1) IRF-Kp-2017-h3 uses the 3-hour averages of the solar wind data and (2) IRF-Kp-2017 uses, in addition to the averages, also the minima and maxima values. The IRF-Kp-2017 model has an RMS error of 0.55 and a linear correlation of 0.92 based on an independent test set with final Kp covering 2 years using ACE Level 2 data. The IRF-Kp-2017-h3 model has RMSE = 0.63 and correlation = 0.89. We also explore the errors when tested on another two-year period with real-time ACE data, which gives RMSE = 0.59 for IRF-Kp-2017 and RMSE = 0.73 for IRF-Kp-2017-h3. The errors as function ...
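The 3-hour maximum and minimum features described here reduce, in the simplest case, to block-wise statistics over the 1-min series. A hedged sketch with a synthetic solar wind speed series (variable names are illustrative, not those of the IRF-Kp-2017 pipeline):

```python
import numpy as np

# Synthetic 1-min solar wind speed for one day (km/s, random-walk-like).
rng = np.random.default_rng(1)
v = 400.0 + np.cumsum(rng.normal(0.0, 0.5, 1440))

blocks = v.reshape(8, 180)       # eight 3-hour blocks of 180 one-minute values

v_mean = blocks.mean(axis=1)     # plain 3-hour averages
v_max = blocks.max(axis=1)       # 3-hour maxima capture sudden enhancements
v_min = blocks.min(axis=1)       # 3-hour minima capture sudden drops
```

Feeding the maxima and minima alongside the averages lets a model see sudden changes that a 3-hour average alone smooths away.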
Maximum Hours Legislation and Female Employment in the 1920s: A Reassessment
Claudia Goldin
1986-01-01
The causes and consequences of state maximum hours laws for female workers, passed from the mid-1800s to the 1920s, are explored and are found to differ from a recent reinterpretation. Although maximum hours legislation reduced scheduled hours in 1920, the impact was minimal and it operated equally for men. Legislation affecting only women was symptomatic of a general desire by labor for lower hours, and these lower hours were achieved in the tight, and otherwise special, World War I labor ma...
The effects of disjunct sampling and averaging time on maximum mean wind speeds
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Mann, J.
2006-01-01
Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...
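The effect is easy to demonstrate: subsampling a record of consecutive averages can only discard peaks, never add them. A minimal sketch with synthetic 10-min wind speeds (Weibull draws, not the study's data):

```python
import numpy as np

# One year of synthetic 10-min mean wind speeds (52,560 periods per year).
rng = np.random.default_rng(2)
w = rng.weibull(2.0, 52560) * 8.0

annual_max_full = w.max()            # annual maximum of consecutive averages
annual_max_disjunct = w[::18].max()  # disjunct sampling: one value every 3 h

# The disjunct annual maximum is biased low (or equal), never high.
```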
49 CFR 398.6 - Hours of service of drivers; maximum driving time.
2010-10-01
... REGULATIONS TRANSPORTATION OF MIGRANT WORKERS § 398.6 Hours of service of drivers; maximum driving time. No person shall drive nor shall any motor carrier permit or require a driver employed or used by it to drive...
44 CFR 353.5 - Average cost per FEMA professional staff-hour.
2010-10-01
... 44 Emergency Management and Assistance Average cost per FEMA professional staff-hour. 353.5 Section 353.5 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT... of exercise objectives and scenarios, pre-exercise logistics, exercise conduct and participation...
Strips of hourly power options. Approximate hedging using average-based forward contracts
International Nuclear Information System (INIS)
Lindell, Andreas; Raab, Mikael
2009-01-01
We study approximate hedging strategies for a contingent claim consisting of a strip of independent hourly power options. The payoff of the contingent claim is a sum of the contributing hourly payoffs. As there is no forward market for specific hours, the fundamental problem is to find a reasonable hedge using exchange-traded forward contracts, e.g. average-based monthly contracts. The main result is a simple dynamic hedging strategy that reduces a significant part of the variance. The idea is to decompose the contingent claim into mathematically tractable components and to use empirical estimations to derive hedging deltas. Two benefits of the method are that the technique easily extends to more complex power derivatives and that only a few parameters need to be estimated. The hedging strategy based on the decomposition technique is compared with dynamic delta hedging strategies based on local minimum variance hedging, using a correlated traded asset. (author)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all the years, whereas this is not true for the minimum temperature series, so the two series are modelled separately. The possible SARIMA model has been chosen based on observing the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard errors of residuals. The adequacy of the selected model is determined using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of residuals, and using normal diagnostic checking through the kernel and normal density curves of the histogram and the Q-Q plot. Finally, forecasts of the monthly maximum and minimum temperature patterns of India for the next 3 years are presented with the help of the selected model.
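The seasonal D = 1 differencing step of the selected SARIMA model can be shown directly: subtract the value from the same month one year earlier. The series below is synthetic, not the Indian temperature records:

```python
import numpy as np

# Synthetic monthly maximum-temperature series (deg C) with an annual cycle.
rng = np.random.default_rng(3)
months = np.arange(420)                             # 35 years of monthly values
temp = 30.0 + 8.0 * np.sin(2 * np.pi * months / 12) + rng.normal(0.0, 1.0, 420)

# Seasonal difference at lag 12 removes the annual cycle before the
# seasonal (P, Q) and non-seasonal (p, q) terms are fitted.
seasonal_diff = temp[12:] - temp[:-12]
```

After this step the differenced series has far less variance than the raw one, which is what makes the low-order seasonal MA term sufficient.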
Determination of maximum physiologic thyroid uptake and correlation with 24-hour RAI uptake value
International Nuclear Information System (INIS)
Duldulao, M.; Obaldo, J.
2007-01-01
Full text: In hyperthyroid patients, thyroid uptake values are overestimated, sometimes approaching or exceeding 100%. This is physiologically and mathematically impossible. This study was undertaken to determine the maximum physiologic thyroid uptake value through a proposed simple method using a gamma camera. Methodology: Twenty-two patients (17 females and 5 males), with ages ranging from 19-61 y/o (mean age ± SD; 41 ± 12), with 24-hour uptake value of >50%, clinically hyperthyroid and referred for subsequent radioactive iodine therapy were studied. The computed maximum physiologic thyroid uptake was compared with the 24-hour uptake using the paired Student t-test and evaluated using linear regression analysis. Results: The computed physiologic uptake correlated poorly with the 24-hour uptake value. However, in the male subgroup, there was no statistically significant difference between the two (p=0.77). Linear regression analysis gives the following relationship: physiologic uptake (%) = 77.76 - 0.284 (24-hour RAI uptake value). Conclusion: Provided that proper regions of interest are applied with correct attenuation and background subtraction, determination of physiologic thyroid uptake may be obtained using the proposed method. This simple method may be useful prior to I-131 therapy for hyperthyroidism especially when a single uptake determination is performed. (author)
International Nuclear Information System (INIS)
Jain, P.C.
1985-12-01
The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by solar scientists and engineers each time they are needed, often by approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations for these parameters have been made for all latitude values from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables avoid the need for repetitive and approximate calculations and serve as a useful ready reference providing accurate values to solar energy scientists and engineers.
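As an illustration of the quantities tabulated here, the sketch below uses Spencer-style Fourier coefficients for the declination (coefficients quoted from memory; verify against Spencer's paper before reuse) and the standard day-length relation N = (2/15)·ω_s, with ω_s the sunset hour angle in degrees:

```python
import math

def spencer_declination(day_of_year):
    """Solar declination in radians from a Spencer-type Fourier series."""
    g = 2.0 * math.pi * (day_of_year - 1) / 365.0
    return (0.006918 - 0.399912 * math.cos(g) + 0.070257 * math.sin(g)
            - 0.006758 * math.cos(2 * g) + 0.000907 * math.sin(2 * g)
            - 0.002697 * math.cos(3 * g) + 0.00148 * math.sin(3 * g))

def max_sunshine_hours(lat_deg, day_of_year):
    """Maximum possible sunshine duration N = (2/15) * sunset hour angle."""
    decl = spencer_declination(day_of_year)
    phi = math.radians(lat_deg)
    x = max(-1.0, min(1.0, -math.tan(phi) * math.tan(decl)))  # clamp: polar day/night
    omega_s = math.degrees(math.acos(x))                      # sunset hour angle (deg)
    return 2.0 * omega_s / 15.0

print(round(max_sunshine_hours(0.0, 80), 1))   # → 12.0 (equator, near equinox)
```

The clamp handles high latitudes, where the argument of arccos leaves [-1, 1] and the day length saturates at 0 or 24 h.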
International Nuclear Information System (INIS)
Jain, P.C.
1984-01-01
The monthly average daily values of the extraterrestrial irradiation on a horizontal surface (H₀) and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by scientists each time they are needed, using approximate short-cut methods. Computations for these values have been made once and for all for latitude values of 60 deg. N to 60 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables should avoid the need for repetitive and approximate calculations and serve as a useful ready reference for solar energy scientists and engineers. (author)
PATTERNS OF THE MAXIMUM RAINFALL AMOUNTS REGISTERED IN 24 HOURS WITHIN THE OLTENIA PLAIN
Directory of Open Access Journals (Sweden)
ALINA VLĂDUŢ
2012-03-01
Full Text Available Patterns of the maximum rainfall amounts registered in 24 hours within the Oltenia Plain. The present study aims at rendering the main features of the maximum rainfall amounts registered in 24 h within the Oltenia Plain. We used 30-year time series (1980-2009) for seven meteorological stations. Generally, the maximum amounts in 24 h display the same pattern as the monthly mean amounts, namely higher values in the interval May-October. In terms of mean values, the highest amounts are registered in the western and northern extremity of the plain. The maximum values generally exceed 70 mm at all meteorological stations: D.T. Severin, 224 mm, July 1999; Slatina, 104.8 mm, August 2002; Caracal, 92.2 mm, July 1991; Bechet, 80.8 mm, July 2006; Craiova, 77.6 mm, April 2003. During the cold season, greater uniformity was noticed all over the plain, due to the cyclonic origin of rainfall compared to the warm season, when thermal convection is quite active and triggers local showers. In order to better emphasize the peculiarities of this parameter, we have calculated the frequency of different value classes (eight classes), as well as the probability of occurrence of different amounts. It resulted that the highest frequency (25-35%) is held by the first two classes of values (0-10 mm; 10.1-20 mm). The lowest frequency is registered for amounts of more than 100 mm, which generally display a probability of occurrence of less than 1%, and only in the western and eastern extremities of the plain.
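The frequency-by-class and exceedance-probability calculations described above amount to a histogram on 10-mm bins. A sketch with made-up 24-h maxima (not the Oltenia station data):

```python
import numpy as np

# Hypothetical sample of 24-h maximum rainfall amounts (mm) at one station.
amounts = np.array([4.2, 8.0, 12.5, 15.3, 18.9, 22.4, 27.0, 31.6,
                    36.2, 44.8, 52.1, 63.5, 77.6, 92.2, 104.8])

# Frequency (%) on 10-mm classes, open-ended above 80 mm.
bins = np.arange(0, 90, 10).tolist() + [np.inf]
counts, _ = np.histogram(amounts, bins=bins)
freq_pct = 100.0 * counts / amounts.size

# Empirical probability (%) of exceeding 100 mm.
p_exceed_100 = 100.0 * np.mean(amounts > 100.0)
```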
Directory of Open Access Journals (Sweden)
Zhang Zhang
2009-06-01
Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
Godolphin, E. J.
1980-01-01
It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.
Scale dependence of the average potential around the maximum in Φ⁴ theories
International Nuclear Information System (INIS)
Tetradis, N.; Wetterich, C.
1992-04-01
The average potential describes the physics at a length scale k⁻¹ by averaging out the degrees of freedom with characteristic momenta larger than k. The dependence on k can be described by differential evolution equations. We solve these equations for the nonconvex part of the potential around the origin in φ⁴ theories, in the phase with spontaneous symmetry breaking. The average potential is real and approaches the convex effective potential in the limit k → 0. Our calculation is relevant for processes for which the shape of the potential at a given scale is important, such as tunneling phenomena or inflation. (orig.)
Wang, H; Tang, Y; Zhang, Y; Xu, K; Zhao, J B
2018-05-10
Objective: To investigate the relationship between the maximum blood pressure fluctuation within 24 hours after admission and the prognosis at discharge. Methods: The patients with ischemic stroke admitted to the Department of Neurology of the First Affiliated Hospital of Harbin Medical University within 24 hours after onset were consecutively selected from April 2016 to March 2017. The patients were grouped according to the diagnostic criteria of hypertension. Ambulatory blood pressure of the patients within 24 hours after admission was measured with bedside monitors and baseline data were collected. The patients were scored by NIHSS at discharge. The relationships between the maximum values of systolic blood pressure (SBP) or diastolic blood pressure (DBP) and the prognosis at discharge were analyzed. Results: A total of 521 patients with acute ischemic stroke were enrolled. They were divided into a normal blood pressure group (82 cases) and a hypertension group (439 cases). In the normal blood pressure group, the maximum values of SBP and DBP were both normally distributed (P>0.05). The maximum value of SBP fluctuation was set at 146.6 mmHg. After adjustment for potential confounders, the OR for poor prognosis at discharge in patients with SBP fluctuation ≥146.6 mmHg was 2.669 (95% CI: 0.594-11.992) compared with those with SBP fluctuation <146.6 mmHg. Conclusions: In acute ischemic stroke patients with normal blood pressure at admission, the maximum values of SBP and DBP within 24 hours after admission had no relationship with prognosis at discharge. In acute ischemic stroke patients with hypertension at admission, the maximum values of SBP and DBP within 24 hours after admission were associated with poor prognosis at discharge.
Langelotz, C; Koplin, G; Pascher, A; Lohmann, R; Köhler, A; Pratschke, J; Haase, O
2017-12-01
Background Between the conflicting requirements of clinic organisation, the European Working Time Directive, patient safety, an increasing lack of junior staff, and competitiveness, the development of ideal duty hour models is vital to ensure maximum quality of care within the legal requirements. To achieve this, it is useful to evaluate the actual effects of duty hour models on staff satisfaction. Materials and Methods After the traditional 24-hour duty shift was given up in a surgical maximum care centre in 2007, an 18-hour duty shift was implemented, followed by a 12-hour shift in 2008, to improve handovers and reduce loss of information. The effects on work organisation, quality of life and salary were analysed in an anonymous survey in 2008. The staff survey was repeated in 2014. Results With a response rate of 95% of questionnaires in 2008 and a 93% response rate in 2014, the 12-hour duty model received negative ratings due to its high duty frequency and subsequent social strain. The physical strain and chronic tiredness were also rated as most severe in the 12-hour rota. The 18-hour duty shift was the model of choice amongst staff. The 24-hour duty model was rated as the best compromise between the requirements of work organisation and staff satisfaction, and therefore the duty model was adapted accordingly in 2015. Conclusion The essential basis of a surgical department is a duty hour model suited to the requirements of work organisation, the Working Time Directive and the needs of the surgical staff. A 12-hour duty model can be ideal for work organisation, but only if it is augmented with an adequate number of staff members; otherwise the frequency of 12-hour shifts becomes too high, which is associated with strain on surgical staff and a perceived deterioration of quality of life. A staff survey should be performed on a regular basis to assess the actual effects of duty hour models and enable further optimisation. The much ...
Wage and Labor Standards Administration (DOL), Washington, DC.
This report describes the 1966 amendments to the Fair Labor Standards Act and summarizes the findings of three 1969 studies of the economic effects of these amendments. The studies found that economic growth continued through the third phase of the amendments, beginning February 1, 1969, despite increased wage and hours restrictions for recently…
Mena, Luis J; Felix, Vanessa G; Melgarejo, Jesus D; Maestre, Gladys E
2017-10-19
Although 24-hour blood pressure (BP) variability (BPV) is predictive of cardiovascular outcomes independent of absolute BP levels, it is not regularly assessed in clinical practice. One possible limitation to routine BPV assessment is the lack of standardized methods for accurately estimating 24-hour BPV. We conducted a systematic review to assess the predictive power of reported BPV indexes to address appropriate quantification of 24-hour BPV, including the average real variability (ARV) index. Studies chosen for review were those that presented data for 24-hour BPV in adults from meta-analysis, longitudinal or cross-sectional design, and examined BPV in terms of the following issues: (1) methods used to calculate and evaluate ARV; (2) assessment of 24-hour BPV determined using noninvasive ambulatory BP monitoring; (3) multivariate analysis adjusted for covariates, including some measure of BP; (4) association of 24-hour BPV with subclinical organ damage; and (5) the predictive value of 24-hour BPV on target organ damage and rate of cardiovascular events. Of the 19 assessed studies, 17 reported significant associations between high ARV and the presence and progression of subclinical organ damage, as well as the incidence of hard end points, such as cardiovascular events. In all these cases, ARV remained a significant independent predictor ( P <0.05) after adjustment for BP and other clinical factors. In addition, increased ARV in systolic BP was associated with risk of all cardiovascular events (hazard ratio, 1.18; 95% confidence interval, 1.09-1.27). Only 2 cross-sectional studies did not find that high ARV was a significant risk factor. Current evidence suggests that ARV index adds significant prognostic information to 24-hour ambulatory BP monitoring and is a useful approach for studying the clinical value of BPV. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
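The ARV index reviewed above has a simple closed form: the mean absolute difference between successive readings, ARV = (1/(N-1)) Σ|BP_{k+1} - BP_k|. A sketch with two made-up systolic profiles that span the same range but differ in reading-to-reading swings:

```python
import numpy as np

def average_real_variability(bp):
    """ARV: mean absolute difference between successive BP readings."""
    bp = np.asarray(bp, dtype=float)
    return float(np.mean(np.abs(np.diff(bp))))

smooth = [120, 122, 124, 126, 128, 130, 128, 126, 124, 122]
jagged = [120, 130, 120, 130, 120, 130, 120, 130, 120, 130]

print(average_real_variability(smooth))   # → 2.0
print(average_real_variability(jagged))   # → 10.0
```

Unlike the standard deviation, ARV is order-sensitive, which is why it captures reading-to-reading variability that summary dispersion measures miss.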
Malone, Stephen M.; McGue, Matt; Iacono, William G.
2010-01-01
Background: The maximum number of alcoholic drinks consumed in a single 24-hr period is an alcoholism-related phenotype with both face and empirical validity. It has been associated with severity of withdrawal symptoms and sensitivity to alcohol, genes implicated in alcohol metabolism, and amplitude of a measure of brain activity associated with…
Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon
2015-03-30
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo
2018-05-14
Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of this principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data into molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
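A minimal sketch of the static (equilibrium) case: a harmonic restraint acting on the replica average of a back-calculated observable rather than on each replica individually (assumed functional form; the force constant and observable values are arbitrary):

```python
import numpy as np

def replica_averaged_restraint(obs, target, k):
    """Harmonic restraint on the replica average; returns (energy, forces).

    obs: observable value back-calculated in each replica.
    The force on each replica is -dE/d(obs_i) = -k * (mean - target) / n.
    """
    obs = np.asarray(obs, dtype=float)
    diff = obs.mean() - target
    energy = 0.5 * k * diff ** 2
    forces = -k * diff / obs.size * np.ones_like(obs)
    return energy, forces

e, f = replica_averaged_restraint([1.0, 2.0, 3.0], target=2.5, k=10.0)
```

In the time-dependent (maximum caliber) setting described in the abstract, the target would follow the time-resolved data rather than a single equilibrium value.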
Energy Technology Data Exchange (ETDEWEB)
Shirai, Kiyonori [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Nishiyama, Kinji, E-mail: sirai-ki@mc.pref.osaka.jp [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Katsuda, Toshizo [Department of Radiology, National Cerebral and Cardiovascular Center, Osaka (Japan); Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan)
2014-01-01
Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging 1.7 mm to the superior side and 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error.
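For reference, MIP and AIP are just per-voxel maxima and means over the respiratory phases of the 4DCT. A toy sketch (random values standing in for CT numbers):

```python
import numpy as np

# Toy 4DCT: ten respiratory phases of a tiny 2-D slice (arbitrary values).
rng = np.random.default_rng(5)
phases = rng.normal(-700.0, 50.0, (10, 4, 4))

mip = phases.max(axis=0)    # maximum intensity projection across phases
aip = phases.mean(axis=0)   # average intensity projection across phases
```

Loosely, the MIP emphasises the extreme of the motion while the AIP weights all phases equally, which is consistent with the systematic inferior registration shift reported above for asymmetrical breathing.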
Directory of Open Access Journals (Sweden)
C. S. Malley
2018-03-01
Full Text Available Exposure to nitrogen dioxide (NO2) is associated with negative human health effects, both for short-term peak concentrations and from long-term exposure to a wider range of NO2 concentrations. For the latter, the European Union has established an air quality limit value of 40 µg m−3 as an annual average. However, factors such as proximity and strength of local emissions, atmospheric chemistry, and meteorological conditions mean that there is substantial variation in the hourly NO2 concentrations contributing to an annual average concentration. The aim of this analysis was to quantify the nature of this variation at thousands of monitoring sites across Europe through the calculation of a standard set of chemical climatology statistics. Specifically, at each monitoring site that satisfied data capture criteria for inclusion in this analysis, annual NO2 concentrations, as well as the percentage contribution from each month, hour of the day, and hourly NO2 concentrations divided into 5 µg m−3 bins were calculated. Across Europe, 2010–2014 average annual NO2 concentrations (NO2AA) exceeded the annual NO2 limit value at 8 % of > 2500 monitoring sites. The application of this chemical climatology approach showed that sites with distinct monthly, hour of day, and hourly NO2 concentration bin contributions to NO2AA were not grouped into specific regions of Europe; furthermore, within relatively small geographic regions there were sites with similar NO2AA, but with differences in these contributions. Specifically, at sites with highest NO2AA, there were generally similar contributions from across the year, but there were also differences in the contribution of peak vs. moderate hourly NO2 concentrations to NO2AA, and from different hours across the day. Trends between 2000 and 2014 for 259 sites indicate that, in general, the contribution to NO2AA from winter months has increased, as has the contribution from the rush-hour periods of
Malley, Christopher S.; von Schneidemesser, Erika; Moller, Sarah; Braban, Christine F.; Hicks, W. Kevin; Heal, Mathew R.
2018-03-01
Exposure to nitrogen dioxide (NO2) is associated with negative human health effects, both for short-term peak concentrations and from long-term exposure to a wider range of NO2 concentrations. For the latter, the European Union has established an air quality limit value of 40 µg m-3 as an annual average. However, factors such as proximity and strength of local emissions, atmospheric chemistry, and meteorological conditions mean that there is substantial variation in the hourly NO2 concentrations contributing to an annual average concentration. The aim of this analysis was to quantify the nature of this variation at thousands of monitoring sites across Europe through the calculation of a standard set of chemical climatology statistics. Specifically, at each monitoring site that satisfied data capture criteria for inclusion in this analysis, annual NO2 concentrations, as well as the percentage contribution from each month, hour of the day, and hourly NO2 concentrations divided into 5 µg m-3 bins were calculated. Across Europe, 2010-2014 average annual NO2 concentrations (NO2AA) exceeded the annual NO2 limit value at 8 % of > 2500 monitoring sites. The application of this chemical climatology approach showed that sites with distinct monthly, hour of day, and hourly NO2 concentration bin contributions to NO2AA were not grouped into specific regions of Europe; furthermore, within relatively small geographic regions there were sites with similar NO2AA, but with differences in these contributions. Specifically, at sites with highest NO2AA, there were generally similar contributions from across the year, but there were also differences in the contribution of peak vs. moderate hourly NO2 concentrations to NO2AA, and from different hours across the day. Trends between 2000 and 2014 for 259 sites indicate that, in general, the contribution to NO2AA from winter months has increased, as has the contribution from the rush-hour periods of the day, while the contribution from
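The per-month, per-hour, and concentration-bin contribution statistics described in the abstract can be sketched as follows (a minimal illustration on synthetic hourly data; `contribution_stats` is our name, not from the paper):

```python
from datetime import datetime, timedelta

def contribution_stats(times, no2):
    """Percentage contribution of each month, hour of day, and
    5 ug/m3 concentration bin to the annual total (and hence to the
    annual average) of an hourly NO2 series."""
    total = sum(no2)
    annual_avg = total / len(no2)
    by_month = {m: 0.0 for m in range(1, 13)}
    by_hour = {h: 0.0 for h in range(24)}
    by_bin = {}
    for t, c in zip(times, no2):
        by_month[t.month] += c
        by_hour[t.hour] += c
        b = int(c // 5) * 5          # 5 ug/m3 bins: [0,5), [5,10), ...
        by_bin[b] = by_bin.get(b, 0.0) + c

    def pct(d):
        return {k: 100.0 * v / total for k, v in d.items()}

    return annual_avg, pct(by_month), pct(by_hour), pct(by_bin)

# Synthetic non-leap year of hourly data with a morning/evening rush-hour peak.
t0 = datetime(2014, 1, 1)
times = [t0 + timedelta(hours=i) for i in range(8760)]
no2 = [30.0 + (15.0 if t.hour in (8, 18) else 0.0) for t in times]
avg, pct_month, pct_hour, pct_bin = contribution_stats(times, no2)
```

On this synthetic series the annual average is 31.25 µg m−3 and the rush hours contribute a visibly larger share than night-time hours, mirroring the kind of diagnostic the study computes per site.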
International Nuclear Information System (INIS)
Poveda, German; Mesa, Oscar; Toro, Vladimir; Agudelo, Paula; Alvarez, Juan F; Arias, Paola; Moreno, Hernan; Salazar, Luis; Vieira, Sara
2002-01-01
We study the distribution of maximum rainfall events during the annual cycle, for storms ranging from 1 to 24 hours in duration, using records from 51 rain gauges located in the Colombian Andes. The effects of both phases of ENSO (El Niño and La Niña) are also quantified. We found that maximum rainfall intensity events occur during the rainy periods of March–May and September–November. There is a strong similarity between the annual cycle of mean total rainfall and that of the maximum intensities of rainfall over the tropical Andes, and this result is quite consistent throughout the three ranges of the Colombian Andes. At interannual timescales, we found that both phases of ENSO are associated with disturbances of maximum rainfall events: during La Niña there are more intense precipitation events than during El Niño. Overall, for durations longer than 3 hours, rainfall intensity is reduced by one order of magnitude with respect to shorter durations (1-3 hours). The most extreme recorded rainfall events are apparently not associated with the annual and interannual large-scale forcing and appear to be randomly generated, pointing to the important role of land surface–atmosphere interactions in the genesis and dynamics of intense storms over central Colombia.
Directory of Open Access Journals (Sweden)
G. M. J. HASAN
2014-10-01
Full Text Available Climate, one of the major controlling factors for the well-being of the world's inhabitants, has been changing in accordance with natural forcing and man-made activities. Bangladesh, one of the most densely populated countries in the world, is under threat due to climate change caused by excessive use or abuse of ecology and natural resources. This study examines rainfall patterns and their associated changes in the north-eastern part of Bangladesh, mainly Sylhet city, through statistical analysis of daily rainfall data for the period 1957-2006. A good correlation was observed between the monthly mean and daily maximum rainfall. A linear regression analysis of the data is found to be significant for all months. Key statistical parameters, namely the mean values of the Coefficient of Variability (CV), Relative Variability (RV), and Percentage Inter-annual Variability (PIV), have been studied and found to be at variance. Monthly, yearly, and seasonal variations in rainy days were also analysed to check for any significant changes.
Middlemas, David A.; Manning, James M.; Gazzillo, Linda M.; Young, John
2001-06-01
OBJECTIVE: To determine whether grade point average, hours of clinical education, or both are significant predictors of performance on the National Athletic Trainers' Association Board of Certification examination and whether curriculum and internship candidates' scores on the certification examination can be differentially predicted. DESIGN AND SETTING: Data collection forms and consent forms were mailed to the subjects to collect data for predictor variables. Subject scores on the certification examination were obtained from Columbia Assessment Services. SUBJECTS: A total of 270 first-time candidates for the April and June 1998 certification examinations. MEASUREMENTS: Grade point average, number of clinical hours completed, sex, route to certification eligibility (curriculum or internship), scores on each section of the certification examination, and pass/fail criteria for each section. RESULTS: We found no significant difference between the scores of men and women on any section of the examination. Scores for curriculum and internship candidates differed significantly on the written and practical sections of the examination but not on the simulation section. Grade point average was a significant predictor of scores on each section of the examination and the examination as a whole. Clinical hours completed did not add a significant increment for any section but did add a significant increment for the examination overall. Although no significant difference was noted between curriculum and internship candidates in predicting scores on sections of the examination, a significant difference by route was found in predicting whether candidates would pass the examination as a whole (P =.047). Proportion of variance accounted for was less than R(2) = 0.0723 for any section of the examination and R(2) = 0.057 for the examination as a whole. CONCLUSIONS: Potential predictors of performance on the certification examination can be useful to athletic training educators in
Energy Technology Data Exchange (ETDEWEB)
Jurkovic, I [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States); Stathakis, S; Li, Y; Patel, A; Vincent, J; Papanikolaou, N; Mavroidis, P [Cancer Therapy and Research Center University of Texas Health Sciences Center at San Antonio, San Antonio, TX (United States)
2014-06-01
Purpose: To determine the difference in coverage between plans developed on average intensity projection and maximum intensity projection CT data sets for lung patients, and to establish correlations between the different factors influencing coverage. Methods: For six lung cancer patients, 10 phases of equal duration through the respiratory cycle and the maximum and average intensity projections (MIP and AIP) were obtained from their 4DCT datasets. On the MIP and AIP datasets three GTVs were delineated (GTVaip, delineated on the AIP; GTVmip, delineated on the MIP; and GTVfus, delineated on each of the 10 phases and summed). From each GTV, planning target volumes (PTVs) were then created by adding margins. For each of the PTVs an IMRT plan was developed on the AIP dataset. The plans were then copied to the MIP data set and recalculated. Results: The effective depths in the AIP cases were significantly smaller than in the MIP cases (p < 0.001). A Pearson correlation coefficient of r = 0.839 indicates a strong positive linear relationship between the average percentage difference in effective depths and the average PTV coverage on the MIP data set. The V20Gy of the involved lung depends on the PTV coverage. The relationship between the PTVaip mean CT number difference and the PTVaip coverage on the MIP data set gives r = 0.830; when the plans are produced on the MIP and copied to the AIP, r equals −0.756. Conclusion: The correlation between the AIP and MIP data sets indicates that the selection of the data set used for developing the treatment plan affects the final outcome (cases with a high average percentage difference in effective depths between AIP and MIP should be calculated on the AIP). The percentage of the lung volume receiving a higher dose depends on how well the PTV is covered, regardless of which data set the plan is developed on.
Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa
2017-04-01
International and national organizations have formulated guidelines establishing limits for occupational and residential exposure to high-frequency electromagnetic fields (EMFs). Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities must use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify critical technical aspects of these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistically significant values and the logistical management of control activities, since the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and to establish a starting point for defining operating procedures.
International Nuclear Information System (INIS)
Mossetti, Stefano; Bartolo, Daniela de; Nava, Elisa; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina
2017-01-01
International and national organizations have formulated guidelines establishing limits for occupational and residential exposure to high-frequency electromagnetic fields (EMFs). Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities must use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify critical technical aspects of these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistically significant values and the logistical management of control activities, since the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and to establish a starting point for defining operating procedures. (authors)
Directory of Open Access Journals (Sweden)
M.A. Zemlyanova
2015-03-01
Full Text Available We present materials on the verification of the average daily maximum permissible concentration of styrene in the atmospheric air of settlements, based on the results of our own in-depth epidemiological studies of the children's population conducted according to the principles of international risk-assessment practice. It was established that children aged 4-7 years exposed to styrene at levels above 1.2 times the threshold level value for continuous exposure develop negative effects in the form of disorders of hormonal regulation, pigmentary exchange, antioxidative activity, cytolysis, immune reactivity, and cytogenetic disbalance, which contribute to an increased incidence of diseases of the central nervous system, endocrine system, respiratory organs, digestive system, and skin. Based on the proven cause-and-effect relationships between the biomarkers of negative effects and the styrene concentration in blood, it was demonstrated that the benchmark styrene concentration in blood is 0.002 mg/dm3. The justified value complies with and confirms the average daily styrene concentration in the air of settlements at the level of 0.002 mg/m3 accepted in Russia, which provides safety for the health of the population (1 threshold level value for continuous exposure).
Nduwayezu, Emmanuel; Kanevski, Mikhail; Jaboyedoff, Michel
2013-04-01
Climate plays a vital role in a wide range of socio-economic activities of most nations, particularly developing countries. Climate (rainfall) plays a central role in agriculture, which is the mainstay of the Rwandan economy and of community livelihoods and activities. The majority of the Rwandan population (81.1% in 2010) relies on rain-fed agriculture for their livelihoods, and the impacts of variability in climate patterns are already being felt. Climate-related events like heavy rainfall or too little rainfall are becoming more frequent and are impacting human well-being. The torrential rainfall that occurs every year in Rwanda can disrupt circulation for many days, damage houses and infrastructure, and cause heavy economic losses and deaths. Four rainfall seasons have been identified, corresponding to the four thermal seasons of the Southern Hemisphere: the normal season (summer), the rainy season (autumn), the dry season (winter), and the normo-rainy season (spring). Overall, the spatial decrease in rainfall from west to east, especially in October (spring) and February (summer), suggests an «Atlantic monsoon influence», while the homogeneous spatial rainfall distribution suggests an «Inter-tropical front» mechanism. What is the hourly variability in this mountainous area? Is there any correlation with the zones identified from the monthly average series (from 1965 to 1990, established by the Rwandan meteorological services)? Where could hazards arise from several consecutive rainy days (using forecast data from the Norwegian Meteorological Institute)? Spatio-temporal analysis allows for identifying and explaining large-scale anomalies, which are useful for understanding hydrological characteristics and subsequently predicting hydrological events. The objective of our current research (rainfall variability) is to evaluate the potential rainfall risk by applying advanced geospatial modelling tools in Rwanda: geostatistical
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L
2012-09-01
Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.
2013-01-01
Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679
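The moving-window idea above — fit the estimator locally, using only stations inside a window around the target point, so spatial non-stationarity is accommodated — can be sketched minimally. Inverse-distance weighting stands in here for the full BME machinery; the function name and weighting scheme are ours, not the authors':

```python
import math

def moving_window_estimate(x0, y0, stations, radius):
    """Local estimate at (x0, y0) using only stations within `radius`.
    stations: iterable of (x, y, value) tuples. Restricting to a window
    means the spatial statistics are effectively re-fitted around each
    estimation point, which is how a moving window handles
    non-stationarity."""
    num = den = 0.0
    for x, y, z in stations:
        d = math.hypot(x - x0, y - y0)
        if d <= radius:
            w = 1.0 / (d + 1e-9)  # inverse-distance weight (toy choice)
            num += w * z
            den += w
    if den == 0.0:
        raise ValueError("no stations inside window")
    return num / den

# A distant station (d = 100) falls outside the window and is ignored.
est = moving_window_estimate(0.5, 0.0,
                             [(0, 0, 10.0), (1, 0, 20.0), (100, 0, 999.0)],
                             radius=5.0)
```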
MERRA 3D IAU Tendency, Wind Components, Time average 3-hourly (1.25x1.25L42) V5.2.0
National Aeronautics and Space Administration — The MAT3CPUDT or tavg3_3d_udt_Cp data product is the MERRA Data Assimilation System 3-Dimensional eastward wind tendencies that is time averaged on pressure levels...
MERRA 3D IAU Diagnostic, Turbulence, Time average 3-hourly (1.25x1.25L42) V5.2.0
National Aeronautics and Space Administration — The MAT3CPTRB or tavg3_3d_trb_Cp data product is the MERRA Data Assimilation System 3-Dimensional turbulence diagnostic that is time averaged on pressure levels at a...
MERRA 3D IAU Diagnostic, Cloud Properties, Time average 3-hourly (1.25x1.25L42) V5.2.0
National Aeronautics and Space Administration — The MAT3CPCLD or tavg3_3d_cld_Cp data product is the MERRA Data Assimilation System 3-Dimensional cloud diagnostic that is time averaged on pressure levels at a...
MERRA 3D IAU Diagnostic, Moist Physics, Time average 3-hourly (1.25x1.25L42) V5.2.0
National Aeronautics and Space Administration — The MAT3CPMST or tavg3_3d_mst_Cp data product is the MERRA Data Assimilation System 3-Dimensional moist process diagnostic that is time averaged on pressure levels...
MERRA 3D IAU Diagnostic, Radiation, Time average 3-hourly (1.25x1.25L42) V5.2.0
National Aeronautics and Space Administration — The MAT3CPRAD or tavg3_3d_rad_Cp data product is the MERRA Data Assimilation System 3-Dimensional radiation diagnostic that is time averaged on pressure levels at a...
National Aeronautics and Space Administration — The MAT3FXCHM or tavg3_3d_chm_Fx data product is the MERRA Data Assimilation System Chemistry 2-Dimensional chemistry that is time averaged, single-level, at reduced...
MERRA 3D IAU Tendency, Specific Humidity, Time average 3-hourly (1.25x1.25L42) V5.2.0
National Aeronautics and Space Administration — The MAT3CPQDT or tavg3_3d_qdt_Cp data product is the MERRA Data Assimilation System 3-Dimensional moisture tendencies that is time averaged on pressure levels at a...
MERRA 3D IAU Tendency, Ozone, Time average 3-hourly (1.25x1.25L42) V5.2.0
National Aeronautics and Space Administration — The MAT3CPODT or tavg3_3d_odt_Cp data product is the MERRA Data Assimilation System 3-Dimensional ozone tendencies that is time averaged on pressure levels at a...
MERRA 2D IAU Diagnostic, Surface Fluxes, Time Average 1-hourly (2/3x1/2L1) V5.2.0
National Aeronautics and Space Administration — The MAT1NXFLX or tavg1_2d_flx_Nx data product is the MERRA Data Assimilation System 2-Dimensional surface turbulence flux diagnostic that is time averaged...
MERRA 2D IAU Diagnostic, Single Level Meteorology, Time Average 1-hourly (2/3x1/2L1) V5.2.0
National Aeronautics and Space Administration — The MAT1NXSLV or tavg1_2d_slv_Nx data product is the MERRA Data Assimilation System 2-Dimensional atmospheric single-level diagnostics that is time averaged...
MERRA 2D IAU Diagnostic, Land Only States and Diagnostics, Time Average 1-hourly (2/3x1/2L1) V5.2.0
National Aeronautics and Space Administration — The MAT1NXLND or tavg1_2d_lnd_Nx data product is the MERRA Data Assimilation System 2-Dimensional land surface diagnostic that is time averaged single-level at the...
MERRA Chem 3D IAU, Precip Mass Flux, Time average 3-hourly (eta coord edges, 1.25X1L73) V5.2.0
National Aeronautics and Space Administration — The MAT3FECHM or tavg3_3d_chm_Fe data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layers edges that is time averaged, 3D model...
MERRA 2D IAU Diagnostic, Radiation Surface and TOA, Time Average 1-hourly (2/3x1/2L1) V5.2.0
National Aeronautics and Space Administration — The MAT1NXRAD or tavg1_2d_rad_Nx data product is the MERRA Data Assimilation System 2-Dimensional surface and TOA radiation flux that is time averaged single-level...
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
Working Hours and Productivity
Collewet, Marion; Sauermann, Jan
2017-01-01
This paper studies the link between working hours and productivity using daily information on working hours and performance of a sample of call centre agents. We exploit variation in the number of hours worked by the same employee across days and weeks due to central scheduling, enabling us to estimate the effect of working hours on productivity. We find that as the number of hours worked increases, the average handling time for a call increases, meaning that agents become less productive. Th...
Directory of Open Access Journals (Sweden)
Mary Hokazono
Full Text Available CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sß0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sß+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets, and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. Time-averaged maximum mean velocity was significantly different between the genotypes and correlated with hematological characteristics.
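Pearson's coefficient, used above to relate TAMM velocities to hematological parameters such as hematocrit, is just the normalized covariance; a minimal helper (our code, not the study's):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    covariance divided by the product of the standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A negative `r`, as reported for TAMM velocity vs. hematocrit in group I, means higher hematocrit values tend to accompany lower velocities.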
International Nuclear Information System (INIS)
Culkowski, W.M.
1976-01-01
The standard deviation of horizontal wind direction, σθ, increases with averaging time up to a maximum value of 104°. The average standard deviation of horizontal wind direction over averaging periods of 3, 5, 10, 16, 24, 36, 48, 72, 144, 288, and 576 hours was calculated from wind data obtained from a 100-meter tower in the Oak Ridge area. For periods up to 100 hours, σθ varies as t^0.28; after 100 hours, σθ varies as 6.5 ln t.
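The two empirical regimes can be combined into one helper. The 6.5 ln t branch and the 104° ceiling are quoted from the abstract; the prefactor 8.24 on the power-law branch is our own inference from requiring continuity at t = 100 h (the abstract gives only the proportionality):

```python
import math

def sigma_theta(t_hours):
    """Standard deviation of horizontal wind direction (degrees) as a
    function of averaging time t (hours): ~ t**0.28 up to 100 h,
    6.5*ln(t) beyond, capped at the reported maximum of 104 degrees.
    The 8.24 prefactor is inferred for continuity, not quoted."""
    if t_hours <= 0:
        raise ValueError("averaging time must be positive")
    if t_hours <= 100.0:
        s = 8.24 * t_hours ** 0.28
    else:
        s = 6.5 * math.log(t_hours)
    return min(s, 104.0)
```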
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Unsupervised/supervised learning concept for 24-hour load forecasting
Energy Technology Data Exchange (ETDEWEB)
Djukanovic, M [Electrical Engineering Inst. ' Nikola Tesla' , Belgrade (Yugoslavia); Babic, B [Electrical Power Industry of Serbia, Belgrade (Yugoslavia); Sobajic, D J; Pao, Y -H [Case Western Reserve Univ., Cleveland, OH (United States). Dept. of Electrical Engineering and Computer Science
1993-07-01
An application of artificial neural networks to short-term load forecasting is described. An algorithm using an unsupervised/supervised learning concept and the historical relationship between load and temperature for a given season, day type, and hour of the day to forecast hourly electric load with a lead time of 24 hours is proposed. An additional approach using a functional link net, temperature variables, average load, and the last one-hour load of the previous day is introduced and compared with the load forecast of the ANN model with one hidden layer. In spite of the limited available weather variables (maximum, minimum, and average temperature for the day), quite acceptable results have been achieved. The 24-hour-ahead forecast errors (absolute average) ranged from 2.78% for Saturdays and 3.12% for working days to 3.54% for Sundays. (Author)
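The "absolute average" percentage errors quoted above (e.g. 2.78% for Saturdays) correspond to the standard mean absolute percentage error; a small helper (our naming, not the authors' code):

```python
def mape(actual, forecast):
    """Mean absolute percentage error (%): the average of |error|/actual
    over all forecast hours. Requires nonzero actual loads."""
    assert len(actual) == len(forecast) and actual
    return 100.0 * sum(abs(a - f) / a
                       for a, f in zip(actual, forecast)) / len(actual)
```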
A simple method to downscale daily wind statistics to hourly wind data
Guo, Zhongling
2013-01-01
Wind is the principal driver in wind erosion models, and hourly wind speed data are generally required for precise wind erosion modeling. In this study, a simple method to generate hourly wind speed data from daily wind statistics (daily average and maximum wind speeds together, or daily average wind speed only) was established. Hourly wind speed data measured over 3285 days (9 years) at a typical windy location were used to validate the downscaling method. The results showed that the over...
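The abstract does not reproduce the downscaling scheme itself. As a purely hypothetical illustration of the idea — turning a daily average and maximum into 24 hourly values — a cosine diurnal cycle can be scaled so the hourly series preserves the daily average and reaches the daily maximum:

```python
import math

def hourly_from_daily(daily_avg, daily_max):
    """Hypothetical downscaling sketch (not the paper's method):
    impose a cosine diurnal cycle peaking at 15:00, with amplitude
    daily_max - daily_avg, clipped at zero. With no clipping, the mean
    of the 24 values equals daily_avg and the peak equals daily_max."""
    amp = daily_max - daily_avg
    return [max(daily_avg + amp * math.cos(2 * math.pi * (h - 15) / 24), 0.0)
            for h in range(24)]

hours = hourly_from_daily(5.0, 9.0)  # m/s, illustrative values
```

Because the 24 cosine samples cover exactly one period, their sum is zero and the daily average survives the transformation exactly (as long as no hour is clipped at zero).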
Fialová, Lenka
2012-01-01
Working hours. The aim of this thesis was a comprehensive analysis of the issue of working hours. The main purpose was to summarize this area of labour law while taking into account the Labour Code amendment which came into force on 1 January 2012; because of those changes, the related legal terms were also covered. The thesis is composed of three chapters. Chapter One deals briefly with the history of labour law and the development of its regulation. Author`s...
National Aeronautics and Space Administration — The MAT1NXOCN or tavg1_2d_ocn_Nx data product is the MERRA Data Assimilation System 2-Dimensional ocean surface single-level diagnostics that is time averaged...
MERRA Chem 3D IAU C-Grid Wind and Mass Flux, Time Average 3-Hourly (eta coord, 2/3x1/2L72) V5.2.0
National Aeronautics and Space Administration — The MAT3NVCHM or tavg3_3d_chm_Nv data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layers that is time averaged, 3D model...
MERRA Chem 3D IAU C-Grid Edge Mass Flux, Time Average 3-Hourly (eta coord, 2/3x1/2L73) V5.2.0
National Aeronautics and Space Administration — The MAT3NECHM or tavg3_3d_chm_Ne data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layer Edges that is time averaged, 3D model...
National Aeronautics and Space Administration — The MAT3FVCHM or tavg3_3d_chm_Fv data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layers file that is time averaged, 3D model...
National Aeronautics and Space Administration — The MAT1NXRAD or tavg1_2d_rad_Nx data product is the MERRA Data Assimilation System 2-Dimensional surface and TOA radiation flux that is time averaged single-level...
8 Hour Ozone Design Value for 1998-2000
U.S. Environmental Protection Agency — The ozone design value is based on the average of the annual 4th-highest daily 8-hour maximum over a 3-year period (1998-2000 in this case). This is a human health...
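The design-value rule above (average of each year's 4th-highest daily 8-hour maximum over three years) can be sketched with hypothetical data; the concentrations below are random values invented for illustration, not EPA measurements:

```python
import random

random.seed(1)

def annual_4th_highest(daily_max_8h):
    # 4th-highest daily 8-hour-maximum concentration for one year
    return sorted(daily_max_8h, reverse=True)[3]

# Hypothetical daily 8-hour-max ozone series (ppm) for each year, 1998-2000.
years = {y: [random.uniform(0.02, 0.09) for _ in range(365)] for y in (1998, 1999, 2000)}

# Design value: average of the three annual 4th-highest values.
design_value = sum(annual_4th_highest(v) for v in years.values()) / len(years)
print(round(design_value, 4))
```

Using the 4th-highest value rather than the maximum makes the statistic robust to a few extreme sampling days.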
Watson, Jane; Chick, Helen
2012-01-01
This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…
2010-01-01
... an 8-hour break between successive work periods when a break of less than 10 hours is necessary to... (i) Individuals who are working 8-hour shift schedules shall have at least 1 day off per week, averaged over the shift cycle; (ii) Individuals who are working 10-hour shift schedules shall have at least 2 days off per...
How to average logarithmic retrievals?
Directory of Open Access Journals (Sweden)
B. Funke
2012-04-01
Full Text Available Calculations of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than of the abundance itself, are prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals, and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
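The linear-versus-logarithmic averaging bias discussed above can be illustrated with a toy calculation: for any non-constant positive sample, the log-space mean transformed back (the geometric mean) is smaller than the linear mean, so the two conventions diverge as variability grows. A minimal sketch with synthetic lognormal "abundances" (not the paper's retrieval simulator):

```python
import math
import random

random.seed(0)

def mean_linear(xs):
    # arithmetic mean of the abundances themselves
    return sum(xs) / len(xs)

def mean_log(xs):
    # mean taken in log space, transformed back (the geometric mean)
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical lognormal "retrieved abundances"; sigma mimics natural variability.
for sigma in (0.1, 0.5, 1.0):
    xs = [math.exp(random.gauss(0.0, sigma)) for _ in range(100000)]
    print(f"sigma={sigma}: linear={mean_linear(xs):.3f}  logarithmic={mean_log(xs):.3f}")
```

For a lognormal sample the gap between the two means grows roughly like exp(sigma^2/2), which is why the abstract reports biases of ten percent or more at large natural variability.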
[Travel time and distances to Norwegian out-of-hours casualty clinics].
Raknes, Guttorm; Morken, Tone; Hunskår, Steinar
2014-11-01
Geographical factors have an impact on the utilisation of out-of-hours services. In this study we investigated the travel distance to out-of-hours casualty clinics in Norwegian municipalities in 2011 and the number of municipalities covered by the proposed recommendations for secondary on-call arrangements due to long distances. We estimated the average maximum travel times and distances in Norwegian municipalities using a postcode-based method. Separate analyses were performed for municipalities with a single, permanently located casualty clinic. Altogether 417 of 430 municipalities were included. We present the median of the maximum travel times and distances for the included municipalities. The median maximum average travel distance for the municipalities was 19 km, and the median maximum average travel time was 22 minutes. In 40 of the municipalities (10%) the maximum average travel time exceeded 60 minutes, and in 97 municipalities (23%) it exceeded 40 minutes. These groups comprised 2% and 5% of the country's total population, respectively. For municipalities with permanent emergency facilities (N = 316), the median average travel time was 16 minutes and the median average distance 13 km. In many municipalities the inhabitants have a long average journey to out-of-hours emergency health services, but seen as a whole, the inhabitants of these municipalities account for a very small proportion of the Norwegian population. The results indicate that the proposed recommendations for secondary on-call duty based on long distances apply to only a small number of inhabitants. The recommendations should therefore be adjusted and reformulated to become more relevant.
MODELS OF HOURLY DRY BULB TEMPERATURE AND ...
African Journals Online (AJOL)
Hourly meteorological data of both dry bulb temperature and relative humidity for 18 locations in Nigeria for the period 1995 to 2009 were analysed to obtain the mean monthly average and monthly hourly average of each of the two meteorological variables for each month for each location. The difference between the ...
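The monthly-hourly averaging described above (one mean per month-hour pair) is a simple grouped average; a minimal sketch with made-up temperature records, not the Nigerian station data:

```python
from collections import defaultdict

def monthly_hourly_average(records):
    """records: iterable of (month, hour, value); returns {(month, hour): mean}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for month, hour, value in records:
        sums[(month, hour)] += value
        counts[(month, hour)] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Hypothetical dry-bulb temperatures (deg C): two January days, hours 0 and 1.
records = [(1, 0, 20.0), (1, 0, 22.0), (1, 1, 21.0), (1, 1, 23.0)]
print(monthly_hourly_average(records))  # {(1, 0): 21.0, (1, 1): 22.0}
```

The same grouping applied over 15 years of data yields the mean monthly-hourly profiles the abstract refers to.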
Surface Weather Observations Hourly
National Oceanic and Atmospheric Administration, Department of Commerce — Standard hourly observations taken at Weather Bureau/National Weather Service offices and airports throughout the United States. Hourly observations began during the...
Working hours and productivity
Collewet, Marion; Sauermann, Jan
2017-01-01
This paper studies the link between working hours and productivity using daily information on working hours and performance of a sample of call centre agents. We exploit variation in the number of hours worked by the same employee across days and weeks due to central scheduling, enabling us to
Directory of Open Access Journals (Sweden)
Savita Rani Singhal
2014-09-01
Full Text Available To find the shortest reliable time period of urine collection for the determination of proteinuria. This prospective study was carried out on 125 pregnant women with preeclampsia after 20 weeks of gestation, with urine albumin >1 using the dipstick test. Urine was collected over five different time intervals in colour-labelled containers with the assistance of nursing staff; the total collection time was 24 hours. Total urine protein of the two-hour, four-hour, eight-hour, 12-hour and 24-hour collections was measured and compared with the 24-hour collection. Data were analyzed using the Pearson correlation coefficient. There was significant correlation (p < 0.01) of the two-, four-, eight- and 12-hour urine protein with the 24-hour urine protein, with correlation coefficients of 0.97, 0.97, 0.96 and 0.97, respectively. When cut-off values of 25 mg, 50 mg, 100 mg and 150 mg for urine protein were used for the 2-hour, 4-hour, 8-hour and 12-hour collections, sensitivities of 92.45%, 95.28%, 91.51% and 96.23% and specificities of 68.42%, 94.74%, 84.21% and 84.21% were obtained, respectively. Two-hour urine protein can be used for assessment of proteinuria in preeclampsia instead of the gold-standard 24-hour urine collection, allowing earlier diagnosis and better patient compliance.
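The sensitivity and specificity figures above come from comparing a short-collection cut-off against the 24-hour reference; a sketch of that computation with hypothetical values (thresholds and data invented for illustration, not the study's):

```python
def sensitivity_specificity(pred, truth):
    """pred, truth: lists of booleans (positive test / true condition)."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and (not t) for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: 2-hour protein >= 25 mg as the test,
# 24-hour protein >= 300 mg as the reference standard.
two_hour = [30, 20, 40, 10, 26]
day = [350, 250, 400, 100, 290]
pred = [x >= 25 for x in two_hour]
truth = [x >= 300 for x in day]
sens, spec = sensitivity_specificity(pred, truth)
print(sens, spec)
```

Sensitivity counts how many true positives the short collection catches; specificity counts how many true negatives it correctly rules out.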
New Approach To Hour-By-Hour Weather Forecast
Liao, Q. Q.; Wang, B.
2017-12-01
Fine hourly forecasts for single-station weather prediction are required in many applications of human production and daily life. Most previous MOS (Model Output Statistics) approaches used a linear regression model, which struggles with the nonlinear nature of weather prediction, and forecast accuracy has not been sufficient at high temporal resolution. This study predicts future meteorological elements, including temperature, precipitation, relative humidity and wind speed, in a local region over a relatively short period of time at the hourly level. Using hour-by-hour NWP (Numerical Weather Prediction) meteorological fields from Forcastio (https://darksky.net/dev/docs/forecast) and real-time instrumental observations from 29 stations in Yunnan and 3 stations in Tianjin, China, from June to October 2016, hour-by-hour predictions are made 24 hours ahead. This study presents an ensemble approach that combines information from the instrumental observations themselves and from NWP. An autoregressive-moving-average (ARMA) model is used to predict future values of the observation time series. The newest NWP products are put into equations derived from the multiple-linear-regression MOS technique. The residual series of the MOS outputs are handled with an autoregressive (AR) model for the linear properties present in the time series. Because of the complex nonlinear properties of atmospheric flow, a support vector machine (SVM) is also introduced. Basic data quality control and cross-validation make it possible to optimize the model parameters and to perform 24-hour-ahead residual reduction with the AR/SVM model. Results show that the AR model technique is better than the corresponding multi-variate MOS regression method, especially in the first 4 hours when the predictor is temperature. The combined MOS-AR model, which is comparable to the MOS-SVM model, outperforms MOS alone. The root mean square error for 2 m temperature is reduced to 1.6 degrees Celsius and the correlation coefficient reaches 0.91. The
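The AR correction of MOS residuals described above can be sketched in miniature: fit an AR(1) coefficient to recent forecast residuals and add the predicted residual back to the next raw forecast. The numbers below are invented for illustration, not the study's data:

```python
def ar1_coefficient(series):
    """Least-squares AR(1) coefficient phi for series x: x[t] ~ phi * x[t-1]."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

# Hypothetical recent MOS residuals (observed minus MOS forecast), deg C.
residuals = [1.0, 0.8, 0.7, 0.5, 0.45, 0.35, 0.3]
phi = ar1_coefficient(residuals)

# One-step-ahead residual forecast, added back to the next raw MOS temperature.
mos_next = 21.4
corrected = mos_next + phi * residuals[-1]
print(round(phi, 3), round(corrected, 3))
```

Because residual errors are serially correlated in the first few hours, this correction helps most early in the forecast window, consistent with the abstract's finding for the first 4 hours.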
International Nuclear Information System (INIS)
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs
Working hours: Past, present, and future
Dolton, Peter
2017-01-01
Working hours across the world are falling, but considerable variation remains. In some countries people work 70% more hours per year, on average, than in other countries. Much of this variation is due to differences in the prevalence of part-time work and patterns of female labor market participation. Looking ahead, the question of how reducing working hours will affect productivity is significant. In addition, how individuals divide up their leisure and work time and what the appropriate wo...
Long working hours and cancer risk
DEFF Research Database (Denmark)
Heikkila, Katriina; Nyberg, Solja T.; Madsen, Ida E. H.
2016-01-01
Background: Working longer than the maximum recommended hours is associated with an increased risk of cardiovascular disease, but the relationship of excess working hours with incident cancer is unclear. Methods: This multi-cohort study examined the association between working hours and cancer risk in 116 462 men and women who were free of cancer at baseline. Incident cancers were ascertained from national cancer, hospitalisation and death registers; weekly working hours were self-reported. Results: During a median follow-up of 10.8 years, 4371 participants developed cancer (colorectal cancer: n = 393; lung cancer: n = 247; breast cancer: n = 833; prostate cancer: n = 534). We found no clear evidence for an association between working hours and overall cancer risk. Working hours were also unrelated to the risk of incident colorectal, lung or prostate cancer. Working greater than or equal to 55 h...
A Century of Human Capital and Hours
Diego Restuccia; Guillaume Vandenbroucke
2012-01-01
An average person born in the United States in the second half of the nineteenth century completed 7 years of schooling and spent 58 hours a week working in the market. By contrast, an average person born at the end of the twentieth century completed 14 years of schooling and spent 40 hours a week working. In the span of 100 years, completed years of schooling doubled and working hours decreased by 30 percent. What explains these trends? We consider a model of human capital and labor supply t...
Energy Technology Data Exchange (ETDEWEB)
Mora, Ll
1989-11-01
The aim of this work is the generation of sequences of hourly global radiation with statistical characteristics similar to those of real sequences for the city of Madrid (Spain). For this generation, a first-order Markov model is proposed. The input parameters of the simulation method are the maximum value of hourly radiation and the average monthly value of the normalized transparency index. The maximum value of hourly radiation is calculated as a function of solar height by an empirical expression. The normalized transparency index is defined as the ratio of the measured hourly global radiation to the maximum value for the corresponding solar height. The method is based on the following observations: the normalized transparency index shows a significant correlation only between two consecutive hours, and months with the same average normalized transparency index have similar probability density functions. Global solar radiation, time series, simulation, Markov transition matrix, solar energy.
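A first-order Markov simulation of the kind described above draws each hour's state from a transition matrix conditioned only on the previous hour. A minimal sketch with a hypothetical three-state transparency classification (the paper's actual states and probabilities are not reproduced here):

```python
import random

random.seed(42)

def simulate(transition, states, start, n):
    """First-order Markov chain: transition[i][j] = P(states[j] | states[i])."""
    idx = states.index(start)
    out = [start]
    for _ in range(n - 1):
        r, acc = random.random(), 0.0
        for j, p in enumerate(transition[idx]):
            acc += p
            if r < acc:
                idx = j
                break
        out.append(states[idx])
    return out

# Hypothetical transparency-index classes and transition probabilities.
states = ["clear", "partly", "overcast"]
transition = [
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.1, 0.3, 0.6],
]
hours = simulate(transition, states, "clear", 24)
print(hours)
```

In the paper's setting each month would get its own transition matrix, estimated from the observed hour-to-hour correlation of the normalized transparency index.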
Hourly wind speed analysis in Sicily
Energy Technology Data Exchange (ETDEWEB)
Bivona, S.; Leone, C. [Palermo Univ., Dip di Fisica e Technologie Relative, Palermo (Italy); Burlon, R. [Palermo Univ., Dip. di Ingegnaria Nucleare, Palermo (Italy)
2003-07-01
The hourly average wind speed data recorded by CNMCA (Centro Nazionale di Meteorologia e Climatologia Aeronautica) have been used to study the statistical properties of the wind speed at nine locations on Sicily. By grouping the observations month by month, we show that the hourly average wind speed, with calms omitted, is represented by a Weibull function. The suitability of the distribution is judged by the discrepancies between the observed and calculated values of the monthly average wind speed and of the standard deviation. (Author)
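Fitting a Weibull distribution to monthly wind-speed statistics, as above, can be sketched with a method-of-moments estimate: the coefficient of variation fixes the shape k through the identity CV^2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)^2 - 1, and the mean then gives the scale c. The wind statistics below are hypothetical, not the CNMCA data:

```python
import math

def weibull_fit_moments(mean, std):
    """Estimate Weibull shape k and scale c from the sample mean and std."""
    cv2 = (std / mean) ** 2

    def f(k):
        g1 = math.gamma(1 + 1 / k)
        return math.gamma(1 + 2 / k) / g1 ** 2 - 1 - cv2

    lo, hi = 0.1, 20.0
    for _ in range(100):          # bisection on the shape parameter
        mid = (lo + hi) / 2
        if f(mid) > 0:            # CV at mid still too large -> need larger k
            lo = mid
        else:
            hi = mid
    k = (lo + hi) / 2
    c = mean / math.gamma(1 + 1 / k)
    return k, c

# Hypothetical monthly wind statistics (m/s), calms omitted.
k, c = weibull_fit_moments(6.0, 3.0)
print(round(k, 2), round(c, 2))
```

Judging the fit by the discrepancy between observed and reproduced mean and standard deviation, as the paper does, is exactly what this moment matching enforces by construction.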
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
DEFF Research Database (Denmark)
Ilsøe, Anna; Larsen, Trine Pernille; Felbo-Kolding, Jonas
2017-01-01
Purpose: The purpose of this paper is to investigate the effect of part-time work on absolute wages. The empirical focus is wages and working hours in three selected sectors within private services in the Danish labour market – industrial cleaning, retail, hotels and restaurants – and their agreements. ... The agreement in industrial cleaning includes a minimum floor of 15 weekly working hours; this is not the case in retail, hotels and restaurants. This creates a loophole in the latter two sectors that can be exploited by employers to gain wage flexibility through part-time work. Originality/value: The living wage literature...
Forecasting Day-Ahead Electricity Prices: Utilizing Hourly Prices
Raviv, Eran; Bouwman, Kees E.; van Dijk, Dick
2013-01-01
This discussion paper led to a publication in 'Energy Economics' , 2015, 50, 227-239. The daily average price of electricity represents the price of electricity to be delivered over the full next day and serves as a key reference price in the electricity market. It is an aggregate that equals the average of hourly prices for delivery during each of the 24 individual hours. This paper demonstrates that the disaggregated hourly prices contain useful predictive information for the daily average ...
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
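The barycentric quaternion average that the article critiques can be sketched directly: sign-align the quaternions (q and -q encode the same rotation), take the component-wise mean, and renormalize. This only approximates the Riemannian mean for nearby rotations, which is the article's point:

```python
import math

def normalize(q):
    n = math.sqrt(sum(x * x for x in q))
    return tuple(x / n for x in q)

def quaternion_barycenter(quats):
    """Renormalized component-wise mean of unit quaternions (w, x, y, z)."""
    ref = quats[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        # q and -q represent the same rotation; align signs before summing
        if sum(a * b for a, b in zip(q, ref)) < 0:
            q = tuple(-x for x in q)
        for i in range(4):
            acc[i] += q[i]
    return normalize(acc)

def rot_z(deg):
    # unit quaternion for a rotation of `deg` degrees about the z-axis
    h = math.radians(deg) / 2
    return (math.cos(h), 0.0, 0.0, math.sin(h))

# Average of 10-degree and 30-degree rotations about z.
mean_q = quaternion_barycenter([rot_z(10), rot_z(30)])
angle = 2 * math.degrees(math.atan2(mean_q[3], mean_q[0]))
print(round(angle, 4))  # 20.0
```

For two coaxial rotations the barycenter recovers the geodesic midpoint exactly; for widely spread rotations it drifts off the manifold before renormalization, which is where the Riemannian corrections matter.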
International Nuclear Information System (INIS)
Ichiguchi, Katsuji
1998-01-01
A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Averaging operations on matrices
Indian Academy of Sciences (India)
2014-07-03
Jul 3, 2014 ... Role of positive definite matrices: in diffusion tensor imaging, 3 × 3 positive definite matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 positive definite matrices model stress tensors; in machine learning, n × n positive definite matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...
Directory of Open Access Journals (Sweden)
Patricia Bouyer
2015-09-01
Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
National Aeronautics and Space Administration — The MAT1NXINT or tavg1_2d_int_Nx data product is the MERRA Data Assimilation System 2-Dimensional vertical integral that is time averaged single-level at the native...
National Aeronautics and Space Administration — The MAT1NXLND or tavg1_2d_lnd_Nx data product is the MERRA Data Assimilation System 2-Dimensional land surface diagnostic that is time averaged single-level at the...
2010-07-01
... a daily maximum hourly average ozone measurement that is greater than the level of the standard... determining the expected number of annual exceedances relate to accounting for incomplete sampling. In general... measurement. In some cases, a measurement might actually have been missed but in other cases no measurement...
Cátedra Intercultural. UCO
2011-01-01
2011 Hours Against Hate is a campaign to stop bigotry and promote respect across lines of culture, religion, tradition, class, and gender. Launched by Special Representative to Muslim Communities Farah Pandith, and Special Envoy to Monitor and Combat Anti-Semitism Hannah Rosenthal, the State Department is asking young people around the world to pledge their time to stop hate—to do something for someone who doesn’t look like you, pray like you, or live like you. We are asking the next generati...
GS Department
2009-01-01
Please note the new opening hours of the gates as well as the intersites tunnel from 19 May 2009: Gate A 7h-19h; Gate B 24h/24; Gate C 7h-9h and 17h-19h; Gate D 8h-12h and 13h-16h; Gate E 7h-9h and 17h-19h; Prévessin 24h/24. The intersites tunnel will be open non-stop from 7h30 to 18h. GS-SEM Group, Infrastructure and General Services Department
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shaped statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the accelerometer output rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, and reduces the cost of recording and analyzing data in magnetic or electronic memory devices. The circuit is used, for example, to record accelerations to which commodities are subjected during transportation on trucks.
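The peak-hold behaviour of the circuit, retaining only the highest coarsely quantized level reached during the measurement interval, can be mimicked in software; the thresholds and acceleration trace below are invented for illustration:

```python
def peak_hold(samples, bands):
    """Return the highest band threshold the signal reached (like a blown
    fuse recording a coarsely digitized maximum), or None if below all bands."""
    peak = max(samples)
    reached = [b for b in bands if peak >= b]
    return max(reached) if reached else None

# Hypothetical acceleration trace (g) and fuse thresholds.
trace = [0.2, 1.1, 3.7, 2.4, 0.9]
bands = [1, 2, 5, 10]
print(peak_hold(trace, bands))  # 2
```

The coarse quantization is the trade-off the abstract describes: a nonvolatile record of the band reached, rather than a full time history needing playback and analysis.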
Average nuclear surface properties
International Nuclear Information System (INIS)
Groote, H. von.
1979-01-01
The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case in which there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)
Americans' Average Radiation Exposure
International Nuclear Information System (INIS)
2000-01-01
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
2003-01-01
The 18th edition of the Geneva 24 hours swim competition will take place at the Vernets Swimming Pool on the 4th and 5th of October. More information and the results of previous years are given at: http://www.carouge-natation.com/24_heures/home_24_heures.htm Last year, CERN obtained first position in the inter-company category with a total of 152.3 km swum by 45 participants. We are counting on your support to repeat this excellent performance this year. For those who would like to train, the Livron swimming pool in Meyrin is open from Monday 8 September. For further information please do not hesitate to contact us. Gino de Bilio and Catherine Delamare
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
The maximum entropy method for analytic continuation is extended by introducing the quantum relative entropy. The new method is formulated in terms of matrix-valued functions and is therefore invariant under arbitrary unitary transformations of the input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises concerning the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
Noback, Inge; Broersma, Lourens; van Dijk, Jouke; Karlsson, Charlie; Andersson, Martin; Norman, Therese
2015-01-01
The Dutch labour market differs from that of other countries due to a unique combination of high employment rates and a low average number of hours worked. Dutch employment rates are among the highest in the world, at 77 per cent in 2011. At the same time, the average number of hours worked annually
Fixed Costs and Hours Constraints
Johnson, William R.
2011-01-01
Hours constraints are typically identified by worker responses to questions asking whether they would prefer a job with more hours and more pay or fewer hours and less pay. Because jobs with different hours but the same rate of pay may be infeasible when there are fixed costs of employment or mandatory overtime premia, the constraint in those…
The Productivity Of Working Hours
John Pencavel
2013-01-01
Observations on munition workers, most of them women, are organized to examine the relationship between their output and their working hours. The relationship is nonlinear: below an hours threshold, output is proportional to hours; above a threshold, output rises at a decreasing rate as hours increase. Implications of these results for the estimation of labor supply functions are taken up. The findings also link up with current research on the effects of long working hours on accidents and in...
correlation between sunshine hours and climatic parameters at four
African Journals Online (AJOL)
Mgina
A multiple regression technique was used to assess the correlation between sunshine hours and maximum and ... solar radiation depends on the model and the climatic parameter used. ..... A stochastic Markov chain model for simulating wind ...
Long working hours and alcohol use
DEFF Research Database (Denmark)
Virtanen, Marianna; Jokela, Markus; Nyberg, Solja T
2015-01-01
OBJECTIVE: To quantify the association between long working hours and alcohol use. DESIGN: Systematic review and meta-analysis of published studies and unpublished individual participant data. DATA SOURCES: A systematic search of PubMed and Embase databases in April 2014 for published studies, supplemented with manual searches. Unpublished individual participant data were obtained from 27 additional studies. REVIEW METHODS: The search strategy was designed to retrieve cross-sectional and prospective studies of the association between long working hours and alcohol use. Summary estimates were ... countries. The pooled maximum adjusted odds ratio for the association between long working hours and alcohol use was 1.11 (95% confidence interval 1.05 to 1.18) in the cross-sectional analysis of published and unpublished data. The odds ratio of new-onset risky alcohol use was 1.12 (1.04 to 1...
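A fixed-effect meta-analytic pooling of the kind summarized above combines study-level log odds ratios weighted by inverse variance, with each standard error recovered from the 95% confidence-interval width. The study estimates below are hypothetical, not the paper's data:

```python
import math

def pooled_odds_ratio(odds_ratios, cis):
    """Fixed-effect inverse-variance pooling of odds ratios given 95% CIs."""
    wsum = esum = 0.0
    for or_, (lo, hi) in zip(odds_ratios, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log OR from CI width
        w = 1 / se ** 2
        wsum += w
        esum += w * math.log(or_)
    return math.exp(esum / wsum)

# Hypothetical study-level estimates and 95% confidence intervals.
ors = [1.05, 1.20, 1.10]
cis = [(0.95, 1.16), (1.00, 1.44), (0.98, 1.23)]
print(round(pooled_odds_ratio(ors, cis), 3))
```

Pooling on the log scale keeps the estimate symmetric in risk direction; narrower confidence intervals (larger studies) receive proportionally more weight.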
LCLS Maximum Credible Beam Power
International Nuclear Information System (INIS)
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5, transport of the beam through the matching section and its injection into Linac-1 are discussed.
Job search, hours restrictions, and desired hours of work
Bloemen, H.G.
2008-01-01
A structural empirical job search model is presented that incorporates the labor supply decision of individuals. The arrival of a job offer is modeled as a random draw from a wage-hours offer distribution. Subjective information is used on desired working hours to identify optimal hours from offered
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
textabstractThe maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
Interns shall not sleep: the duty hours boomerang
Directory of Open Access Journals (Sweden)
Quan SF
2017-04-01
Full Text Available No abstract available. Article truncated after 150 words. On March 10, 2017, the Accreditation Council for Graduate Medical Education (ACGME) announced revisions to its common program requirements related to duty hours (1). Effective on July 1, 2017, the most important change will be an increase in the maximum consecutive hours that an intern may work. Interns will now be able to continuously perform patient care work up to a maximum of 24 hours with an additional 4 hours for managing care transitions. This reverses the controversial reduction to 16 hours that occurred in 2011 (2). The regulation of house staff duty hours formally began in the late 1980s. It was precipitated largely because of the publicity resulting from the 1984 death of Libby Zion in a New York teaching hospital that was attributed partly to poor decisions made by fatigued and overworked house staff (3). Consequently, the state of New York in 1989 passed laws restricting the …
US Naval Observatory Hourly Observations
National Oceanic and Atmospheric Administration, Department of Commerce — Hourly observations journal from the National Observatory in Washington DC. The observatory is the first station in the United States to produce hourly observations...
Hourly Precipitation Data (HPD) Publication
National Oceanic and Atmospheric Administration, Department of Commerce — Hourly Precipitation Data (HPD) Publication is archived and available from the National Climatic Data Center (NCDC). This publication contains hourly precipitation...
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well-engineered renewable remote energy system utilizing the principle of Maximum Power Point Tracking can be more cost-effective, has higher reliability, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages at larger temperature variations and larger power-rated systems are much higher. Other advantages include optimal sizing and system monitoring and control.
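The hill-climbing tracking loop described above can be sketched as follows; the panel power curve, step size, and iteration count are illustrative assumptions, not values from the paper:

```python
# Perturb-and-observe (hill-climbing) maximum power point tracking sketch.
# The toy panel model below peaks at v = 17.0 V (a hypothetical curve).

def panel_power(v):
    """Illustrative PV power curve, W as a function of operating voltage."""
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

def mppt_hill_climb(v0=12.0, step=0.1, iters=200):
    """Perturb the operating voltage; reverse direction when power drops."""
    v = v0
    p = panel_power(v)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = panel_power(v_new)
        if p_new < p:            # power fell: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mp, p_mp = mppt_hill_climb()
```

The tracker settles into a small oscillation around the maximum power point, which is why the step size trades tracking speed against steady-state ripple.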
Range of monthly mean hourly land surface air temperature diurnal cycle over high northern latitudes
Wang, Aihui; Zeng, Xubin
2014-05-01
Daily maximum and minimum temperatures over global land are fundamental climate variables, and their difference represents the diurnal temperature range (DTR). While the differences between the monthly averaged DTR (MDTR) and the range of monthly averaged hourly temperature diurnal cycle (RMDT) are easy to understand qualitatively, their differences have not been quantified over global land areas. Based on our newly developed in situ data (Climatic Research Unit) reanalysis (Modern-Era Retrospective analysis for Research and Applications) merged hourly temperature data from 1979 to 2009, RMDT in January is found to be much smaller than that in July over high northern latitudes, as it is much more affected by the diurnal radiative forcing than by the horizontal advection of temperature. In contrast, MDTR in January is comparable to that in July over high northern latitudes, but it is much larger than January RMDT, as it primarily reflects the movement of lower frequency synoptic weather systems. The area-averaged RMDT trends north of 40°N are near zero in November, December, and January, while the trends of MDTR are negative. These results suggest the need to use both the traditional MDTR and RMDT suggested here in future observational and modeling studies. Furthermore, MDTR and its trend are more sensitive to the starting hour of a 24 h day used in the calculations than those for RMDT, and this factor also needs to be considered in model evaluations using observational data.
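The distinction between MDTR and RMDT can be illustrated with synthetic hourly data; the diurnal amplitude and noise level below are illustrative assumptions, not observational values:

```python
import numpy as np

# Toy month of hourly temperatures: a fixed diurnal cycle plus hour-to-hour
# "weather" noise standing in for synoptic variability.
rng = np.random.default_rng(1)
days, hours = 30, 24
diurnal = 5.0 * np.sin(2 * np.pi * (np.arange(hours) - 6) / 24)  # 10 K peak-to-peak
t = diurnal + rng.normal(0.0, 3.0, size=(days, hours))           # synoptic noise

# MDTR: monthly mean of the daily (max - min) range.
mdtr = float(np.mean(t.max(axis=1) - t.min(axis=1)))

# RMDT: range of the monthly-mean hourly diurnal cycle.
mean_cycle = t.mean(axis=0)
rmdt = float(mean_cycle.max() - mean_cycle.min())

# Weather variability inflates each day's range (MDTR) but largely
# averages out of the monthly-mean diurnal cycle (RMDT).
print(mdtr > rmdt)
```

This mirrors the abstract's point: MDTR reflects synoptic weather on top of the diurnal cycle, while RMDT isolates the mean diurnal cycle itself.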
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ∼14.5 ka.
U.S. Hourly Climate Normals (1981-2010)
National Oceanic and Atmospheric Administration, Department of Commerce — The U.S. Hourly Climate Normals for 1981 to 2010 are 30-year averages of meteorological parameters for thousands of U.S. stations located across the 50 states, as...
Tailored vs Black-Box Models for Forecasting Hourly Average Solar Irradiance
Czech Academy of Sciences Publication Activity Database
Brabec, Marek; Paulescu, M.; Badescu, V.
2015-01-01
Roč. 111, January (2015), s. 320-331 ISSN 0038-092X R&D Projects: GA MŠk LD12009 Grant - others:European Cooperation in Science and Technology(XE) COST ES1002 Institutional support: RVO:67985807 Keywords : solar irradiance * forecasting * tailored statistical models Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.685, year: 2015
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
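The Mean Energy Model mentioned above has a classical solution: entropy maximization under a mean-energy constraint yields a Gibbs distribution p_i proportional to exp(-beta * E_i), with beta tuned so the constraint holds. A minimal sketch, with illustrative energies and target value (our assumptions), finds beta by bisection:

```python
import math

def gibbs(energies, beta):
    """Gibbs distribution p_i proportional to exp(-beta * E_i)."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def mean_energy(energies, beta):
    """Expected energy under the Gibbs distribution for this beta."""
    p = gibbs(energies, beta)
    return sum(pi * e for pi, e in zip(p, energies))

def solve_beta(energies, target, lo=-50.0, hi=50.0, iters=200):
    """Mean energy decreases in beta, so bisect for the constrained value."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_energy(energies, mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

energies = [0.0, 1.0, 2.0, 3.0]        # illustrative "energy" values
beta = solve_beta(energies, target=1.0)  # target below the uniform mean 1.5
p = gibbs(energies, beta)
```

Because the target mean is below the uniform-distribution mean, the solved beta is positive and the resulting distribution decays with energy.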
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay (1997) to functional maximum autocorrelation factors (MAF) (Switzer, 1985; Larsen, 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... Conclusions. MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially...
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
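A minimal sketch of the idea: residuals are weighted by a Gaussian (correntropy) kernel so that outlying labels lose influence. Here the objective is optimized by half-quadratic reweighted least squares rather than the paper's own alternating algorithm, and the data, kernel width, and penalty are illustrative assumptions:

```python
import numpy as np

# Linear predictor trained under a regularized maximum correntropy criterion:
# Gaussian kernel weights on residuals suppress gross label outliers.
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(0.0, 0.1, size=n)
y[:10] += 25.0                                   # gross label outliers

sigma, lam = 1.0, 1e-3
w = np.linalg.lstsq(X, y, rcond=None)[0]         # warm start (outlier-biased)
for _ in range(20):
    r = y - X @ w
    k = np.exp(-r**2 / (2 * sigma**2))           # correntropy kernel weights
    # Weighted ridge solve: outliers have k near 0 and effectively drop out.
    w = np.linalg.solve(X.T @ (X * k[:, None]) + lam * np.eye(d),
                        X.T @ (k * y))
```

A squared loss applied equally to all samples would be pulled toward the shifted labels; the kernel weighting is what makes the estimate robust.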
Behrens, Susan
2013-01-01
A colleague can't make a coffee date at a time the author proposes because it would conflict with his office hour. No student has actually made an appointment with him during the hour, but he is committed to being in his office as promised in case someone drops by. The author's reaction to her colleague's faithfulness to his posted office hour…
Breaking the Long Hours Culture.
Kodz, J.; Kersley, B.; Strebler, M. T.; O'Regan, S.
Case studies of 12 leading British employers were driven by employers' interest in issues related to working long hours in light of introduction of the Working Time Directive, a European Community initiative enacted into British law that sets limits on working hours per week. Data showed over one-fourth of full-time employees worked over 48 hours…
The difference between alternative averages
Directory of Open Access Journals (Sweden)
James Vaupel
2012-09-01
Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of the use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
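The stated identity, that the difference between two weighted averages equals the covariance between the variable and the ratio of the weighting functions divided by the average of that ratio (covariance and average both taken under the second weighting), can be checked numerically with arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)               # the variable
w1 = rng.uniform(1.0, 2.0, size=100)   # weighting function 1
w2 = rng.uniform(1.0, 2.0, size=100)   # weighting function 2

def wmean(v, w):
    """Weighted average of v under weights w."""
    return float(np.sum(w * v) / np.sum(w))

r = w1 / w2                            # ratio of the weighting functions
lhs = wmean(x, w1) - wmean(x, w2)      # difference between the two averages
# Covariance of x and r, and the average of r, both under the w2 weighting.
cov = wmean(x * r, w2) - wmean(x, w2) * wmean(r, w2)
rhs = cov / wmean(r, w2)
assert abs(lhs - rhs) < 1e-10
```

The identity holds exactly for any weights, which is what makes it useful for decomposing effects of compositional change.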
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
Electron density variations in the F2 layer maximum during solar activity cycle
International Nuclear Information System (INIS)
Besprozvannaya, A.S.; Kozina, P.E.; AN Kazakhskoj SSR, Alma-Ata. Sektor Ionosfery)
1988-01-01
The R value, characterizing for the F2 layer the ratio of hourly median values at solar activity maximum to those at minimum, is calculated from monthly average values of F2-layer critical frequencies for June, October and December of 1958 and 1964. Latitudinal-temporal distributions of R are plotted for different seasons from the data of northern-hemisphere western and eastern stations located within the latitude interval Φ' = 35-70 deg. The following peculiarities of the relation between F2-layer ionization and solar activity are noted. Among daytime hours there are winter ones, characterized by an increase of the gain rate with increasing latitude, and summer ones, showing the opposite regularity. In night-time hours the R value is characterized by abnormally low values (∼1.2) at latitudes south of the ionospheric trough and poleward of it. For all three seasons, periods with a maximal rate of ionization gain are observed within the 24 hours, occurring at night in summer and in the hours after sunset in winter and equinoctial months. A quantitative explanation of the detected peculiarities is given in terms of present-day concepts of F2-layer formation mechanisms.
Factors influencing accruement of contact hours for nurses.
Kubsch, Sylvia; Henniges, Amy; Lorenzoni, Nancy; Eckardt, Sally; Oleniczak, Sandra
2003-01-01
A decline in attendance at continuing education (CE) in nursing activities was observed in a Midwest state where CE attendance is not required. The purpose of this research study was to identify the effect of attitude, extrinsic and intrinsic reinforcement, and deterrents on contact hour accrual. A convenience sample of 282 registered nurses was surveyed using a researcher-constructed instrument determined to be valid and reliable. Registered nurses earning 0 to 15 contact hours annually reported accruing fewer contact hours in 1999 than in an average year. Registered nurses who earned 16 to 45+ contact hours annually reported earning more contact hours in 1999 than in an average year. Intrinsic reinforcement was found to be a significant motivator (r [257] = .242; p < ...). Operant Conditioning Theory is useful in explaining registered nurse attendance at CE activities. CE planners should consider placing more emphasis on intrinsic rather than extrinsic reinforcement to encourage staff to attend CE activities.
Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.
2010-01-01
We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power,
Forecasting Day-Ahead Electricity Prices : Utilizing Hourly Prices
E. Raviv (Eran); K.E. Bouwman (Kees); D.J.C. van Dijk (Dick)
2013-01-01
textabstractThe daily average price of electricity represents the price of electricity to be delivered over the full next day and serves as a key reference price in the electricity market. It is an aggregate that equals the average of hourly prices for delivery during each of the 24 individual
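The aggregation the abstract describes is simple to state in code; the hourly price curve below is an illustrative assumption, not market data:

```python
import numpy as np

# Day-ahead reference price as the mean of the 24 hourly delivery prices
# (illustrative sinusoidal price profile, EUR/MWh).
hourly = 40.0 + 15.0 * np.sin(2 * np.pi * (np.arange(24) - 4) / 24)
daily_avg = float(hourly.mean())
```

Because the sinusoid averages out over a full cycle, the daily average here equals the 40.0 base level; modeling the 24 hourly prices and then averaging is the route the paper argues can improve forecasts of this aggregate.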
Directory of Open Access Journals (Sweden)
Michel Moraes Gonçalves
2017-06-01
Full Text Available It was our objective to correlate specific performance in the Special Judo Fitness Test (SJFT) with the maximum isometric handgrip (HGSMax), scapular traction (STSMax) and lumbar traction (LTSMax) strength tests in military judo athletes. Twenty-two military athletes from the judo team of the Brazilian Navy Almirante Alexandrino Instruction Centre, with average age of 26.14 ± 3.31 years and average body mass of 83.23 ± 14.14 kg, participated in the study. Electronic dynamometry tests for HGSMax, STSMax and LTSMax were conducted. Then, after approximately a 1-hour interval, the SJFT protocol was applied. All variables were adjusted to the body mass of the athletes. The Pearson correlation coefficient was used for statistical analysis. The results showed a moderate negative correlation between the SJFT index and STSMax (r = -0.550, p = 0.008), strong negative correlations between the SJFT index and HGSMax (r = -0.706, p < 0.001) and between the SJFT index and LTSMax (r = -0.721, p = 0.001), besides the correlation between the sum of the three maximum isometric strength tests and the SJFT index (r = -0.786, p < 0.001). This study concludes that negative correlations occur between the SJFT index and maximum isometric handgrip, shoulder and lumbar traction strength and the sum of the three maximum isometric strength tests in military judokas.
Ozone Nonattainment Areas - 1 Hour
U.S. Environmental Protection Agency — This data layer identifies areas in the U.S. where air pollution levels have not met the National Ambient Air Quality Standards (NAAQS) for Ozone - 1hour (Legacy...
Flexible forms of working hours
Knapp, Viktor
2017-01-01
Abstract - Flexible forms of working hours. This diploma thesis deals with flexible forms of working hours, and its goal is to describe this issue in an intelligible and comprehensive way. It is a very interesting and current topic which is to a great extent not subject to direct legal regulation and which provides the contracting parties with considerable freedom of contract. This fact contributes to greater flexibilization of the labour market and represents a significant instrument in the fight...
Independence, Odd Girth, and Average Degree
DEFF Research Database (Denmark)
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233-237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n-m-1)/7.
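The bound (4n-m-1)/7 can be checked by brute force on a small connected triangle-free graph; the 5-cycle used below is our own illustrative example, not one from the paper:

```python
from itertools import combinations

# Verify alpha(G) >= (4n - m - 1)/7 on the 5-cycle, a connected
# triangle-free graph, by brute-force search over all vertex subsets.
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}
n, m = 5, len(edges)

def is_independent(s):
    """True if no two vertices of s are joined by an edge."""
    return all(tuple(sorted(p)) not in edges for p in combinations(s, 2))

alpha = max(len(s) for r in range(n + 1)
            for s in combinations(range(n), r) if is_independent(s))

bound = (4 * n - m - 1) / 7
print(alpha, bound)   # alpha = 2 exactly meets the bound of 2.0
```

The 5-cycle is in fact extremal here: its independence number equals the bound, so the bound is tight for this graph.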
Lagrangian averaging with geodesic mean.
Oliver, Marcel
2017-11-01
This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
Averaging in spherically symmetric cosmology
International Nuclear Information System (INIS)
Coley, A. A.; Pelavas, N.
2007-01-01
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis
Averaging models: parameters estimation with the R-Average procedure
Directory of Open Access Journals (Sweden)
S. Noventa
2010-01-01
Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
International Nuclear Information System (INIS)
Nur Ubaidah Saidin; Muhamad Daud; Siti Radiah Mohd Kamarudin
2011-01-01
This report describes tests conducted in a salt fog chamber to evaluate the effectiveness of mild steel coated with rust converter over 168 hours of artificial seawater exposure. The samples were compared with mild steel coated with a commercial primer. The tests were conducted following ASTM B117. Individual pictures were taken of each sample before the tests began and at 24, 48, 72, 96, 120, 144 and 168 hours to follow the progression of the corrosion. Results showed that the samples coated with rust converter provide significantly better protection against corrosion than the samples coated with the commercial primer available on the market. (author)
Urtis, Tom
2015-01-01
Master VBA automation quickly and easily to get more out of Excel Excel VBA 24-Hour Trainer, 2nd Edition is the quick-start guide to getting more out of Excel, using Visual Basic for Applications. This unique book/video package has been updated with fifteen new advanced video lessons, providing a total of eleven hours of video training and 45 total lessons to teach you the basics and beyond. This self-paced tutorial explains Excel VBA from the ground up, demonstrating with each advancing lesson how you can increase your productivity. Clear, concise, step-by-step instructions are combined with…
Energy Technology Data Exchange (ETDEWEB)
Bravo, J. L [Instituto de Geofisica, UNAM, Mexico, D.F. (Mexico); Nava, M. M [Instituto Mexicano del Petroleo, Mexico, D.F. (Mexico); Gay, C [Centro de Ciencias de la Atmosfera, UNAM, Mexico, D.F. (Mexico)
2001-07-01
We developed a procedure to forecast, 2 to 3 hours in advance, the daily maximum of surface ozone concentrations. It involves fitting Autoregressive Integrated Moving Average (ARIMA) models to daily ozone maximum concentrations at 10 atmospheric monitoring stations in Mexico City over a one-year period. A one-day forecast is made and then adjusted with the meteorological and solar radiation information acquired during the 3 hours preceding the occurrence of the maximum value. The relative importance for forecasting of the history of the process and of meteorological conditions is evaluated. Finally, an estimate of the daily probability of exceeding a given ozone level is made.
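The final step, estimating the daily probability of exceeding a given ozone level, can be sketched as follows. The Gaussian-residual assumption and the numbers are ours for illustration; the abstract does not specify this exact form.

```python
import math

def exceedance_probability(forecast, threshold, residual_std):
    """P(observed daily maximum > threshold), assuming Gaussian forecast errors.

    The Gaussian form is an illustrative simplification, not necessarily
    the authors' model.
    """
    z = (threshold - forecast) / residual_std
    # Standard normal survival function via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

# Example: forecast maximum 180 ppb, alert threshold 210 ppb, residual sd 30 ppb
p = exceedance_probability(180.0, 210.0, 30.0)  # z = 1 -> about 0.159
```

A health-warning system would compare `p` against a tolerance chosen by the operator; the thresholds here are hypothetical.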
Resolving issues concerning Eskdalemuir geomagnetic hourly values
Directory of Open Access Journals (Sweden)
S. Macmillan
2011-02-01
Full Text Available The hourly values of the geomagnetic field from 1911 to 1931 derived from measurements made at Eskdalemuir observatory in the UK, and available online from the World Data Centre for Geomagnetism at http://www.wdc.bgs.ac.uk/, have now been corrected. Previously they were 2-point averaged and transformed from the original north, east and vertical down values in the tables in the observatory yearbooks. This paper documents the course of events from discovering the post-processing done to the data to the final resolution of the problem. As it was through the development of a new index, the Inter-Hour Variability index, that this post-processing came to light, we provide a revised series of this index for Eskdalemuir and compare it with that from another European observatory. Conclusions of studies concerning long-term magnetic field variability and inferred solar variability, whilst not necessarily consistent with one another, are not obviously invalidated by the incorrect hourly values from Eskdalemuir. This series of events illustrates the challenges that lie ahead in removing any remaining errors and inconsistencies in the data holdings of different World Data Centres.
Long working hours and cancer risk: a multi-cohort study
Heikkila, K.; Nyberg, S.T.; Madsen, I.E.; Vroome, E. de; Alfredsson, L.; Bjorner, J.B.; Borritz, M.; Burr, H.; Erbel, R.; Ferrie, J.E.; Fransson, E.; Geuskens, G.A.; Hooftman, W.E.; Houtman, I.L.; Jöckel, K.H.; Knutsson, A.; Koskenvuo, M.; Lunau, T.; Nielsen, M.L.; Nordin, M.; Oksanen, T.; Pejtersen, J.H.; Pentti, J.; Shipley, M.J.; Steptoe, A.; Suominen, S.B.; Theorell, T.; Vahtera, J.; Westerholm, P.J.M.; Westerlund, H.; Dragano, N.; Rugulies, R.; Kawachi, I.; Batty, G.D.; Singh-Manoux, A.; Virtanen, M.; Kivimäki, M.
2016-01-01
Background: Working longer than the maximum recommended hours is associated with an increased risk of cardiovascular disease, but the relationship of excess working hours with incident cancer is unclear. Methods: This multi-cohort study examined the association between working hours and cancer risk
Average configuration of the geomagnetic tail
International Nuclear Information System (INIS)
Fairfield, D.H.
1979-01-01
Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected onto the solar magnetospheric equatorial plane deviates from the x axis, due to flaring and solar wind aberration, by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on the interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that holds in both tail lobes and is independent of location across the tail. This effect means the average field reversal between the northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed.
Effect of overtime work on 24-hour ambulatory blood pressure.
Hayashi, T; Kobayashi, Y; Yamaoka, K; Yano, E
1996-10-01
Recently, the adverse effects of long working hours on the cardiovascular systems of workers in Japan, including "Karoshi" (death from overwork), have been the focus of social concern. However, conventional methods of health checkups are often unable to detect the early signs of such adverse effects. To evaluate the influence of overtime work on the cardiovascular system, we compared 24-hour blood pressure measurements among several groups of male white-collar workers. As a result, for those with normal blood pressure and those with mild hypertension, the 24-hour average blood pressure of the overtime groups was higher than that of the control groups; for those who periodically did overtime work, the 24-hour average blood pressure and heart rate during the busy period increased. These results indicate that the burden on the cardiovascular system of white-collar workers increases with overtime work.
Evaluations of average level spacings
International Nuclear Information System (INIS)
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example using ¹⁶⁸Er data. 19 figures, 2 tables
Estimation of global solar radiation from sunshine hours for Warri
African Journals Online (AJOL)
DJFLEX
Multiple linear regression models were developed to estimate the monthly daily sunshine hours using four parameters during a period of eleven years (1997 to 2007) for Warri, Nigeria (latitude 5° 34′ 21.0″); the parameters include relative humidity, maximum and minimum temperature, rainfall and wind speed.
Ergodic averages via dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...
High average power supercontinuum sources
Indian Academy of Sciences (India)
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.
Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald
2011-08-01
In recent studies, a relationship between running performance and both low body fat and low thicknesses of selected skinfolds has been demonstrated for distances from 100 m to the marathon, but not for the ultramarathon. We investigated the association of anthropometric and training characteristics with race performance in 63 male recreational ultrarunners in a 24-hour run using bi- and multivariate analysis. The athletes achieved an average distance of 146.1 (43.1) km. In the bivariate analysis, body mass (r = -0.25), the sum of 9 skinfolds (r = -0.32), the sum of upper body skinfolds (r = -0.34), body fat percentage (r = -0.32), weekly kilometres run (r = 0.31), longest training session before the 24-hour run (r = 0.56), and personal best marathon time (r = -0.58) were related to race performance. Stepwise multiple regression showed that both the longest training session before the 24-hour run (p = 0.0013) and the personal best marathon time (p = 0.0015) had the best correlation with race performance. Performance in these 24-hour runners may be predicted (r² = 0.46) by the following equation: performance in a 24-hour run (km) = 234.7 + 0.481 × (longest training session before the 24-hour run, km) - 0.594 × (personal best marathon time, minutes). For practical applications, training variables such as volume and intensity were associated with performance, but anthropometric variables were not. To achieve maximum kilometres in a 24-hour run, recreational ultrarunners should have a personal best marathon time of ∼3 hours 20 minutes and complete a long training run of ∼60 km before the race, whereas anthropometric characteristics such as low body fat or low skinfold thicknesses showed no association with performance.
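The reported regression can be written directly as a function; the coefficients are those quoted in the abstract, and the example inputs are illustrative only.

```python
def predicted_24h_distance(longest_run_km, marathon_pb_min):
    """Regression reported in the study (r^2 = 0.46):
    distance (km) = 234.7 + 0.481 * longest training run (km)
                  - 0.594 * marathon personal best (min)."""
    return 234.7 + 0.481 * longest_run_km - 0.594 * marathon_pb_min

# A runner with a 60 km longest training run and a 3:20 marathon (200 min)
d = predicted_24h_distance(60.0, 200.0)  # about 144.8 km
```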
Resident duty hours in Canada: a survey and national statement.
Masterson, Mark F; Shrichand, Pankaj; Maniate, Jerry M
2014-01-01
Physicians in general, and residents in particular, are adapting to duty schedules in which they have fewer continuous work hours; however, there are no Canadian guidelines on duty hours restrictions. To better inform resident duty hour policy in Canada, we set out to prepare a set of recommendations that would draw upon evidence reported in the literature and reflect the experiences of resident members of the Canadian Association of Internes and Residents (CAIR). A survey was prepared and distributed electronically to all resident members of CAIR. A total of 1796 eligible residents participated in the survey. Of those who responded, 38% (601) reported that they felt they could safely provide care for up to 16 continuous hours, and 20% (315) said that 12 continuous hours was the maximum period during which they could safely provide care (n=1592). Eighty-two percent (1316) reported their perception that the quality of care they had provided suffered because of the number of consecutive hours worked (n=1598). Only 52% (830) had received training in handover (n=1594); those who had received such training reported that it was commonly provided through informal modelling. On the basis of these data and the existing literature, CAIR recommends that resident duty hours be managed in a way that does not endanger the health of residents or patients; does not impair education; is flexible; and does not violate ethical or legal standards. Further, residents should be formally trained in handover skills and alternative duty hour models.
ACGME proposes dropping the 16 hour resident shift limit
Directory of Open Access Journals (Sweden)
Robbins RA
2016-11-01
Full Text Available No abstract available. Article truncated after 150 words. The Accreditation Council for Graduate Medical Education (ACGME) is proposing that first-year residents would no longer be limited to 16-hour shifts during the 2017-2018 academic year under a controversial proposal released today (1). Instead, individual residency programs could assign first-year trainees to shifts as long as 28 hours, the current limit for all other residents. The 28-hour maximum includes 4 transitional hours designed in part to help residents improve continuity of care. The plan to revise training requirements does not change other rules designed to protect all residents from overwork, including the maximum of 80 hours per week. The ACGME capped the shifts of first-year residents at 16 hours in 2011 as part of an ongoing effort to make trainee schedules more humane and avoid clinical errors caused by sleep deprivation. ACGME CEO Thomas Nasca, MD, told Medscape Medical News that the problem arises largely from first-year residents not being ...
Hourly temporal distribution of wind
Deligiannis, Ilias; Dimitriadis, Panayiotis; Koutsoyiannis, Demetris
2016-04-01
The wind process is essential for hydrometeorology and, additionally, is one of the basic renewable energy resources. Most stochastic forecast models are limited to daily and coarser scales, disregarding the hourly scale, which is significant for renewable energy management. Here, we analyze hourly wind time series, giving emphasis to the temporal distribution of wind within the day. We finally present a periodic model based on statistical as well as hydrometeorological reasoning that shows good agreement with data. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
OPENING HOURS FOR CARDS OFFICE
Human Resources Division
2001-01-01
Due to the extra workload generated by the global renewal of French cards and in order to preserve the level of service offered by the cards office, please note that this office will in future be open every morning from 8.30 a.m. to 12.30 p.m. until further notice. The service can be contacted by telephone during the same hours. Thank you for your understanding.
When good = better than average
Directory of Open Access Journals (Sweden)
Don A. Moore
2007-10-01
Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.
Autoregressive Moving Average Graph Filtering
Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert
2016-01-01
One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
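The simplest member of such a family can be sketched as an ARMA₁ recursion on a graph shift operator; the graph, the coefficients and the iteration count below are our illustrative choices, not the paper's design procedure.

```python
import numpy as np

# Minimal ARMA_1 graph-filter sketch (our simplification of the general
# ARMA_K designs). The recursion y <- psi*M*y + phi*x converges, when
# |psi| * ||M|| < 1, to y* = phi * (I - psi*M)^{-1} x, i.e. a rational
# graph frequency response phi / (1 - psi*lam) per eigenvalue lam of M.
def arma1_filter(M, x, psi, phi, iters=200):
    y = np.zeros_like(x)
    for _ in range(iters):
        y = psi * (M @ y) + phi * x
    return y

# Path graph on 4 nodes: shifted, scaled Laplacian as the shift operator
L = np.array([[1., -1., 0., 0.],
              [-1., 2., -1., 0.],
              [0., -1., 2., -1.],
              [0., 0., -1., 1.]])
M = np.eye(4) - L / 4.0                  # spectral radius <= 1
x = np.array([1.0, 0.0, 0.0, 0.0])       # impulse signal on node 0
y = arma1_filter(M, x, psi=0.5, phi=1.0)
y_exact = np.linalg.solve(np.eye(4) - 0.5 * M, x)  # closed-form steady state
```

The recursion is distributable because each step needs only one multiplication by the local shift operator `M`.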
Averaging Robertson-Walker cosmologies
International Nuclear Information System (INIS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-01-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff⁰ ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
Flexibility of working hours in the 24-hour society.
Costa, G
2006-01-01
The 24-hour Society undergoes an ineluctable process towards a social organisation in which time constraints no longer restrict human life. The borders between working and social times are no longer fixed and rigidly determined, and the value of working time changes according to the different economic and social effects one may consider. Shift and night work and irregular and flexible working hours, together with new technologies, are the milestones of this epochal passage. What are the advantages and disadvantages for the individual, the companies, and the society? What is the cost/benefit ratio in terms of health and social well-being? Coping properly with this process means avoiding a passive acceptance of it, with consequent maladjustments at both the individual and social level, and instead adopting effective preventive and compensative strategies aimed at building up a more sustainable society. Flexible working times now appear to be one of the best ways to cope with the demands of modern life, but there are different points of view about labour and temporal 'flexibility' between employers and employees. For the former it means a prompt adaptation to market demands and technological innovations; for the latter it is a way to improve working and social life, by decreasing work constraints and increasing control and autonomy. Although it can easily be speculated that individual-based 'flexibility' should improve health and well-being, and especially satisfaction, whereas company-based 'flexibility' might interfere negatively, the effective consequences on health and well-being have still to be analysed properly.
Full-time Workers Want to Work Fewer Hours, Part-time Workers Want to Work Longer Hours
Holst, Elke
2009-01-01
Since the reunification of Germany, average working times for men and women have followed different trends. There are various reasons for the difference. More and more women are gainfully employed; they engage in part-time and marginal employment, both of which are on the rise. The importance of full-time employment has declined. This accounts for most of the reduction in their average workweek, which decreased by 2.3 hours to 31.9 hours between 1993 and 2007. The full-time employment of men ...
Average features of cosmic ray variation associated with sudden commencement of magnetic storm
International Nuclear Information System (INIS)
Wada, Masami; Suda, Tomoshige.
1980-01-01
In order to obtain the average features of cosmic ray variation associated with the passage of a shock front in space, a superposed epoch analysis of cosmic ray intensity with respect to the time of occurrence of the sudden commencement (SC) of magnetic storms during solar cycle 20 (1964-1975) is carried out for hundreds of SCs. When the SCs are distributed evenly over the day, the onset of the cosmic ray decrease is seen clearly within one hour of SC, followed by a sharp decrease in the intensity, but without any precursory fluctuation. The magnitude distribution and the rigidity spectrum for maximum depression show the features of a Forbush decrease (FD). Superposed epoch analysis is also applied to solar wind and interplanetary magnetic field data, and their relation to cosmic ray variation is studied. Effects of the superposition of the isotropic and anisotropic variations on the time profile of cosmic ray intensity observed at a station are discussed. (author)
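Superposed epoch analysis of this kind can be sketched as follows; the synthetic hourly series, event times and step amplitude are invented for illustration, not taken from the study.

```python
import numpy as np

# Superposed-epoch sketch: align fixed-length windows of an hourly series
# around each event time (here, hypothetical SC onsets) and average them.
def superposed_epoch(series, event_indices, before=6, after=24):
    windows = [series[i - before:i + after]
               for i in event_indices
               if i - before >= 0 and i + after <= len(series)]
    return np.mean(windows, axis=0)  # shape: (before + after,)

rng = np.random.default_rng(0)
hours = 2000
cr = rng.normal(0.0, 1.0, hours)          # noisy cosmic-ray intensity proxy
events = np.arange(100, 1900, 100)        # hypothetical SC onset hours
for e in events:                          # bury a step decrease at each onset
    cr[e:e + 24] -= 2.0
epoch_mean = superposed_epoch(cr, events)
# The step is hard to see in any single noisy event but clear in the average
```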
Mansi, Ishak A
2011-03-01
In a recent report, the Institute of Medicine recommended more restrictions on residents' working hours. Several problems exist with a system that places a weekly limit on resident duty hours: (1) it assumes the presence of a linear relationship between hours of work and patient safety; (2) it fails to consider differences in intensity among programs; and (3) it does not address increases in the scientific content of medicine, and it places the burden of enforcing the duty hour limits on the Accreditation Council for Graduate Medical Education. An innovative method of calculating credit hours for graduate medical education would shift the focus from "years of residency" to "hours of residency." For example, internal medicine residents would be requested to complete a total of 8640 training hours (assuming 60 hours per week for 48 weeks annually) instead of the traditional 3 years. This method of counting training hours is used by other professions, such as the Intern Development Program of the National Council of Architectural Registration Boards. The proposed approach would allow residents and program directors to pace training based on individual capabilities. Standards for resident education should include the average number of patients treated in each setting (inpatient or outpatient). A possible set of "multipliers" based on these parameters, and possibly others such as resident evaluation, is devised to calculate the "final adjusted accredited hours" that count toward graduation. Substituting "years of training" with "hours of training" may resolve many of the concerns with the current residency education model, as well as adapt to the demands of residents' personal lives. It also may allow residents to pace their training according to their capabilities and learning styles, and contribute to reflective learning and better quality education.
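The arithmetic behind the proposed total, and a toy version of the "final adjusted accredited hours", can be sketched as follows; the multiplier names and values are hypothetical illustrations, not the author's scheme.

```python
# Arithmetic behind the proposed hour-based standard: 60 hours per week
# over 48 weeks per year for 3 years gives the 8640-hour total cited.
HOURS_PER_WEEK = 60
WEEKS_PER_YEAR = 48
YEARS = 3
total_hours = HOURS_PER_WEEK * WEEKS_PER_YEAR * YEARS  # 8640

# Hypothetical "final adjusted accredited hours": the multiplier names and
# values here are our invention to illustrate the proposal's shape.
def adjusted_hours(raw_hours, intensity=1.0, evaluation=1.0):
    return raw_hours * intensity * evaluation
```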
Java programming 24-hour trainer
Fain, Yakov
2015-01-01
Quick and painless Java programming with expert multimedia instruction Java Programming 24-Hour Trainer, 2nd Edition is your complete beginner's guide to the Java programming language, with easy-to-follow lessons and supplemental exercises that help you get up and running quickly. Step-by-step instruction walks you through the basics of object-oriented programming, syntax, interfaces, and more, before building upon your skills to develop games, web apps, networks, and automations. This second edition has been updated to align with Java SE 8 and Java EE 7, and includes new information on GUI b
Zipf's law, power laws and maximum entropy
International Nuclear Information System (INIS)
Visser, Matt
2013-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
Linear stochastic models for forecasting daily maxima and hourly concentrations of air pollutants
Energy Technology Data Exchange (ETDEWEB)
McCollister, G M; Wilson, K R
1975-04-01
Two related time series models were developed to forecast concentrations of various air pollutants and tested on carbon monoxide and oxidant data for the Los Angeles basin. One model forecasts daily maximum concentrations of a particular pollutant using only past daily maximum values of that pollutant as input. The other model forecasts 1-h average concentrations using only the past hourly average values. Both are significantly more accurate than persistence, i.e., forecasting for tomorrow what occurred today (or yesterday). Model forecasts for 1972 of the daily instantaneous maxima for total oxidant made using only past pollutant concentration data are more accurate than those made by the Los Angeles APCD using meteorological input as well as pollutant concentrations. Although none of these models forecast as accurately as might be desired for a health warning system, the relative success of simple time series models, even though based solely on pollutant concentration, suggests that models incorporating meteorological data and using either multi-dimensional time series or pattern recognition techniques should be tested.
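The flavour of such a model, and the persistence baseline it is compared against, can be sketched with a simple AR(1) fit; the synthetic data and coefficient are illustrative, not the authors' Los Angeles series.

```python
import numpy as np

# Sketch: fit an AR(1) model to past daily maxima and compare one-day-ahead
# forecasts with persistence (tomorrow = today), the baseline in the paper.
def fit_ar1(x):
    x0, x1 = x[:-1] - x.mean(), x[1:] - x.mean()
    a = (x0 @ x1) / (x0 @ x0)          # least-squares AR coefficient
    return a, x.mean()

rng = np.random.default_rng(1)
n, a_true = 2000, 0.7
x = np.zeros(n)
for t in range(1, n):                  # synthetic "daily maxima" series
    x[t] = a_true * x[t - 1] + rng.normal()

a, mu = fit_ar1(x[:1500])              # fit on the first 1500 days
test = x[1500:]                        # evaluate on the rest
ar_err = np.mean((test[1:] - (mu + a * (test[:-1] - mu))) ** 2)
persist_err = np.mean((test[1:] - test[:-1]) ** 2)
# For a persistent but mean-reverting series, the AR forecast has lower MSE
```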
A high speed digital signal averager for pulsed NMR
International Nuclear Information System (INIS)
Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.
1978-01-01
A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of one sample per 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
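The "stable averaging" algorithm, a running mean that stays calibrated after every sweep, can be sketched in software; the instrument itself is hardware, so this is only an illustration of the algorithm and of the √N signal-to-noise gain behind the 36 dB figure (2¹² = 4096 sweeps gives a factor of 64 in amplitude).

```python
import math

# Stable averaging: after sweep n, each channel holds the calibrated mean
# A_n = A_{n-1} + (x_n - A_{n-1}) / n, so the display is valid at all times.
def stable_average(sweeps):
    avg = [0.0] * len(sweeps[0])
    for n, sweep in enumerate(sweeps, start=1):
        for i, x in enumerate(sweep):
            avg[i] += (x - avg[i]) / n     # incremental, calibrated update
    return avg

# SNR grows as sqrt(N): 4096 sweeps -> 20*log10(sqrt(4096)) ~ 36 dB
snr_gain_db = 20 * math.log10(math.sqrt(4096))
```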
Directory of Open Access Journals (Sweden)
Jessica Ackerman
2013-07-01
Full Text Available 7 positions in 2 hours (2013 is a drawing that documents the process of making the short film Role Reversal Rehearsal. It became quickly apparent that the process of making the work was more dynamic and interesting than the finished piece itself. Relationships between the childcare arrangements of the participants and the collective working process brought about the necessity of collaboration for parent artists. Each participant gave their time, energy and creative insight towards filming a series of birthing positions with roles reversed. The male performer became the central figure in an attempt to prompt empathy, humour, and to embody the importance of the male role in childbirth. There were two hours to choreograph, rehearse, and film the sequence. The drawing by Ackerman encapsulates the 'rhizomatic' approach to producing creative work under the constraints of parenthood. The 'arborescent' structure of hierarchy encouraged in industrial filmmaking is subsumed in favour of a horizontal structure. This new structure allows for the creative input, and flow of collaboration between all people involved - including the 3 and 5 year olds, who contributed ideas for camera and soundtrack in situ.
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
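The idea of trajectory averaging can be illustrated on a toy Robbins-Monro problem (estimating a mean from noisy observations); this is our simplification for illustration, not the SAMCMC algorithm of the paper.

```python
import numpy as np

# Robbins-Monro with trajectory (Polyak-Ruppert) averaging: the running
# average of the iterates is the asymptotically efficient estimator even
# when the raw iterate uses a slowly decaying gain.
rng = np.random.default_rng(42)
mu = 3.0                                    # unknown target (toy problem)
theta, avg = 0.0, 0.0
for n in range(1, 20001):
    obs = mu + rng.normal()                 # noisy measurement
    theta += (obs - theta) / n ** 0.7       # slowly decaying gain sequence
    avg += (theta - avg) / n                # trajectory average of iterates
```

The averaged estimate `avg` smooths out the fluctuations of the raw iterate `theta`, which is the efficiency result the paper establishes in the SAMCMC setting.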
Chaotic Universe, Friedmannian on the average 2
Energy Technology Data Exchange (ETDEWEB)
Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij
1980-11-01
Cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one as t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies above the Friedmannian one, and below it at n > 0.26. The influence of long-wave fluctuation modes that remain finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of the short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained on the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.
Topological quantization of ensemble averages
International Nuclear Information System (INIS)
Prodan, Emil
2009-01-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states.
Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to different extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal through adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction of rotating machinery.
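For contrast with FTDA, conventional time domain averaging can be sketched as follows; the signal, period length, and noise level are made-up illustration values:

```python
import math
import random

def time_domain_average(signal, period):
    """Conventional TDA: cut the signal into whole periods and average
    them point by point; components not periodic with `period` are
    attenuated (the comb-filter behaviour noted in the abstract)."""
    n_periods = len(signal) // period
    return [sum(signal[k * period + i] for k in range(n_periods)) / n_periods
            for i in range(period)]

# A noisy periodic signal: one sinusoid per 64-sample period plus noise.
rng = random.Random(1)
period = 64
clean = [math.sin(2 * math.pi * i / period) for i in range(period)]
noisy = [clean[i % period] + rng.gauss(0.0, 0.5) for i in range(period * 200)]
averaged = time_domain_average(noisy, period)
```

Averaging 200 periods reduces the noise standard deviation by a factor of about sqrt(200); the PCE problem the paper addresses arises when the true period is not an integer number of samples, which this simple version assumes.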
Effects of renal sympathetic denervation on 24-hour blood pressure variability
Directory of Open Access Journals (Sweden)
Christine Stefanie Zuern
2012-05-01
Full Text Available Background: In patients with arterial hypertension, increased blood pressure (BP) variability contributes to end organ damage independently from mean levels of arterial BP. Increased BP variability has been linked to alterations in autonomic function including sympathetic overdrive. We hypothesized that catheter-based renal sympathetic denervation (RDN) confers beneficial effects on BPV. Methods and Results: Eleven consecutive patients with therapy-refractory arterial hypertension (age 68.9±7.0 years; baseline systolic BP 189±23 mmHg despite medication with 5.6±2.1 antihypertensive drugs) underwent bilateral RDN. Twenty-four hour ambulatory blood pressure monitoring (ABPM) was performed before RDN and six months thereafter. BPV was primarily assessed by means of the standard deviation of 24-hour systolic arterial blood pressures (SDsys). Secondary measures of BPV were maximum systolic blood pressure (MAXsys) and maximum difference between two consecutive readings of systolic BP (deltamaxsys) over 24 hours. Six months after RDN, SDsys, MAXsys and deltamaxsys were significantly reduced from 16.9±4.6 mmHg to 13.5±2.5 mmHg (p=0.003), from 190±22 mmHg to 172±20 mmHg (p<0.001) and from 40±15 mmHg to 28±7 mmHg (p=0.006), respectively, without changes in concomitant antihypertensive therapy. Reductions of SDsys, MAXsys and deltamaxsys were observed in 10/11 (90.9%), 11/11 (100%) and 9/11 (81.8%) patients, respectively. Although we noted a significant reduction of systolic office blood pressure by 30.4±27.7 mmHg (p=0.007), there was only a trend in reduction of average systolic BP assessed from ABPM (149±19 mmHg to 142±18 mmHg; p=0.086). Conclusions: In patients with therapy-refractory arterial hypertension, RDN leads to significant reductions of BP variability. Effects of RDN on BPV over 24 hours were more pronounced than on average levels of BP.
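The three BPV measures used in the study (SDsys, MAXsys, deltamaxsys) reduce to simple statistics over the sequence of 24-hour systolic readings; the readings below are invented illustration values:

```python
def bpv_metrics(systolic):
    """BPV measures as defined in the abstract: SDsys (sample standard
    deviation of the systolic readings), MAXsys (maximum reading) and
    deltamaxsys (largest difference between two consecutive readings)."""
    n = len(systolic)
    mean = sum(systolic) / n
    sd = (sum((x - mean) ** 2 for x in systolic) / (n - 1)) ** 0.5
    delta_max = max(abs(b - a) for a, b in zip(systolic, systolic[1:]))
    return sd, max(systolic), delta_max

sd, mx, dmax = bpv_metrics([150, 165, 180, 170, 140, 155])
```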
Maximum Entropy in Drug Discovery
Directory of Open Access Journals (Sweden)
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
40 CFR 1045.140 - What is my engine's maximum engine power?
2010-07-01
...) Maximum engine power for an engine family is generally the weighted average value of maximum engine power... engine family's maximum engine power apply in the following circumstances: (1) For outboard or personal... value for maximum engine power from all the different configurations within the engine family to...
The average Indian female nose.
Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh
2011-12-01
This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.
24-Hour Relativistic Bit Commitment.
Verbanis, Ephanielle; Martin, Anthony; Houlmann, Raphaël; Boso, Gianluca; Bussières, Félix; Zbinden, Hugo
2016-09-30
Bit commitment is a fundamental cryptographic primitive in which a party wishes to commit a secret bit to another party. Perfect security between mistrustful parties is unfortunately impossible to achieve through the asynchronous exchange of classical and quantum messages. Perfect security can nonetheless be achieved if each party splits into two agents exchanging classical information at times and locations satisfying strict relativistic constraints. A relativistic multiround protocol to achieve this was previously proposed and used to implement a 2-millisecond commitment time. Much longer durations were initially thought to be insecure, but recent theoretical progress showed that this is not so. In this Letter, we report on the implementation of a 24-hour bit commitment solely based on timed high-speed optical communication and fast data processing, with all agents located within the city of Geneva. This duration is more than 6 orders of magnitude longer than before, and we argue that it could be extended to one year and allow much more flexibility on the locations of the agents. Our implementation offers a practical and viable solution for use in applications such as digital signatures, secure voting and honesty-preserving auctions.
Maximum stellar iron core mass
Indian Academy of Sciences (India)
Pramana — journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W Giacobbe, Chicago Research Center/American Air Liquide ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr]
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
Preliminary analysis of the afforestation role in the maximum runoff in Valea Rece Catchment
Directory of Open Access Journals (Sweden)
Mihalcea Andreea
2017-06-01
Full Text Available The aim of this article is to demonstrate the role of afforestation in maximum surface runoff. To this end, a comparison was made between flows simulated under the current afforestation conditions and flows simulated under afforestation and deforestation scenarios in the Valea Rece catchment. Using the HEC-HMS 4.1 hydrologic modeling software and the SCS Curve Number unit hydrograph method, flows of the Valea Rece river were simulated at the closing section of the basin for precipitation amounts of 30, 50, 80 and 120 mm falling in intervals of 1.3 to 6 hours on soil with varying degrees of moisture: dry soil, average soil moisture, and high humidity. This was done for the current degree of afforestation of the basin, for a possible afforestation that would increase the afforestation degree to 80%, and for a possible deforestation that would reduce it to 15%.
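The SCS Curve Number method named above relates storm rainfall to direct runoff through a single retention parameter; the curve numbers in this sketch are hypothetical stand-ins for forested versus deforested land cover, not values from the study:

```python
def scs_runoff_mm(precip_mm, cn):
    """SCS Curve Number direct runoff (mm), metric form:
    S = 25400/CN - 254, Ia = 0.2*S, Q = (P - Ia)^2 / (P - Ia + S) for P > Ia."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction (mm)
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# Higher afforestation -> lower CN -> less runoff for the same 80 mm storm.
q_forested = scs_runoff_mm(80.0, cn=60)   # hypothetical CN, heavily forested
q_cleared = scs_runoff_mm(80.0, cn=85)    # hypothetical CN, after deforestation
```

The contrast between the two curve numbers reproduces the qualitative effect the article investigates: the same storm generates several times more direct runoff on the deforested basin.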
Shin, Kyong-Sok; Chung, Yun Kyung; Kwon, Young-Jun; Son, Jun-Seok; Lee, Se-Hoon
2017-09-01
This study investigated the relationship between weekly working hours and the occurrence of cerebro-cardiovascular diseases using a case-crossover study design. We investigated average working hours during the 7 days before the onset of illness (hazard period) and average weekly working hours between 8 days and 3 months before the onset of cerebro-cardiovascular diseases (control period) for 1,042 cases from the workers' compensation database for 2009. Among all subjects, the odds ratio by conditional logistic regression for the risk of cerebro-cardiovascular diseases with a 10 hr increase in average weekly working hours was 1.45 (95% confidence interval [CI]: 1.22-1.72), a significant association. An increase in average weekly working hours may trigger the onset of cerebro-cardiovascular disease. Am. J. Ind. Med. 60:753-761, 2017. © 2017 Wiley Periodicals, Inc.
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and Water Hammer occur in a pumping system when valves are closed or opened suddenly or in the case of sudden failure of pumps. Determination of maximum water hammer is considered one of the most important technical and economical items of which engineers and designers of pumping stations and conveyance pipelines should take care. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining significance of ...
Directory of Open Access Journals (Sweden)
Yunfeng Shan
2008-01-01
Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) The true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) There are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) The maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
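The "maximum gene-support tree" is, in essence, the modal topology among the single-gene trees; a minimal sketch over canonical Newick strings (the tree set below is invented for illustration):

```python
from collections import Counter

def maximum_gene_support_tree(gene_trees):
    """Return the most frequent topology among single-gene trees and its
    support count. Topologies are assumed to be given as canonical Newick
    strings, so identical topologies compare equal as strings."""
    counts = Counter(gene_trees)
    topology, support = counts.most_common(1)[0]
    return topology, support

# Ten hypothetical gene trees for three taxa A, B, C.
trees = ["((A,B),C);"] * 6 + ["((A,C),B);"] * 3 + ["((B,C),A);"] * 1
mgs, support = maximum_gene_support_tree(trees)
```

In practice each gene tree would first be reduced to a canonical form (e.g. sorted leaf order) so that equivalent topologies hash identically; that normalization step is omitted here.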
CERN restaurants: opening hours during summer
2016-01-01
In the summer, the three CERN restaurants remain open during their usual hours. On Monday 1 August and Thursday 8 September, Restaurant 1 will be open from 7:00 a.m. to 10:00 p.m. The satellites will be open as follows: Building 6: normal hours; Building 13: normal hours; Building 30: normal hours; Building 40: closing at 4:30 p.m. instead of 5:00 p.m.; Building 54: normal hours in July, closed in August; Building 864: normal hours; Building 865: normal hours; Building 774: normal hours.
29 CFR 778.320 - Hours that would not be hours worked if not paid for.
2010-07-01
... working hours fall in this category. The agreement of the parties to provide compensation for such hours... regular rate of an employee if the hours are compensated at the same rate as other working hours. The.... Activities of this type include eating meals between working hours. Where it appears from all the pertinent...
Hours Constraints Within and Between Jobs
Euwals, R.W.
1997-01-01
In the empirical literature on labour supply, several models are developed to incorporate constraints on working hours. These models do not address the question to which extent working hours are constrained within and between jobs. In this paper I investigate the effect of individual changes in labour supply preferences on actual working hours. The availability of subjective information on the individual’s preferred working hours gives direct measures on the degree of adjustment of working ho...
Work Hours Constraints: Impacts and Policy Implications
Constant, Amelie F.; Otterbach, Steffen
2011-01-01
If individuals reveal their preference as consumers, then they are taken seriously. What happens if individuals, as employees, reveal their preferences in working hours? And what happens if there is a misalignment between actual hours worked and preferred hours, the so-called work hours constraints? How does this affect the productivity of workers, their health, and overall life satisfaction? Labor supply and corresponding demand are fundamental to production. Labor economists know for long t...
46 CFR 15.710 - Working hours.
2010-10-01
... 46 Shipping 1 2010-10-01 2010-10-01 false Working hours. 15.710 Section 15.710 Shipping COAST... Limitations and Qualifying Factors § 15.710 Working hours. In addition to prescribing watch requirements, 46 U.S.C. 8104 sets limitations on the working hours of credentialed officers and crew members...
Extended working hours: Impacts on workers
D. Mitchell; T. Gallagher
2010-01-01
Some logging business owners are trying to manage their equipment assets by increasing the scheduled machine hours. The intent is to maximize the total tons produced by a set of equipment. This practice is referred to as multi-shifting, double-shifting, or extended working hours. One area often overlooked is the impact that working non-traditional hours can have on...
2010-01-05
... flexibility would not increase safety risks or adversely impact driver health? 3. How many hours per day and... period long enough to provide restorative sleep regardless of the number of hours worked prior to the... No. FMCSA-2004-19608] RIN 2126-AB26 Hours of Service AGENCY: Federal Motor Carrier Safety...
The Persistence of Long Work Hours
Robert Drago; David Black; Mark Wooden
2005-01-01
Previous research hypothesizes that long working hours are related to consumerism, the ideal worker norm, high levels of human capital, and a high cost-of-job-loss. The authors test these hypotheses using panel data on working hours for an Australian sample of full-time employed workers. Analyses include a static cross-sectional model and a persistence model for long hours over time. The results suggest that long hours (50 or more hours in a usual week) are often persistent, and provide stron...
Khatib, Tamer; Elmenreich, Wilfried
2015-01-01
This paper presents a model for predicting hourly solar radiation data using daily solar radiation averages. The proposed model is a generalized regression artificial neural network. This model has three inputs, namely, mean daily solar radiation, hour angle, and sunset hour angle. The output layer has one node which is mean hourly solar radiation. The training and development of the proposed model are done using MATLAB and 43800 records of hourly global solar radiation. The results show that...
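The two angular inputs mentioned above (hour angle and sunset hour angle) follow from standard solar geometry; the latitude, declination, and solar time in this sketch are illustrative values, not data from the paper:

```python
import math

def hour_angle_deg(solar_hour):
    """Hour angle: 15 degrees per hour away from solar noon
    (negative in the morning, positive in the afternoon)."""
    return 15.0 * (solar_hour - 12.0)

def sunset_hour_angle_deg(latitude_deg, declination_deg):
    """Sunset hour angle from cos(ws) = -tan(lat) * tan(decl),
    clamped for polar day/night cases."""
    cos_ws = (-math.tan(math.radians(latitude_deg))
              * math.tan(math.radians(declination_deg)))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_ws))))

w = hour_angle_deg(15.0)                  # 3 p.m. solar time
ws = sunset_hour_angle_deg(35.0, 23.45)   # summer solstice at 35 deg N
```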
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
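The Hargreaves model referred to above estimates atmospheric evaporative demand from temperature extremes and exoatmospheric radiation; the input values in this sketch are illustrative, and Ra is assumed to be already expressed in mm/day evaporation equivalents:

```python
def hargreaves_et0(tmean, tmax, tmin, ra):
    """Hargreaves reference evapotranspiration (mm/day):
        ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)
    with temperatures in deg C and Ra, the exoatmospheric solar
    radiation, in mm/day equivalents."""
    return 0.0023 * ra * (tmean + 17.8) * (tmax - tmin) ** 0.5

# Illustrative monthly averages for one 1 km grid cell.
et0 = hargreaves_et0(tmean=18.0, tmax=26.0, tmin=10.0, ra=14.0)
```

Subtracting this demand from the monthly precipitation of the same cell gives the climatic water balance mapped in the study.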
Generic maximum likely scale selection
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based......The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...
Variation of Probable Maximum Precipitation in Brazos River Basin, TX
Bhatia, N.; Singh, V. P.
2017-12-01
The Brazos River basin, the second-largest river basin by area in Texas, generates the highest amount of flow volume of any river in a given year in Texas. With its headwaters located at the confluence of Double Mountain and Salt forks in Stonewall County, the third-longest flowline of the Brazos River traverses within narrow valleys in the area of rolling topography of west Texas, and flows through rugged terrains in mainly featureless plains of central Texas, before its confluence with Gulf of Mexico. Along its major flow network, the river basin covers six different climate regions characterized on the basis of similar attributes of vegetation, temperature, humidity, rainfall, and seasonal weather changes, by National Oceanic and Atmospheric Administration (NOAA). Our previous research on Texas climatology illustrated intensified precipitation regimes, which tend to result in extreme flood events. Such events have caused huge losses of lives and infrastructure in the Brazos River basin. Therefore, a region-specific investigation is required for analyzing precipitation regimes along the geographically-diverse river network. Owing to the topographical and hydroclimatological variations along the flow network, 24-hour Probable Maximum Precipitation (PMP) was estimated for different hydrologic units along the river network, using the revised Hershfield's method devised by Lan et al. (2017). The method incorporates the use of a standardized variable describing the maximum deviation from the average of a sample scaled by the standard deviation of the sample. The hydrometeorological literature identifies this method as more reasonable and consistent with the frequency equation. With respect to the calculation of stable data size required for statistically reliable results, this study also quantified the respective uncertainty associated with PMP values in different hydrologic units. The corresponding range of return periods of PMPs in different hydrologic units was
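A Hershfield-type PMP estimate scales the maximum standardized deviation of the annual-maximum rainfall series; the series and the frequency factor K below are illustrative placeholders, not the revised values of Lan et al. (2017):

```python
def hershfield_pmp(annual_maxima, k_m=15.0):
    """Hershfield-type Probable Maximum Precipitation:
        PMP = mean + K * std
    over the annual-maximum series, where K is the frequency factor
    expressing the maximum deviation from the average in units of the
    sample standard deviation."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    std = (sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)) ** 0.5
    return mean + k_m * std

# Hypothetical 24-hour annual maxima (mm) for one hydrologic unit.
pmp = hershfield_pmp([120.0, 95.0, 150.0, 110.0, 135.0])
```

The sensitivity of this estimate to the sample size and to K is exactly why the study quantifies the uncertainty of PMP per hydrologic unit.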
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
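A simplified surface energy balance of the kind invoked in the abstract can be solved numerically for the surface temperature; the sensible-heat transfer coefficient, emissivity, and downwelling longwave flux below are illustrative assumptions, not values from the paper:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(q_abs, t_air_k, h=10.0, emissivity=0.95, lw_down=400.0):
    """Solve a simplified balance for a dry surface (no evaporation,
    negligible ground heat flux):
        q_abs + lw_down = eps * sigma * Ts^4 + h * (Ts - Ta)
    for Ts (K) by bisection; h, emissivity and lw_down are assumed values."""
    lo, hi = t_air_k, t_air_k + 150.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        net = q_abs + lw_down - emissivity * SIGMA * mid**4 - h * (mid - t_air_k)
        if net > 0:        # surface still gaining energy -> hotter
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Upper-bound inputs from the abstract: 1000 W/m^2 absorbed, 55 C air.
ts = surface_temperature(q_abs=1000.0, t_air_k=55.0 + 273.15)
```

With these assumed coefficients the balance settles in the 90°-100°C range, consistent with the abstract's estimate.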
Averaging of nonlinearity-managed pulses
International Nuclear Information System (INIS)
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-01-01
We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons
Comparison of 44-hour and fixed 24-hour ambulatory blood pressure monitoring in dialysis patients.
Liu, Wenjin; Ye, Hong; Tang, Bing; Sun, Zhiping; Wen, Ping; Wu, Wenhui; Bian, Xueqing; Shen, Xia; Yang, Junwei
2014-01-01
The two most commonly used strategies to evaluate dialysis patients' blood pressure (BP) level are 44-hour and 24-hour ambulatory blood pressure monitoring (ABPM). The objective of this study was to find an appropriate 24-hour period that correlated well with the 44-hour BP level and determine the differences between these strategies. In a group of 51 dialysis patients, the authors performed 44-hour ABPM and extracted data for a fixed 24-hour ABPM. The fixed 24-hour ABPM started at 6 am on the nondialysis day. A strong correlation was found between all parameters of 44-hour and the fixed 24-hour ABPM, with paired sample t test showing only small magnitude changes in a few parameters. Both 24-hour ABPM and 44-hour ABPM were superior to clinic BP in predicting left ventricular mass index (LVMI) by multiple regression analysis. It was found that 44-hour ambulatory arterial stiffness index (AASI), but not 24-hour AASI, had a positive association with LVMI (r=0.328, P=.021). However, after adjustment for 44-hour systolic blood pressure, this association disappeared. Fixed 24-hour ABPM is a good surrogate of 44-hour ABPM to some extent, while 44-hour ABPM can provide more accurate and detailed information. ©2013 Wiley Periodicals, Inc.
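The ambulatory arterial stiffness index (AASI) used in the study is one minus the slope of the regression of diastolic on systolic readings; the readings in this sketch are invented for illustration:

```python
def aasi(systolic, diastolic):
    """Ambulatory arterial stiffness index: 1 minus the least-squares
    slope of diastolic regressed on systolic over the recording."""
    n = len(systolic)
    mean_s = sum(systolic) / n
    mean_d = sum(diastolic) / n
    cov = sum((s - mean_s) * (d - mean_d) for s, d in zip(systolic, diastolic))
    var_s = sum((s - mean_s) ** 2 for s in systolic)
    return 1.0 - cov / var_s

# Perfectly compliant vessels (diastolic tracks systolic 1:1) give AASI = 0;
# stiffer vessels show a flatter slope and hence a larger index.
idx = aasi([120, 130, 140, 150], [80, 90, 100, 110])
```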
Missing data and the accuracy of magnetic-observatory hour means
Directory of Open Access Journals (Sweden)
J. J. Love
2009-09-01
Full Text Available Analysis is made of the accuracy of magnetic-observatory hourly means constructed from definitive minute data having missing values (gaps). Bootstrap sampling from different data-gap distributions is used to estimate average errors on hourly means as a function of the number of missing data. Absolute and relative error results are calculated for horizontal-intensity, declination, and vertical-component data collected at high, medium, and low magnetic latitudes. For 90% complete coverage (10% missing data), average (RMS) absolute errors on hourly means are generally less than errors permitted by Intermagnet for minute data. As a rule of thumb, the average relative error for hourly means with 10% missing minute data is approximately equal to 10% of the hourly standard deviation of the source minute data.
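The bootstrap procedure described can be sketched directly; the synthetic minute data below (Gaussian scatter about a fixed baseline) stand in for real observatory records, and the simple random-gap model is an assumption in place of the paper's empirical gap distributions:

```python
import random

def hourly_mean_gap_error(minute_values, n_missing, n_boot=2000, seed=0):
    """Bootstrap estimate of the RMS error on an hourly mean when
    n_missing of the minute values are discarded at random, relative
    to the mean of the complete hour."""
    rng = random.Random(seed)
    full_mean = sum(minute_values) / len(minute_values)
    sq_err = 0.0
    for _ in range(n_boot):
        kept = rng.sample(minute_values, len(minute_values) - n_missing)
        sq_err += (sum(kept) / len(kept) - full_mean) ** 2
    return (sq_err / n_boot) ** 0.5

# Synthetic minute data: ~5 nT of variation about a 20000 nT baseline.
rng = random.Random(42)
minutes = [20000.0 + rng.gauss(0.0, 5.0) for _ in range(60)]
err_10pct = hourly_mean_gap_error(minutes, n_missing=6)   # 10% missing
```

For this uncorrelated-noise example the RMS error is a small fraction of the hourly standard deviation; correlated geomagnetic variation and clustered gaps, as studied in the paper, generally produce larger errors.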
Measurement of average radon gas concentration at workplaces
International Nuclear Information System (INIS)
Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.
2003-01-01
In this paper, results of measurements of average radon gas concentration at workplaces (schools, kindergartens, and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailing pond). Consequently, at workplaces where considerable seasonal changes of radon concentration are expected, measurements should last 12 months. If that is not possible, the chosen six-month period should contain summer and winter months as well. The average radon concentration during working hours can differ considerably from the average over the whole time in cases of frequent opening of doors and windows or use of artificial ventilation. (authors)
Maximum vehicle cabin temperatures under different meteorological conditions
Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John
2009-05-01
A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device with n segments connects across the driver output lines. Each segment is associated with one driver output line and includes a microfuse that is blown when a signal appears on the associated driver output line.
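A software analogue may clarify the latching behaviour described above. The class below is a hypothetical sketch, not the patented circuit: the thresholds play the role of the driver output lines, and the set of "blown" indices stands in for the microfuse segments that permanently record the peak.

```python
# Hedged software analogue of the peak-memory idea: quantize a sensor
# signal into discrete levels ("driver output lines") and latch the
# highest level ever reached ("blown microfuses"), which survives as a
# permanent record of the maximum sensed value.

class PeakLatch:
    def __init__(self, thresholds):
        # thresholds: ascending level boundaries, one per "output line"
        self.thresholds = sorted(thresholds)
        self.blown = set()          # indices of "fuses" blown so far

    def sample(self, value):
        for i, t in enumerate(self.thresholds):
            if value >= t:
                self.blown.add(i)   # latch every line at or below the peak

    def max_level(self):
        return max(self.blown) if self.blown else None

latch = PeakLatch([1.0, 2.0, 5.0, 10.0])
for v in [0.4, 3.7, 1.2, 6.8, 2.0]:
    latch.sample(v)
# The highest threshold crossed was 5.0 (index 2): 6.8 >= 5.0 but < 10.0.
```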
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, LP = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 LP. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
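The authors' exact entropy model and update rule are not given in this abstract, so the sketch below only illustrates the quantities involved: a plug-in estimator of the mutual information between binary predictions and labels, scored against a plain ridge-regularized linear classifier. All names, data, and parameter values are invented for illustration.

```python
import numpy as np

# Hedged sketch of the quantities in the paper, not the authors' algorithm.
# The paper jointly optimizes (classification error + complexity
# - lambda * MI) via an entropy-based MI model and gradient descent; here
# we only fit a ridge classifier and then evaluate the MI term it attains.

def mutual_info(a, b):
    """Plug-in mutual information (nats) between two binary 0/1 arrays."""
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            pxy = np.mean((a == x) & (b == y))
            px, py = np.mean(a == x), np.mean(b == y)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X[:, 0] += np.where(rng.random(200) < 0.5, 1.5, -1.5)  # two clusters
y = (X[:, 0] > 0).astype(int)          # label follows the shifted feature
t = 2.0 * y - 1.0                      # +/-1 regression targets

lam = 0.01                             # ridge (complexity) weight
w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ t)
pred = (X @ w > 0).astype(int)

acc = np.mean(pred == y)
mi = mutual_info(pred, y)  # accurate responses leave little label uncertainty
```

An accurate classifier drives `mi` toward the label entropy, which is exactly the intuition the regularizer formalizes: knowing the response should remove as much uncertainty about the true label as possible.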
Scintillation counter, maximum gamma aspect
International Nuclear Information System (INIS)
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma-ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample-receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma-detecting efficiency. (U.S.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Long term forecasting of hourly electricity consumption in local areas in Denmark
DEFF Research Database (Denmark)
Møller Andersen, Frits; Larsen, Helge V.; Gaardestrup, R.B.
2013-01-01
Long term projections of hourly electricity consumption in local areas are important for planning of the transmission grid. In Denmark, at present the method used for grid planning is based on statistical analysis of the hour of maximum load, and for each local area the maximum load is projected to change proportionally to changes in the aggregated national electricity consumption. That is, specific local conditions are not considered. Yet, from measurements of local consumption we know that: •consumption profiles differ between local areas, •consumption by categories of customers contribute… The model is based on metering of aggregated hourly consumption at transformer stations covering selected local areas and on national statistics of hourly… The model describes the entire profile of hourly consumption and is a first step towards differentiated local predictions of electricity consumption.
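The contrast between the two planning approaches can be illustrated with a toy example (all numbers are invented): scaling a local area's historical peak by national growth, versus projecting the whole hourly profile, where growth may differ by hour of day.

```python
import numpy as np

# Hedged toy contrast of the two approaches in the abstract:
# (a) current practice: scale the historical local maximum load by
#     aggregate national consumption growth;
# (b) profile-based: project every hour separately and take the maximum.
# The profile, growth rates, and units (MW) are illustrative only.

hours = np.arange(24)
local_profile = 50 + 20 * np.sin((hours - 7) * np.pi / 12)   # MW, one day
national_growth = 1.10                                       # +10% nationally

# (a) proportional scaling of the historical peak
peak_proportional = local_profile.max() * national_growth

# (b) profile-based: growth differs by hour (here, evening-heavy growth)
hourly_growth = 1.05 + 0.10 * (hours >= 17) * (hours <= 21)
peak_profile_based = (local_profile * hourly_growth).max()
```

When growth concentrates in hours away from the historical peak, the two methods disagree, which is why the abstract argues that specific local conditions should enter the projection.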
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
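The entropy lower bound stated above is straightforward to compute; the helper below is a minimal illustration, assuming the probability distribution over cases is known.

```python
from math import log2

# Hedged illustration of the chapter's lower bound: for a problem over a
# k-valued information system with case distribution P, the minimum
# average depth of a decision tree is at least H(P) / log2(k).

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(x * log2(x) for x in p if x > 0)

def avg_depth_lower_bound(p, k=2):
    return entropy(p) / log2(k)

# Uniform distribution over 8 equally likely cases with binary attributes:
# H = 3 bits, so any decision tree needs average depth at least 3.
bound = avg_depth_lower_bound([1 / 8] * 8, k=2)
```

For a complete set of attributes the chapter shows the true minimum exceeds this bound by at most one; for "indispensable"-attribute problems it climbs all the way to n, the number of attributes.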
Paving the road to maximum productivity.
Holland, C
1998-01-01
"Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.
Long Work Hours: Volunteers and Conscripts
Robert Drago; Mark Wooden; David Black
2006-01-01
Panel data from Australia are used to study the prevalence of work hours mismatch among long hours workers and, more importantly, how that mismatch persists and changes over time, and what factors are associated with these changes. Particular attention is paid to the roles played by household debt, ideal worker characteristics and gender. Both static and dynamic multinomial logit models are estimated, with the dependent variable distinguishing long hours workers from other workers, and within...
Duty Hour Reporting: Conflicting Values in Professionalism.
Byrne, John M; Loo, Lawrence K; Giang, Dan W
2015-09-01
Duty hour limits challenge professional values, sometimes forcing residents to choose between patient care and regulatory compliance. This may affect truthfulness in duty hour reporting. We assessed residents' reasons for falsifying duty hour reports. We surveyed residents in 1 sponsoring institution to explore the reasons for noncompliance, frequency of violations, falsification of reports, and the residents' awareness of the option to extend hours to care for a single patient. The analysis used descriptive statistics. Linear regression was used to explore falsification of duty hour reports by year of training. The response rate was 88% (572 of 650). Primary reasons for duty hour violations were number of patients (19%) and individual patient acuity/complexity (19%). Junior residents were significantly more likely to falsify duty hours (R = -0.966). Of 124 residents who acknowledged falsification, 51 (41%) identified the primary reason as concern that the program will be in jeopardy of violating the Accreditation Council for Graduate Medical Education (ACGME) duty hour limits followed by fear of punishment (34, 27%). This accounted for more than two-thirds of the primary reasons for falsification. Residents' falsification of duty hour data appears to be motivated by concerns about adverse actions from the ACGME, and fear they might be punished. To foster professionalism, we recommend that sponsoring institutions educate residents about professionalism in duty hour reporting. The ACGME should also convey the message that duty hour limits be applied in a no-blame systems-based approach, and allow junior residents to extend duty hours for the care of individual patients.
Economic Analysis of Long Working Hours (Japanese)
OHTAKE Fumio; OKUDAIRA Hiroko
2008-01-01
In this paper we set out the economic grounds for restrictions on long working hours and conduct an empirical analysis using surveys from the perspective of behavioral economics. The results of the analysis indicate that, on a year-on-year basis, if state of health improves, the probability of working more than 60 hours per week increases significantly, but that even when state of health deteriorates there is no decrease in the probability of working long hours. Moreover, among male managemen...
Extreme working hours in Western Europe and North America: A new aspect of polarization
Burger, Anna S.
2015-01-01
This paper analyzes the trends and root causes of extreme working hours in sixteen Western European countries, Canada, and the United States between 1970 and 2010. Earlier literature has revealed increasing trends in extreme working hours in the United States and recognized the negative repercussions of this new aspect of labor market polarization. As European average working hours have declined over the past decades, scholars have turned little attention to the analysis of extreme working ho...
Long work hours and the wellbeing of fathers and their families
Ruth Weston; Matthew Gray; Lixia Qu; David Stanton
2004-01-01
The average hours worked by full-time employees in Australia have increased since the late 1970s. This, combined with increases in female labour force participation, has led to concerns about the impact of long work hours on family life. This paper explores the relationship between fathers' work hours, their own wellbeing and that of their families using data from the Household, Income and Labour Dynamics in Australia survey. The analysis is restricted to full-time employed fathers with a par...
Long Working Hours in Korea: Based on the 2014 Korean Working Conditions Survey
Park, Jungsun; Kim, Yangho; Han, Boyoung
2017-01-01
Background: Long working hours adversely affect worker safety and health. In 2004, Korea passed legislation that limited the work week to 40 hours, in an effort to improve quality-of-life and increase business competitiveness. This regulation was implemented in stages, first for large businesses and then for small businesses, from 2004 to 2011. We previously reported that average weekly working hours decreased from 2006 to 2010, based on the Korean Working Conditions Survey. Methods: In the p...
Hours of work and rest in the rail industry.
Anderson, C; Grunstein, R R; Rajaratnam, S M W
2013-06-01
Currently, the National Transport Commission is considering four options to form the regulatory framework for rail safety within Australia with respect to fatigue. While the National Transport Commission currently recommends no limitations around hours of work or rest, we provide evidence which suggests regulatory frameworks should incorporate a traditional hours-of-service regulation over more flexible policies. Our review highlights that shift durations >12 h are associated with a doubling of risk for accident and injury, and that fatigue builds cumulatively with each successive shift where rest in between is inadequate. Regulatory frameworks should therefore set limits on hours of work and rest, including maximum shift duration and the number of successive shifts. Appropriately validated biomathematical models and technologies may be used as a part of a fatigue management system, to augment the protection afforded by limits on hours of work and rest. A comprehensive sleep disorder screening and management programme should form an essential component of any regulatory framework. © 2013 The Authors; Internal Medicine Journal © 2013 Royal Australasian College of Physicians.
New Measures of Teachers' Work Hours and Implications for Wage Comparisons
West, Kristine L.
2014-01-01
Researchers have good data on teachers' annual salaries but a hazy understanding of teachers' hours of work. This makes it difficult to calculate an accurate hourly wage and leads policy makers to default to anecdote rather than fact when debating teacher pay. Using data from the American Time Use Survey, I find that teachers work an average of…
Maximum time-dependent space-charge limited diode currents
Energy Technology Data Exchange (ETDEWEB)
Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)
2016-01-15
Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
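For reference, the stationary Child-Langmuir limit against which the time-averaged current is compared can be computed directly; this is the textbook formula for a planar 1D diode, not code from the paper.

```python
from math import sqrt

# Hedged reference computation: the stationary Child-Langmuir
# space-charge-limited current density for a planar 1D vacuum diode,
#   J = (4 * eps0 / 9) * sqrt(2 * e / m) * V^(3/2) / d^2,
# the baseline that the paper's time-averaged current is measured against.

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def child_langmuir(voltage_v, gap_m):
    """CL current density (A/m^2) for voltage V across a gap d (electrons)."""
    return (4 * EPS0 / 9) * sqrt(2 * E_CHARGE / M_E) * voltage_v ** 1.5 / gap_m ** 2

# The V^(3/2) scaling: doubling the voltage raises the limit by 2^(3/2).
j1 = child_langmuir(1e3, 1e-2)
j2 = child_langmuir(2e3, 1e-2)
```

The paper's point is precisely that for a time-varying drop, comparing the averaged current against the adiabatic average of this expression differs from comparing it against the value at the maximum applied voltage.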
Maximum entropy and Bayesian methods
International Nuclear Information System (INIS)
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come.
The average size of ordered binary subgraphs
van Leeuwen, J.; Hartel, Pieter H.
To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a
Employer Attitudes towards Peak Hour Avoidance
Vonk Noordegraaf, D.M.; Annema, J.A.
2012-01-01
Peak Hour Avoidance is a relatively new Dutch mobility management measure. To reduce congestion frequent car drivers are given a financial reward for reducing the proportion of trips that they make during peak hours on a specific motorway section. Although previous studies show that employers are
Employer attitudes towards peak hour avoidance
Noordegraaf, D.M.V.; Annema, J.A.
2012-01-01
Peak Hour Avoidance is a relatively new Dutch mobility management measure. To reduce congestion frequent car drivers are given a financial reward for reducing the proportion of trips that they make during peak hours on a specific motorway section. Although previous studies show that employers are
Hours Constraints Within and Between Jobs
Euwals, R.W.
1997-01-01
In the empirical literature on labour supply, several models are developed to incorporate constraints on working hours. These models do not address the question to which extent working hours are constrained within and between jobs. In this paper I investigate the effect of individual changes in
17 CFR 201.104 - Business hours.
2010-04-01
... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false Business hours. 201.104 Section 201.104 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION RULES OF PRACTICE Rules of Practice General Rules § 201.104 Business hours. The Headquarters office of the Commission, at...
20 CFR 801.304 - Business hours.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Business hours. 801.304 Section 801.304 Employees' Benefits BENEFITS REVIEW BOARD, DEPARTMENT OF LABOR ESTABLISHMENT AND OPERATION OF THE BOARD Action by the Board § 801.304 Business hours. The office of the Clerk of the Board at Washington, DC...
Cost-efficient staffing under annualized hours
van der Veen, Egbert; Hans, Elias W.; Veltman, Bart; Berrevoets, Leo M.; Berden, Hubert J.J.M.
2012-01-01
We study how flexibility in workforce capacity can be used to efficiently match capacity and demand. Flexibility in workforce capacity is introduced by the annualized hours regime. Annualized hours allow organizations to measure working time per year, instead of per month or per week. An additional
24-Hour Academic Libraries: Adjusting to Change
Bowman, Adam C.
2013-01-01
The purpose of this study was to explore the adaptive measures that academic libraries perform when implementing and operating a 24-hour schedule. Five in-depth interviews were conducted with current managerial-level librarians at 24-hour academic libraries. The exploratory interviews revealed similar measures for security, budgeting, employee…
Effectiveness of the Twelve-Hour Shift.
Brinton, Robert D.
1983-01-01
Although labor unions traditionally have fought for shorter working hours, there have been recent reversals in this trend. The Pulp and Paperboard Division of Temple-Eastex Incorporated converted to a 12-hour shift and found that safety improved, productivity increased, and overtime decreased. (JOW)
Impacts of extended working hours in logging
Dana Mitchell; Tom Gallagher
2008-01-01
Last year at the 2007 AIM in Minneapolis, MN, the authors presented the human factors impacts to consider when implementing extended working hours in the logging industry. In a continuation of this project, we have researched existing literature to identify possible actions that logging business owners can take to reduce the impact of extended working hours on their...
Du, Jie; Wimmer, Hayden; Rada, Roy
2018-01-01
This study investigates the delivery of the "Hour of Code" tutorials to college students. The college students who participated in this study were surveyed about their opinion of the Hour of Code. First, the students' comments were discussed. Next, a content analysis of the offered tutorials highlights their reliance on visual…
Experience With Flexible Hours of Work
Hartley, Jo
1976-01-01
A summary of an 80-page booklet called Hours of Work When Workers Can Choose is presented. The booklet reports a survey and focuses on the benefits of flexible hours of work. It was published by the Business and Professional Women's Foundation and is available from that organization. (EC)
The Credit Hour and Public Budgeting.
Wellman, Jane V.
2003-01-01
Discusses the ways the credit hour has come to be used by public funding systems in higher education. The literature review shows that the credit hour has become a barrier to innovation and a way to create systemic inequities between institutions or sectors in resource allocation. (SLD)
Nurses' extended work hours: Patient, nurse and organizational outcomes.
Kunaviktikul, W; Wichaikhum, O; Nantsupawat, A; Nantsupawat, R; Chontawan, R; Klunklin, A; Roongruangsri, S; Nantachaipan, P; Supamanee, T; Chitpakdee, B; Akkadechanunt, T; Sirakamon, S
2015-09-01
Nursing shortages have been associated with increased nurse workloads that may result in work errors, thus impacting patient, nurse and organizational outcomes. To examine for the first time in Thailand nurses' extended work hours (working more than 40 h per week) and their relationship to patient, nurse and organizational outcomes. Using multistage sampling, 1524 registered nurses working in 90 hospitals across Thailand completed demographic forms: the Nurses' Extended Work Hours Form; the Patient, Nurse, Organizational Outcomes Form; the Organizational Productivity Questionnaire and the Maslach Burnout Inventory. The data were analysed using descriptive statistics, Spearman's rank correlation and logistic regression. The average extended work hours of respondents were 18.82 h per week. About 80% worked two consecutive shifts. Extended work hours had a positive correlation with patient outcomes, such as patient identification errors, pressure ulcers, communication errors and patient complaints, and with the nurse outcomes of emotional exhaustion and depersonalization. Furthermore, we found a negative correlation between extended work hours and job satisfaction as a whole, intent to stay and organizational productivity. Nurses who worked extended hours of >16 h per week were significantly more likely to perceive all four adverse patient outcomes than participants working extended hours of ≤8 h per week. Patient outcomes were measured by respondents' self-reports. This may not always reflect the real occurrence of adverse events. Associations between extended work hours and outcomes for patients, nurses and the organization were found. The findings demonstrate that working two shifts (16 h) more than the regular work hours leads to negative outcomes for patients, nurses and the organization. Our findings add to increasing international evidence that nurses' poor working conditions result in negative outcomes for professionals, patients and health systems.
Maximum entropy principle for transportation
International Nuclear Information System (INIS)
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
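The abstract notes that the dependence formulation is equivalent to the standard constrained maximum-entropy model; that standard model is the doubly constrained gravity model, solvable by iterative proportional fitting. The sketch below uses invented costs and trip totals, not the paper's air-travel data.

```python
import numpy as np

# Hedged sketch of the *standard* entropy-maximizing trip distribution
# (doubly constrained gravity model), which the paper's dependence-based
# formulation is shown to be equivalent to. The entropy solution has the
# form T_ij = A_i * B_j * exp(-beta * c_ij); the balancing factors are
# found by iterative proportional fitting. Costs/totals are illustrative.

def maxent_trips(origins, destinations, cost, beta=0.1, iters=200):
    T = np.exp(-beta * cost)                          # deterrence seed
    for _ in range(iters):
        T *= (origins / T.sum(axis=1))[:, None]       # match origin totals
        T *= (destinations / T.sum(axis=0))[None, :]  # match destination totals
    return T

O = np.array([100.0, 200.0])            # trips produced at each origin
D = np.array([150.0, 150.0])            # trips attracted to each destination
C = np.array([[1.0, 3.0], [2.0, 1.0]])  # travel cost matrix
T = maxent_trips(O, D, C)               # row/column sums match O and D
```

In the paper's formulation the balancing constraints are replaced by dependence coefficients estimated by regression, which also lets information enter that cannot be written as constraints.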
Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific
Choi, Jae-Won; Cha, Yumi; Kim, Hae-Dong; Kang, Sung-Dae
2016-01-01
This study obtained the latitude at which tropical cyclones (TCs) reach maximum intensity and applied statistical change-point analysis to the time series of annual average values. The analysis found that the latitude of TC maximum intensity increased from 1999. To investigate the reason behind this phenomenon, the difference between the average latitude for 1999-2013 and that for 1977-1998 was analyzed. In a difference of 500 hPa streamline between the two ...
Lower wages for less hours? A simultaneous wage-hours model for Germany
Wolf, Elke
2000-01-01
In this paper the impact of working hours on the gross hourly wage rate of West German women is analyzed. We use a simultaneous wage-hours model which takes into account the participation decision. First, our estimates show that the hourly wage rate is strongly affected by the working hours. In order to avoid any assumptions about the functional form, we estimate linear spline functions. Second, we detect different wage-hours profiles for specific groups of individuals. Despite these differences...
The Economics of Work Schedules under the New Hours and Employment Taxes
Casey B. Mulligan
2014-01-01
Hours, employment, and income taxes are economically distinct, and all three are either introduced or expanded by the Affordable Care Act beginning in 2014. The tax wedges push some workers to work more hours per week (for the weeks that they are on a payroll), and others to work less, with an average weekly hours effect that tends to be small and may be in either direction. A conservative estimate of the law's average employment rate impact is negative three percent. The ACA's tax wedges and...
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10-6 g/mL, which translates to salinity uncertainty of 0.002 gms/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4-2) and cations (Na+, Mg+2, Ca+2, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4-2/Cl- and Mg+2/Na+, and 0.4% for Ca+2/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3-2. Apparent partial molar densities in seawater were
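The inversion at the heart of the method can be illustrated with a linearized equation of state. The haline sensitivity below is an approximate textbook value for illustration only; the study itself would rely on a full equation of state (e.g. TEOS-10) plus the composition corrections described above.

```python
# Hedged sketch of the density-to-salinity inversion: near standard
# seawater conditions, density varies almost linearly with salinity, so a
# measured density can be inverted for salinity. The constants below are
# approximate illustrative values, not those used in the study.

RHO_REF = 1025.0   # g/L, density of reference seawater at S_REF
S_REF = 35.0       # g/kg, reference salinity
DRHO_DS = 0.78     # g/L per (g/kg), approximate haline sensitivity

def salinity_from_density(rho_g_per_l):
    """Invert the linearized equation of state for salinity (g/kg)."""
    return S_REF + (rho_g_per_l - RHO_REF) / DRHO_DS

s = salinity_from_density(1025.78)
# With this sensitivity, a density precision of 2.3e-6 g/mL (2.3e-3 g/L)
# maps to a salinity uncertainty of roughly 2.3e-3 / 0.78 ~ 0.003 g/kg,
# the same order as the 0.002 g/kg quoted in the abstract.
```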
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
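The tree case that the paper generalizes is Fitch's small-parsimony algorithm, which counts the minimum number of state changes needed to explain the leaf states. A minimal sketch on an invented four-leaf tree:

```python
# Hedged sketch of Fitch's small-parsimony algorithm on a rooted binary
# tree, the tree-case baseline that the paper extends to networks (where
# the extra work is resolving conflicting assignments at reticulations).

def fitch(node, leaf_states):
    """node: leaf name (str) or (left, right) tuple.
    Returns (candidate_state_set, minimum_change_count)."""
    if isinstance(node, str):
        return {leaf_states[node]}, 0
    left_set, left_cost = fitch(node[0], leaf_states)
    right_set, right_cost = fitch(node[1], leaf_states)
    shared = left_set & right_set
    if shared:
        return shared, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1  # one substitution

tree = (("a", "b"), ("c", "d"))
states = {"a": "A", "b": "A", "c": "G", "d": "A"}
root_set, cost = fitch(tree, states)   # one change suffices: G arose once
```

Fitch assumes equal substitution costs; for the unequal cost matrices the abstract mentions, the Sankoff dynamic program plays the analogous role.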
Working hours and roster structures of surgical trainees in Australia and New Zealand.
O'Grady, Gregory; Loveday, Benjamin; Harper, Simon; Adams, Brandon; Civil, Ian D; Peters, Matthew
2010-12-01
The working hours of surgical trainees are a subject of international debate. Excessive working hours are fatiguing, and compromise performance, learning and work-life balance. However, reducing hours can impact on continuity of care, training experience and service provision. This study defines the current working hours of Australasian trainees, to inform the working hours debate in our regions. An online survey was conducted of all current Australasian trainees. Questions determined hours spent at work (AW) and off-site on-call (OC) per week, and roster structures were evaluated by training year, specialty and location. The response rate was 55.3%. Trainees averaged 61.4 ± 11.7 h/week AW, with 5% working ≥80 h. OC shifts were worked by 73.5%, for an average of 27.8 ± 14.3 h/week. Trainees of all levels worked similar hours (P = 0.10); however, neurosurgical trainees worked longer hours than most other specialties, some rotations involved longer hours (P = 0.01), and rural rotations involved more OC. Long days (>12 h) were worked by 86%; median frequency 1:4.4 days; median duration 15 h. OC shifts of 24-h duration were worked by 75%; median frequency 1:4.2 days; median sleep 5-7 h/shift; median uninterrupted sleep 3-5 h/shift. This study has quantified the working hours and roster structures of Australasian surgical trainees. By international standards, Australasian trainee working hours are around average. However, some rosters demand long hours and/or induce chronic sleep loss, placing some trainees at risk of fatigue. Ongoing efforts are needed to promote safe rostering practices. © 2010 The Authors. ANZ Journal of Surgery © 2010 Royal Australasian College of Surgeons.
Concentration fluctuations and averaging time in vapor clouds
Wilson, David J
2010-01-01
This book contributes to more reliable and realistic predictions by focusing on sampling times from a few seconds to a few hours. Its objectives include developing clear definitions of statistical terms, such as plume sampling time, concentration averaging time, receptor exposure time, and other terms often confused with each other or incorrectly specified in hazard assessments; and identifying and quantifying situations for which there is no adequate knowledge to predict concentration fluctuations in the near-field, close to sources, and far downwind where dispersion is dominated by atmospheric turbulence.
Effect of travoprost on 24-hour intraocular pressure in normal tension glaucoma
Directory of Open Access Journals (Sweden)
Yuya Nomura
2010-07-01
Full Text Available Yuya Nomura1, Shunsuke Nakakura2, Mitsuyasu Moriwaki1, Yasuhiro Takahashi1, Kunihiko Shiraki1. 1Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka City University, Japan; 2Department of Ophthalmology, Saiseikai Gose Hospital, Japan. Purpose: The effect of travoprost 0.004% on 24-hour intraocular pressure (IOP) was examined in patients with normal tension glaucoma (NTG). Subjects and methods: This study included 17 patients with newly diagnosed unilateral NTG. IOP was measured at three-hour intervals over 24 hours by Goldmann applanation tonometer in patients taking topical travoprost 0.004% and was compared retrospectively with 24-hour IOP data in untreated eyes. Results: IOP values were significantly reduced at individual time points after treatment (P < 0.01). Mean 24-hour IOP, maximum 24-hour IOP, minimum 24-hour IOP, and 24-hour IOP fluctuations at baseline (mean ± SD) were 12.9 ± 2.2 mmHg, 15.4 ± 2.7 mmHg, 10.5 ± 2.2 mmHg, and 4.9 ± 1.2 mmHg, respectively, and were significantly reduced to 10.3 ± 2.0 mmHg, 12.4 ± 2.5 mmHg, 8.5 ± 1.9 mmHg (all P < 0.001), and 3.9 ± 1.5 mmHg (P < 0.05), respectively, after treatment. The rate of IOP reduction greater than 20% was 58.8% (10 eyes) for maximum 24-hour IOP and 53.0% (nine eyes) for mean 24-hour IOP. Conclusion: Travoprost reduced IOP throughout the 24-hour study period, with over half of the eyes examined showing IOP reduction exceeding 20%. Keywords: 24-hour intraocular pressure, fluctuation, normal tension glaucoma, travoprost, Travatan Z
Effect of nursing care hours on the outcomes of Intensive Care assistance.
Directory of Open Access Journals (Sweden)
Tatiana do Altíssimo Nogueira
Full Text Available To correlate the average number of nursing care hours dedicated to Intensive Care Unit (ICU) patients with nursing care indicators. Cross-sectional, descriptive study conducted between 2011 and 2013. Data were obtained from the electronic records system and from the nursing staff daily schedule. Generalized Linear Models were used for analysis. A total of 1,717 patients were included in the study. The average NAS (Nursing Activities Score) value was 54.87. The average ratio between the number of nursing care hours provided to the patient and the number of nursing care hours required by the patient (hours ratio) was 0.87. Analysis of the correlation between nursing care indicators and the hours ratio showed that the indicators phlebitis and ventilator-associated pneumonia correlated significantly with the hours ratio; that is, the higher the hours ratio, the lower the incidence of phlebitis and ventilator-associated pneumonia. The number of nursing care hours directly impacts patient outcomes, which makes adjustment of nurse staffing levels essential.
TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS
Energy Technology Data Exchange (ETDEWEB)
Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.
2007-11-12
Mixing depth is an important quantity in the determination of air pollution concentrations. Fire-weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth based on potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated daily maximum mixing depth at the SRS over an extended period (4.75 years), derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows differences between the model versions to be seen, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.
Averaging for solitons with nonlinearity management
International Nuclear Information System (INIS)
Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.
2003-01-01
We develop an averaging method for solitons of the nonlinear Schrödinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations.
SO2 8 Hour Nonattainment Areas
U.S. Environmental Protection Agency — This data layer identifies areas in the U.S. where air pollution levels have not met the National Ambient Air Quality Standards (NAAQS) for Sulfur dioxide 8 hour...
Ozone Nonattainment Areas - 8 Hour (1997 Standard)
U.S. Environmental Protection Agency — This data layer identifies areas in the U.S. where air pollution levels have not met the National Ambient Air Quality Standards (NAAQS) for ozone over 8 hours and...
U.S. Hourly Precipitation Data
National Oceanic and Atmospheric Administration, Department of Commerce — Hourly Precipitation Data (HPD) is digital data set DSI-3240, archived at the National Climatic Data Center (NCDC). The primary source of data for this file is...
Long working hours and alcohol use
DEFF Research Database (Denmark)
Virtanen, Marianna; Jokela, Markus; Nyberg, Solja T
2015-01-01
OBJECTIVE: To quantify the association between long working hours and alcohol use. DESIGN: Systematic review and meta-analysis of published studies and unpublished individual participant data. DATA SOURCES: A systematic search of PubMed and Embase databases in April 2014 for published studies, supplemented with manual searches. Unpublished individual participant data were obtained from 27 additional studies. REVIEW METHODS: The search strategy was designed to retrieve cross sectional and prospective studies of the association between long working hours and alcohol use. Summary estimates were … (….20) in the analysis of prospective published and unpublished data. In the 18 studies with individual participant data it was possible to assess the European Union Working Time Directive, which recommends an upper limit of 48 hours a week. Odds ratios of new onset risky alcohol use for those working 49-54 hours …
U.S. Hourly Precipitation Data Publication
National Oceanic and Atmospheric Administration, Department of Commerce — This publication contains hourly precipitation amounts obtained from recording rain gages located at National Weather Service, Federal Aviation Administration, and...
DSCOVR Magnetometer Level 2 One Minute Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data
DSCOVR Magnetometer Level 2 One Second Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data
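The 1-second-to-1-minute reduction behind these Level 2 products is a simple boxcar average, the same average-sample type discussed for hourly values above. A minimal sketch, with synthetic data standing in for real DSCOVR samples:

```python
import numpy as np

# Sketch: reduce 1-second magnetometer samples to 1-minute simple (boxcar)
# averages. The synthetic series below is an assumption, not real data.

rng = np.random.default_rng(0)
bx = 5.0 + 0.5 * rng.standard_normal(3600)   # one hour of 1-s Bx values, nT

# Trim to a whole number of minutes, then average each 60-sample block.
n_min = bx.size // 60
minute_avg = bx[: n_min * 60].reshape(n_min, 60).mean(axis=1)

print(minute_avg.shape)  # -> (60,)
```

Because the hour divides evenly into 60-sample blocks, the mean of the minute averages equals the mean of the raw series; the averaging only suppresses sub-minute variance.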
Spacetime averaging of exotic singularity universes
International Nuclear Information System (INIS)
Dabrowski, Mariusz P.
2011-01-01
Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
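A spacetime average of a quantity A over a four-volume is conventionally defined as follows (a standard form; the exact weighting used in the paper may differ):

```latex
\langle A \rangle \;=\;
\frac{\int A(x)\,\sqrt{-g}\,\mathrm{d}^{4}x}{\int \sqrt{-g}\,\mathrm{d}^{4}x}
```

With this normalization, a divergent numerator against a finite four-volume gives the infinite averages attributed to big-rips, while a big-bang's vanishing contribution gives an average of zero.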
NOAA Average Annual Salinity (3-Zone)
California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
Bae, Sung-Heui; Yoon, Jangho
2014-10-01
To examine the degree to which states' work hour regulations for nurses (policies regarding mandatory overtime and consecutive work hours) decrease mandatory overtime practice and hours of work among registered nurses. We analyzed a nationally representative sample of registered nurses from the National Sample Survey of Registered Nurses for the years 2004 and 2008. We obtained difference-in-differences estimates of the effect of the nurse work hour policies on the likelihood of working mandatory overtime, working more than 40 hours per week, and working more than 60 hours per week for all staff nurses working in hospitals and nursing homes. The mandatory overtime and consecutive work hour regulations were significantly associated with a 3.9 percentage-point decrease in the likelihood of working overtime mandatorily and an 11.5 percentage-point decrease in the likelihood of working more than 40 hours per week, respectively. State mandatory overtime and consecutive work hour policies are effective in reducing nurse work hours. The consecutive work hour policy appears to be a better regulatory tool for reducing long work hours for nurses. © Health Research and Educational Trust.
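The difference-in-differences estimator named above can be illustrated with hypothetical group means; the numbers are invented, chosen only so the result echoes the 3.9 percentage-point figure in the abstract:

```python
# Difference-in-differences sketch: the policy effect is the change in the
# treated states minus the change in the control states over the same period.
# All four means below are made up for illustration, not survey estimates.

treated_before, treated_after = 0.180, 0.125   # states that adopted the rules
control_before, control_after = 0.170, 0.154   # states without the rules

did = (treated_after - treated_before) - (control_after - control_before)
print(round(did, 3))  # -> -0.039, i.e. a 3.9 percentage-point decrease
```

Subtracting the control-group change nets out secular trends that affected all states, which is the identifying assumption of the design.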
Site Specific Probable Maximum Precipitation Estimates and Professional Judgement
Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.
2015-12-01
State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and, generally, their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially
Improving consensus structure by eliminating averaging artifacts
Directory of Open Access Journals (Sweden)
KC Dukka B
2009-03-01
Full Text Available Abstract Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structures becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structures (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
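The RMSD used above to benchmark refined versus averaged structures reduces to a few lines once the coordinate sets are in the same frame; this sketch assumes the structures are already superposed (real pipelines first align them, e.g. with the Kabsch algorithm), and the toy coordinates are invented:

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation between two pre-aligned N x 3 coordinate
    arrays. No superposition is performed here; inputs share one frame."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

# Two-atom toy model shifted by 1 Angstrom along z relative to "native".
model  = [[0.0, 0.0, 0.0], [3.8, 0.0, 0.0]]
native = [[0.0, 0.0, 1.0], [3.8, 0.0, 1.0]]
print(rmsd(model, native))  # -> 1.0
```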
Two-dimensional maximum entropy image restoration
International Nuclear Information System (INIS)
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test p log p maximum-entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures
7.5 MeV High Average Power Linear Accelerator System for Food Irradiation Applications
International Nuclear Information System (INIS)
Eichenberger, Carl; Palmer, Dennis; Wong, Sik-Lam; Robison, Greg; Miller, Bruce; Shimer, Daniel
2005-09-01
In December 2004 the US Food and Drug Administration (FDA) approved the use of 7.5 MeV X-rays for irradiation of food products. The increased efficiency of treatment at 7.5 MeV (versus the previous maximum allowable X-ray energy of 5 MeV) will have a significant impact on processing rates and, therefore, reduce the per-package cost of irradiation using X-rays. Titan Pulse Sciences Division is developing a new food irradiation system based on this ruling. The irradiation system incorporates a 7.5 MeV electron linear accelerator (linac) that is capable of 100 kW average power. A tantalum converter is positioned close to the exit window of the scan horn. The linac is an RF standing-wave structure based on a 5 MeV accelerator used for X-ray processing of food products. The linac is powered by a 1300 MHz (L-band) klystron tube. The electrical drive for the klystron is a solid-state modulator that uses inductive energy storage and solid-state opening switches. The system is designed to operate 7000 hours per year. Keywords: RF accelerator, solid-state modulator, X-ray processing
40 CFR 76.11 - Emissions averaging.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...
Determinants of College Grade Point Averages
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
12 CFR 702.105 - Weighted-average life of investments.
2010-01-01
... investment funds. (1) For investments in registered investment companies (e.g., mutual funds) and collective investment funds, the weighted-average life is defined as the maximum weighted-average life disclosed, directly or indirectly, in the prospectus or trust instrument; (2) For investments in money market funds...
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate receiver function, with the maximum entropy as the rule to determine auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of error-predicting filter, and receiver function is then estimated. During extrapolation, reflective coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside window increases the resolution of receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver function in time-domain.
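The Toeplitz-equation/Levinson step described above is the standard Levinson-Durbin recursion; each order update produces a reflection coefficient whose magnitude stays below 1 for a valid autocorrelation, which is the stability property the abstract notes. A sketch with an illustrative AR(1) example (not seismogram data):

```python
import numpy as np

def levinson(r, order):
    """Levinson-Durbin recursion: solve the Toeplitz normal equations for the
    prediction-error filter a (with a[0] = 1) given autocorrelation r[0..order].
    Returns (filter coefficients, final prediction error, reflection coeffs)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    ks = []
    for m in range(1, order + 1):
        k = -np.dot(a[:m], r[m:0:-1]) / e     # reflection coefficient
        ks.append(float(k))
        a[1:m + 1] += k * a[m - 1::-1]        # order-update of the filter
        e *= (1.0 - k * k)                    # update the prediction error
    return a, e, ks

# AR(1) autocorrelation r[k] = 0.5**k: recursion recovers the filter [1, -0.5, 0].
a, e, ks = levinson(np.array([1.0, 0.5, 0.25]), 2)
print(np.round(a, 3))
```

In the receiver-function setting the resulting error-predicting filter is what gets iterated; here the example merely shows the recursion recovering a known AR model.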
Maximum likelihood convolutional decoding (MCD) performance due to system losses
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
Kumaraswamy autoregressive moving average models for double bounded environmental data
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distribution is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
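The Kumaraswamy distribution underlying KARMA has the closed-form CDF F(x) = 1 - (1 - x^a)^b on (0, 1), so both inverse-CDF sampling and the median (the quantity KARMA models dynamically) are available in closed form. A sketch with illustrative shape parameters, not values fitted to any data:

```python
import numpy as np

# Kumaraswamy(a, b) on (0, 1): F(x) = 1 - (1 - x**a)**b.

def kumaraswamy_sample(a, b, size, rng=None):
    """Inverse-CDF sampling: solve F(x) = u for u ~ Uniform(0, 1)."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(size=size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def kumaraswamy_median(a, b):
    """Closed-form median: set F(x) = 1/2."""
    return (1.0 - 2.0 ** (-1.0 / b)) ** (1.0 / a)

x = kumaraswamy_sample(2.0, 3.0, 100_000)
print(round(kumaraswamy_median(2.0, 3.0), 3))  # -> 0.454
# The sample median of x agrees closely with the closed-form median.
```

In KARMA proper, this median is tied through a link function to the autoregressive and moving-average structure rather than held fixed.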
Making residency work hour rules work.
Cohen, I Glenn; Czeisler, Charles A; Landrigan, Christopher P
2013-01-01
In July 2011, the ACGME implemented new rules that limit interns to 16 hours of work in a row, but continue to allow 2nd-year and higher resident physicians to work for up to 28 consecutive hours. Whether the ACGME's 2011 work hour limits went too far or did not go far enough has been hotly debated. In this article, we do not seek to re-open the debate about whether these standards get matters exactly right. Instead, we wish to address the issue of effective enforcement. That is, now that new work hour limits have been established, and given that the ACGME has been unable to enforce work hour limits effectively on its own, what is the best way to make sure the new limits are followed in order to reduce harm to residents, patients, and others due to sleep-deprived residents? We focus on three possible national approaches to the problem, one rooted in funding, one rooted in disclosure, and one rooted in tort law. © 2013 American Society of Law, Medicine & Ethics, Inc.
Long working hours and depressive symptoms
DEFF Research Database (Denmark)
Virtanen, Marianna; Jokela, Markus; Madsen, Ida Eh
2018-01-01
Objectives: This systematic review and meta-analysis combined published study-level data and unpublished individual-participant data with the aim of quantifying the relation between long working hours and the onset of depressive symptoms. Methods: We searched PubMed and Embase for published … In the majority of cohorts, long working hours was defined as working ≥55 hours per week. In multivariable-adjusted meta-analyses of 189 729 participants from 35 countries (96 275 men, 93 454 women, follow-up ranging from 1-5 years, 21 747 new-onset cases), there was an overall association of 1.14 (95% confidence interval (CI) 1.03-1.25) between long working hours and the onset of depressive symptoms, with significant evidence of heterogeneity (I2 = 45.1%, P = 0.004). A moderate association between working hours and depressive symptoms was found in Asian countries (1.50, 95% CI 1.13-2.01), a weaker association …
How extreme is extreme hourly precipitation?
Papalexiou, Simon Michael; Dialynas, Yannis G.; Pappas, Christoforos
2016-04-01
Accurate representation of precipitation at fine time scales (e.g., hourly), directly associated with flash flood events, is crucial in hydrological design and prediction. The upper part of a probability distribution, known as the distribution tail, determines the behavior of extreme events. In general, and loosely speaking, tails can be categorized in two families: the subexponential and the hyperexponential family, with the first generating more intense and more frequent extremes than the latter. In past studies, the focus has been mainly on daily precipitation, with the Gamma distribution being the most popular model. Here, we investigate the behavior of the tails of hourly precipitation by comparing the upper part of the empirical distributions of thousands of records with three general types of tails, corresponding to the Pareto, Lognormal, and Weibull distributions. Specifically, we use thousands of hourly rainfall records from all over the USA. The analysis indicates that heavier-tailed distributions describe the observed hourly rainfall extremes better than lighter tails. Traditional representations of the marginal distribution of hourly rainfall may significantly deviate from the observed behavior of extremes, with direct implications for hydroclimatic variable modelling and engineering design.
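The practical gap between the tail families can be seen by comparing exceedance probabilities far in the tail; a sketch with illustrative mean-one parameterizations (not fitted to any rainfall record):

```python
import numpy as np

# Compare P(X > x) for an exponential-type (light) tail and a Pareto-type
# (power-law, subexponential) tail, both scaled to mean 1. Parameters are
# illustrative assumptions for the comparison only.

def exp_sf(x):
    """Exponential survival function, mean 1."""
    return np.exp(-x)

def pareto_sf(x, alpha=3.0):
    """Pareto II (Lomax) survival function; scale alpha-1 gives mean 1."""
    return (1.0 + x / (alpha - 1.0)) ** -alpha

for x in (5.0, 10.0, 20.0):
    print(x, float(exp_sf(x)), float(pareto_sf(x)))
# The power-law exceedance probabilities dwarf the exponential ones at large x,
# which is why light-tailed models underestimate extreme hourly rainfall.
```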
Work hours and absenteeism among police officers.
Fekedulegn, Desta; Burchfiel, Cecil M; Hartley, Tara A; Baughman, Penelope; Charles, Luenda E; Andrew, Michael E; Violanti, John M
2013-01-01
In this study, the cross-sectional association of paid work hours with episodes of work absence was examined in a cohort of police officers. Study subjects were participants in the Buffalo Cardio-Metabolic Occupational Police Stress (BCOPS) study examined between 2004 and 2009. Among 395 study participants with complete data, day-by-day work history records during the one-year period prior to the date of examination were used to determine episodes of one-day and three-day work absence. Negative binomial regression analysis was used to estimate rate ratios (RR) of work absence. Analyses were also stratified by gender. A one-hour increase in total work hours was associated with a 5% reduction in the rate of one-day work absence (RR = 0.95, 95% CI: 0.92-0.98) and an 8% reduction in the rate of three-day work absence (RR = 0.92, 95% CI: 0.89-0.95). The association of total work hours with episodes of one-day work absence was significant only in men, while the association with episodes of three-day work absence was evident in both men and women. In conclusion, in this cohort of police officers, work hours were negatively associated with both durations of work absence (one day and ≥3 consecutive days).
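Because the rate ratio comes from a log-link model, it compounds multiplicatively per hour; a sketch of what the reported per-hour RR of 0.95 implies for a larger increment:

```python
# In a log-link (e.g., negative binomial) model, RR per unit exposure
# compounds multiplicatively: k extra hours scale the rate by RR**k.
# The 0.95 value is from the abstract; the 5-hour increment is illustrative.

rr_per_hour = 0.95
rr_5_hours = rr_per_hour ** 5
print(round(rr_5_hours, 3))  # -> 0.774, i.e. a ~23% lower one-day absence rate
```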
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (voltage of maximum power, current of maximum power, and maximum power) is plotted as a function of the time of day.
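The maximization described above can be sketched numerically with an illustrative single-diode panel model; the diode parameters are assumptions, not the article's, and an argmax over a fine voltage grid stands in for solving dP/dV = 0 symbolically:

```python
import numpy as np

def current(v, i_l=5.0, i_0=1e-9, v_t=1.0):
    """Single-diode I-V model: photocurrent minus diode leakage.
    Parameters are illustrative, chosen only to give a realistic-looking knee."""
    return i_l - i_0 * (np.exp(v / v_t) - 1.0)

v = np.linspace(0.0, 25.0, 200001)   # voltage grid, V
p = v * current(v)                   # power curve P(V) = V * I(V)
k = int(np.argmax(p))                # numerical stand-in for dP/dV = 0
print(round(float(v[k]), 2), round(float(p[k]), 1))  # MPP voltage and power
```

The maximum-power voltage sits just below the open-circuit voltage, where the product V*I peaks before the diode term collapses the current.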
Computation of the bounce-average code
International Nuclear Information System (INIS)
Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.
1977-01-01
The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
Long Working Hours in Korea: Based on the 2014 Korean Working Conditions Survey
Directory of Open Access Journals (Sweden)
Jungsun Park
2017-12-01
Full Text Available Background: Long working hours adversely affect worker safety and health. In 2004, Korea passed legislation that limited the work week to 40 hours, in an effort to improve quality-of-life and increase business competitiveness. This regulation was implemented in stages, first for large businesses and then for small businesses, from 2004 to 2011. We previously reported that average weekly working hours decreased from 2006 to 2010, based on the Korean Working Conditions Survey. Methods: In the present study, we examine whether average weekly working hours continued to decrease in 2014, based on the 2014 Korean Working Conditions Survey. Results: The results show that average weekly working hours among all groups of workers decreased in 2014 relative to previous years; however, self-employed individuals and employers (who are not covered by the new legislation) in specific service sectors worked >60 h/wk in 2014. Conclusion: The Korean government should prohibit employees from working excessive hours and should also attempt to achieve social and public consensus regarding work time reduction to improve the safety, health, and quality-of-life of all citizens, including those who are employers and self-employed. Keywords: employee, employer, Korea, self-employed, working hours
Working hours and cardiovascular disease in Korean workers: a case-control study.
Jeong, Inchul; Rhie, Jeongbae; Kim, Inah; Ryu, Innshil; Jung, Pil Kyun; Park, Yoo Seok; Lim, Yong-Su; Kim, Hyoung-Ryoul; Park, Shin-Goo; Im, Hyoung-June; Lee, Mi-Young; Won, Jong-Uk
2014-01-01
Long working hours can negatively impact a worker's health. The objective of this study was to examine the association between working hours and cardiovascular diseases (CVDs) and to compare the degree of risk across CVD subtypes in Korean workers. This was a case-control study of the patients registered in the Occupational Cardiovascular Diseases Surveillance 2010. The cases included 348 patients diagnosed with a CVD (123 cerebral infarction, 69 intracerebral hemorrhage, 57 subarachnoid hemorrhage, 99 acute myocardial infarction). Controls were 769 participants with no history of CVDs, matched for gender, age, type of occupation, and region. Participants' working hours in the previous week and their average working hours over the past three months were assessed to examine short-term and long-term effects. After adjusting for confounding factors, the short-term odds ratios (ORs) for CVDs were 2.66 (95% confidence interval (CI): 1.78-3.99) for working ≤40 hours, 1.85 (95% CI: 1.22-2.81) for working 50.1-60 hours, and 4.23 (95% CI: 2.81-6.39) for working >60 hours, compared with the 40.1-50-hour working group. The long-term ORs were 2.90 (95% CI: 1.86-4.52) for working ≤40 hours, 1.73 (95% CI: 1.03-2.90) for working 48.1-52 hours, and 3.46 (95% CI: 2.38-5.03) for working >52 hours, compared with the 40.1-48-hour working group. Long working hours are related to an increased risk of CVDs, and the degree of risk differs by CVD subtype. Short working hours are also related to an increased risk of CVDs. More prospective studies targeting specific disease risks are required.
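The building block behind such odds ratios is the 2x2 exposure-by-outcome table; a sketch computing an unadjusted OR with a Wald 95% CI from invented counts (not the study's data, which used multivariable adjustment):

```python
import math

# Unadjusted odds ratio with a Wald 95% confidence interval from a 2x2 table.
# The counts below are hypothetical, for illustration only.

def odds_ratio(a, b, c, d):
    """a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical: 60 of 348 cases vs 40 of 769 controls worked >60 h/week.
print(odds_ratio(60, 288, 40, 729))
```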
Long working hours and alcohol use
DEFF Research Database (Denmark)
Virtanen, Marianna; Jokela, Markus; Nyberg, Solja T
2015-01-01
OBJECTIVE: To quantify the association between long working hours and alcohol use. DESIGN: Systematic review and meta-analysis of published studies and unpublished individual participant data. DATA SOURCES: A systematic search of PubMed and Embase databases in April 2014 for published studies, supplemented with manual searches. Unpublished individual participant data were obtained from 27 additional studies. REVIEW METHODS: The search strategy was designed to retrieve cross sectional and prospective studies of the association between long working hours and alcohol use. Summary estimates were… …2%). There was no difference in these associations between men and women or by age or socioeconomic groups, geographical regions, sample type (population based v occupational cohort), prevalence of risky alcohol use in the cohort, or sample attrition rate. CONCLUSIONS: Individuals whose working hours exceed standard…
Longer Opening Hours for the Library
2001-01-01
The scientific information service. The CERN library is open 24 hours a day, 365 days a year. So how, you might be wondering, can they improve on that? The answer is in the detail. Although you can already use the library whenever you want, items can only be checked out when the front desk is staffed. A decision taken last week by the Scientific Information Policy Board now means that there will be someone at the desk throughout CERN's official working hours, with an extra 90 minutes at the end of the day so that people can check out material on their way home. In other words, the library will be open from 8:30 to 19:00, Monday to Friday. The library continues, of course, to be open 24 hours a day, all year round, and services provided via the digital library remain at your disposal day and night: http://library.cern.ch
Average subentropy, coherence and entanglement of random mixed quantum states
Energy Technology Data Exchange (ETDEWEB)
Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)
2017-02-15
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
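The induced-measure construction described above can be explored numerically. The sketch below is our own minimal illustration (the dimensions, sample count, and helper names are our choices, not from the paper): it samples random mixed states by partial-tracing Haar-random bipartite pure states and estimates the average relative entropy of coherence, C_r(ρ) = S(ρ_diag) − S(ρ).

```python
import numpy as np

def random_mixed_state(d, d_env, rng):
    """Induced-measure random state: partial trace over the environment
    of a Haar-random bipartite pure state on C^d x C^d_env."""
    psi = rng.normal(size=(d, d_env)) + 1j * rng.normal(size=(d, d_env))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T  # d x d density matrix, trace 1

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]        # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def rel_entropy_coherence(rho):
    # C_r(rho) = S(diag(rho)) - S(rho) in the computational basis
    diag = np.diag(np.diag(rho).real)
    return von_neumann_entropy(diag) - von_neumann_entropy(rho)

rng = np.random.default_rng(0)
d = 8
samples = [rel_entropy_coherence(random_mixed_state(d, d, rng))
           for _ in range(200)]
print(np.mean(samples), np.std(samples))  # mean coherence, small spread
```

The small standard deviation relative to the mean is a finite-size glimpse of the concentration-of-measure behaviour the paper establishes.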
Effect of tank geometry on its average performance
Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.
2018-03-01
The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their height-to-radius (H/R) ratio, as well as of the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6·10⁻² m³ as the central hole diameter of the ribs is varied. It has been shown that increasing the H/R ratio in tanks with smooth inner walls up to the limiting values significantly increases average tank productivity and reduces filling time. Increasing the H/R ratio of a tank of volume 1.0 m³ to the limiting values (in comparison with the standard tank having H/R equal to 3.49) raises tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and minimum filling time are reached for the tank of volume 6·10⁻² m³ with a central rib hole diameter of 6.4·10⁻² m.
Rotational averaging of multiphoton absorption cross sections
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic
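The idea behind trajectory averaging can be seen on a toy Robbins–Monro recursion. The sketch below is our own minimal example, not the paper's algorithm: the raw iterate chases a noisy target with a slowly decaying step size, while the running average of the trajectory gives a steadier estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 2.5           # unknown target: root of h(theta) = theta - mu
theta = 0.0        # raw stochastic-approximation iterate
avg = 0.0          # trajectory (Polyak-Ruppert) average
n_steps = 20000
for n in range(1, n_steps + 1):
    x = mu + rng.normal()               # noisy observation with mean mu
    theta -= n ** -0.7 * (theta - x)    # slowly decaying step size
    avg += (theta - avg) / n            # running average of the trajectory
print(theta, avg)
```

The averaged iterate typically fluctuates far less than the raw one, which is the efficiency gain the trajectory averaging estimator formalizes.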
Should the average tax rate be marginalized?
Czech Academy of Sciences Publication Activity Database
Feldman, N. E.; Katuščák, Peter
-, No. 304 (2006), pp. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords: tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
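One family the book covers, ordered weighted averaging (OWA), is compact enough to sketch. This is a generic textbook-style implementation of the standard OWA definition, not code from the book; the key point is that the weights attach to the *ranks* of the sorted inputs, not to fixed argument positions.

```python
def owa(weights, values):
    """Ordered weighted average: the weights are applied to the sorted
    inputs (largest first), so the same weight vector interpolates
    between min, max, and the arithmetic mean."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

print(owa([1, 0, 0], [3, 1, 2]))          # all weight on the largest: max
print(owa([0, 0, 1], [3, 1, 2]))          # all weight on the smallest: min
print(owa([1/3, 1/3, 1/3], [3, 1, 2]))    # uniform weights: arithmetic mean
```

Choosing intermediate weight vectors yields the whole spectrum of averaging behaviour between min and max, which is why OWA is a workhorse in aggregation theory.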
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Mansi, Ishak A
2011-01-01
Background In a recent report, the Institute of Medicine recommended more restrictions on residents' working hours. Several problems exist with a system that places a weekly limit on resident duty hours: (1) it assumes the presence of a linear relationship between hours of work and patient safety; (2) it fails to consider differences in intensity among programs; (3) it does not address increases in the scientific content of medicine; and (4) it places the burden of enforcing the duty hour limits on the Accreditation Council for Graduate Medical Education. Proposal An innovative method of calculating credit hours for graduate medical education would shift the focus from “years of residency” to “hours of residency.” For example, internal medicine residents would be required to complete 8640 total training hours (assuming 60 hours per week for 48 weeks annually) instead of the traditional 3 years. This method of counting training hours is used by other professions, such as the Intern Development Program of the National Council of Architectural Registration Boards. The proposed approach would allow residents and program directors to pace training based on individual capabilities. Standards for resident education should include the average number of patients treated in each setting (inpatient or outpatient). A possible set of “multipliers” based on these parameters, and possibly others such as resident evaluation, is devised to calculate the “final adjusted accredited hours” that count toward graduation. Anticipated Benefits Substituting “years of training” with “hours of training” may resolve many of the concerns with the current residency education model, as well as adapt to the demands of residents' personal lives. It also may allow residents to pace their training according to their capabilities and learning styles, and contribute to reflective learning and better quality education. PMID:22379516
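The proposed accounting reduces to simple arithmetic, sketched below. The multiplier values and the function name are invented for illustration; the article only proposes that such multipliers might exist, without publishing numbers.

```python
# Hypothetical illustration of the proposed "hours of residency" accounting.
WEEKLY_HOURS = 60
WEEKS_PER_YEAR = 48
YEARS = 3
TOTAL_REQUIRED = WEEKLY_HOURS * WEEKS_PER_YEAR * YEARS   # 8640 hours

def adjusted_hours(raw_hours, patient_load_multiplier=1.0,
                   evaluation_multiplier=1.0):
    """Final adjusted accredited hours counting toward graduation.
    The multiplier semantics are placeholders, not published values."""
    return raw_hours * patient_load_multiplier * evaluation_multiplier

print(TOTAL_REQUIRED)                # 8640
print(adjusted_hours(2880, 1.25))    # a high-intensity year credited as 3600.0
```

Under such a scheme a resident in a high-intensity program would accumulate the required total faster, which is the pacing flexibility the proposal emphasizes.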
Ethical aspects of limiting residents' work hours.
Wiesing, Urban
2007-09-01
The regulation of residents' work hours involves several ethical conflicts which need to be systematically analysed and evaluated. ARGUMENTS AND CONCLUSION: The most important ethical principle when regulating work hours is to avoid the harm resulting from the over-work of physicians and from an excessive division of labour. Additionally, other ethical principles have to be taken into account, in particular the principles of nonmaleficence and beneficence for future patients and for physicians. The article presents arguments for balancing the relevant ethical principles and analyses the structural difficulties that occur unavoidably in any regulation of the complex activities of physicians.
Average Bandwidth Allocation Model of WFQ
Directory of Open Access Journals (Sweden)
Tomáš Balogh
2012-01-01
Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We prove the model outcome with examples and simulation results using NS2 simulator.
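A hedged sketch of how an average-bandwidth assignment of this kind can be computed: the code below implements a generic weighted max-min iteration (a flow capped by its own input rate frees capacity that is re-split among the others by weight), not the authors' exact model; the flow names, weights, and rates are invented.

```python
def wfq_shares(link_rate, weights, demands):
    """Iterative average-bandwidth assignment for a WFQ-like scheduler.
    Flows whose input rate is below their weighted fair share keep their
    demand; the freed capacity is redistributed by weight."""
    share = dict.fromkeys(weights, 0.0)
    active = set(weights)
    capacity = float(link_rate)
    while active:
        total_w = sum(weights[f] for f in active)
        changed = False
        for f in list(active):
            fair = capacity * weights[f] / total_w
            if demands[f] <= fair:       # flow needs less than its share
                share[f] = demands[f]
                capacity -= demands[f]
                active.remove(f)
                changed = True
        if not changed:                  # remaining flows are bottlenecked
            for f in active:
                share[f] = capacity * weights[f] / total_w
            break
    return share

weights = {"voice": 4, "video": 2, "data": 1}
demands = {"voice": 10.0, "video": 60.0, "data": 100.0}   # input rates, Mbit/s
share = wfq_shares(100.0, weights, demands)
print(share)
```

On this example the voice and video flows are satisfied outright and the data flow absorbs the remaining capacity, so the shares sum exactly to the link rate.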
Nonequilibrium statistical averages and thermo field dynamics
International Nuclear Information System (INIS)
Marinaro, A.; Scarpetta, Q.
1984-01-01
An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
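For context, the brute-force Monte-Carlo bootstrap that the analytic approximation is designed to avoid looks like the sketch below (our own toy example with a trivial "model", the sample mean; the data and sizes are invented). Each resample "retrains" the model once, which is exactly the cost the paper's method sidesteps.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=1.0, scale=2.0, size=200)   # synthetic observations

# Monte-Carlo bootstrap: draw resamples with replacement and refit
# the (here trivial) estimator on each one.
n_boot = 2000
stats = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(data, size=data.size, replace=True)
    stats[b] = resample.mean()          # one "retraining" per resample

print(stats.mean())   # bootstrap average of the estimator
print(stats.std())    # bootstrap estimate of its standard error
```

With an expensive model (e.g. a Gaussian process, as in the paper's demonstration) each of the 2000 refits is costly, which motivates replacing the loop with an analytic average.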
Bias caused by water adsorption in hourly PM measurements
Directory of Open Access Journals (Sweden)
G. Kiss
2017-07-01
Full Text Available Beta-attenuation monitors are used worldwide to monitor PM mass concentration with high temporal resolution. Hourly PM10 and PM2.5 dry mass concentrations are publicly available with the tacit assumption that water is effectively removed prior to the measurement. However, as both the filter material of the monitor and the aerosol particles are capable of retaining a significant amount of water even at low relative humidities, the basic assumption may not be valid, resulting in significant bias in reported PM10 and PM2.5 concentrations. Here we show that in PM10 measurement, particle-free air can produce apparent hourly average PM concentrations in the range of −13 to +21 µg m⁻³ under conditions of fluctuating relative humidity. Positive and negative apparent readings are observed with increasing and decreasing relative humidities, respectively. Similar phenomena have been observed when the instrument filter was previously loaded with atmospheric aerosol. As a result, the potential measurement biases in hourly readings arising from the interaction with water may be in the range of −53% to +69%.
Bias caused by water adsorption in hourly PM measurements
Kiss, Gyula; Imre, Kornélia; Molnár, Ágnes; Gelencsér, András
2017-07-01
Beta-attenuation monitors are used worldwide to monitor PM mass concentration with high temporal resolution. Hourly PM10 and PM2.5 dry mass concentrations are publicly available with the tacit assumption that water is effectively removed prior to the measurement. However, as both the filter material of the monitor and the aerosol particles are capable of retaining a significant amount of water even at low relative humidities, the basic assumption may not be valid, resulting in significant bias in reported PM10 and PM2.5 concentrations. Here we show that in PM10 measurement, particle-free air can produce apparent hourly average PM concentrations in the range of −13 to +21 µg m⁻³ under conditions of fluctuating relative humidity. Positive and negative apparent readings are observed with increasing and decreasing relative humidities, respectively. Similar phenomena have been observed when the instrument filter was previously loaded with atmospheric aerosol. As a result, the potential measurement biases in hourly readings arising from the interaction with water may be in the range of −53% to +69%.
Neighborhood walkability, income, and hour-by-hour physical activity patterns.
Arvidsson, Daniel; Eriksson, Ulf; Lönn, Sara Larsson; Sundquist, Kristina
2013-04-01
This study aimed to investigate both the mean daily physical activity and the hour-by-hour physical activity patterns across the day using accelerometry, and how they are associated with neighborhood walkability and individual income. Moderate physical activity (MPA) was assessed by accelerometry in 2252 adults in the city of Stockholm, Sweden. Neighborhood walkability (residential density, street connectivity, and land use mix) was objectively assessed within 1000 m network buffers around the participants' residence, and individual income was self-reported. Living in a high walkability neighborhood was associated with more mean daily MPA compared with living in a low walkability neighborhood on weekdays and weekend days. Hour-by-hour analyses showed that this association appeared mainly in the afternoon/early evening during weekdays, whereas it appeared across the middle of the day during weekend days. Individual income was associated with mean daily MPA on weekend days. On weekdays, the hour-by-hour analyses showed that high income was associated with more MPA around noon and in late afternoon/early evening, whereas low income was associated with more MPA in the hours before noon and in the early afternoon. During the weekend, high income was more consistently associated with higher MPA. Hour-by-hour accelerometry physical activity patterns provide a more comprehensive picture of the associations of neighborhood walkability and individual income with physical activity, and of the variability of these associations across the day.
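The difference between a single daily summary and an hour-by-hour pattern is easy to see on simulated data. The sketch below is our own illustration (the study used real accelerometer counts; here we fake one day of minute-level data): the same minutes yield both a daily mean and a 24-value hourly profile.

```python
import numpy as np

# Hypothetical minute-level activity counts for one day (1440 minutes);
# the Poisson rate is an invented stand-in for accelerometer output.
rng = np.random.default_rng(7)
minutes = rng.poisson(lam=3.0, size=1440).astype(float)

hourly = minutes.reshape(24, 60).mean(axis=1)   # hour-by-hour pattern
daily_mean = minutes.mean()                     # single daily summary

peak_hour = int(hourly.argmax())
print(daily_mean, peak_hour, hourly[peak_hour])
```

Two days with identical daily means can have very different hourly profiles, which is why the hour-by-hour view exposes associations (e.g. an afternoon peak) that the daily mean averages away.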
Ejiri, A.
The second questionnaire survey of scientists and engineers was carried out in 2007, and the status of Japanese scientists and engineers was analyzed and reported. A part of the data was reanalyzed from the viewpoint of work-life balance. In particular, office/laboratory staying hours and home working hours were analyzed, and their dependence on various factors was investigated. It was found that these hours depend on gender, marital status, number of children, employment status and age. In addition, the total hours tend to be kept constant regardless of these factors.
Wage and Hour Farm Labor Laws.
Hertel, Catherine
This paper, by a teacher of migrants, summarizes various farm labor laws and child labor laws pertaining to migrant and seasonal workers. The Migrant and Seasonal Agricultural Worker Protection Act of 1983 provides workers with assurances about pay, hours, and working conditions, including safety and health. This legislation permits anyone…
The 24-Hour Mathematical Modeling Challenge
Galluzzo, Benjamin J.; Wendt, Theodore J.
2015-01-01
Across the mathematics curriculum there is a renewed emphasis on applications of mathematics and on mathematical modeling. Providing students with modeling experiences beyond the ordinary classroom setting remains a challenge, however. In this article, we describe the 24-hour Mathematical Modeling Challenge, an extracurricular event that exposes…
Variable Work Hours--The MONY Experience
Fields, Cynthia J.
1974-01-01
An experiment with variable work hours in one department of a large company was so successful that it has become standard procedure in various corporate areas, both staff and line. The result? Increased production, fewer errors, improved employee morale, and a significant reduction in lateness and absenteeism. (Author)
The 24-hour economy not widespread
Smulders, P.
2006-01-01
Some 74% of workers in the Netherlands usually work standard hours, while 15% normally work at weekends, 14% in the evening and 4% at night. Weekend work is frequently carried out by younger people. The sectors most associated with weekend work are: policing, nursing and elder care, hotels and
Installation Service - Changes in opening hours
GS Department
2010-01-01
For organizational matters, please note that, as from 15 March 2010, the Installation Service will have new opening hours. The new schedule will be from 14:00 to 17:00 (Monday to Friday). Contact persons are: Martine Briant, Karine Robert and Claudia Bruggmann. The office address remains 73-3-014. Installation Service
Estimator's electrical man-hour manual
Page, John S
1999-01-01
This manual's latest edition continues to be the best source available for making accurate, reliable man-hour estimates for electrical installation. This new edition is revised and expanded to include installation of electrical instrumentation, which is used in monitoring various process systems.
Improved averaging for non-null interferometry
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
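A simplified version of the rejection-and-pruning logic can be sketched as follows. This is our own minimal reconstruction under stated assumptions (voids and unwrapping artifacts are marked as NaN; the rejection threshold is a made-up parameter), not the authors' algorithm, which also handles alignment drift.

```python
import numpy as np

def robust_phase_average(maps, void_fraction_limit=0.2):
    """Average a stack of phase maps (NaN marks void/unreliable pixels).
    Maps whose defect area exceeds the limit are rejected outright;
    surviving NaN pixels are simply excluded from the per-pixel stats."""
    kept = [m for m in maps
            if np.isnan(m).mean() <= void_fraction_limit]
    stack = np.stack(kept)
    mean = np.nanmean(stack, axis=0)   # per-pixel average over kept maps
    std = np.nanstd(stack, axis=0)     # per-pixel variability estimate
    return mean, std, len(kept)

rng = np.random.default_rng(3)
maps = [rng.normal(0.0, 0.01, size=(64, 64)) for _ in range(10)]
maps[4][:, :40] = np.nan          # large-area defect: whole map rejected
maps[7][10:12, 10:12] = np.nan    # small void: only those pixels pruned
mean, std, n_used = robust_phase_average(maps)
print(n_used)   # 9
```

The single badly-defective map no longer spoils the average, and the small void only thins the sample at four pixels instead of biasing the whole variance estimate.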
Energy Technology Data Exchange (ETDEWEB)
Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)
2011-04-07
The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū_P, the average, Ū, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average-peak (k_PPV,kVp) and the average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated (according to the proposed method) PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
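The conversion chain can be sketched in a few lines. Note the hedging: the paper supplies the actual regression equations and coefficients; the functional form and every number below are placeholders invented for illustration, not the published values.

```python
def conversion_factor(tube_kv, ripple, coeffs):
    """Hypothetical regression form for the k_PPV conversion factor.
    The linear form and the coefficients are placeholders; the paper
    tabulates real regression coefficients per voltage and ripple."""
    a, b, c = coeffs
    return a + b * ripple + c * tube_kv

def practical_peak_voltage(reading_kv, calibration_coefficient, k_ppv):
    """PPV = calibrated kV-meter reading x conversion factor."""
    return reading_kv * calibration_coefficient * k_ppv

# Example: an average-peak reading of 81.0 kV at 4% ripple, with an
# invented calibration coefficient of 1.002.
k = conversion_factor(81.0, 0.04, (0.97, 0.10, 0.0001))
ppv = practical_peak_voltage(81.0, 1.002, k)
print(round(ppv, 2))
```

The structure mirrors the paper's recipe: calibrate the raw reading, look up the conversion factor for the given tube voltage and ripple, multiply.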
Long working hours and physical activity.
Angrave, David; Charlwood, Andy; Wooden, Mark
2015-08-01
It is widely believed that persons employed in jobs demanding long working hours are at greater risk of physical inactivity than other workers, primarily because they have less leisure time available to undertake physical activity. The aim of this study was to test this hypothesis using prospective data obtained from a nationally representative sample of employed persons. Longitudinal data from the Household, Income and Labour Dynamics in Australia Survey (93,367 observations from 17,893 individuals) were used to estimate conditional fixed effects logistic regression models of the likelihood of moderate or vigorous physical exercise for at least 30 min, at least four times a week. No significant associations between long working hours and the incidence of healthy levels of physical activity were uncovered once other exogenous influences on activity levels were controlled for. The odds of men or women who usually work 60 or more hours per week exercising at healthy levels were 6% and 11% less, respectively, than those of comparable persons working a more standard 35-40 h/week; however, neither estimate was significantly different from 0 at 95% CI. The findings suggest that there is no trade-off between long working hours and physical activity in Australia. It is argued that these findings are broadly consistent with previous research studies from Anglo-Saxon countries (where long working hours are pervasive) that employed large nationally representative samples. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.
Asynchronous Gossip for Averaging and Spectral Ranking
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
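The classical pairwise-averaging gossip update that both variants build on can be sketched in a few lines. This toy is our own simplification (a fully connected network with centrally drawn pairs, not the asynchronous, constrained setting the paper analyzes): two randomly chosen nodes repeatedly replace their values with their mean.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 10.0, size=20)   # one value per node
target = x.mean()                     # the average gossip should reach

# Classical gossip: pick a random pair and average them in place.
# Each step conserves the sum, so the global mean is invariant while
# the spread of values contracts toward zero.
for _ in range(5000):
    i, j = rng.choice(x.size, size=2, replace=False)
    x[i] = x[j] = (x[i] + x[j]) / 2.0

print(np.ptp(x))               # spread shrinks toward 0
print(abs(x.mean() - target))  # the mean is conserved at every step
```

The paper's point is that under genuinely asynchronous updates this invariance can fail (the scheme may drift away from the true average), motivating their reinforcement-learning correction.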
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
An approach to averaging digitized plantagram curves.
Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B
1994-07-01
The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
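The equiangular-ray idea can be sketched as follows. This is our own simplified single-centre version (the paper uses an alignment axis, two ray centres, and two overlapping arcs): each outline is resampled as radii along fixed angles from its centroid, after which outlines of different specimens can be averaged ray-by-ray.

```python
import numpy as np

def radial_profile(points, n_rays=72):
    """Resample a closed digitized outline as radii along equiangular
    rays from its centroid, so different specimens' curves become
    comparable vectors that can be averaged per ray."""
    centre = points.mean(axis=0)
    d = points - centre
    angles = np.arctan2(d[:, 1], d[:, 0])
    radii = np.hypot(d[:, 0], d[:, 1])
    order = np.argsort(angles)
    ray_angles = np.linspace(-np.pi, np.pi, n_rays, endpoint=False)
    # periodic interpolation wraps the curve around -pi/+pi
    return np.interp(ray_angles, angles[order], radii[order],
                     period=2 * np.pi)

# Two noisy digitized circles of radius ~5; their average profile is ~5.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
rng = np.random.default_rng(11)
curves = [np.column_stack([5 * np.cos(theta), 5 * np.sin(theta)])
          + rng.normal(0, 0.05, size=(200, 2)) for _ in range(2)]
avg = np.mean([radial_profile(c) for c in curves], axis=0)
print(avg.mean())
```

Grouping specimens by foot length before averaging, as the paper does (±2.25 mm bins), then amounts to averaging these radial vectors within each group.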
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
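The "moving average of the previous decade" construction can be demonstrated on synthetic data. Everything below is invented for illustration (a random-walk stand-in for the misery index, and a literary series built by construction to echo an 11-year trailing window); it shows only the mechanics of scanning window lengths for the best fit, not the paper's actual data.

```python
import numpy as np

def trailing_average(series, window):
    """Moving average of the *previous* `window` values (not centred):
    the value at year t averages years t-window .. t-1."""
    out = np.full(series.size, np.nan)
    for t in range(window, series.size):
        out[t] = series[t - window:t].mean()
    return out

rng = np.random.default_rng(2)
# Synthetic stand-in for the economic misery index (inflation + unemployment)
misery = 10 + np.cumsum(rng.normal(0, 1, size=80))
# A "literary" series that, by construction, echoes the previous 11 years:
literary = trailing_average(misery, 11) + rng.normal(0, 1.0, size=80)

corr = {}
for w in (5, 11, 20):
    lag = trailing_average(misery, w)
    mask = ~np.isnan(literary) & ~np.isnan(lag)
    corr[w] = np.corrcoef(literary[mask], lag[mask])[0, 1]
print({w: round(r, 3) for w, r in corr.items()})
```

On real data the analogous scan is what locates the goodness-of-fit peak at an 11-year window.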
Books Average Previous Decade of Economic Misery
Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
Exploiting scale dependence in cosmological averaging
International Nuclear Information System (INIS)
Mattsson, Teppo; Ronkainen, Maria
2008-01-01
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Aperture averaging in strong oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
A receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor in strong oceanic turbulence is also presented.
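By definition, the aperture averaging factor compares the flux variance on a finite aperture with the point-aperture scintillation index; a minimal sketch of that definition follows, with invented measurement values rather than the paper's modified-Rytov evaluation.

```python
import numpy as np

def aperture_averaging_factor(flux_var, point_var):
    """A(D) = sigma_I^2(D) / sigma_I^2(0): ratio of the irradiance flux
    variance on an aperture of diameter D to the point-aperture
    scintillation index. A < 1 means the aperture smooths fluctuations."""
    return np.asarray(flux_var) / point_var

# hypothetical measurements: flux variance falls as the aperture grows
diameters = np.array([0.001, 0.01, 0.05, 0.10])      # aperture diameters, m
flux_var  = np.array([0.95, 0.60, 0.20, 0.08])       # sigma_I^2(D)
A = aperture_averaging_factor(flux_var, point_var=0.95)
```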
Jornada de trabalho: o exemplo europeu Hours of work: the European example
Directory of Open Access Journals (Sweden)
Fernando Augusto M. de Mattos
2000-01-01
Full Text Available This paper examines the role played by the decline in average annual hours of work per person in employment in the behavior of the unemployment rate in Europe since World War II. The results show that, during the Golden Age of Capitalism in the twentieth century, the pronounced reduction in average annual hours of work per person in employment (which can be traced to legal action or to negotiation between the social partners) was very important in keeping the unemployment rate at very low levels in the main European countries. Nevertheless, after the eighties, there was an important decline in the rate of reduction of average annual hours of work per person in employment. This fact explains a great part of the rise in unemployment rates in European countries since then.
Regional averaging and scaling in relativistic cosmology
International Nuclear Information System (INIS)
Buchert, Thomas; Carfora, Mauro
2002-01-01
Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Average-case analysis of numerical problems
2000-01-01
The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.
Grassmann Averages for Scalable Robust PCA
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
Long Working Hours in Korea: Based on the 2014 Korean Working Conditions Survey.
Park, Jungsun; Kim, Yangho; Han, Boyoung
2017-12-01
Long working hours adversely affect worker safety and health. In 2004, Korea passed legislation that limited the work week to 40 hours, in an effort to improve quality-of-life and increase business competitiveness. This regulation was implemented in stages, first for large businesses and then for small businesses, from 2004 to 2011. We previously reported that average weekly working hours decreased from 2006 to 2010, based on the Korean Working Conditions Survey. In the present study, we examine whether average weekly working hours continued to decrease in 2014 based on the 2014 Korean Working Conditions Survey. The results show that average weekly working hours among all groups of workers decreased in 2014 relative to previous years; however, self-employed individuals and employers (who are not covered by the new legislation) in the specific service sectors worked > 60 h/wk in 2014. The Korean government should prohibit employees from working excessive hours and should also attempt to achieve social and public consensus regarding work time reduction to improve the safety, health, and quality-of-life of all citizens, including those who are employers and self-employed.
Revealing the Maximum Strength in Nanotwinned Copper
DEFF Research Database (Denmark)
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Modelling maximum canopy conductance and transpiration in ...
African Journals Online (AJOL)
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the annual average potential wind energy resource in kilowatt hours per square kilometer per day for each 12-digit Hydrologic Unit...
Assessing Hourly Precipitation Forecast Skill with the Fractions Skill Score
Zhao, Bin; Zhang, Bo
2018-02-01
Statistical methods for category (yes/no) forecasts, such as the Threat Score, are typically used in the verification of precipitation forecasts. However, these standard methods are affected by the so-called "double-penalty" problem caused by slight displacements in either space or time with respect to the observations. Spatial techniques have recently been developed to help solve this problem. The fractions skill score (FSS), a neighborhood spatial verification method, directly compares the fractional coverage of events in windows surrounding the observations and forecasts. We applied the FSS to hourly precipitation verification by taking hourly forecast products from the GRAPES (Global/Regional Assimilation Prediction System) regional model and quantitative precipitation estimation products from the National Meteorological Information Center of China during July and August 2016, and investigated the difference between these results and those obtained with the traditional category score. We found that the model spin-up period affected the assessment of stability. Systematic errors played an insignificant role in the fraction Brier score and could be ignored. The dispersion of observations followed a diurnal cycle, and the standard deviation of the forecast had a similar pattern to the reference maximum of the fraction Brier score. The correlation coefficient between the forecasts and the observations behaved similarly to the FSS; that is, the FSS may be a useful index of correlation. Compared with the traditional skill score, the FSS has obvious advantages in distinguishing differences in precipitation time series, especially in the assessment of heavy rainfall.
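The FSS itself is simple to compute; the sketch below follows the standard neighborhood-fractions definition (Roberts and Lean, 2008) on an invented pair of fields, not GRAPES output. It also illustrates the double-penalty point: a slightly displaced forecast scores poorly at grid scale but well once the neighborhood window exceeds the displacement.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(fcst, obs, threshold, window):
    """Fractions Skill Score for one neighbourhood window size.
    fcst, obs: 2-D precipitation fields; threshold defines the event
    (e.g. mm/h); window: side length of the square neighbourhood."""
    fb = (fcst >= threshold).astype(float)      # binary event fields
    ob = (obs >= threshold).astype(float)
    ff = uniform_filter(fb, size=window)        # fractional event coverage
    of = uniform_filter(ob, size=window)
    mse = np.mean((ff - of) ** 2)               # the "fraction Brier score"
    mse_ref = np.mean(ff ** 2) + np.mean(of ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# a forecast rain patch displaced by 2 grid cells from the observed one
obs = np.zeros((50, 50)); obs[20:25, 20:25] = 5.0
fcst = np.zeros((50, 50)); fcst[22:27, 22:27] = 5.0
score_grid = fss(fcst, obs, 1.0, 1)             # grid scale: heavily penalised
score_wide = fss(fcst, obs, 1.0, 11)            # wide window: displacement forgiven
```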
Stochastic generation of hourly rainstorm events in Johor
International Nuclear Information System (INIS)
Nojumuddin, Nur Syereena; Yusof, Fadhilah; Yusop, Zulkifli
2015-01-01
Engineers and researchers in water-related studies are often faced with the problem of insufficiently long rainfall records. Practical and effective methods must be developed to generate unavailable data from the limited available data. Therefore, this paper presents a Monte Carlo based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best fit of storm characteristics. Using Maximum Likelihood Estimation (MLE) and the Anderson–Darling goodness-of-fit test, the lognormal appeared to be the best-fitting rainfall distribution; the Monte Carlo simulation was therefore based on the lognormal distribution. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of the observed rainstorm events under 10 years and simulated rainstorm events under 30 years of rainfall records with those under the entire 40 years of observed rainfall data, based on the hourly rainfall data at station J1 in Johor over the period 1972–2011. The absolute percentage errors of the duration–depth, duration–inter-event time and depth–inter-event time relationships were used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity–duration–frequency relationships in Johor.
Stochastic generation of hourly rainstorm events in Johor
Nojumuddin, Nur Syereena; Yusof, Fadhilah; Yusop, Zulkifli
2015-02-01
Engineers and researchers in water-related studies are often faced with the problem of insufficiently long rainfall records. Practical and effective methods must be developed to generate unavailable data from the limited available data. Therefore, this paper presents a Monte Carlo based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best fit of storm characteristics. Using Maximum Likelihood Estimation (MLE) and the Anderson–Darling goodness-of-fit test, the lognormal appeared to be the best-fitting rainfall distribution; the Monte Carlo simulation was therefore based on the lognormal distribution. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of the observed rainstorm events under 10 years and simulated rainstorm events under 30 years of rainfall records with those under the entire 40 years of observed rainfall data, based on the hourly rainfall data at station J1 in Johor over the period 1972-2011. The absolute percentage errors of the duration-depth, duration-inter-event time and depth-inter-event time relationships were used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity-duration-frequency relationships in Johor.
Stochastic generation of hourly rainstorm events in Johor
Energy Technology Data Exchange (ETDEWEB)
Nojumuddin, Nur Syereena; Yusof, Fadhilah [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Yusop, Zulkifli [Institute of Environmental and Water Resources Management, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia)
2015-02-03
Engineers and researchers in water-related studies are often faced with the problem of insufficiently long rainfall records. Practical and effective methods must be developed to generate unavailable data from the limited available data. Therefore, this paper presents a Monte Carlo based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best fit of storm characteristics. Using Maximum Likelihood Estimation (MLE) and the Anderson–Darling goodness-of-fit test, the lognormal appeared to be the best-fitting rainfall distribution; the Monte Carlo simulation was therefore based on the lognormal distribution. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of the observed rainstorm events under 10 years and simulated rainstorm events under 30 years of rainfall records with those under the entire 40 years of observed rainfall data, based on the hourly rainfall data at station J1 in Johor over the period 1972–2011. The absolute percentage errors of the duration–depth, duration–inter-event time and depth–inter-event time relationships were used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity–duration–frequency relationships in Johor.
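The generation step described in these records can be sketched as Monte Carlo draws from fitted lognormal distributions of the storm characteristics. The parameters below are invented placeholders for the MLE fits (and independence between characteristics is assumed here for brevity, which the paper does not require); the check mirrors the paper's idea of comparing product-moments, here against the theoretical lognormal mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical MLE parameters (mu, sigma of the log values)
# for each rainstorm characteristic
params = {"depth_mm": (2.1, 0.9),
          "duration_h": (1.3, 0.7),
          "inter_event_h": (4.0, 1.1)}

def simulate_storms(n_events):
    """Draw n_events synthetic rainstorms, sampling each characteristic
    from its fitted lognormal distribution."""
    return {k: rng.lognormal(mu, sig, n_events)
            for k, (mu, sig) in params.items()}

storms = simulate_storms(10_000)
# verification in the spirit of the paper: compare a sample moment
# of the simulated storms with its theoretical value
mean_depth = storms["depth_mm"].mean()
theory = np.exp(2.1 + 0.9 ** 2 / 2)      # mean of a lognormal(mu, sigma)
```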
Aptel, Florent; Tamisier, Renaud; Pépin, Jean-Louis; Mottet, Benjamin; Hubanova, Ralitsa; Romanet, Jean-Paul; Chiquet, Christophe
2014-10-01
All studies of 24-hour intraocular pressure (IOP) rhythm conducted to date have used repeated IOP measurements requiring nocturnal awakenings, potentially disturbing sleep macrostructure. To evaluate the effects on sleep architecture and IOP rhythm of hourly awakening vs a contact lens sensor (CLS) to continuously monitor IOP without awakening. Cross-sectional study at a referral center of chronobiology among 12 young healthy volunteers, with a mean (SD) age of 22.3 (2.3) years. Volunteers underwent two 24-hour IOP measurement sessions during a 2-month period. The eye order and session order were randomized. During one session, the IOP of the first eye was continuously monitored using a CLS, and the IOP of the fellow eye was measured hourly using a portable noncontact tonometer (session with nocturnal hourly awakening). During the other session, the IOP of the first eye was continuously monitored using a CLS, and the IOP of the fellow eye was not measured (session without nocturnal awakening). Overnight polysomnography was performed during the 2 sessions. A nonlinear least squares, dual-harmonic regression analysis was used to model the 24-hour IOP rhythm from the CLS data. Comparisons of acrophase, bathyphase, amplitude, and the midline estimating statistic of rhythm were used to evaluate the effect of hourly awakening on IOP rhythm. To evaluate the effects of hourly awakening on sleep architecture, comparisons of sleep structure were used, including total sleep period, rapid eye movement, wake after sleep onset, absolute and relative total sleep time, and non-rapid eye movement sleep (N1, N2, and N3). A 24-hour IOP rhythm was found in all individuals for the sessions with and without awakening (P .30). Hourly awakening during noncontact tonometer IOP measurements did not seem to alter the mean variables of the 24-hour IOP pattern evaluated using CLS, including signal, maximum signal, minimum signal, acrophase, and bathyphase (P > .15). The 24-hour IOP
International Nuclear Information System (INIS)
Messagie, Maarten; Mertens, Jan; Oliveira, Luis; Rangaraju, Surendraprabu; Sanfelix, Javier; Coosemans, Thierry; Van Mierlo, Joeri; Macharis, Cathy
2014-01-01
Highlights: • This paper brings a temporal resolution to LCA of electricity generation. • Dynamic life cycle assessment of electricity production in Belgium for 2011. • The overall average GWP per kWh is 0.184 kg CO₂-eq/kWh. • The carbon footprint of Belgian electricity ranges from 0.102 to 0.262 kg CO₂-eq/kWh. - Abstract: In the booming research on the environmental footprint of, for example, electric vehicles, heat pumps and other (smart) electricity-consuming appliances, there is a clear need to know the hourly CO₂ content of one kWh of electricity. Since the CO₂ footprint of electricity can vary every hour, the footprint of, for example, an electric vehicle is influenced by the time when the vehicle is charged. With the availability of the hourly CO₂ content of one kWh, a decision support tool is provided to fully exploit the advantages of a future smart grid. In this paper, the GWP (Global Warming Potential) per kWh for each hour of the year is calculated for Belgium using a Life Cycle Assessment (LCA) approach. This enables evaluating the influence of the electricity demand on greenhouse gas emissions. Because of the LCA approach, the CO₂-equivalent content does not only reflect activities related to the production of the electricity within a power plant, but includes carbon emissions related to the building of the infrastructure and the fuel supply chain. The considered feedstocks are nuclear combustible, oil, coal, natural gas, biowaste, blast furnace gas, and wood. Furthermore, renewable electricity production technologies like photovoltaic cells, hydro installations and wind turbines are covered by the research. The production of wind turbines and solar panels is more carbon intensive (expressed per generated kWh of electricity) than the production of other conventional power plants, due to the lower electricity output. The overall average GWP per kWh is 0.184 kg CO₂-eq/kWh. Throughout 2011 this value ranges from a
Did liberalising bar hours decrease traffic accidents?
Green, Colin P; Heywood, John S; Navarro, Maria
2014-05-01
Legal bar closing times in England and Wales have historically been early and uniform. Recent legislation liberalised closing times with the object of reducing social problems thought associated with drinking to "beat the clock." Indeed, using both difference in difference and synthetic control approaches we show that one consequence of this liberalisation was a decrease in traffic accidents. This decrease is heavily concentrated among younger drivers. Moreover, we provide evidence that the effect was most pronounced in the hours of the week directly affected by the liberalisation: late nights and early mornings on weekends. This evidence survives a series of robustness checks and suggests at least one socially positive consequence of extending bar hours. Copyright © 2014 Elsevier B.V. All rights reserved.
Model averaging, optimal inference and habit formation
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2014-06-01
Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
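The weighting rule the abstract describes is standard Bayesian model averaging: each model's prediction is weighted by its posterior probability, proportional to its evidence. A minimal sketch with invented log-evidences and predictions (uniform model priors assumed):

```python
import numpy as np

def model_average(log_evidences, predictions):
    """Bayesian model averaging: weight each model's prediction by its
    posterior probability, proportional to exp(log evidence), assuming
    uniform priors over models."""
    le = np.asarray(log_evidences, float)
    w = np.exp(le - le.max())          # subtract the max for numerical stability
    w /= w.sum()
    return w, w @ np.asarray(predictions, float)

# three hypothetical models predicting the same scalar quantity
weights, prediction = model_average(
    log_evidences=[-10.2, -9.1, -14.0],
    predictions=[0.3, 0.5, 0.9])
```

The model with the highest evidence dominates, but the averaged prediction still reflects the remaining uncertainty over models.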
Generalized Jackknife Estimators of Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...
Average beta measurement in EXTRAP T1
International Nuclear Information System (INIS)
Hedin, E.R.
1988-12-01
Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS
International Nuclear Information System (INIS)
2005-01-01
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department
Bayesian Averaging is Well-Temperated
DEFF Research Database (Denmark)
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...
Gibbs equilibrium averages and Bogolyubov measure
International Nuclear Information System (INIS)
Sankovich, D.P.
2011-01-01
Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Function reconstruction from noisy local averages
International Nuclear Information System (INIS)
Chen Yu; Huang Jianguo; Han Weimin
2008-01-01
A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in L 2 -norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies
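The reconstruction problem the abstract describes can be illustrated with a small sketch: recover a function on a fine grid from noisy block averages by Tikhonov regularization. The penalty (a first-difference roughness term), grid sizes, and noise level are invented here; the paper's specific regularization scheme and error bounds are not reproduced.

```python
import numpy as np

n, m = 100, 20                                  # fine grid points, local averages
x = np.linspace(0, 1, n)
f_true = np.sin(2 * np.pi * x)                  # function to reconstruct

# A: each row averages f over a block of n//m consecutive grid points
A = np.kron(np.eye(m), np.ones((1, n // m)) / (n // m))
rng = np.random.default_rng(3)
d = A @ f_true + rng.normal(0, 0.01, m)         # noisy local averages

# Tikhonov regularisation with a first-difference roughness penalty L
L = np.diff(np.eye(n), axis=0)                  # (n-1, n) difference matrix
alpha = 1e-3
f_rec = np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ d)

err = np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)
```

The penalty selects the smoothest function consistent with the averages, making the underdetermined problem (20 data, 100 unknowns) well-posed.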
A singularity theorem based on spatial averages
Indian Academy of Sciences (India)
Journal of Physics, July 2007, pp. 31–47. In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no.
Multiphase averaging of periodic soliton equations
International Nuclear Information System (INIS)
Forest, M.G.
1979-01-01
The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
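A generic MA crossover rule, not necessarily one of the specific rules the paper models, can be sketched as follows (window lengths and the long/flat signal convention are illustrative choices of mine):

```python
import numpy as np

def ma_signal(prices, short=5, long=20):
    """Simple moving-average crossover rule: go long (+1) when the
    short-window MA is above the long-window MA, otherwise stay out (0)."""
    prices = np.asarray(prices, dtype=float)

    def sma(x, w):
        # simple moving average; entry i is the mean of x[i : i + w]
        return np.convolve(x, np.ones(w) / w, mode="valid")

    s = sma(prices, short)[long - short:]   # align both MA series on the same end day
    l = sma(prices, long)
    return (s > l).astype(int)              # one signal per aligned day

# steadily rising prices: the short MA sits above the long MA
sig = ma_signal(np.arange(40.0))
print(sig[-1])  # → 1
```

In heterogeneous-agent models of the kind the paper studies, a signal like this feeds the chartists' demand, while fundamentalists trade against deviations from fundamental value.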
Essays on model averaging and political economics
Wang, W.
2013-01-01
This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple
7 CFR 1209.12 - On average.
2010-01-01
... Section 1209.12, Agriculture Regulations of the Department of Agriculture (Continued), AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS..., CONSUMER INFORMATION ORDER, Mushroom Promotion, Research, and Consumer Information Order, Definitions, § 1209...
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Average Costs versus Net Present Value
E.A. van der Laan (Erwin); R.H. Teunter (Ruud)
2000-01-01
While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well-known EOQ model it can be verified that (under certain conditions) the AC approach gives
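The contrast can be made concrete with the classical EOQ model mentioned above: under the AC approach the average cost AC(Q) = KD/Q + hQ/2 is minimised at Q* = sqrt(2KD/h). A minimal sketch, with illustrative numbers of my own choosing:

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classical economic order quantity minimising AC(Q) = K*D/Q + h*Q/2."""
    return math.sqrt(2 * order_cost * demand_rate / holding_cost)

def average_cost(q, demand_rate, order_cost, holding_cost):
    """Average cost per unit time for order quantity q."""
    return order_cost * demand_rate / q + holding_cost * q / 2

D, K, h = 1000.0, 50.0, 4.0   # demand/year, fixed order cost, holding cost/unit/year
q_star = eoq(D, K, h)
print(round(q_star, 2))                          # → 158.11
print(round(average_cost(q_star, D, K, h), 2))   # → 632.46
```

Under an NPV treatment the same model would discount the order and holding cash flows, and the two approaches coincide only in the limit of a vanishing discount rate, which is the kind of condition the paper examines.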
Average beta-beating from random errors
Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department
2018-01-01
The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, quadratic in the sources or, equivalently, in the rms β-beating. However, random errors do not have a systematic effect on the tune.
Reliability Estimates for Undergraduate Grade Point Average
Westrick, Paul A.
2017-01-01
Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…
The Effect of Working Hours on Health
Berniell, Maria Ines; Bietenbeck, Jan
2017-01-01
Does working time causally affect workers' health? We study this question in the context of a French reform which reduced the standard workweek from 39 to 35 hours, at constant earnings. Our empirical analysis exploits variation in the adoption of this shorter workweek across employers, which is mainly driven by institutional features of the reform and thus exogenous to workers' health. Difference-in-differences and lagged dependent variable regressions reveal a negative effect of working hou...
Anaïs Vernède
2011-01-01
From 8.30 to 9.30 p.m. on Saturday, 26 March 2011, the Globe of Science and Innovation will be plunged into darkness to mark CERN's participation in Earth Hour. A growing number of countries and cities across the planet are involved in this global initiative against climate change, which was launched by the WWF in 2007. The lights on the Globe were switched off for the 2009 Earth Hour event. Along with individuals, companies and tourist attractions in thousands of towns and cities all over the world participating in the fourth annual Earth Hour event, CERN will turn off the lights of the Globe for 60 minutes at 8.30 p.m. on Saturday 26 March. CERN's participation in the initiative is one of several examples of its commitment to respect the environment and keep its ecological footprint to the minimum. A recent example under the green transport heading was the replacement of part of CERN's petrol vehicle fleet with cars running on natural gas with a view to reducing air pollution. Other examples...
Tendon surveillance requirements - average tendon force
International Nuclear Information System (INIS)
Fulton, J.F.
1982-01-01
Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower-bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
Twenty-four hour care for schizophrenia.
Macpherson, Rob; Edwards, Thomas Rhys; Chilvers, Rupatharshini; David, Chris; Elliott, Helen J
2009-04-15
Despite modern treatment approaches and a focus on community care, there remains a group of people who cannot easily be discharged from psychiatric hospital directly into the community. Twenty-four hour residential rehabilitation (a 'ward-in-a-house') is one model of care that has evolved in association with psychiatric hospital closure programmes. To determine the effects of 24 hour residential rehabilitation compared with standard treatment within a hospital setting. We searched the Cochrane Schizophrenia Group Trials Register (May 2002 and February 2004). We included all randomised or quasi-randomised trials that compared 24 hour residential rehabilitation with standard care for people with severe mental illness. Studies were reliably selected, quality assessed and data extracted. Data were excluded where more than 50% of participants in any group were lost to follow-up. For binary outcomes we calculated the relative risk and its 95% confidence interval. We identified and included one study with 22 participants with important methodological shortcomings and limitations of reporting. The two-year controlled study evaluated "new long stay patients" in a hostel ward in the UK. One outcome 'unable to manage in the placement' provided usable data (n=22, RR 7.0 CI 0.4 to 121.4). The trial reported that hostel ward residents developed superior domestic skills, used more facilities in the community and were more likely to engage in constructive activities than those in hospital - although usable numerical data were not reported. These potential advantages were not purchased at a price. The limited economic data was not good but the cost of providing 24 hour care did not seem clearly different from the standard care provided by the hospital - and it may have been less. From the single, small and ill-reported, included study, the hostel ward type of facility appeared cheaper and positively effective. Currently, the value of this way of supporting people - which could be
Deep venous thrombophlebitis: detection with 4-hour versus 24-hour platelet scintigraphy
International Nuclear Information System (INIS)
Seabold, J.E.; Conrad, G.R.; Ponto, J.A.; Kimball, D.A.; Frey, E.E.; Ahmed, F.; Coughlan, J.D.; Jensen, K.C.
1987-01-01
Thirty-one nonheparinized patients with suspected deep venous thrombophlebitis (DVT) underwent contrast venography and indium-111 platelet scintigraphy (In-111 PS). Venography permitted identification of acute DVT in 12 of 31 cases (39%). One additional patient was considered to have acute DVT despite nonconclusive venography results. In-111 PS results were positive at 4 hours in nine of 13 cases (69%) and at 24 hours in 12 of 13 cases (92%). Two of four patients with false-negative 4-hour In-111 PS studies had received warfarin. Thus, the sensitivity of 4-hour In-111 PS in patients not receiving anticoagulants was 82%. Venography results were negative for acute DVT in 18 cases, and 4-hour In-111 PS studies were negative or equivocal in each. In-111 PS is an alternative to contrast venography for detecting acute DVT. If 4-hour In-111 PS results are positive, anticoagulation can be initiated. Delayed images are necessary if the 4-hour images are negative or equivocal
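The sensitivity figures quoted above follow directly from the standard definition; a minimal check, with the counts taken from the abstract (13 acute-DVT cases, of which two 4-hour false negatives had received warfarin):

```python
def sensitivity(true_positives, false_negatives):
    """Sensitivity = TP / (TP + FN)."""
    return true_positives / (true_positives + false_negatives)

# 13 acute-DVT cases: 9 positive at 4 hours, 12 positive at 24 hours
print(round(100 * sensitivity(9, 4)))    # → 69
print(round(100 * sensitivity(12, 1)))   # → 92
# excluding the 2 warfarin-treated 4-hour false negatives: 9 of 11
print(round(100 * sensitivity(9, 2)))    # → 82
```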
Operator alertness and performance on 8-hour and 12-hour work shifts
International Nuclear Information System (INIS)
Baker, T.L.; Campbell, S.S.; Dawson, D.; Moore-Ede, M.
1989-01-01
Recently, much attention has been paid to the alertness and performance problems of rotational shiftworkers in the nuclear power industry. Growing awareness of higher rates of human errors and accidents on night shifts and reports of operations personnel falling asleep on the job have contributed to the heightened interest in this subject. The industry is now considering the effects of different shift rotation systems, including evaluation of the most recent of industry trends in shift scheduling: schedules that include 12-hour work shifts. Surveys show that within the past 5 years about 20% of commercially operational nuclear power plants have instituted schedules that use only 12-hour shifts, or schedules using a combination of 8-hour and 12-hour shifts. Many more plants routinely use 12-hour work shifts during plant outages and refueling operations. In response to this growing trend, the NRC has funded research which is a first attempt to compare alertness, operator performance, and sleep-wake patterns in subjects working simulated 8-hour and 12-hour shifts at the Human Alertness Research Center (HARC), located at the Institute of Circadian Physiology in Boston, MA. This paper will describe in greater detail the design of the study, measurement techniques for alertness and sleep, work routine, work task performance measures, and cognitive performance test protocols. It will review the role of circadian factors in human alertness and performance, and discuss previous research findings in this area. It will discuss other variables that are known to influence human alertness in the workplace, such as caffeine, alcohol, and working environment. The physiological basis for shift worker sleep problems will be explained in the context of the ongoing research project at HARC. Finally, the paper presents previous research on shift work and fatigue which may be relevant to a comparison of 8-hour and 12-hour shifts
75 FR 82170 - Hours of Service of Drivers
2010-12-29
... drivers to take breaks when needed and would reduce safety and health risks associated with long hours... long work hours, without significantly compromising their ability to do their jobs and earn a living... between hours 3.5 and 7 of an 11-hour driving period. Working beyond the 7th hour without a break is...
Maximum power analysis of photovoltaic module in Ramadi city
Energy Technology Data Exchange (ETDEWEB)
Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)
2013-07-01
Performance of a photovoltaic (PV) module is greatly dependent on solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power obtainable in Ramadi city (100 km west of Baghdad) is analyzed practically. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun-tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The data were measured at the earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun-tracker system, which was set to save an average reading every two minutes, based on one-second samples. The data were analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance were analyzed to optimize the output of photovoltaic solar modules. The results show that the PV system sizing can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
More hours, more jobs? The employment effects of longer working hours
Martyn Andrews; Hans-Dieter Gerner; Thorsten Schank; Richard Upward
2015-01-01
Increases in standard hours of work have been a contentious policy issue in Germany. Whilst this might directly lead to a substitution of workers by hours, there may also be a positive employment effect due to reduced costs. Moreover, the response of firms may differ between firms that offer overtime and those that do not. For a panel of German plants (2001–2006) drawn from the IAB Establishment Panel, we are the first to analyse the effect of increased standard hours on employment. Using dif...
TCTE Level 3 Total Solar Irradiance 6-Hour Means V002 (TCTE3TSI6) at GES DISC
National Aeronautics and Space Administration — The Total Solar Irradiance (TSI) Calibration Transfer Experiment (TCTE) data set TCTE3TSI6 contains 6-hour averaged total solar irradiance (a.k.a solar constant)...
Effects of bruxism on the maximum bite force
Directory of Open Access Journals (Sweden)
Todić Jelena T.
2017-01-01
Full Text Available Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups underwent measurement of the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying the maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism influences the increase in the maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
Statistics on exponential averaging of periodograms
Energy Technology Data Exchange (ETDEWEB)
Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
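The underlying recursion is a first-order exponential update of successive periodograms, S_k = (1 - 1/τ) S_{k-1} + (1/τ) P_k; a sketch of that recursion (normalisation simplified, names mine):

```python
import numpy as np

def exp_averaged_psd(x, seg_len, time_const):
    """Exponentially averaged periodogram PSD estimate (illustrative sketch).
    Each new segment's raw periodogram P_k is blended into the running
    estimate with weight mu = 1 / time_const."""
    mu = 1.0 / time_const
    psd = None
    for start in range(0, len(x) - seg_len + 1, seg_len):
        seg = x[start:start + seg_len]
        pgram = np.abs(np.fft.rfft(seg)) ** 2 / seg_len   # raw periodogram
        psd = pgram if psd is None else (1 - mu) * psd + mu * pgram
    return psd

rng = np.random.default_rng(1)
x = rng.standard_normal(64 * 200)                 # white noise: flat true PSD
est = exp_averaged_psd(x, seg_len=64, time_const=20)
print(est.shape)                                  # → (33,)
```

For white noise of unit variance the estimate should hover near 1 in every bin; a larger `time_const` lowers the variance of the estimate at the cost of slower tracking, which is where the χ²-based PDF analysis above becomes relevant.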
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average labour productivity in agriculture, forestry and fishing. The analysis will take into account data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average labour productivity by the factors affecting it is conducted by means of the substitution method.
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
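MXLKID itself is an LRLTRAN program for general nonlinear dynamic systems; as a toy stand-in for the same idea (maximising a Gaussian log-likelihood over a parameter of a nonlinear model, here by grid scan rather than MXLKID's optimizer), a sketch:

```python
import numpy as np

def neg_log_likelihood(theta, t, y, sigma=0.1):
    """Gaussian negative log-likelihood for the toy model
    y = exp(-theta * t) + noise; minimising it maximises the likelihood."""
    r = y - np.exp(-theta * t)
    return 0.5 * np.sum((r / sigma) ** 2)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 100)
y = np.exp(-0.7 * t) + 0.02 * rng.standard_normal(t.size)  # noisy measurements

# maximise the likelihood by scanning a parameter grid
grid = np.linspace(0.1, 2.0, 1901)
theta_hat = grid[np.argmin([neg_log_likelihood(g, t, y) for g in grid])]
print(abs(theta_hat - 0.7) < 0.05)  # → True
```

With independent Gaussian noise, maximising the likelihood reduces to nonlinear least squares; MXLKID's contribution is doing this for general nonlinear state-space models with proper gradient-based maximisation.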
Weighted estimates for the averaging integral operator
Czech Academy of Sciences Publication Activity Database
Opic, Bohumír; Rákosník, Jiří
2010-01-01
Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231
Average Transverse Momentum Quantities Approaching the Lightfront
Boer, Daniel
2015-01-01
In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
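The TAMSD functional itself is straightforward to compute from a single trajectory; a sketch for Brownian motion, for which the ensemble mean of the TAMSD grows linearly in the lag:

```python
import numpy as np

def tamsd(traj, lag):
    """Time-averaged mean-square displacement of one trajectory:
    TAMSD(lag) = mean over t of (x(t + lag) - x(t))**2."""
    d = traj[lag:] - traj[:-lag]
    return np.mean(d ** 2)

rng = np.random.default_rng(3)
steps = rng.standard_normal(100_000)   # unit-variance Brownian increments
x = np.cumsum(steps)

# for these increments E[TAMSD(lag)] = lag, so the ratio below is near 1
for lag in (1, 5, 10):
    print(lag, round(tamsd(x, lag) / lag, 2))
```

The paper's point is the distribution of this quantity around its mean for a single finite trajectory, which is what the Laplace-transform formula characterises.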
Unscrambling The "Average User" Of Habbo Hotel
Directory of Open Access Journals (Sweden)
Mikael Johnson
2007-01-01
Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
Changing mortality and average cohort life expectancy
Directory of Open Access Journals (Sweden)
Robert Schoen
2005-10-01
Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality, seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
Jarzynski equality in the context of maximum path entropy
González, Diego; Davis, Sergio
2017-06-01
In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy - also known as Maximum Caliber principle -, this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy differences between two equilibrium thermodynamic states with the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality will be performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social systems, financial and ecological systems.
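The equality discussed above has the compact form (standard notation, with beta = 1/k_B T):

```latex
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
```

where the average runs over the ensemble of nonequilibrium paths connecting the two equilibrium states and W is the work performed along each path. Applying Jensen's inequality to the convex exponential recovers the second-law statement <W> >= Delta F, so the equality sharpens the usual inequality into an identity.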
Balancing Work and Academics in College: Why Do Students Working 10 to 19 Hours Per Week Excel?
Dundes, Lauren; Marx, Jeff
2007-01-01
Given that 74% of undergraduates work an average of 25.5 hours per week while going to school, we know surprisingly little about how off-campus employment affects undergraduates and to what extent its impact varies by the number of hours worked. Our survey of undergraduates at a small liberal arts college found that the academic performance of…
Stochastic generation of hourly wind speed time series
International Nuclear Information System (INIS)
Shamshad, A.; Wan Mohd Ali Wan Hussin; Bawadi, M.A.; Mohd Sanusi, S.A.
2006-01-01
In the present study, hourly wind speed data for Kuala Terengganu in Peninsular Malaysia are simulated by using the transition-matrix approach of a Markovian process. The wind speed time series is divided into various states based on certain criteria. The next wind speed state is selected based on the previous state. The cumulative probability transition matrix has been formed, in which each row ends with 1. Using uniform random numbers between 0 and 1, a series of future states is generated. These states are converted to the corresponding wind speed values using another uniform random number generator. The accuracy of the model has been determined by comparing statistical characteristics such as the average, standard deviation, root mean square error, probability density function and autocorrelation function of the generated data with those of the original data. The generated wind speed time series preserves the wind speed characteristics of the observed data.
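The procedure described above (binning speeds into states, building a transition matrix, drawing the next state and then a value within that state from uniform random numbers) can be sketched as follows; the uniform binning, synthetic "observed" data and all names are simplifications of mine, not the paper's setup:

```python
import numpy as np

def markov_synthetic(series, n_states, n_out, seed=0):
    """First-order Markov-chain generator for an hourly series (sketch)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(series.min(), series.max(), n_states + 1)
    states = np.clip(np.digitize(series, edges) - 1, 0, n_states - 1)

    # count observed transitions, then normalise rows into probabilities
    P = np.full((n_states, n_states), 1e-12)
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)

    # walk the chain, then map each state to a speed uniform within its bin
    out_states = [states[0]]
    for _ in range(n_out - 1):
        out_states.append(rng.choice(n_states, p=P[out_states[-1]]))
    lo, hi = edges[:-1][out_states], edges[1:][out_states]
    return lo + rng.random(n_out) * (hi - lo)

rng = np.random.default_rng(4)
obs = np.abs(rng.normal(5, 2, 2000))          # stand-in for observed hourly speeds
syn = markov_synthetic(obs, n_states=8, n_out=2000)
print(abs(syn.mean() - obs.mean()) < 0.5)     # means should roughly match
```

Because the chain's stationary distribution mirrors the empirical state occupancies, the synthetic series reproduces the average and the broad shape of the distribution, while the first-order transitions carry the short-lag autocorrelation.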
Six-Hours-Rule - A Dogma for Military Surgery?
National Research Council Canada - National Science Library
Gerngross, Heinz; Kahle, Wilhelm
2004-01-01
Today, the six-hours-rule is a delicate item for military logistics and it is a great challenge for medical services to provide an adequate treatment during the first hours after wounding. DEFINITION: Six-hour-rule...
5 CFR 610.408 - Use of credit hours.
2010-01-01
... Flexible and Compressed Work Schedules § 610.408 Use of credit hours. Members of the Senior Executive Service (SES) may not accumulate credit hours under an alternative work schedule. Any credit hours...
Three Mile Island - The hour-by-hour account of what really happened
International Nuclear Information System (INIS)
Stephens, M.
1980-01-01
An hour-by-hour account is given of the progression of events leading up to and during the accident at the Three Mile Island Unit 2 reactor. The emergency procedures followed, the evacuation of local residents and the decisions taken as the possibility of a meltdown became apparent are recorded in detail together with aspects of the media coverage and the problems of communication. (U.K.)
Work shift duration: a review comparing eight hour and 12 hour shift systems.
Smith, L; Folkard, S; Tucker, P; Macdonald, I
1998-04-01
Shiftwork is now a major feature of working life across a broad range of industries. The features of the shift systems operated can impact on the wellbeing, performance, and sleep of shiftworkers. This paper reviews the current state of knowledge on one major characteristic of shift rotas, namely shift duration. Evidence comparing the relative effects of eight hour and 12 hour shifts on fatigue and job performance, safety, sleep, and physical and psychological health is considered. At the organisational level, factors such as the mode of system implementation, attitudes towards shift rotas, sickness absence and turnover, overtime, and moonlighting are discussed. Manual and electronic searches of the shiftwork research literature were conducted to obtain information on comparisons between eight hour and 12 hour shifts. The research findings are largely equivocal. The bulk of the evidence suggests few differences between eight and 12 hour shifts in the way they affect people. There may even be advantages to 12 hour shifts in terms of lower stress levels, better physical and psychological wellbeing, improved durations and quality of off duty sleep as well as improvements in family relations. On the negative side, the main concerns are fatigue and safety. It is noted that a 12 hour shift does not equate with being active for only 12 hours. There can be considerable extension of the person's time awake either side of the shift. However, the effects of longer term exposure to extended work days have been relatively uncharted in any systematic way. Longitudinal comparative research into the chronic impact of the compressed working week is needed.
Maximum neutron flux in thermal reactors
International Nuclear Information System (INIS)
Strugar, P.V.
1968-12-01
The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying variational calculus, i.e. by using Pontryagin's maximum principle. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it suitable for treatment by the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.
Maximum allowable load on wheeled mobile manipulators
International Nuclear Information System (INIS)
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important factors. To resolve the extra degrees of freedom introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and directly depends on the additional constraint functions which are applied to resolve the motion redundancy.
Maximum phytoplankton concentrations in the sea
DEFF Research Database (Denmark)
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
Temporary new opening hours for Gate C
GS Department
2010-01-01
Please note the new temporary opening hours for Gate C from 22 September 2010 until 29 October 2010 (working days): Morning: between 7:00 a.m. and 9:00 a.m. Lunch: between 12:00 p.m. and 2:00 p.m. Evening: between 5:00 p.m. and 7:00 p.m. Traffic flow will be permitted in both directions during this period. Please adapt your speed accordingly and respect all road signs. GS-SEM Group General Infrastructure Services Department
Gate A: changes to opening hours
2015-01-01
Due to maintenance work, the opening hours of Gate A (near Reception) will be modified between Monday, 13 and Friday, 17 April 2015. During this period, the gate will be open to vehicles between 7 a.m. and 9.30 a.m., then between 4.30 p.m. and 7 p.m. It will be completely closed to traffic between 9.30 a.m. and 4.30 p.m. Pedestrians and cyclists may continue to use the gate. We apologise for any inconvenience and thank you for your understanding.
New opening hours of the gates
GS Department
2009-01-01
Please note the new opening hours of the gates, as well as of the inter-site tunnel, from 19 May 2009:
Gate A: 7h - 19h
Gate B: 24h/24
Gate C: 7h - 9h and 17h - 19h
Gate D: 8h - 12h and 13h - 16h
Gate E: 7h - 9h and 17h - 19h
Prévessin: 24h/24
The inter-site tunnel will be open from 7h30 to 18h non-stop. GS-SEM Group Infrastructure and General Services Department
Plumley, George
2015-01-01
Create and expand feature-rich sites with no programming experience Ready to build, maintain, and expand your web site with WordPress but have no prior programming experience? WordPress 24-Hour Trainer, 3rd Edition is your book-and-video learning solution that walks you step-by-step through all the important features you will need to know. Lessons range from focused, practical everyday tasks to more advanced, creative features. Learn from an industry professional how to enter content, create pages, manage menus, utilize plug-ins, connect to social media, create membership and e-commerce site
On the average configuration of the geomagnetic tail
International Nuclear Information System (INIS)
Fairfield, D.H.
1978-03-01
Over 3000 hours of IMP-6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-minute averages of B_Z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks than near midnight. The tail field projected in the solar magnetospheric equatorial plane deviates from the X axis, due to flaring and solar wind aberration, by an angle alpha = -0.9 y_SM - 1.7, where y_SM is in earth radii and alpha is in degrees. After removing these effects, the Y component of the tail field is found to depend on the interplanetary sector structure. During an away sector the B_Y component of the tail field is on average 0.5 gamma greater than during a toward sector, a result that holds in both tail lobes and is independent of location across the tail.
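The empirical fit quoted in the abstract is simple enough to evaluate directly; a quick sketch using the coefficients as given (y_SM in earth radii, alpha in degrees):

```python
# The abstract's empirical fit for the tail-field deviation from the X axis:
#   alpha [deg] = -0.9 * y_SM [R_E] - 1.7
def tail_deviation_deg(y_sm_re):
    return -0.9 * y_sm_re - 1.7

# Evaluate across the tail, dusk (+y) to dawn (-y):
for y in (-15, 0, 15):
    print(y, tail_deviation_deg(y))  # -15 -> 11.8, 0 -> -1.7, 15 -> -15.2
```

The nonzero offset at y_SM = 0 is the aberration term, while the slope encodes the flaring of the tail field away from midnight.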
Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis
Directory of Open Access Journals (Sweden)
LiMin Wang
2014-01-01
Full Text Available Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies, we present a new type of semi-naive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
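For orientation, a minimal sketch of plain AODE itself — averaging over every attribute as superparent, without the superparent selection or attribute pruning this paper proposes. The data set and labels are invented.

```python
from collections import Counter

# Minimal AODE sketch on toy categorical data. Plain averaging over all
# superparents; no smoothing, no frequency threshold, no pruning.
def train(data):  # data: list of (features_tuple, label)
    joint2 = Counter()  # (y, i, x_i) counts
    joint3 = Counter()  # (y, i, x_i, j, x_j) counts
    for feats, y in data:
        for i, xi in enumerate(feats):
            joint2[(y, i, xi)] += 1
            for j, xj in enumerate(feats):
                if j != i:
                    joint3[(y, i, xi, j, xj)] += 1
    return joint2, joint3, len(data)

def predict(feats, labels, model):
    joint2, joint3, n = model
    best, best_score = None, -1.0
    for y in labels:
        score = 0.0
        for i, xi in enumerate(feats):  # attribute i acts as superparent
            prod = joint2[(y, i, xi)] / n                  # P(y, x_i)
            for j, xj in enumerate(feats):
                if j != i and joint2[(y, i, xi)]:
                    prod *= joint3[(y, i, xi, j, xj)] / joint2[(y, i, xi)]
        # sum over superparents = AODE average (up to a constant factor)
            score += prod
        if score > best_score:
            best, best_score = y, score
    return best

data = [(("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
        (("rain", "mild"), "yes"), (("rain", "hot"), "yes"),
        (("rain", "mild"), "yes")]
model = train(data)
print(predict(("rain", "mild"), ["yes", "no"], model))  # -> yes
```

The paper's contribution replaces the indiscriminate sum over all superparents with those selected via a maximum weighted spanning tree, and drops redundant children attributes.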
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
Kollias, Anastasios; Destounis, Antonios; Kalogeropoulos, Petros; Kyriakoulis, Konstantinos G; Ntineri, Angeliki; Stergiou, George S
2018-07-01
This study assessed the diagnostic accuracy of a novel 24-hour ambulatory blood pressure (ABP) monitor (Microlife WatchBP O3 Afib) with an implemented algorithm for automated atrial fibrillation (AF) detection during each ABP measurement. One hundred subjects (mean age 70.6±8.2 [SD] years; men 53%; hypertensives 85%; 17 with permanent AF; 4 paroxysmal AF; and 79 non-AF) had simultaneous 24-hour ABP monitoring and 24-hour Holter monitoring. Among a total of 6410 valid ABP readings, 1091 (17%) were taken in ECG AF rhythm. In reading-to-reading ABP analysis, the sensitivity, specificity, and accuracy of ABP monitoring in detecting AF were 93%, 87%, and 88%, respectively. In non-AF subjects, 12.8% of the 24-hour ABP readings indicated false-positive AF, of which 27% were taken during supraventricular premature beats. There was a strong association between the proportion of false-positive AF readings and that of supraventricular premature beats (r = 0.67). For diagnosing AF, 24-hour ABP monitoring had 100%/85% sensitivity/specificity (area under the curve 0.91). The novel ABP monitor with AF detector has high sensitivity and moderate specificity for AF screening during routine ABP monitoring. Thus, in elderly hypertensives, a 24-hour ABP recording with at least 26% of the readings suggesting AF indicates a high probability of AF and should be regarded as an indication for performing 24-hour Holter monitoring. © 2018 American Heart Association, Inc.
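The reading-to-reading figures follow from a standard confusion matrix. The counts below are back-calculated approximations consistent with the reported totals (6410 readings, 1091 in AF rhythm), not values taken from the paper:

```python
# Reading-to-reading diagnostic accuracy from a confusion matrix.
# Counts are hypothetical, chosen to roughly reproduce the reported
# 93% / 87% / 88% sensitivity / specificity / accuracy.
def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)          # AF readings flagged as AF
    specificity = tn / (tn + fp)          # non-AF readings not flagged
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_metrics(tp=1015, fn=76, tn=4627, fp=692)
print(f"{sens:.0%} {spec:.0%} {acc:.0%}")  # -> 93% 87% 88%
```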
Exposure to bright light for several hours during the daytime lowers tympanic temperature.
Aizawa, S; Tokura, H
1997-11-01
The present study investigates the effect on tympanic temperature of exposure to different light intensities for several hours during the daytime. Nine healthy young adult volunteers (two male, seven female) were exposed to bright light of 4000 lx or dim light of 100 lx during the daytime from 0930 to 1800 hours; the light condition was then kept at 100 lx for a further hour. Tympanic temperature was measured continuously at a neutral condition (28 degrees C, 60% relative humidity) from 1000 to 1800 hours. Urinary samples were collected from 1100 to 1900 hours every 2 h, and melatonin excretion rate was measured by enzyme immunoassay. Of nine subjects, six showed clearly lower tympanic temperatures in the bright compared with the dim condition from 1400 to 1800 hours. Average tympanic temperatures were significantly lower in the bright than in the dim condition from 1645 to 1800 hours. Melatonin excretion rate tended to be higher in the bright than in the dim condition. It was concluded that exposure to bright light of 4000 lx during the daytime for several hours could reduce tympanic temperature, compared with that measured in dim light of 100 lx.
Changes in the number of resident publications after inception of the 80-hour work week.
Namdari, Surena; Baldwin, Keith D; Weinraub, Barbara; Mehta, Samir
2010-08-01
Since the inception of resident work-hour regulations, there has been considerable concern regarding the influence of decreased work hours on graduate medical education. In particular, it is unclear whether implementation of work-hour restrictions has influenced resident academic performance, as defined by the quantity of peer-reviewed publications produced while participating in graduate medical education. We determined the impact of the work-hour changes on resident involvement in the number of published clinical studies, laboratory research articles, case reports, and review articles. We conducted a PubMed literature search of 139 consecutive orthopaedic surgery residents (789 total resident-years) at one institution from academic years 1995-1996 to 2008-2009. This represented a continuous timeline before and after implementation of the work-hour restrictions. The number of resident publications before and after implementation of the work-hour changes was compared. There was a greater probability of peer-reviewed authorship in any given resident-year after the work-hour changes than before. Average publications per resident-year increased for total articles, clinical articles, case reports, and reviews. There was an increased rate of publications in which the resident was the first author. Since implementation of the work-hour changes, total resident publications and publications per resident-year have increased.
Preferred vs Actual Working Hours in Couple Households
Yi-Ping Tseng; Mark Wooden
2005-01-01
Working hours in Australia are quite widely distributed around the population mean. That is, there are relatively many people working both relatively short hours and relatively long hours each week. From a welfare perspective, however, it is not the actual number of hours worked that is of importance, but whether the hours being worked are consistent with individual preferences. In this paper the question of how closely hours preferences are being met is examined using data collected in the f...
Validity and reproducibility of self-reported working hours among Japanese male employees.
Imai, Teppei; Kuwahara, Keisuke; Miyamoto, Toshiaki; Okazaki, Hiroko; Nishihara, Akiko; Kabe, Isamu; Mizoue, Tetsuya; Dohi, Seitaro
2016-07-22
Working long hours is a potential health hazard. Although self-reporting of working hours in various time frames has been used in epidemiologic studies, its validity is unclear. The objective of this study was to examine the validity and reproducibility of self-reported working hours among Japanese male employees. The participants were 164 male employees of four large-scale companies in Japan. For validity, the Spearman correlation between self-reported working hours in the second survey and the working hours recorded by the company was calculated for the following four time frames: daily working hours, monthly overtime working hours in the last month, average overtime working hours in the last 3 months, and the frequency of long working months (≥45 h/month) within the last 12 months. For reproducibility, the intraclass correlation between the first (September 2013) and second surveys (December 2013) was calculated for each of the four time frames. The Spearman correlations between self-reported working hours and those based on company records were 0.74, 0.81, 0.85, and 0.89 for daily, monthly, 3-monthly, and yearly time periods, respectively. The intraclass correlations for self-reported working hours between the two questionnaire surveys were 0.63, 0.66, 0.73, and 0.87 for the respective time frames. The results of the present study among Japanese male employees suggest that the validity of self-reported working hours is high for all four time frames, whereas the reproducibility is moderate to high.
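The validity analysis rests on the Spearman rank correlation between self-reported and recorded hours. A pure-Python sketch of that statistic (the working-hours data shown are invented):

```python
# Spearman rank correlation, as used to validate self-reported working hours
# against company records. Pure-Python sketch; ties receive average ranks.
def _ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for a tie group (1-based)
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Self-reported vs recorded monthly overtime hours (made-up data):
reported = [10, 25, 40, 5, 60, 30]
recorded = [12, 28, 45, 8, 55, 20]
print(round(spearman(reported, recorded), 2))  # -> 0.94
```

Values near the 0.74-0.89 range reported in the abstract would indicate substantial agreement between the two sources.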
Long working hours and cancer risk: a multi-cohort study.
Heikkila, Katriina; Nyberg, Solja T; Madsen, Ida E H; de Vroome, Ernest; Alfredsson, Lars; Bjorner, Jacob J; Borritz, Marianne; Burr, Hermann; Erbel, Raimund; Ferrie, Jane E; Fransson, Eleonor I; Geuskens, Goedele A; Hooftman, Wendela E; Houtman, Irene L; Jöckel, Karl-Heinz; Knutsson, Anders; Koskenvuo, Markku; Lunau, Thorsten; Nielsen, Martin L; Nordin, Maria; Oksanen, Tuula; Pejtersen, Jan H; Pentti, Jaana; Shipley, Martin J; Steptoe, Andrew; Suominen, Sakari B; Theorell, Töres; Vahtera, Jussi; Westerholm, Peter J M; Westerlund, Hugo; Dragano, Nico; Rugulies, Reiner; Kawachi, Ichiro; Batty, G David; Singh-Manoux, Archana; Virtanen, Marianna; Kivimäki, Mika
2016-03-29
Working longer than the maximum recommended hours is associated with an increased risk of cardiovascular disease, but the relationship of excess working hours with incident cancer is unclear. This multi-cohort study examined the association between working hours and cancer risk in 116 462 men and women who were free of cancer at baseline. Incident cancers were ascertained from national cancer, hospitalisation and death registers; weekly working hours were self-reported. During a median follow-up of 10.8 years, 4371 participants developed cancer (colorectal cancer: n = 393; lung cancer: n = 247; breast cancer: n = 833; prostate cancer: n = 534). We found no clear evidence for an association between working hours and the overall cancer risk. Working hours were also unrelated to the risk of incident colorectal, lung or prostate cancers. Working ≥55 h per week was associated with a 1.60-fold (95% confidence interval 1.12-2.29) increase in female breast cancer risk independently of age, socioeconomic position, shift- and night-time work and lifestyle factors, but this observation may have been influenced by residual confounding from parity. Our findings suggest that working long hours is unrelated to the overall cancer risk or the risk of lung, colorectal or prostate cancers. The observed association with breast cancer would warrant further research.
Brokk Toggerson
2011-01-01
This year, the 48-hour film project (48hfp) returns to Geneva after a one-year hiatus. Organized by Neal Hartman and the CERN film-making club, Open Your Eyes Films, the 48hfp challenges teams of film-makers to write, shoot, soundtrack and edit a 4 to 7 minute film in 48 hours, from 4 to 6 November. At the start of the festival, contestants picked their film genre from a hat. The films will be screened on 8 and 9 November, with the awards presentation on the 9th. The winner will receive a trip to the US to compete in the international version of the competition. “There are so many short films being made now," says Hartman. “I think, however, that the 48hfp allows a critical creative mass to form. The result is that these 20 teams make 20 better films than if each participant were making their own." Each team draws a genre from a hat and is given a character, a prop and a line of dialogue that must appear in their film. The genres run the gamut from...
Local Times of Galactic Cosmic Ray Intensity Maximum and Minimum in the Diurnal Variation
Directory of Open Access Journals (Sweden)
Su Yeon Oh
2006-06-01
Full Text Available The diurnal variation of galactic cosmic ray (GCR) flux intensity observed by ground neutron monitors (NM) shows a sinusoidal pattern with an amplitude of 1∼2% of the daily mean. We carried out a statistical study of tendencies in the local times of the GCR intensity daily maximum and minimum. To test the influences of solar activity and of location (cut-off rigidity) on the distribution of the local times of maximum and minimum GCR intensity, we examined the data of 1996 (solar minimum) and 2000 (solar maximum) at the low-latitude Haleakala (latitude: 20.72 N, cut-off rigidity: 12.91 GV) and the high-latitude Oulu (latitude: 65.05 N, cut-off rigidity: 0.81 GV) NM stations. The most frequent local times of the GCR intensity daily maximum and minimum come later by about 2∼3 hours in the solar activity maximum year 2000 than in the solar activity minimum year 1996. The Oulu NM station, whose cut-off rigidity is smaller, has its most frequent local times of GCR intensity maximum and minimum later by 2∼3 hours than those of the Haleakala station. This feature is more evident at solar maximum. The phase of the daily variation in GCR is dependent upon the interplanetary magnetic field, which varies with solar activity, and the cut-off rigidity, which varies with geographic latitude.
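One simple way to extract such local times of maximum and minimum (a sketch, not necessarily the authors' procedure) is to fit the first 24-hour harmonic to hourly intensities and read off its phase. Synthetic data with a known 15 h maximum:

```python
import math

# Estimate local times of the diurnal maximum and minimum by projecting
# hourly intensities onto the first (24-h period) harmonic.
def diurnal_phase_hours(counts):
    n = len(counts)  # expect 24 hourly values
    c = sum(x * math.cos(2 * math.pi * k / n) for k, x in enumerate(counts))
    s = sum(x * math.sin(2 * math.pi * k / n) for k, x in enumerate(counts))
    phase = math.atan2(s, c)              # phase of the fitted cosine
    t_max = (phase * n / (2 * math.pi)) % n
    t_min = (t_max + n / 2) % n           # minimum is half a period later
    return t_max, t_min

# Synthetic day: 1.5% amplitude, maximum at 15 h local time
counts = [100 * (1 + 0.015 * math.cos(2 * math.pi * (h - 15) / 24))
          for h in range(24)]
t_max, t_min = diurnal_phase_hours(counts)
print(round(t_max, 1), round(t_min, 1))  # -> 15.0 3.0
```

A 2∼3 hour phase shift between solar-minimum and solar-maximum years would show up directly as a shift in the recovered t_max.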
Operator product expansion and its thermal average
Energy Technology Data Exchange (ETDEWEB)
Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)
1998-05-01
QCD sum rules at finite temperature, like those at zero temperature, require the coefficients of local operators, which arise in the short-distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case of finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules when the temperature is not too low. (orig.) 7 refs.
Fluctuations of wavefunctions about their classical average
International Nuclear Information System (INIS)
Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H
2003-01-01
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Baseline-dependent averaging in radio interferometry
Wijnholds, S. J.; Willis, A. G.; Salvini, S.
2018-05-01
This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
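A toy sketch of the BDA idea: since decorrelation over an averaging interval grows with baseline length, the averaging window is scaled inversely with baseline length (capped at a maximum) and samples within each window are averaged. The baseline lengths and cap below are invented, not SKA parameters.

```python
# Illustrative baseline-dependent averaging: shorter baselines tolerate
# longer averaging intervals, so the window length scales as 1/baseline.
def bda_window(baseline_m, longest_baseline_m, max_window=16):
    return max(1, min(max_window, round(longest_baseline_m / baseline_m)))

def average_baseline(samples, window):
    # coherent average of consecutive samples in non-overlapping windows
    return [sum(samples[i:i + window]) / min(window, len(samples) - i)
            for i in range(0, len(samples), window)]

long_w = bda_window(65000, 65000)   # longest baseline: no extra averaging
short_w = bda_window(8000, 65000)   # short baseline: 8-sample windows
print(long_w, short_w)              # -> 1 8

vis = [1.0] * 32                    # toy visibilities on the short baseline
print(len(average_baseline(vis, short_w)))  # 32 samples -> 4 averaged points
```

Because most baselines in a centrally condensed array are short, this kind of scheme is what yields the large (>80 per cent) data-volume reductions discussed in the paper, at the cost of a quantifiable decorrelation loss.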
Multistage parallel-serial time averaging filters
International Nuclear Information System (INIS)
Theodosiou, G.E.
1980-01-01
Here, a new time averaging circuit design, the 'parallel filter' is presented, which can reduce the time jitter, introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)
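The statistical principle behind jitter reduction by averaging can be checked numerically: averaging N independent timing samples shrinks the spread by about √N. A quick simulation with invented parameters:

```python
import random
import statistics

# Sketch of the jitter-reduction principle behind time-averaging filters:
# averaging N independent timing samples reduces RMS jitter by ~sqrt(N).
random.seed(1)
N = 16
groups = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(4000)]
averaged = [sum(g) / N for g in groups]
single = [g[0] for g in groups]
ratio = statistics.stdev(single) / statistics.stdev(averaged)
print(round(ratio, 1))  # close to sqrt(16) = 4
```

A parallel-serial arrangement of such stages compounds this reduction while, as the abstract notes, keeping gate jitter and delay lower than a purely serial chain.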
Time-averaged MSD of Brownian motion
International Nuclear Information System (INIS)
Andreanov, Alexei; Grebenkov, Denis S
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
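The TAMSD functional itself is straightforward to compute on a simulated trajectory; a sketch using a discrete-time random walk as a stand-in for Brownian motion:

```python
import random

# Time-averaged MSD of a discrete Brownian trajectory:
#   TAMSD(lag) = (1/(T-lag)) * sum_t (x(t+lag) - x(t))^2
# For Brownian motion its expectation is 2*D*lag, but a single trajectory
# fluctuates around that value -- the spread characterized in this paper.
def tamsd(x, lag):
    return sum((x[t + lag] - x[t]) ** 2
               for t in range(len(x) - lag)) / (len(x) - lag)

random.seed(7)
D, dt, T = 0.5, 1.0, 20000
x = [0.0]
for _ in range(T):
    x.append(x[-1] + random.gauss(0.0, (2 * D * dt) ** 0.5))

for lag in (1, 4, 16):
    print(lag, round(tamsd(x, lag), 2))  # fluctuates around 2*D*lag
```

The trajectory-to-trajectory scatter of these values at fixed lag is what the generalized Gamma approximation in the abstract describes.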
Time-dependent angularly averaged inverse transport
International Nuclear Information System (INIS)
Bal, Guillaume; Jollivet, Alexandre
2009-01-01
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain
Bootstrapping Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...
Average Nuclear properties based on statistical model
International Nuclear Information System (INIS)
El-Jaick, L.J.
1974-01-01
The gross properties of nuclei were investigated with a statistical model, in systems with equal and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good adjustment of beta-stability lines and mass excess, the surface symmetry energy was established. (M.C.K.) [pt
De Luca, G.; Magnus, J.R.
2011-01-01
In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
Long working hours in Korea: results of the 2010 Working Conditions Survey.
Park, Jungsun; Kwon, Oh Jun; Kim, Yangho
2012-01-01
Long working hours adversely affect workers' safety and health. In 2004, Korea passed legislation limiting the working week to 40 h, to improve quality of life and to increase business competitiveness. In the present study, we explored the characteristics of work in Korea and compared our data from the second Korean Working Conditions Survey (KWCS) with those of the first KWCS. We found that the average number of hours worked weekly has been reduced, but that the proportion of workers who work more than 48 h per week has increased over the 4 yr between the two Korean surveys in all categories studied (male, female, employee, self-employed, and employer). We also found that the self-employed and employers work much longer hours than do employees, who are protected by the Labor Standards Act. This was particularly true in the accommodation and food service sectors. In conclusion, Korean workers work longer hours than do workers in EU countries. The use of average figures masks differences in the numbers of hours worked among those engaged in various types of employment, or in certain work sectors. Therefore, the Korean government should not simply monitor reductions in average weekly working hours, but should identify employees working for over 60 h weekly, and reduce their working time.
An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet
International Nuclear Information System (INIS)
Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi
2010-01-01
Axial variation of average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle of 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power scaling on the backing pressure ranging from 16 to 50 bar, and the power is strongly Z dependent varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position of 5 mm has the maximum signal intensity. The estimation of the average cluster size on axial position Z indicates that the cluster growth process goes forward until the maximum average cluster size is reached at Z = 9 mm, and the average cluster size will decrease gradually for Z > 9 mm
MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.
Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang
2018-02-02
The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot.
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uu⋯u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
Beta-energy averaging and beta spectra
International Nuclear Information System (INIS)
Stamatelatos, M.G.; England, T.R.
1976-07-01
A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by 'exact' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality
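As a rough numerical illustration of what a spectrum-averaged beta energy is, the sketch below averages the kinetic energy over the statistical (allowed) spectrum shape with the Fermi function omitted entirely; the paper's method approximates that function rather than dropping it, and the Q value here is arbitrary:

```python
import numpy as np

ME = 0.511  # electron rest-mass energy, MeV

def beta_average_energy(q_mev, n=2000):
    """Average kinetic energy of an allowed beta spectrum using the
    statistical shape N(T) ~ p*E*(Q-T)^2, with no Fermi (Coulomb)
    correction. A crude stand-in for the more exact expressions."""
    t = np.linspace(1e-6, q_mev, n)     # kinetic-energy grid (uniform)
    e = t + ME                          # total energy
    p = np.sqrt(e**2 - ME**2)           # momentum
    shape = p * e * (q_mev - t)**2      # spectrum shape N(T)
    return (t * shape).sum() / shape.sum()

avg = beta_average_energy(1.0)          # Q = 1 MeV example
# The average lands at a sizable fraction of Q, well below Q itself.
```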
Asymptotic Time Averages and Frequency Distributions
Directory of Open Access Journals (Sweden)
Muhammad El-Taha
2016-01-01
Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting, {X(t), t≥0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results offer the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
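The equality at the heart of the result can be checked numerically on a concrete (hypothetical) discrete-time sample path: the time average of f(X) coincides with the expectation of f under the path's empirical frequency distribution, up to the binning used to build that distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(2.0, size=100_000)   # a fixed sample path, discrete time

f = np.square                            # any measurable function f

time_avg = f(x).mean()                   # long-run time average of f(X)

# Expectation of f under the path's empirical frequency distribution
# (values binned by rounding to build the frequency table):
vals, counts = np.unique(np.round(x, 2), return_counts=True)
freq_avg = (f(vals) * counts / counts.sum()).sum()

# time_avg and freq_avg agree up to the rounding used in the table.
```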
Averaging in the presence of sliding errors
International Nuclear Information System (INIS)
Yost, G.P.
1991-08-01
In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms.
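The bias, and the iterative remedy of deriving each experiment's error from the average itself, can be illustrated with a toy error model in which each experiment's reported error is proportional to its own measured value (a purely hypothetical functional dependence):

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE = 100.0
REL_ERR = 0.10                       # sigma proportional to the value

x = TRUE * (1 + REL_ERR * rng.standard_normal(50))  # 50 "experiments"
sigma_reported = REL_ERR * x         # each group quotes sigma from its own value

# Naive weighted mean: low measurements get small reported errors,
# hence large weights, pulling the average systematically low.
w = 1 / sigma_reported**2
naive = (w * x).sum() / w.sum()

# Iterative re-estimate: derive every experiment's error from the
# current average instead of from its own central value.
avg = naive
for _ in range(10):
    w = np.full_like(x, 1 / (REL_ERR * avg)**2)  # common error from average
    avg = (w * x).sum() / w.sum()

# With a common error the weights are equal, so `avg` converges to the
# unweighted mean, which is unbiased under this error model.
```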
International Nuclear Information System (INIS)
Kharita, M. H.; Maghrabi, M.
2006-09-01
Assessment of intake and internal dose requires knowing the amount of radioactivity in a 24-hour urine sample. It is sometimes difficult to obtain a 24-hour sample because the method is uncomfortable and in most cases workers refuse to collect this amount of urine. This work focuses on finding a correction factor for the 24-hour sample based on the amount of creatinine in a sample of any size. The 24-hour excretion of the radionuclide is then calculated from the amounts of activity and creatinine in the urine sample, assuming an average creatinine excretion rate of 1.7 g per 24 hours. Several urine samples were collected from occupationally exposed workers; the amounts and ratios of creatinine and activity in these samples were determined and then normalized to the 24-hour excretion of the radionuclide. The average chemical recovery was 77%. It should be emphasized that this method should only be used if a 24-hour sample cannot be collected. (author)
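The normalization described is simple proportional scaling; the sketch below uses hypothetical sample values:

```python
def activity_per_24h(activity_bq, creatinine_g, daily_creatinine_g=1.7):
    """Scale the activity found in a spot urine sample to an estimated
    24-hour excretion, assuming creatinine is excreted at a constant
    1.7 g per 24 hours (the paper's assumed average rate)."""
    return activity_bq * daily_creatinine_g / creatinine_g

# A spot sample containing 0.85 g creatinine and 12 Bq of activity
# scales to roughly 24 Bq per 24 hours:
est = activity_per_24h(12.0, 0.85)
```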
Entanglement in random pure states: spectral density and average von Neumann entropy
Energy Technology Data Exchange (ETDEWEB)
Kumar, Santosh; Pandey, Akhilesh, E-mail: skumar.physics@gmail.com, E-mail: ap0700@mail.jnu.ac.in [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110 067 (India)
2011-11-04
Quantum entanglement plays a crucial role in quantum information, quantum teleportation and quantum computation. The information about the entanglement content between subsystems of the composite system is encoded in the Schmidt eigenvalues. We derive here closed expressions for the spectral density of Schmidt eigenvalues for all three invariant classes of random matrix ensembles. We also obtain exact results for average von Neumann entropy. We find that maximum average entanglement is achieved if the system belongs to the symplectic invariant class. (paper)
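As a numerical cross-check of the kind of quantity the paper computes in closed form, the average von Neumann entropy for the unitary-invariant class can be estimated by sampling Haar-random pure states (the dimensions and sample count below are arbitrary):

```python
import numpy as np

def avg_entanglement_entropy(n, m, samples=200, seed=0):
    """Sample random pure states on an n x m bipartite system (complex
    Gaussian amplitudes, normalized => Haar-random states) and average
    the von Neumann entropy of the Schmidt eigenvalues."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        psi = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
        psi /= np.linalg.norm(psi)
        lam = np.linalg.svd(psi, compute_uv=False)**2  # Schmidt eigenvalues
        lam = lam[lam > 1e-15]
        total += -(lam * np.log(lam)).sum()
    return total / samples

s = avg_entanglement_entropy(4, 4)
# For n = m the average falls somewhat below the maximum ln(n),
# consistent with Page's formula ~ ln(n) - n/(2m) for large dimensions.
```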
Strohbehn, Catherine; Jun, Jinhyun; Arendt, Susan
2014-01-01
Purpose/Objectives: This study investigated the influences of school foodservice employees' age and average number of hours worked per week on perceived safe food handling practices, barriers, and motivators. Methods: A bilingual survey (English and Spanish) was developed to assess reported food safety practices, barriers, and motivators to…
Eaton, Danice K; McKnight-Eily, Lela R; Lowry, Richard; Perry, Geraldine S; Presley-Cantrell, Letitia; Croft, Janet B
2010-04-01
We describe the prevalence of insufficient, borderline, and optimal sleep hours among U.S. high school students on an average school night. Most students (68.9%) reported insufficient sleep, whereas few (7.6%) reported optimal sleep. The prevalence of insufficient sleep was highest among female and black students, and students in grades 11 and 12. Published by Elsevier Inc.
Education on Adult Urinary Incontinence in Nursing School Curricula: Can It Be Done in Two Hours?
Morishita, Lynne; And Others
1994-01-01
Responses from 339 undergraduate nursing programs (74%) showed that 98% included urinary incontinence content in their curricula. Although most agreed the subject was important and felt their teaching was effective, the didactic component averaged two hours, and clinical experience was not systematic; few faculty are prepared to teach this…
Does money or the law reduce doctors' working hours in the UK?
Moreton, Adam; Collier, Andrew
2015-08-01
What can be learned from a 45-year journey toward reducing junior doctors' working hours? The authors investigated the impact of financially punitive measures (the 2001 New Deal contract) and legislation (the Working Time Regulations) on the average working week for doctors-in-training.
New Zealand optometrists 2006: demographics, working arrangements and hours worked.
Frederikson, Lesley G; Chamberlain, Kerry; Sangster, Andrew J
2008-07-01
Optometry is a regulated health profession in NZ, with limited student places. With 650 registered optometrists in 2005, the optometrist to population ratio was 1 : 6,291 with no apparent national shortage. If optometrists registered in NZ do not actually live there, a workforce shortage is possible. This paper presents findings from the New Zealand Association of Optometrists 2006 workforce survey of members, which aimed to profile the NZ optometric workforce and to explore factors relating to workforce capacity, job stress and future planning. A questionnaire was developed to collect information on employment status, hours worked and gender distribution of optometrists in New Zealand. It was circulated to 530 active members of the NZ Association of Optometrists representing 86 per cent of the available optometrists. Direct comparisons with the Australian optometric workforce numbers were also undertaken. Of the 243 respondents, 129 (53 per cent) were male. The median age of all respondents was 39 years (46 for males and 34 for females) and 75 per cent of the respondents were aged younger than 50 years. Fifty per cent had practised 15 years or less. Ten per cent of respondents had 'time-out' during their career and this was significantly more likely for females. Nearly half the respondents were self-employed (46 per cent) and eight per cent worked as locums. Part-time employees were more likely to be female and males were more likely to be in full-time self-employment. Half the group was under 40 (51 per cent), which accounted for 86 per cent of the full-time salaried arrangements. Those aged 30 to 39 included 52 per cent of the total part-time salaried workers. The average working week was 34 hours for women and 39 hours for men; the median was 40 hours for both groups. In the typical working week, 80 per cent of an optometrist's time was spent consulting with patients and five per cent was patient-related paperwork. The distribution of work arrangements was
Hours of Paid Work in Dual-Earner Couples: The U.S. in Cross-National Perspective
Jacobs, Jerry A.; Gornick, Janet C.
2001-01-01
In this paper we examine the hours of paid work of husbands and wives in ten industrialized countries, using data from the Luxembourg Income Study. We present results on the average hours of paid work put in jointly by couples, on the proportion working very long weekly hours, and on gender equality in working time within families. The United States ranks at or near the top on most indicators of working time for couples, because of 1) a high proportion of dual-earner couples; 2) long average ...
Official CERN holidays | Restaurant opening hours
2013-01-01
Please note that the CERN Restaurants will have the following opening hours during the upcoming holidays: Restaurant #1 will be open from 7 a.m. to 11 p.m. on Wednesday 1 May, Thursday 9 May (Ascension Thursday) and Monday 20 May (Pentecost) - on Friday 10 May the restaurant will be open at the usual times. Restaurant #2 will be closed over the 3 official CERN holidays, but will be open on Friday 10 May at the usual times (brasserie will be closed). Restaurant #3 will be closed over the 3 official CERN holidays, as well as Friday 10 May.
High average power linear induction accelerator development
International Nuclear Information System (INIS)
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs
FEL system with homogeneous average output
Energy Technology Data Exchange (ETDEWEB)
Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph
2018-01-16
A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest. Accelerating the particles to full energy to result in distinct and independently controlled, by the choice of phase offset, phase-energy correlations or chirps on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL with higher order terms managed.
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity: a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix prize, and find a significant improvement using our method over a baseline.
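The flavor of harmonic-average closeness can be seen in a few lines; this sketch is illustrative only and does not reproduce the paper's exact (asymmetric) Generalized Erdös Number:

```python
# Harmonic averaging of edge weights: several ties are combined through
# their reciprocals, so a single weak link drags the closeness down
# much more than an arithmetic mean would.
def harmonic_closeness(weights):
    """Harmonic mean of edge weights w_i (larger w = stronger tie)."""
    inv = sum(1.0 / w for w in weights)
    return len(weights) / inv

a = harmonic_closeness([2.0, 2.0])   # two strong ties -> closeness 2.0
b = harmonic_closeness([2.0, 0.5])   # one strong, one weak -> closeness 0.8
```

Because the combination is made over each node's own neighborhood, the closeness felt from node u toward v need not equal that from v toward u, which is the asymmetry the paper exploits.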
Angle-averaged Compton cross sections
International Nuclear Information System (INIS)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m_0 c^2; α_s = scattered photon energy in units of m_0 c^2; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV
Average Gait Differential Image Based Human Recognition
Directory of Open Access Journals (Sweden)
Jinyan Chen
2014-01-01
Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), AGDI better represents the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
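A direct reading of the AGDI construction, accumulating absolute silhouette differences between adjacent frames, can be sketched as follows (details may differ from the paper's implementation; the input silhouettes are hypothetical):

```python
import numpy as np

def agdi(silhouettes):
    """Average Gait Differential Image: average the absolute difference
    between adjacent binary silhouette frames. `silhouettes` has shape
    (frames, H, W) with 0/1 values; the result has shape (H, W)."""
    frames = np.asarray(silhouettes, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))   # frame-to-frame changes
    return diffs.mean(axis=0)                 # average over the sequence

seq = np.zeros((3, 4, 4))
seq[1, 1, 1] = 1            # one pixel turns on, then off again
img = agdi(seq)
# img[1, 1] == 1.0 (it changed in both transitions); static pixels stay 0,
# so the feature image keeps both kinetic and static information.
```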
Reynolds averaged simulation of unsteady separated flow
International Nuclear Information System (INIS)
Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.
2003-01-01
The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation
Maximum gravitational redshift of white dwarfs
International Nuclear Information System (INIS)
Shapiro, S.L.; Teukolsky, S.A.
1976-01-01
The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ_2, ζ_3, and ζ_4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s^-1 for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s^-1 for carbon stars (the neutronization limit) and to 893 km s^-1 for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-05-07
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
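The core comparison, between equivalent fractions of the longest-surviving patients in each arm, can be sketched as follows; this is an illustration of the idea only, without the paper's bias corrections or inference, and all data are hypothetical:

```python
import numpy as np

def balanced_sace(y_treat, t_treat, y_ctrl, t_ctrl, frac=0.5):
    """Sketch of the balanced-SACE comparison: contrast mean longitudinal
    outcomes between the same fraction `frac` of longest survivors in the
    treatment and control arms. y_* are outcomes, t_* survival times.
    Illustrative only; the paper's estimator handles the large-sample
    bias and sensitivity analyses."""
    def top_mean(y, t, frac):
        k = max(1, int(round(frac * len(t))))
        idx = np.argsort(t)[-k:]            # indices of longest survivors
        return np.mean(np.asarray(y, dtype=float)[idx])
    return top_mean(y_treat, t_treat, frac) - top_mean(y_ctrl, t_ctrl, frac)

# Hypothetical data: the treatment arm's long survivors score 1 point higher.
d = balanced_sace([5, 6, 7, 8], [1, 2, 3, 4], [5, 5, 6, 7], [1, 2, 3, 4])
```

Note that no monotonicity assumption is invoked here: the comparison is defined purely by survival rank within each arm.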
Energy Technology Data Exchange (ETDEWEB)
NONE
2004-07-01
The MAK value (maximum workplace concentration) is the highest permissible concentration of a working material occurring in the ambient air of the workplace as a gas, vapour or suspended matter which, according to the present state of knowledge, does not in general impair the health of employees or cause them unreasonable nuisance (for example, through a repulsive odour), even in the case of repeated, long-term exposure, that is, as a rule, 8 hours daily, assuming an average working week of no more than 40 hours. As a rule, MAK values are quoted as average values over a time period of up to one working day or shift. They are primarily defined in consideration of the effect characteristics of the substances in question, but also - as far as possible - of the practical conditions attending the work processes or the exposure patterns which they entail. This is done on the basis of scientifically founded criteria of health protection, not on whether providing such protection is technically or economically feasible. In addition, substances are assessed in terms of carcinogenicity, sensitising effects, any contribution to systemic toxicity following cutaneous resorption, hazards for pregnancy and germ cell mutagenicity, and are classified or marked accordingly. The Commission's procedures in assessing substances with respect to these criteria are described in the corresponding sections of the MAK and BAT list of values, in the 'Toxicological and occupational medical explanations of MAK values' and in scientific journals.
Forecasting hourly patient visits in the emergency department to counteract crowding
DEFF Research Database (Denmark)
Hertzum, Morten
2017-01-01
…visits. The data for 2012-2014 were used to create linear regression models, autoregressive integrated moving average (ARIMA) models, and – for purposes of comparison – naïve models of hourly patient arrivals and ED occupancy. Using the models, patient arrivals and ED occupancy were forecasted for every hour of January 2015. Results: Hourly patient arrivals were forecasted with a mean percentage error of 47-58% (regression), 49-58% (ARIMA), and 60-76% (naïve). Increasing the forecasting interval decreased the mean percentage error. ED occupancy was forecasted with better accuracy by ARIMA than regression models. With ARIMA the mean percentage error of the forecasts of the hourly ED occupancy was 69-73% for three of the EDs and 101% for the last ED. Factors beyond calendar variables might possibly have improved the models of ED occupancy, provided that information about these factors had been...
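The naïve seasonal benchmark and the mean-percentage-error metric can be sketched on synthetic data (the arrival process below is hypothetical, not the paper's ED data):

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 28)                       # four weeks of hourly slots
daily = 10 + 6 * np.sin(2 * np.pi * hours / 24)  # synthetic daily cycle
arrivals = rng.poisson(daily)                    # hypothetical hourly arrivals

train, test_week = arrivals[:24 * 21], arrivals[24 * 21:]

# Naive seasonal model: forecast each test hour with the observed count
# from the same hour one week earlier (a stand-in for the naive benchmark).
forecast = arrivals[24 * 14:24 * 21]

# Mean percentage error of the hourly forecasts (guard against zeros):
mpe = np.mean(np.abs(test_week - forecast) / np.maximum(test_week, 1)) * 100
# Errors of this magnitude are what motivate regression and ARIMA models.
```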
48 CFR 552.236-74 - Working Hours.
2010-10-01
48 Federal Acquisition Regulations System 4; Working Hours, 552.236-74. As prescribed in 536.570-5, insert the following clause: Working Hours (APR 1984) (a) It is contemplated that all work will be performed during the customary working hours of the trades involved unless...
12-hour shifts: an ethical dilemma for the nurse executive.
Lorenz, Susan G
2008-06-01
Flexible work hours, including 12-hour shifts, have become a common scheduling option for nurses. The author explores whether 12-hour shifts are an ethical scheduling option for nurses because recent research suggests that 12-hour shifts are a potential hazard to patients. A multistep model for ethical decision making, reflecting the concept of procedural justice, is used to examine this issue.
Change from an 8-hour shift to a 12-hour shift, attitudes, sleep, sleepiness and performance.
Lowden, A; Kecklund, G; Axelsson, J; Akerstedt, T
1998-01-01
The present study sought to evaluate the effect of a change from a rotating 3-shift (8-hour) schedule to a 2-shift (12-hour) schedule on sleep, sleepiness, performance, perceived health, and well-being. Thirty-two shift workers at a chemical plant (control room operators) responded to a questionnaire a few months before a change was made in their shift schedule and 10 months after the change. Fourteen workers also filled out a diary, carried activity loggers, and carried out reaction-time tests (beginning and end of shift). Fourteen day workers served as a reference group for the questionnaires and 9 were intensively studied during a week with workdays and a free weekend. The questionnaire data showed that the shift change increased satisfaction with work hours, sleep, and time for social activities. Health, perceived accident risk, and reaction-time performance were not negatively affected. Alertness improved and subjective recovery time after night work decreased. The quick changes in the 8-hour schedule greatly increased sleep problems and fatigue. Sleepiness integrated across the entire shift cycle showed that the shift workers were less alert than the day workers, across workdays and days off (although alertness increased with the 12-hour shift). The change from 8-hour to 12-hour shifts was positive in most respects, possibly due to the shorter sequences of workdays, the longer sequences of consecutive days off, the fewer types of shifts (easier planning), and the elimination of quick changes. The results may differ in groups with a higher workload.
A simple maximum power point tracker for thermoelectric generators
International Nuclear Information System (INIS)
Paraskevas, Alexandros; Koutroulis, Eftichios
2016-01-01
Highlights: • A Maximum Power Point Tracking (MPPT) method for thermoelectric generators is proposed. • A power converter is controlled to operate on a pre-programmed locus. • The proposed MPPT technique has the advantage of operational and design simplicity. • The experimental average deviation from the MPP power of the TEG source is 1.87%. - Abstract: ThermoElectric Generators (TEGs) are capable of harvesting ambient thermal energy for power-supplying sensors, actuators, biomedical devices etc. at power levels from μW up to several hundred watts. In this paper, a Maximum Power Point Tracking (MPPT) method for TEG elements is proposed, which is based on controlling a power converter such that it operates on a pre-programmed locus of operating points close to the MPPs of the power–voltage curves of the TEG power source. Compared to the past-proposed MPPT methods for TEGs, the technique presented in this paper has the advantage of operational and design simplicity. Thus, its implementation using off-the-shelf microelectronic components with low-power consumption characteristics is enabled, without being required to employ specialized integrated circuits or signal processing units of high development cost. Experimental results are presented, which demonstrate that for MPP power levels of the TEG source in the range of 1–17 mW, the average deviation of the power produced by the proposed system from the MPP power of the TEG source is 1.87%.
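The physical basis for a pre-programmed locus can be seen from the standard TEG model: a source voltage V_oc behind an internal resistance R_int delivers maximum power at V_oc/2. The sketch below illustrates that model only, not the paper's converter controller, and the component values are hypothetical:

```python
import numpy as np

# Hypothetical TEG: open-circuit voltage V_OC behind internal resistance R_INT.
V_OC, R_INT = 4.0, 2.0                     # volts, ohms

v = np.linspace(0.01, V_OC - 0.01, 1000)   # sweep the operating voltage
i = (V_OC - v) / R_INT                     # TEG output current at voltage v
p = v * i                                  # power delivered to the load

v_mpp = v[np.argmax(p)]
# v_mpp ≈ V_OC / 2 = 2.0 V: a converter that tracks the locus v = V_oc/2
# stays near the MPP as V_oc drifts with the temperature gradient.
```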
Maximum entropy analysis of EGRET data
DEFF Research Database (Denmark)
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
The Maximum Resource Bin Packing Problem
DEFF Research Database (Denmark)
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used … algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find…
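The contrast between the two heuristics can be seen on a small instance: First-Fit places each item in the first open bin it fits in, opening a new bin only when none fits, and processing items in increasing order tends to open more bins, which is the goal in the maximum resource variant. This sketch is illustrative, not the paper's analysis:

```python
def first_fit(items, capacity=1.0):
    """First-Fit: put each item in the first open bin with room for it,
    opening a new bin only when no open bin fits. Returns the bins."""
    bins = []
    for x in items:
        for b in bins:
            if sum(b) + x <= capacity:
                b.append(x)
                break
        else:                      # no open bin could take x
            bins.append([x])
    return bins

items = [0.35, 0.35, 0.35, 0.35, 0.6, 0.6]
few = first_fit(sorted(items, reverse=True))   # decreasing: 3 bins
many = first_fit(sorted(items))                # increasing: 4 bins
# Increasing order lets the small items cluster early and block the big
# ones, so First-Fit-Increasing opens more bins than First-Fit-Decreasing.
```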
Shower maximum detector for SDC calorimetry
International Nuclear Information System (INIS)
Ernwein, J.
1994-01-01
A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Density estimation by maximum quantum entropy
International Nuclear Information System (INIS)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
Poorer Health – Shorter Hours? Health and Flexibility of Hours of Work
Geyer, Johannes; Myck, Michal
2010-01-01
We analyse the role of health in determining the difference between desired and actual hours of work in a sample of German men, using the Socio-Economic Panel data for the years 1996-2007. The effects of both self-assessed health and legal disability status are examined. About 60% of employees report working more than they would wish, with a mean difference of -3.9 hours/week. We estimate static and dynamic model specifications allowing for the auto-regressive nature of the dependent variable and tes...
Capturing Neutrinos from a Star's Final Hours
Hensley, Kerry
2018-04-01
What happens on the last day of a massive star's life? In the hours before the star collapses and explodes as a supernova, the rapid evolution of material in its core creates swarms of neutrinos. Observing these neutrinos may help us understand the final stages of a massive star's life, but they've never been detected. A view of some of the 1,520 phototubes within the MiniBooNE neutrino detector. Observations from this and other detectors are helping to illuminate the nature of the mysterious neutrino. [Fred Ullrich/FNAL] Silent Signposts of Stellar Evolution: The nuclear fusion that powers stars generates tremendous amounts of energy. Much of this energy is emitted as photons, but a curious and elusive particle, the neutrino, carries away most of the energy in the late stages of stellar evolution. Stellar neutrinos can be created through two processes: thermal processes and beta processes. Thermal processes (e.g., pair production, in which a particle/antiparticle pair is created) depend on the temperature and pressure of the stellar core. Beta processes (i.e., when a proton converts to a neutron, or vice versa) are instead linked to the isotopic makeup of the star's core. This means that, if we can observe them, beta-process neutrinos may be able to tell us about the last steps of stellar nucleosynthesis in a dying star. But observing these neutrinos is not so easily done. Neutrinos are nearly massless, neutral particles that interact only feebly with matter; out of the whopping 10^60 neutrinos released in a supernova explosion, even the most sensitive detectors record the passage of just a few. Do we have a chance of detecting the beta-process neutrinos that are released in the final few hours of a star's life, before the collapse? Neutrino luminosities leading up to core collapse: shortly before collapse, the luminosity of beta-process neutrinos outshines that of any other neutrino flavor or origin. [Adapted from Patton et al. 2017] Modeling Stellar Cores: To answer this question, Kelly
Gender Differences and Predictors of Work Hours in a Sample of Ontario Dentists.
McKay, Julia C; Ahmad, Atyub; Shaw, Jodi L; Rashid, Faahim; Clancy, Alicia; David, Courtney; Figueiredo, Rafael; Quiñonez, Carlos
2016-11-01
To determine the influence of gender on weekly work hours of Ontario dentists. In 2012, a 52-item survey was sent to a random sample of 3000 Ontario dentists (1500 men and 1500 women) to collect information on personal, professional and sociodemographic characteristics. The resulting data were analyzed using descriptive statistics and linear regression modeling. The 867 respondents included 463 men, 401 women and 3 people whose gender was unreported, yielding a response rate of 29%. Most dentists worked full-time, with men working, on average, 2 h/week longer than women. Younger dentists worked more than older dentists. Practice ownership increased weekly work hours, and men reported ownership more often than women. Canadian-trained women worked significantly fewer hours than those trained internationally. Women were more likely than men to work part time and take parental leave and more often reported being primary caregivers and solely responsible for household chores. Women with partner support for such tasks worked more hours than those who were solely responsible. Dentists with children ≤ 3 years of age worked fewer hours than those without children; however, after controlling for spousal responsibility for caregiver duties, this effect was eliminated. More women than men reported making concessions in their career to devote time to family. Gender, age, practice ownership, training location and degree of spousal support for household and caregiving responsibilities were predictors of weekly work hours. For women specifically, training location and household and caregiving responsibilities predicted weekly work hours.
Fumigant dosages below maximum label rate control some soilborne pathogens
Directory of Open Access Journals (Sweden)
Shachaf Triky-Dotan
2016-08-01
Full Text Available The activity of commercial soil fumigants on some key soilborne pathogens was assessed in sandy loam soil under controlled conditions. Seven soil fumigants that are registered in California or are being or have been considered for registration were used in this study: dimethyl disulfide (DMDS) mixed with chloropicrin (Pic) (79% DMDS and 21% Pic), Tri-Con (50% methyl bromide and 50% Pic), Midas Gold (33% methyl iodide [MI] and 67% Pic), Midas Bronze (50% MI and 50% Pic), Midas (MI, active ingredient [a.i.] 97.8%), Pic (a.i. 99% trichloronitromethane), and Pic-Clor 60 (57% Pic and 37% 1,3-dichloropropene [1,3-D]). Dose-response models were calculated for pathogen mortality after 24 hours of exposure to the fumigants. Overall, the tested fumigants achieved good efficacy at dosages below the maximum label rate against the tested pathogens. In this study, Pythium ultimum and citrus nematode were sensitive to all the fumigants, and Verticillium dahliae was resistant. For most fumigants, California regulations restrict application rates to less than the maximum (federal) label rate, meaning that it is possible that the fumigants may not control major plant pathogens. This research provides information on the effectiveness of these alternatives at these lower application rates. The results from this study will help growers optimize application rates for registered fumigants (such as Pic and 1,3-D) and will help accelerate the adoption of new fumigants (such as DMDS) if they are registered in California.
Nonsymmetric entropy and maximum nonsymmetric entropy principle
International Nuclear Information System (INIS)
Liu Chengshi
2009-01-01
Under the frame of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived naturally from this principle. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis entropy, for deriving power laws.
Maximum speed of dewetting on a fiber
Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus
2011-01-01
A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed
Maximum potential preventive effect of hip protectors
van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.
2007-01-01
OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who
Maximum gain of Yagi-Uda arrays
DEFF Research Database (Denmark)
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....
correlation between maximum dry density and cohesion
African Journals Online (AJOL)
HOD
represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
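The closing formula can be checked numerically at the order-of-magnitude level. The sketch below plugs in commonly quoted values (assumed here, not taken from the paper): T_BBN ~ 1 MeV, a non-reduced Planck mass of ~1.2e19 GeV, and y_e computed from the electron mass and the observed vev.

```python
# Rough order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
# All numerical inputs are assumed illustrative values, not the paper's.
M_pl = 1.22e19        # Planck mass, GeV (non-reduced)
T_BBN = 1e-3          # BBN temperature ~1 MeV, in GeV
v_h_obs = 246.0       # observed Higgs vacuum expectation value, GeV
m_e = 0.511e-3        # electron mass, GeV
y_e = (2 ** 0.5) * m_e / v_h_obs   # electron Yukawa coupling, ~3e-6

v_h_est = T_BBN ** 2 / (M_pl * y_e ** 5)
print(f"y_e ~ {y_e:.2e}, estimated v_h ~ {v_h_est:.0f} GeV")
```

With these inputs the estimate lands at a few hundred GeV, consistent with the quoted v_h = O(300 GeV).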
The maximum-entropy method in superspace
Czech Academy of Sciences Publication Activity Database
van Smaalen, S.; Palatinus, Lukáš; Schneider, M.
2003-01-01
Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others:DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : maximum-entropy method, * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.558, year: 2003
Achieving maximum sustainable yield in mixed fisheries
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna
2017-01-01
Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example
5 CFR 534.203 - Maximum stipends.
2010-01-01
... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...
Minimal length, Friedmann equations and maximum density
Energy Technology Data Exchange (ETDEWEB)
Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)
2014-06-16
Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.
Generation and Applications of High Average Power Mid-IR Supercontinuum in Chalcogenide Fibers
Petersen, Christian Rosenberg
2016-01-01
Mid-infrared supercontinuum with up to 54.8 mW average power, and maximum bandwidth of 1.77-8.66 μm is demonstrated as a result of pumping tapered chalcogenide photonic crystal fibers with a MHz parametric source at 4 μm
Miao, Chiyuan; Sun, Qiaohong; Borthwick, Alistair G. L.; Duan, Qingyun
2016-01-01
We investigated changes in the temporospatial features of hourly precipitation during the warm season over mainland China. The frequency and amount of hourly precipitation displayed latitudinal zonation, especially for light and moderate precipitation, which showed successive downward change over time in northeastern and southern China. Changes in the precipitation amount resulted mainly from changes in frequency rather than changes in intensity. We also evaluated the linkage between hourly precipitation and temperature variations and found that extreme hourly precipitation was more sensitive to temperature than other categories of precipitation. A strong dependency of hourly precipitation on temperature occurred at temperatures colder than the median daily temperature; in such cases, regression slopes were greater than the Clausius-Clapeyron (C-C) relation of 7% per degree Celsius. Regression slopes for 31.6%, 59.8%, 96.9%, and 99.1% of all stations were greater than 7% per degree Celsius for the 75th, 90th, 99th, and 99.9th percentiles of precipitation, respectively. The mean regression slopes within the 99.9th percentile of precipitation were three times the C-C rate. Hourly precipitation showed a strong negative relationship with daily maximum temperature and the diurnal temperature range at most stations, whereas the equivalent correlation for daily minimum temperature was weak. PMID:26931350
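The percentile-scaling comparison described above can be sketched in a few lines: bin precipitation by temperature, take a high percentile per bin, and regress its logarithm on temperature, so the slope converts to a percent-per-degree rate comparable to the ~7%/°C C-C rate. The data here are synthetic with a built-in ~10%/°C scaling (an illustration, not the study's code or data):

```python
import numpy as np

# Estimate the scaling rate of a precipitation percentile with temperature
# and compare it to the Clausius-Clapeyron rate of ~7% per degree Celsius.
rng = np.random.default_rng(0)
temps = rng.uniform(5, 30, 5000)                    # daily temperature, deg C
# Synthetic hourly precipitation whose extremes grow ~10% per deg C
precip = np.exp(0.10 * temps) * rng.weibull(0.8, 5000)

# Bin by temperature, take the 99th percentile in each bin,
# then regress ln(percentile) on temperature.
bins = np.arange(5, 31, 2.5)
idx = np.digitize(temps, bins)
t_mid, p99 = [], []
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() > 50:
        t_mid.append(0.5 * (bins[b - 1] + bins[b]))
        p99.append(np.percentile(precip[sel], 99))
slope = np.polyfit(t_mid, np.log(p99), 1)[0]
rate = 100 * (np.exp(slope) - 1)                    # percent per deg C
print(f"scaling rate: {rate:.1f}%/degC (C-C rate is ~7%/degC)")
```

The recovered rate exceeds 7%/°C by construction, mirroring the super-C-C slopes reported for extreme percentiles.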
Analysis of 24-hour versus 48-hour traffic counts for HPMS sampling.
2014-04-01
The Illinois Department of Transportation (IDOT) has requested a waiver from the Federal Highway Administration (FHWA) to : allow IDOT to implement a 24-hour traffic-count program on the non-state HPMS routes, as opposed to the current Highway : Perf...
Industrial Applications of High Average Power FELS
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and therefore such processes are not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power ~ 1 kW output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulsewidth ~ 1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
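The constrained-simulation route mentioned in the abstract can be illustrated on a toy system (our sketch with an assumed potential, not the authors' code): at each fixed value of the coordinate x, sample the instantaneous force, average it, and integrate dA/dx = -⟨F⟩ to recover the free-energy profile. For U(x) = k x²/2 the mean constrained force is -k x, so the profile should come back as k x²/2.

```python
import numpy as np

# Free energy from the average force (thermodynamic-integration sketch).
rng = np.random.default_rng(1)
k = 2.0
xs = np.linspace(-1.5, 1.5, 31)
# At each constrained x, average 2000 noisy samples of the force -k*x
mean_force = np.array([(-k * x + rng.normal(0.0, 0.5, 2000)).mean() for x in xs])

# Trapezoid-rule cumulative integral of dA/dx = -<F>
dA = -mean_force
A = np.concatenate(([0.0], np.cumsum(0.5 * (dA[1:] + dA[:-1]) * np.diff(xs))))
A -= A[len(A) // 2]            # set the reference A(0) = 0
print(f"recovered A(1.5) = {A[-1]:.3f} (exact k*1.5^2/2 = {k * 1.5 ** 2 / 2})")
```

The sampling noise averages out in the integral, so the recovered endpoints sit close to the exact value 2.25.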
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
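For contrast with the geographic scheme, standard pairwise gossip is easy to sketch (an illustration of the baseline, not the paper's algorithm): each step, a random node averages its value with a ring neighbour. The node average is invariant under every update, so all values converge to the true mean, but on a ring this takes many transmissions, which is exactly the inefficiency the abstract describes.

```python
import random

# Baseline pairwise gossip averaging on a ring of n nodes.
random.seed(42)
n = 50
values = [random.uniform(0.0, 100.0) for _ in range(n)]
true_mean = sum(values) / n

for _ in range(200000):                         # many updates: ring mixing is slow
    i = random.randrange(n)
    j = (i + random.choice((-1, 1))) % n        # a ring neighbour
    avg = 0.5 * (values[i] + values[j])
    values[i] = values[j] = avg                 # both nodes adopt the pair average

spread = max(values) - min(values)
print(f"true mean {true_mean:.3f}, spread after gossip {spread:.2e}")
```

Note how many pairwise exchanges a 50-node ring needs before the spread collapses; the geographic scheme above is designed to cut this cost.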
High-average-power solid state lasers
International Nuclear Information System (INIS)
Summers, M.A.
1989-01-01
In 1987, a broad-based, aggressive R&D program was begun, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials areas, efforts were directed towards producing strong, low-loss laser glasses and large, high quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs
The concept of average LET values determination
International Nuclear Information System (INIS)
Makarewicz, M.
1981-01-01
The concept of average LET (linear energy transfer) values determination, i.e. ordinary moments of LET in absorbed dose distribution vs. LET of ionizing radiation of any kind and any spectrum (even the unknown ones) has been presented. The method is based on measurement of ionization current with several values of voltage supplying an ionization chamber operating in conditions of columnar recombination of ions or ion recombination in clusters while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)
On spectral averages in nuclear spectroscopy
International Nuclear Information System (INIS)
Verbaarschot, J.J.M.
1982-01-01
In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)
Maximum-power-point tracking control of solar heating system
Huang, Bin-Juine
2012-11-01
The present study developed a maximum-power-point tracking (MPPT) control technology for a solar heating system to minimize the pumping power consumption at an optimal heat collection. The net solar energy gain Q_net (= Q_s - W_p/η_e) was experimentally found to be the cost function for MPPT, with a maximum point. The feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter, derived from the thermal analytical model of the solar heating system, was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. The PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller, and the test results show good tracking performance with small tracking errors. The average mass flow rate for the specific test periods on five different days is between 18.1 and 22.9 kg/min, with average pumping power between 77 and 140 W, which is greatly reduced compared to the standard flow rate of 31 kg/min and pumping power of 450 W, based on the flow rate of 0.02 kg/s·m² defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW depending on weather conditions. The MPPT control of the solar heating system has been verified to be able to minimize the pumping energy consumption with optimal solar heat collection. © 2012 Elsevier Ltd.
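The idea of maximizing Q_net = Q_s - W_p/η_e over pump flow can be sketched with a generic perturb-and-observe hill climb (the paper itself uses a PI feedback design with a tracking filter; the toy collector and pump models below are assumptions for illustration only):

```python
# Perturb-and-observe MPPT on the net solar gain Q_net = Q_s - W_p/eta_e.
# Toy model: collector gain saturates with flow, pumping power grows as flow^3.
def q_net(flow):                       # flow in kg/min
    q_s = 14.0 * flow / (flow + 6.0)   # collected heat, kW (saturating)
    w_p = 5e-5 * flow ** 3             # pumping power, kW (fan-law-like)
    eta_e = 0.5                        # electric-to-hydraulic efficiency
    return q_s - w_p / eta_e

flow, step = 5.0, 1.0
for _ in range(200):                   # hill-climb: keep a step if it helps
    if q_net(flow + step) > q_net(flow):
        flow += step
    else:
        step *= -0.5                   # otherwise reverse and shrink the step
print(f"flow ~ {flow:.1f} kg/min, Q_net ~ {q_net(flow):.2f} kW")
```

With these assumed constants the climb settles near 20 kg/min at roughly 10 kW net gain; increasing flow further would raise pumping power faster than heat collection, which is the trade-off the MPPT controller exploits.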
Applications of the maximum entropy principle in nuclear physics
International Nuclear Information System (INIS)
Froehner, F.H.
1990-01-01
Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well-known result with accurate data on ²⁵²Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the ²³⁸U neutron cross sections in the unresolved resonance region. (orig.) [de
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Energy Technology Data Exchange (ETDEWEB)
Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
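The EM training loop for BMA can be sketched for the simplest case of Gaussian predictive kernels with a shared variance (a simplified illustration, not the DREAM-MCMC method the paper advocates; the two-member synthetic ensemble below is an assumption, with observations that actually track member 0):

```python
import math, random

# EM estimation of BMA weights and a shared kernel variance.
random.seed(7)
N = 2000
obs, fcst = [], []
for _ in range(N):
    truth = random.gauss(20.0, 5.0)
    f0 = truth + random.gauss(0.0, 1.0)       # accurate ensemble member
    f1 = truth + random.gauss(3.0, 4.0)       # biased, noisy ensemble member
    obs.append(truth)
    fcst.append((f0, f1))

w = [0.5, 0.5]
var = 4.0
for _ in range(100):
    # E-step: responsibility of each member for each observation
    # (the shared Gaussian normalization cancels in the ratio)
    resp = []
    for y, fs in zip(obs, fcst):
        dens = [wk * math.exp(-(y - f) ** 2 / (2 * var)) for wk, f in zip(w, fs)]
        s = sum(dens)
        resp.append([d / s for d in dens])
    # M-step: update the weights and the shared kernel variance
    w = [sum(r[k] for r in resp) / N for k in range(2)]
    var = sum(r[k] * (y - fs[k]) ** 2
              for y, fs, r in zip(obs, fcst, resp) for k in range(2)) / N
print(f"weights {w[0]:.2f}/{w[1]:.2f}, kernel std {var ** 0.5:.2f}")
```

EM assigns most of the weight to the accurate member, which is the behaviour the calibration relies on; the paper's point is that DREAM reaches comparable estimates while also quantifying their uncertainty.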
International Nuclear Information System (INIS)
1991-01-01
The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or genome mutations. (VT) [de
2010-07-27
...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...
Working hours of obstetrics and gynaecology trainees in Australia and New Zealand.
Acton, Jade; Tucker, Paige E; Bulsara, Max K; Cohen, Paul A
2017-10-01
The importance of doctors' working hours has gained significant attention, with evidence suggesting long hours and fatigue may compromise the safety and wellbeing of both patients and doctors. This study aims to quantify the working hours of The Royal Australian and New Zealand College of Obstetricians and Gynaecologists (RANZCOG) specialist trainees in order to better inform discussions of working hours and safety within our region. An anonymous, online survey of RANZCOG trainees was conducted. Demographic data were collected. The primary outcomes were: hours per week at work and hours per week on-call. Secondary outcomes included the frequency of long days (>12 h) and 24-h shifts, time spent studying, staff shortages and opinions regarding current rostering. Response rate was 49.5% (n = 259). Full-time trainees worked an average of 53.1 ± 10.0 h/week, with 11.6% working on-call. Long-day shifts were reported by 85.8% of respondents, with an average length of 14.2 h. Fifteen percent reported working 24-h shifts, with a median of 1-2 h of uninterrupted sleep during the shift. Trainees in New Zealand worked 7.0 h/week more than Australian trainees (P ≤ 0.001), but reported less on-call (P = 0.021). Trainees in Western Australia were more likely to work on-call (P ≤ 0.001) and 24-h shifts (P ≤ 0.001). While 53.1 h/week at work is similar to the average Australian hospital doctor, high rates of long days and 24-h shifts with minimal sleep were reported by RANZCOG trainees in this survey. © 2017 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
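One member of this model class, Ornstein-Uhlenbeck position (home-range) movement, is easy to simulate with its exact one-step transition x(t+dt) = μ + (x - μ) e^{-dt/τ} + noise; the parameter values below are illustrative, not from the paper:

```python
import math, random

# Exact-discretization simulation of an Ornstein-Uhlenbeck movement model.
random.seed(3)
mu, tau, sigma = 0.0, 5.0, 2.0     # home centre, relaxation time, spatial scale
dt, n_steps = 0.1, 200000
a = math.exp(-dt / tau)
step_sd = sigma * math.sqrt(1.0 - a * a)   # exact transition-noise std dev

x, xs = mu, []
for _ in range(n_steps):
    x = mu + (x - mu) * a + random.gauss(0.0, step_sd)
    xs.append(x)

var = sum(v * v for v in xs) / len(xs) - (sum(xs) / len(xs)) ** 2
print(f"empirical variance {var:.2f} (stationary value sigma^2 = {sigma ** 2})")
```

Because the transition is exact rather than an Euler approximation, the empirical variance matches the stationary value σ² for any step size dt.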
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
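The lognormal-versus-Pareto question can be illustrated with a plain maximum-likelihood comparison on tail data (a hedged illustration only; this is NOT the maximum-entropy test of the paper): generate Pareto-tailed data above a threshold x_min and compare the per-sample log-likelihoods of the two fitted models.

```python
import math, random

# Fit Pareto and lognormal models to Pareto-tailed data and compare fits.
random.seed(11)
alpha_true, x_min = 2.0, 1.0
data = [x_min * (1.0 - random.random()) ** (-1.0 / alpha_true) for _ in range(5000)]

# Pareto MLE (Hill estimator) and its log-likelihood
alpha = len(data) / sum(math.log(x / x_min) for x in data)
ll_pareto = sum(math.log(alpha) + alpha * math.log(x_min)
                - (alpha + 1) * math.log(x) for x in data)

# Lognormal MLE and its log-likelihood
logs = [math.log(x) for x in data]
m = sum(logs) / len(logs)
s2 = sum((v - m) ** 2 for v in logs) / len(logs)
ll_lognorm = sum(-math.log(x) - 0.5 * math.log(2 * math.pi * s2)
                 - (math.log(x) - m) ** 2 / (2 * s2) for x in data)
print(f"alpha_hat {alpha:.2f}; log-lik Pareto {ll_pareto:.0f} vs lognormal {ll_lognorm:.0f}")
```

On genuinely Pareto-tailed data the Pareto likelihood wins clearly; the abstract's maximum-entropy approach goes further, handling the case where neither model is the true generator.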
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data are a discrete-time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observation. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.
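The observation scheme described above can be sketched as follows: the latent process is simulated on a fine grid, but only noisy integrals over each sampling interval are kept as data. All parameter values are illustrative assumptions; this is the data-generating step only, not the EM estimation procedure.

```python
import numpy as np

# Sketch of integrated-diffusion observations: the OU process X is never seen
# directly; each data point is the integral of X over one sampling interval,
# contaminated by additive measurement error.
rng = np.random.default_rng(1)
theta, sigma = 1.0, 0.5    # OU drift and diffusion parameters (assumed)
dt = 0.002                 # fine grid used to approximate the integrals
per_obs = 500              # fine steps per observation interval (T = 1.0)
n_obs = 100
n_fine = n_obs * per_obs

# Euler-Maruyama simulation of dX = -theta*X dt + sigma dW on the fine grid.
x = np.empty(n_fine + 1)
x[0] = 0.0
for t in range(n_fine):
    x[t + 1] = x[t] - theta * x[t] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Left-endpoint Riemann sums approximate the interval integrals; their noisy
# versions are the only data an estimation method would get to see.
integrals = np.add.reduceat(x[:-1] * dt, np.arange(0, n_fine, per_obs))
y = integrals + 0.01 * rng.standard_normal(n_obs)  # measurement error
print(y.shape)
```

Treating the fine-grid path as missing data is what makes an EM-type algorithm, with diffusion bridges filling in the unobserved path segments, a natural fit here.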
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot both be met: in the super-Earth mass range (1-12 Mearth), the maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced for planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.
Maximum parsimony on subsets of taxa.
Fischer, Mareike; Thatte, Bhalchandra D
2009-09-21
In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
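The bottom-up pass of Fitch's algorithm discussed above can be sketched in a few lines; the tree and character states below are illustrative, not an example from the paper.

```python
# Fitch's maximum parsimony algorithm (bottom-up pass) on a rooted binary
# tree: each internal node gets the intersection of its children's state sets
# if non-empty (no change needed), else their union at the cost of one change.

def fitch(node, leaf_states):
    """Return (state set, parsimony cost) for the subtree rooted at node.

    node: a leaf label (str) or a (left, right) tuple of subtrees.
    """
    if isinstance(node, str):                  # leaf: fixed observed state
        return {leaf_states[node]}, 0
    lset, lcost = fitch(node[0], leaf_states)
    rset, rcost = fitch(node[1], leaf_states)
    inter = lset & rset
    if inter:                                  # agreement: no extra change
        return inter, lcost + rcost
    return lset | rset, lcost + rcost + 1      # disagreement: one change

# Four taxa with a two-state character; the root set is the MP estimate of
# the ancestral state, and the cost is the minimum number of changes.
tree = (("a", "b"), ("c", "d"))
states = {"a": 0, "b": 0, "c": 1, "d": 0}
root_set, cost = fitch(tree, states)
print(root_set, cost)  # -> {0} 1
```

Restricting the input to a subset of taxa simply means running the same recursion on a pruned tree, which is the comparison the paper analyzes.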
A Systematic Review of the Effects of Resident Duty Hour Restrictions in Surgery
Devitt, Katharine S.; Keshet, Itay; Spicer, Jonathan; Imrie, Kevin; Feldman, Liane; Cools-Lartigue, Jonathan; Kayssi, Ahmed; Lipsman, Nir; Elmi, Maryam; Kulkarni, Abhaya V.; Parshuram, Chris; Mainprize, Todd; Warren, Richard J.; Fata, Paola; Gorman, M. Sean; Feinberg, Stan; Rutka, James
2014-01-01
Background: In 2003, the Accreditation Council for Graduate Medical Education (ACGME) mandated 80-hour resident duty limits. In 2011, the ACGME mandated 16-hour duty maximums for PGY-1 (postgraduate year 1) residents. The stated goals were to improve patient safety, resident well-being, and education. A systematic review and meta-analysis were performed to evaluate the impact of resident duty hours (RDH) on clinical and educational outcomes in surgery. Methods: A systematic review (1980–2013) was executed on CINAHL, Cochrane Database, Embase, Medline, and Scopus. Quality of articles was assessed using the GRADE guidelines. Sixteen-hour shifts and night float systems were analyzed separately. Articles that examined mortality data were combined in a random-effects meta-analysis to evaluate the impact of RDH on patient mortality. Results: A total of 135 articles met the inclusion criteria. Among these, 42% (N = 57) were considered moderate-high quality. There was no overall improvement in patient outcomes as a result of RDH; however, some studies suggest increased complication rates in high-acuity patients. There was no improvement in education related to RDH restrictions, and performance on certification examinations has declined in some specialties. Survey studies revealed a perception of worsened education and patient safety. There were improvements in resident wellness after the 80-hour workweek, but there was little improvement or negative effects on wellness after 16-hour duty maximums were implemented. Conclusions: Recent RDH changes are not consistently associated with improvements in resident well-being, and have negative impacts on patient outcomes and performance on certification examinations. Greater flexibility to accommodate resident training needs is required. Further erosion of training time should be considered with great caution. PMID:24662409
Maximum entropy analysis of liquid diffraction data
International Nuclear Information System (INIS)
Root, J.H.; Egelstaff, P.A.; Nickel, B.G.
1986-01-01
A maximum entropy method for reducing truncation effects in the inverse Fourier transform of the structure factor, S(q), to the pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
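The truncation problem that motivates the method can be illustrated with a toy inversion: the Gaussian-peak S(q), density rho, and cutoff q_max below are all assumptions for illustration, not the PY hard-sphere input used in the paper.

```python
import numpy as np

# Plain (truncated) inversion of a structure factor via the standard relation
#   g(r) = 1 + (1 / (2 pi^2 rho r)) * Int_0^{q_max} q (S(q) - 1) sin(qr) dq,
# approximated by a Riemann sum over the truncated q grid. Because the data
# stop at q_max, the resulting g(r) carries spurious truncation ripples,
# which is what the maximum entropy method is designed to suppress.

rho = 0.8                               # number density (assumed units)
r = np.linspace(0.05, 10.0, 400)
q = np.linspace(1e-3, 8.0, 800)         # data truncated at q_max = 8
S = 1 + np.exp(-(q - 2.0) ** 2)         # toy structure factor peak, not PY

dq = q[1] - q[0]
integrand = q * (S - 1) * np.sin(np.outer(r, q))   # shape (len(r), len(q))
gr = 1 + (integrand * dq).sum(axis=1) / (2 * np.pi**2 * rho * r)
print(gr.shape)
```

Sharpening the cutoff or lowering q_max makes the ripples in g(r) more pronounced, which is the "spurious structure" the abstract refers to.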
A Maximum Resonant Set of Polyomino Graphs
Directory of Open Access Journals (Sweden)
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M such that each of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Automatic maximum entropy spectral reconstruction in NMR
International Nuclear Information System (INIS)
Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.
2007-01-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum neutron flux at thermal nuclear reactors
International Nuclear Information System (INIS)
Strugar, P.
1968-10-01
Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with central reflectors, dealing with spatial distributions of fuel elements that would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. A direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Determining the maximum neutron flux thus amounts to solving a variational problem that is beyond the reach of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontrjagin. The optimum distribution of fuel concentration was obtained in explicit analytical form, so the spatial distribution of the neutron flux and the critical dimensions of quite complex reactor systems can be calculated in a relatively simple way. In addition to the novelty of the results, this approach is interesting because of the optimization procedure itself.
Meeting the Canadian 24-Hour Movement Guidelines for Children and Youth.
Roberts, Karen C; Yao, Xiaoquan; Carson, Valerie; Chaput, Jean-Philippe; Janssen, Ian; Tremblay, Mark S
2017-10-18
The Canadian 24-Hour Movement Guidelines for Children and Youth: An Integration of Physical Activity, Sedentary Behaviour, and Sleep provide specific recommendations on the amount of time over a typical 24-hour day that children and youth aged 5 to 17 should spend in moderate-to-vigorous physical activity (at least 60 minutes), recreational screen time (no more than 2 hours), and sleep (9 to 11 hours for 5- to 13-year-olds; 8 to 10 hours for 14- to 17-year-olds). Based on combined results of cycles 2 (2009-to-2011) and 3 (2012-to-2013) of the Canadian Health Measures Survey, this analysis examines average daily moderate-to-vigorous physical activity, screen time and sleep duration of 5- to 11-year-olds and 12- to 17-year-olds, and the percentages meeting the 24-Hour Guidelines' recommendations. Findings are presented overall and by age group and sex. Differences in average daily times between groups were tested for statistical significance, as were differences between groups in the percentages meeting each recommendation and combination of recommendations. Overall, 17.5% of children and youth met the 24-Hour Guidelines' specific time recommendations. Higher percentages of children than youth (29.6% versus 5.5%) and boys than girls (22.9% versus 11.8%) met the recommendations. About a third (36.3%) met two of the three recommendations. Recommendations for moderate-to-vigorous physical activity, sedentary behaviour, and sleep have higher levels of adherence among children than youth.
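The adherence check applied in this analysis can be sketched directly from the thresholds stated above; the function name and example values are illustrative.

```python
# Guideline-adherence check using the thresholds quoted in the abstract:
# >= 60 min/day of MVPA, <= 2 h/day recreational screen time, and age-banded
# sleep (9-11 h for ages 5-13, 8-10 h for ages 14-17).

def meets_24h_guidelines(age, mvpa_min, screen_h, sleep_h):
    """Return which of the three 24-Hour Guideline recommendations are met."""
    if 5 <= age <= 13:
        sleep_ok = 9 <= sleep_h <= 11
    elif 14 <= age <= 17:
        sleep_ok = 8 <= sleep_h <= 10
    else:
        raise ValueError("guidelines cover ages 5 to 17")
    return {
        "mvpa": mvpa_min >= 60,
        "screen": screen_h <= 2,
        "sleep": sleep_ok,
    }

result = meets_24h_guidelines(age=10, mvpa_min=75, screen_h=3.0, sleep_h=10)
print(result)  # meets MVPA and sleep but not the screen-time recommendation
```

A child meets the full guidelines only when all three values in the returned dictionary are true, which is the 17.5% figure reported above.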
Aarthi, G.; Ramachandra Reddy, G.
2018-03-01
In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with on-off keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. To further enhance the ASE, we have incorporated aperture-averaging effects along with the above adaptive schemes. The results indicate that the ORA scheme improves ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and can achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture-averaging effect we can achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.
Soccolich, Susan A; Blanco, Myra; Hanowski, Richard J; Olson, Rebecca L; Morgan, Justin F; Guo, Feng; Wu, Shih-Ching
2013-09-01
Current hours-of-service (HOS) regulations prescribe limits to commercial motor vehicle (CMV) drivers' operating hours. Using naturalistic data collection, researchers were able to assess activities performed in the 14-h workday and the relationship between safety-critical events (SCEs) and driving hours, work hours, and breaks. The data used in the analyses were collected in the Naturalistic Truck Driving Study and included 97 drivers and about 735,000 miles of continuous driving data. An assessment of the drivers' workday determined that, on average, drivers spent 66% of their shift driving, 23% in non-driving work, and 11% resting. Analyses evaluating the relationship between driving hours (i.e., driving only) and SCE risk found a time-on-task effect across hours, with no significant difference in safety outcomes between the 11th driving hour and driving hours 8, 9, or 10. Analyses of work hours (i.e., driving in addition to non-driving work) found that the risk of being involved in an SCE generally increased as work hours increased. This suggests that time-on-task effects may not be related to driving hours alone, but rather to an interaction between driving hours and work hours: if a driver begins the day with several hours of non-driving work, followed by driving that extends deep into the 14-h workday, SCE risk was found to increase. Breaks from driving were found to be beneficial in reducing SCEs (during the 1-h window after a break) and were effective in counteracting the negative effects of time-on-task. Copyright © 2012 Elsevier Ltd. All rights reserved.
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
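The two quantile-based ingredients named above, location-specific quantiles and local weighted smoothing, can be sketched on synthetic data. This is a toy illustration only, not the QBME method itself, and the kernel bandwidth and data parameters are assumptions.

```python
import numpy as np

# Toy sketch: compute a per-day upper quantile of synthetic hourly PM10-like
# concentrations, then smooth the daily quantile series with a locally
# weighted (Gaussian-kernel) average, in the spirit of the local weighted
# smoothing the abstract describes.

rng = np.random.default_rng(3)
days, hours = 30, 24
conc = rng.lognormal(mean=1.0, sigma=0.5, size=(days, hours))  # synthetic data

daily_q90 = np.quantile(conc, 0.9, axis=1)   # per-day 90th-percentile level

def kernel_smooth(y, bandwidth=3.0):
    """Nadaraya-Watson smoothing of a daily series with a Gaussian kernel."""
    t = np.arange(y.size, dtype=float)
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)           # weighted average per day

smooth_q90 = kernel_smooth(daily_q90)
print(smooth_q90.shape)
```

Working with quantiles rather than means is what lets a method of this kind tolerate the strong, spatially nonhomogeneous variance the abstract highlights.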