WorldWideScience

Sample records for greater average improvement

  1. A group's physical attractiveness is greater than the average attractiveness of its members: The group attractiveness effect

    NARCIS (Netherlands)

    van Osch, Y.M.J.; Blanken, Irene; Meijs, Maartje H. J.; van Wolferen, Job

    2015-01-01

    We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of …

  2. A group's physical attractiveness is greater than the average attractiveness of its members: the group attractiveness effect.

    Science.gov (United States)

    van Osch, Yvette; Blanken, Irene; Meijs, Maartje H J; van Wolferen, Job

    2015-04-01

    We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of physical attractiveness are more positive than the average ratings of the group members. A meta-analysis on 33 comparisons reveals that the effect is medium to large (Cohen's d = 0.60) and moderated by group size. We explored two explanations for the GA-effect: (a) selective attention to attractive group members, and (b) the Gestalt principle of similarity. The results of our studies are in favor of the selective attention account: People selectively attend to the most attractive members of a group and their attractiveness has a greater influence on the evaluation of the group. © 2015 by the Society for Personality and Social Psychology, Inc.

  3. Greater-than-Class C low-level waste characterization. Appendix I: Impact of concentration averaging low-level radioactive waste volume projections

    International Nuclear Information System (INIS)

    Tuite, P.; Tuite, K.; O'Kelley, M.; Ely, P.

    1991-08-01

    This study provides a quantitative framework for bounding unpackaged greater-than-Class C low-level radioactive waste types as a function of concentration averaging. The study defines the three concentration averaging scenarios that lead to base, high, and low volumetric projections; identifies those waste types that could be greater-than-Class C under the high volume, or worst case, concentration averaging scenario; and quantifies the impact of these scenarios on identified waste types relative to the base case scenario. The base volume scenario was assumed to reflect current requirements at the disposal sites as well as the regulatory views. The high volume scenario was assumed to reflect the most conservative criteria as incorporated in some compact host state requirements. The low volume scenario was assumed to reflect the 10 CFR Part 61 criteria as applicable to both shallow land burial facilities and to practices that could be employed to reduce the generation of Class C waste types.

  4. Improving sensitivity in micro-free flow electrophoresis using signal averaging

    Science.gov (United States)

    Turgeon, Ryan T.; Bowser, Michael T.

    2009-01-01

    Microfluidic free-flow electrophoresis (μFFE) is a separation technique that separates continuous streams of analytes as they travel through an electric field in a planar flow channel. The continuous nature of the μFFE separation suggests that approaches more commonly applied in spectroscopy and imaging may be effective in improving sensitivity. The current paper describes the S/N improvements that can be achieved by simply averaging multiple images of a μFFE separation; 20–24-fold improvements in S/N were observed by averaging the signal from 500 images recorded over 2 min. Up to an 80-fold improvement in S/N was observed by averaging 6500 images. Detection limits as low as 14 pM were achieved for fluorescein, which is impressive considering the non-ideal optical set-up used in these experiments. The limitation to this signal averaging approach was the stability of the μFFE separation. At separation times longer than 20 min, bubbles began to form at the electrodes, which disrupted the flow profile through the device, giving rise to erratic peak positions. PMID:19319908
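
    The roughly 20–24-fold gain from 500 images is consistent with the usual √N scaling of signal averaging under independent noise. A minimal sketch (hypothetical signal and noise levels, not the authors' data) illustrating that scaling:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 1.0        # constant fluorescence signal at one detector pixel
noise_sigma = 0.5   # per-image noise standard deviation

def snr_after_averaging(n_images, n_trials=10000):
    # Average n_images independent noisy observations of the same signal,
    # then estimate S/N from the spread of the averaged values.
    frames = signal + noise_sigma * rng.normal(size=(n_trials, n_images))
    return signal / frames.mean(axis=1).std()

snr_1 = snr_after_averaging(1)
snr_500 = snr_after_averaging(500)
print(snr_500 / snr_1)   # roughly sqrt(500), i.e. about 22
```

    With independent noise the improvement grows as √N, which also matches the reported ~80-fold figure for 6500 images (√6500 ≈ 81).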

  5. Regional correlations of VS30 averaged over depths less than and greater than 30 meters

    Science.gov (United States)

    Boore, David M.; Thompson, Eric M.; Cadet, Héloïse

    2011-01-01

    Using velocity profiles from sites in Japan, California, Turkey, and Europe, we find that the time-averaged shear-wave velocity to 30 m (VS30), used as a proxy for site amplification in recent ground-motion prediction equations (GMPEs) and building codes, is strongly correlated with average velocities to depths less than 30 m (VSz, with z being the averaging depth). The correlations for sites in Japan (corresponding to the KiK-net network) show that VS30 is systematically larger for a given VSz than for profiles from the other regions. The difference largely results from the placement of the KiK-net station locations on rock and rocklike sites, whereas stations in the other regions are generally placed in urban areas underlain by sediments. Using the KiK-net velocity profiles, we provide equations relating VS30 to VSz for z ranging from 5 to 29 m in 1-m increments. These equations (and those for California velocity profiles given in Boore, 2004b) can be used to estimate VS30 from VSz for sites in which velocity profiles do not extend to 30 m. The scatter of the residuals decreases with depth, but, even for an averaging depth of 5 m, a variation in logVS30 of ±1 standard deviation maps into less than a 20% uncertainty in ground motions given by recent GMPEs at short periods. The sensitivity of the ground motions to VS30 uncertainty is considerably larger at long periods (but is less than a factor of 1.2 for averaging depths greater than about 20 m). We also find that VS30 is correlated with VSz for z as great as 400 m for sites of the KiK-net network, providing some justification for using VS30 as a site-response variable for predicting ground motions at periods for which the wavelengths far exceed 30 m.
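
    The quantity VSz in this record is the travel-time-averaged shear-wave velocity to depth z. A minimal sketch of that definition (the paper's regression equations relating VS30 to VSz are not reproduced here):

```python
def time_averaged_velocity(thicknesses, velocities, z):
    """Travel-time-averaged shear-wave velocity to depth z (VS30 for z = 30).

    thicknesses: layer thicknesses in metres, top to bottom
    velocities:  layer shear-wave velocities in m/s
    """
    travel_time, depth = 0.0, 0.0
    for h, v in zip(thicknesses, velocities):
        layer = min(h, z - depth)   # clip the last layer at the target depth
        travel_time += layer / v
        depth += layer
        if depth >= z:
            return z / travel_time
    raise ValueError("velocity profile is shallower than z")

# two-layer example: 10 m of 200 m/s material over 20 m at 600 m/s
vs30 = time_averaged_velocity([10, 20], [200, 600], 30)   # 360.0 m/s
```

    Because the average is harmonic (travel-time based), slow near-surface layers dominate it, which is why shallow profiles still predict VS30 well.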

  6. Improving greater trochanteric reattachment with a novel cable plate system.

    Science.gov (United States)

    Baril, Yannick; Bourgeois, Yan; Brailovski, Vladimir; Duke, Kajsa; Laflamme, G Yves; Petit, Yvan

    2013-03-01

    Cable-grip systems are commonly used for greater trochanteric reattachment because they have provided the best fixation performance to date, even though they have a rather high complication rate. A novel reattachment system is proposed with the aim of improving fixation stability. It consists of a Y-shaped fixation plate combined with locking screws and superelastic cables to reduce cable loosening and limit greater trochanter movement. The novel system is compared with a commercially available reattachment system in terms of greater trochanter movement and cable tensions under different greater trochanteric abductor application angles. A factorial design of experiments was used including four independent variables: plate system, cable type, abductor application angle, and femur model. The test procedure included 50 cycles of simultaneous application of an abductor force on the greater trochanter and a hip force on the femoral head. The novel plate reduces the movements of a greater trochanter fragment within a single loading cycle by up to 26%. Permanent degradation of the fixation (accumulated movement based on 50-cycle testing) is reduced by up to 46%. The use of superelastic cables reduces tension loosening by up to 24%. However, this last improvement did not result in a significant reduction of the greater trochanter movement. The novel plate and cables present advantages over the commercially available greater trochanter reattachment system. The plate reduces movements generated by the hip abductor. The superelastic cables reduce cable loosening during cycling. Both of these positive effects could decrease the risks related to greater trochanter non-union. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  7. Greater-than-Class C low-level radioactive waste characterization. Appendix E-5: Impact of the 1993 NRC draft Branch Technical Position on concentration averaging of greater-than-Class C low-level radioactive waste

    International Nuclear Information System (INIS)

    Tuite, P.; Tuite, K.; Harris, G.

    1994-09-01

    This report evaluates the effects of concentration averaging practices on the disposal of greater-than-Class C low-level radioactive waste (GTCC LLW) generated by the nuclear utility industry and sealed sources. Using estimates of the number of waste components that individually exceed Class C limits, this report calculates the proportion that would be classified as GTCC LLW after applying concentration averaging; this proportion is called the concentration averaging factor. The report uses the guidance outlined in the 1993 Nuclear Regulatory Commission (NRC) draft Branch Technical Position on concentration averaging, as well as waste disposal experience at nuclear utilities, to calculate the concentration averaging factors for nuclear utility wastes. The report uses the 1993 NRC draft Branch Technical Position and the criteria from the Barnwell, South Carolina, LLW disposal site to calculate concentration averaging factors for sealed sources. The report addresses three waste groups: activated metals from light water reactors, process wastes from light-water reactors, and sealed sources. For each waste group, three concentration averaging cases are considered: high, base, and low. The base case, which is the most likely case to occur, assumes using the specific guidance given in the 1993 NRC draft Branch Technical Position on concentration averaging. To project future GTCC LLW generation, each waste category is assigned a concentration averaging factor for the high, base, and low cases

  8. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time-varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high-density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large- or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large-area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large- and small-area phase defects. It identifies and rejects phase maps containing large-area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time parameters to tune the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
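
    As a rough illustration of the reject-and-prune idea (not the authors' algorithm; the rejection threshold and the piston-only drift removal are simplifying assumptions):

```python
import numpy as np

def robust_phase_average(phase_maps, max_bad_fraction=0.05):
    """Average a stack of phase maps in which NaN marks voids/unwrap failures.

    Maps with too many bad pixels are rejected outright, and the per-map
    mean (piston) is removed so alignment drift does not inflate the
    variance estimate. A full implementation would also remove tip/tilt
    and prune small-area unwrapping artifacts.
    """
    phase_maps = np.asarray(phase_maps, dtype=float)
    bad_fraction = np.isnan(phase_maps).mean(axis=(1, 2))
    kept = phase_maps[bad_fraction <= max_bad_fraction]
    kept = kept - np.nanmean(kept, axis=(1, 2), keepdims=True)  # piston removal
    return np.nanmean(kept, axis=0), np.nanstd(kept, axis=0)
```

    NaN-aware reductions let the surviving maps contribute wherever they are valid, so a small void in one frame does not bias the whole average.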

  9. Regional correlations of VS30 and velocities averaged over depths less than and greater than 30 meters

    Science.gov (United States)

    Boore, D.M.; Thompson, E.M.; Cadet, H.

    2011-01-01

    Using velocity profiles from sites in Japan, California, Turkey, and Europe, we find that the time-averaged shear-wave velocity to 30 m (VS30), used as a proxy for site amplification in recent ground-motion prediction equations (GMPEs) and building codes, is strongly correlated with average velocities to depths less than 30 m (VSz, with z being the averaging depth). The correlations for sites in Japan (corresponding to the KiK-net network) show that VS30 is systematically larger for a given VSz than for profiles from the other regions. The difference largely results from the placement of the KiK-net station locations on rock and rocklike sites, whereas stations in the other regions are generally placed in urban areas underlain by sediments. Using the KiK-net velocity profiles, we provide equations relating VS30 to VSz for z ranging from 5 to 29 m in 1-m increments. These equations (and those for California velocity profiles given in Boore, 2004b) can be used to estimate VS30 from VSz for sites in which velocity profiles do not extend to 30 m. The scatter of the residuals decreases with depth, but, even for an averaging depth of 5 m, a variation in logVS30 of ±1 standard deviation maps into less than a 20% uncertainty in ground motions given by recent GMPEs at short periods. The sensitivity of the ground motions to VS30 uncertainty is considerably larger at long periods (but is less than a factor of 1.2 for averaging depths greater than about 20 m). We also find that VS30 is correlated with VSz for z as great as 400 m for sites of the KiK-net network, providing some justification for using VS30 as a site-response variable for predicting ground motions at periods for which the wavelengths far exceed 30 m.

  10. Laser properties of an improved average-power Nd-doped phosphate glass

    International Nuclear Information System (INIS)

    Payne, S.A.; Marshall, C.D.; Bayramian, A.J.

    1995-01-01

    The Nd-doped phosphate laser glass described herein can withstand 2.3 times greater thermal loading without fracture, compared to APG-1 (commercially-available average-power glass from Schott Glass Technologies). The enhanced thermal loading capability is established on the basis of the intrinsic thermomechanical properties (expansion, conduction, fracture toughness, and Young's modulus), and by direct thermally-induced fracture experiments using Ar-ion laser heating of the samples. This Nd-doped phosphate glass (referred to as APG-t) is found to be characterized by a 29% lower gain cross section and a 25% longer low-concentration emission lifetime

  11. Average chewing pattern improvements following Disclusion Time reduction.

    Science.gov (United States)

    Kerstein, Robert B; Radke, John

    2017-05-01

    Studies involving electrognathographic (EGN) recordings of chewing improvements obtained following occlusal adjustment therapy are rare, as most studies lack 'chewing' within the research. The objectives of this study were to determine whether reducing a long Disclusion Time to a short Disclusion Time with the immediate complete anterior guidance development (ICAGD) coronoplasty altered the average chewing pattern (ACP) and muscle function of symptomatic subjects. Twenty-nine muscularly symptomatic subjects underwent simultaneous EMG and EGN recordings of right and left gum chewing, before and after the ICAGD coronoplasty. Differences in the mean Disclusion Time, the mean muscle contraction cycle, and the mean ACP resulting from ICAGD were assessed with Student's paired t-test (α = 0.05). Disclusion Time reductions from ICAGD were significant (2.11 s to 0.45 s, p = 0.0000). Post-ICAGD muscle changes were significant in the mean area (p = 0.000001), the peak amplitude (p = 0.00005), and the time to peak contraction; the chewing position became closer to centric occlusion and chewing velocities increased. Average chewing pattern (ACP) shape, speed, consistency, muscular coordination, and vertical opening improved significantly in muscularly dysfunctional TMD patients within one week of undergoing the ICAGD enameloplasty. Computer-measured and guided occlusal adjustments quickly and physiologically improved chewing, without requiring the patients to wear pre- or post-treatment appliances.

  12. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which …
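
    The core idea, driving a structure toward the averaged coordinates with a harmonic pseudo-energy while restoring physical bond lengths via Metropolis Monte Carlo, can be sketched on a toy Cα chain (force constants, step size, temperature, and the 3.8 Å ideal bond length are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
IDEAL_BOND = 3.8   # idealized consecutive-Calpha distance, in angstroms

def pseudo_energy(coords, target, k_target=1.0, k_bond=10.0):
    # Harmonic pull toward the averaged structure plus a bond-length
    # restraint that penalizes unphysical consecutive-residue distances.
    e_target = k_target * np.sum((coords - target) ** 2)
    bonds = np.linalg.norm(np.diff(coords, axis=0), axis=1)
    return e_target + k_bond * np.sum((bonds - IDEAL_BOND) ** 2)

def refine(start, target, n_steps=20000, step=0.1, temperature=0.05):
    # Metropolis Monte Carlo: random single-residue moves, accepted when
    # they lower the pseudo-energy (or occasionally uphill, per Boltzmann).
    coords, energy = start.copy(), pseudo_energy(start, target)
    for _ in range(n_steps):
        trial = coords.copy()
        trial[rng.integers(len(coords))] += step * rng.normal(size=3)
        e_trial = pseudo_energy(trial, target)
        if e_trial < energy or rng.random() < np.exp((energy - e_trial) / temperature):
            coords, energy = trial, e_trial
    return coords

# "averaged" 4-residue chain with one collapsed (1.2 A) bond artifact
target = np.array([[0.0, 0, 0], [3.8, 0, 0], [5.0, 0, 0], [8.8, 0, 0]])
refined = refine(target.copy(), target)
```

    The refined chain stays close to the averaged coordinates overall while the collapsed bond is stretched back toward a physical length, mirroring the paper's trade-off of a tiny RMSD penalty for a large reduction in clashes.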

  13. Exploring JLA supernova data with improved flux-averaging technique

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Shuang; Wen, Sixiang; Li, Miao, E-mail: wangshuang@mail.sysu.edu.cn, E-mail: wensx@mail2.sysu.edu.cn, E-mail: limiao9@mail.sysu.edu.cn [School of Physics and Astronomy, Sun Yat-Sen University, University Road (No. 2), Zhuhai (China)

    2017-03-01

    In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the criterion of figure of merit (FoM) and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) The best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to the specific DE parameterization. (2) Flux-averaging JLA samples at z_cut ≥ 0.4 yields tighter DE constraints than the case without FA. (3) Using FA can significantly reduce the redshift-evolution of β. (4) The best FA recipe favors a larger fractional matter density Ω_m. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
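
    A minimal sketch of the flux-averaging step itself, under the simplifying assumption that relative fluxes can be taken as 10^(-0.4 μ) and binned directly (the published recipe also rescales fluxes to a fiducial cosmology and propagates the covariance):

```python
import numpy as np

def flux_average(z, mu, z_cut=0.6, dz=0.06):
    """Flux-average SNe above z_cut in redshift bins of width dz.

    Distance moduli mu are converted to relative fluxes F ~ 10**(-0.4*mu),
    averaged per bin, and converted back; SNe below z_cut pass through
    unchanged. Covariance bookkeeping is omitted in this sketch.
    """
    z, mu = np.asarray(z, float), np.asarray(mu, float)
    low = z < z_cut
    out_z, out_mu = list(z[low]), list(mu[low])
    edges = np.arange(z_cut, z.max() + 2 * dz, dz)   # cover the last SN
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (z >= lo) & (z < hi)
        if in_bin.any():
            flux = 10.0 ** (-0.4 * mu[in_bin])
            out_z.append(z[in_bin].mean())
            out_mu.append(-2.5 * np.log10(flux.mean()))
    return np.array(out_z), np.array(out_mu)
```

    Averaging in flux rather than in magnitude is what suppresses the lensing- and selection-driven scatter that biases magnitude averages at high redshift.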

  14. NDVI saturation adjustment: a new approach for improving cropland performance estimates in the Greater Platte River Basin, USA

    Science.gov (United States)

    Gu, Yingxin; Wylie, Bruce K.; Howard, Daniel M.; Phuyal, Khem P.; Ji, Lei

    2013-01-01

    In this study, we developed a new approach that adjusted normalized difference vegetation index (NDVI) pixel values that were near saturation to better characterize the cropland performance (CP) in the Greater Platte River Basin (GPRB), USA. The relationship between NDVI and the ratio vegetation index (RVI) at high NDVI values was investigated, and an empirical equation for estimating saturation-adjusted NDVI (NDVIsat_adjust) based on RVI was developed. A 10-year (2000–2009) NDVIsat_adjust data set was developed using 250-m 7-day composite historical eMODIS (expedited Moderate Resolution Imaging Spectroradiometer) NDVI data. The growing season averaged NDVI (GSN), which is a proxy for ecosystem performance, was estimated and long-term NDVI non-saturation- and saturation-adjusted cropland performance (CPnon_sat_adjust, CPsat_adjust) maps were produced over the GPRB. The final CP maps were validated using National Agricultural Statistics Service (NASS) crop yield data. The relationship between CPsat_adjust and the NASS average corn yield data (r = 0.78, 113 samples) is stronger than the relationship between CPnon_sat_adjust and the NASS average corn yield data (r = 0.67, 113 samples), indicating that the new CPsat_adjust map reduces the NDVI saturation effects and is in good agreement with the corn yield ground observations. Results demonstrate that the NDVI saturation adjustment approach improves the quality of the original GSN map and better depicts the actual vegetation conditions of the GPRB cropland systems.
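
    The record's empirical NDVIsat_adjust equation is not reproduced in the abstract, but the exact algebraic link between the two indices that such an adjustment builds on is standard: with RVI = NIR/Red and NDVI = (NIR − Red)/(NIR + Red), one has NDVI = (RVI − 1)/(RVI + 1). A minimal sketch:

```python
def ndvi_from_bands(nir, red):
    return (nir - red) / (nir + red)

def rvi_from_bands(nir, red):
    return nir / red

def ndvi_from_rvi(rvi):
    # algebraic identity: NDVI = (RVI - 1) / (RVI + 1)
    return (rvi - 1.0) / (rvi + 1.0)

# dense-canopy example: NDVI is near saturation while RVI still varies
nir, red = 0.60, 0.05
```

    Near saturation (NDVI → 1) RVI grows without bound, which is why an RVI-based correction can recover dynamic range that NDVI alone has lost.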

  15. Improved performance of high average power semiconductor arrays for applications in diode pumped solid state lasers

    International Nuclear Information System (INIS)

    Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

    1994-01-01

    The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSLs). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSLs which are appropriate for material processing applications, low and intermediate average power DPSSLs are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications.

  16. Improving the Grade Point Average of Our At-Risk Students: A Collaborative Group Action Research Approach.

    Science.gov (United States)

    Saurino, Dan R.; Hinson, Kenneth; Bouma, Amy

    This paper focuses on the use of a group action research approach to help student teachers develop strategies to improve the grade point average of at-risk students. Teaching interventions such as group work and group and individual tutoring were compared to teaching strategies already used in the field. Results indicated an improvement in the…

  17. Improve Gear Fault Diagnosis and Severity Indexes Determinations via Time Synchronous Average

    Directory of Open Access Journals (Sweden)

    Mohamed El Morsy

    2016-11-01

    In order to reduce operation and maintenance costs, prognostics and health management (PHM) of geared systems needs effective gearbox fault detection tools. A PHM system allows less costly maintenance because it can inform operators of needed repairs before a fault causes collateral damage to the gearbox. In this article, the time synchronous average (TSA) technique and complex continuous wavelet analysis are used as a gear fault detection approach. In the first step, the periodic waveform is extracted from the noisy measured signal; this is the main value of TSA for gearbox signal analysis, as it allows the vibration signature of the gear under analysis to be separated from other gears and from noise sources in the gearbox that are not synchronous with the faulty gear. In the second step, complex wavelet analysis is used in the case of multiple faults in the same gear. The signal is phase-locked with the angular position of a shaft within the system. The main aim of this research is to improve gear fault diagnosis and severity index determination based on the TSA of signals measured on a passenger vehicle gearbox under different operating conditions. In addition, correcting variations in shaft speed, so that spectral energy does not spread into adjacent gear mesh bins, helps in detecting the gear fault position (faulty tooth or teeth) and improves Root Mean Square (RMS), kurtosis, and peak pulse as severity indexes for maintenance and PHM purposes. The open-loop test stand is equipped with two dynamometers and the investigated gearbox of a mid-size passenger car; the total power is taken off from one side only. Reference Number: www.asrongo.org/doi:4.2016.1.1.6
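
    The TSA step itself reduces to averaging the signal revolution by revolution once it has been resampled to the shaft angle. A minimal sketch (the angular resampling driven by a tachometer pulse is assumed to have been done already):

```python
import numpy as np

def time_synchronous_average(signal, samples_per_rev):
    """Average a vibration signal revolution by revolution.

    Assumes the signal is already resampled to a fixed number of samples
    per shaft revolution. Components synchronous with the shaft reinforce,
    while noise and non-synchronous components average toward zero.
    """
    n_revs = len(signal) // samples_per_rev
    revs = np.reshape(np.asarray(signal)[: n_revs * samples_per_rev],
                      (n_revs, samples_per_rev))
    return revs.mean(axis=0)
```

    Averaging over N revolutions suppresses asynchronous content by roughly √N, which is what isolates the monitored gear's signature from the rest of the gearbox.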

  18. Greater utilization of wood residue fuels through improved financial planning

    International Nuclear Information System (INIS)

    Billings, C.D.; Ziemke, M.C.; Stanford, R.

    1991-01-01

    Recent events have focused attention on the promotion of greater utilization of biomass fuel. Considerations include the need to reduce increases in global warming and also to improve ground level air quality by limiting the use of fossil fuels. However, despite all these important environmentally related considerations, economics remains the most important factor in the decision process used to determine the feasibility of using available renewable fuels instead of more convenient fossil fuels. In many areas of the Southeast, this decision process involves choosing between wood residue fuels such as bark, sawdust and shavings and presently plentiful natural gas. The primary candidate users of wood residue fuels are industries that use large amounts of heat and electric power and are located near centers of activity in the forest products industry such as sawmills, veneer mills and furniture factories. Given that such facilities both produce wood residues and need large amounts of heat and electricity, it is understandable that these firms are often major users of wood-fired furnaces and boilers. The authors have observed that poor or incomplete financial planning by the subject firms is a major barrier to economic utilization of inexpensive and widely available renewable fuels. In this paper, the authors suggest that wider usage of improved financial planning could double the present modest annual incidence of new commercial wood-fueled installations.

  19. Greater Proptosis Is Not Associated With Improved Compressive Optic Neuropathy in Thyroid Eye Disease.

    Science.gov (United States)

    Nanda, Tavish; Dunbar, Kristen E; Campbell, Ashley A; Bathras, Ryan M; Kazim, Michael

    2018-05-18

    Despite the paucity of supporting data, it has generally been held that proptosis in thyroid eye disease (TED) may provide relative protection from compressive optic neuropathy (CON) by producing spontaneous decompression. The objective of this study was to investigate this phenomenon in patients with bilateral TED-CON. We retrospectively reviewed the charts of 67 patients (134 orbits) with bilateral TED-CON at Columbia-Presbyterian Medical Center. Significant asymmetric proptosis (Hertel) was defined as ≥ 2 mm. Significant asymmetric CON was defined first as the presence of a relative afferent pupillary defect. Those without a relative afferent pupillary defect were evaluated according to the TED-CON formula y = -0.69 - 0.31 × (motility) - 0.2 × (mean deviation) - 0.02 × (color vision), as previously established for the diagnosis of TED-CON. A difference in the formula result ≥ 1.0 between eyes was considered significant. Patients were then divided into 4 groups. Forty-one of 67 patients demonstrated asymmetric CON (29 by relative afferent pupillary defect, 12 by formula). Twenty-one of 67 patients demonstrated asymmetric proptosis. Only 5 of 12 (41.6%) of the patients who had both asymmetric proptosis and asymmetric CON (group 1) showed greater proptosis in the eye with less CON. Twenty-nine patients (group 2) showed that asymmetric CON occurred despite symmetrical proptosis. Seventeen patients (group 3) showed the inverse, that asymmetric differences in proptosis occurred with symmetrical CON. Despite commonly held assumptions, our results suggest that greater proptosis is not associated with improved TED-CON. Combining groups 1 to 3, all of which demonstrated asymmetry of either proptosis, CON, or both, 91.4% of patients did not show a relationship between greater proptosis and improved CON.
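
    The screening logic described above, the cited TED-CON formula plus the ≥ 1.0 between-eye difference criterion, is simple to state in code (input units follow the formula's original clinical definitions, which the abstract does not spell out):

```python
def tedcon_score(motility, mean_deviation, color_vision):
    # TED-CON formula quoted in the abstract; inputs are the clinical
    # measurements in the units of the formula's original definition.
    return -0.69 - 0.31 * motility - 0.2 * mean_deviation - 0.02 * color_vision

def asymmetric_con(score_right, score_left, threshold=1.0):
    # a between-eye difference in the formula result >= 1.0 was
    # considered significant asymmetric CON
    return abs(score_right - score_left) >= threshold
```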

  20. Simultaneous bilateral isolated greater trochanter fracture

    Directory of Open Access Journals (Sweden)

    Maruti Kambali

    2013-01-01

    A 48-year-old woman sustained simultaneous isolated bilateral greater trochanteric fractures following a road traffic accident. The patient presented to us 1 month after the injury, complaining of pain in the left hip and inability to walk. Roentgenograms revealed displaced, comminuted bilateral greater trochanter fractures. The fracture of the left greater trochanter was reduced and fixed internally using the tension band wiring technique. The greater trochanter fracture on the right side was asymptomatic and was managed conservatively. The patient regained full range of motion and use of her hips after a postoperative follow-up of 6 months. Isolated fractures of the greater trochanter are unusual injuries. Because of their relative rarity and the unsettled controversy regarding their etiology and pathogenesis, several methods of treatment have been advocated. Furthermore, reports of this particular type of injury are not plentiful and the average textbook coverage afforded to this entity is limited. We discuss the mechanism of injury and the various treatment options available.

  21. Improved contrast deep optoacoustic imaging using displacement-compensated averaging: breast tumour phantom studies

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, M; Preisser, S; Kitz, M; Frenz, M [Institute of Applied Physics, University of Bern, Sidlerstrasse 5, CH-3012 Bern (Switzerland); Ferrara, D; Senegas, S; Schweizer, D, E-mail: frenz@iap.unibe.ch [Fukuda Denshi Switzerland AG, Reinacherstrasse 131, CH-4002 Basel (Switzerland)

    2011-09-21

    For real-time optoacoustic (OA) imaging of the human body, a linear array transducer and reflection mode optical irradiation is usually preferred. Such a setup, however, results in significant image background, which prevents imaging structures at the ultimate depth determined by the light distribution and the signal noise level. Therefore, we previously proposed a method for image background reduction, based on displacement-compensated averaging (DCA) of image series obtained when the tissue sample under investigation is gradually deformed. OA signals and background signals are differently affected by the deformation and can thus be distinguished. The proposed method is now experimentally applied to image artificial tumours embedded inside breast phantoms. OA images are acquired alternately with pulse-echo images using a combined OA/echo-ultrasound device. Tissue deformation is assessed via speckle tracking in pulse-echo images, and used to compensate the OA images for the local tissue displacement. In that way, OA sources are highly correlated between subsequent images, while background is decorrelated and can therefore be reduced by averaging. We show that image contrast in breast phantoms is strongly improved and the detectability of embedded tumours significantly increased using the DCA method.
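The displacement-compensation step can be illustrated with a minimal one-dimensional sketch (an illustration under simplifying assumptions, not the authors' implementation): each frame's shift relative to a reference is estimated by maximizing cross-correlation, standing in for speckle tracking, the frames are shifted back into alignment, and the aligned frames are averaged so that correlated OA sources reinforce while decorrelated background averages out.

```python
def best_shift(ref, frame, max_shift=3):
    # estimate the integer displacement of `frame` relative to `ref`
    # by maximizing cross-correlation (a stand-in for speckle tracking)
    def corr(s):
        return sum(ref[i] * frame[i + s]
                   for i in range(len(ref)) if 0 <= i + s < len(frame))
    return max(range(-max_shift, max_shift + 1), key=corr)

def displacement_compensated_average(frames):
    # align every frame to the first one, then average: correlated sources
    # add coherently while decorrelated background is suppressed
    ref, n = frames[0], len(frames[0])
    acc = [0.0] * n
    for frame in frames:
        s = best_shift(ref, frame)
        for i in range(n):
            acc[i] += frame[i + s] if 0 <= i + s < n else 0.0
    return [v / len(frames) for v in acc]
```

In 2-D ultrasound data the same idea applies per image block, with sub-sample interpolation of the displacement field.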

  2. Socio-economic considerations of cleaning Greater Vancouver's air

    International Nuclear Information System (INIS)

    2005-08-01

    Socio-economic considerations of better air quality on the Greater Vancouver population and economy were discussed. The purpose of the study was to provide socio-economic information to staff and stakeholders of the Greater Vancouver Regional District (GVRD) who are participating in an Air Quality Management Plan (AQMP) development process and the Sustainable Region Initiative (SRI) process. The study incorporated the following methodologies: identification and review of Canadian, American, and European quantitative socio-economic, cost-benefit, cost effectiveness, competitiveness and health analyses of changes in air quality and measures to improve air quality; interviews with industry representatives in Greater Vancouver on competitiveness impacts of air quality changes and ways to improve air quality; and a qualitative analysis and discussion of secondary quantitative information that identifies and evaluates socio-economic impacts arising from changes in Greater Vancouver air quality. The study concluded that for the Greater Vancouver area, the qualitative analysis of an improvement in Greater Vancouver air quality shows positive socio-economic outcomes, as high positive economic efficiency impacts are expected along with good social quality of life impacts. 149 refs., 30 tabs., 6 appendices

  3. Greater use of wood residue fuels through improved financial planning: a case study in Alabama

    Energy Technology Data Exchange (ETDEWEB)

    Billings, C.D.; Ziemke, M.C. (Alabama Univ., Huntsville, AL (United States). Coll. of Administrative Science); Stanford, R. (Alabama Dept. of Economic and Community Affairs, Montgomery, AL (United States))

    1993-01-01

    As the world reacts to environmental concerns relating to fossil energy usage, emphasis is again placed on greater use of renewable fuels such as wood residues. Realistically, however, decisions to utilize such fuels are based on economic factors, rather than desires to improve US energy independence and/or protect the environment. Because Alabama has a large forest products industry, state authorities have long sought to assist potential users of wood residue fuels to better use biomass fuels instead of the usual alternative: natural gas. State agency experience in promoting commercial and industrial use of wood residue fuels has shown that inadequate financial planning has often resulted in rejection of viable projects or acceptance of non-optimum projects. This paper discusses the reasons for this situation and suggests remedies for its improvement. (author)

  4. Analysis and Design of Improved Weighted Average Current Control Strategy for LCL-Type Grid-Connected Inverters

    DEFF Research Database (Denmark)

    Han, Yang; Li, Zipeng; Yang, Ping

    2017-01-01

    The LCL grid-connected inverter has the ability to attenuate the high-frequency current harmonics. However, the inherent resonance of the LCL filter affects the system stability significantly. To damp the resonance effect, dual-loop current control can be used to stabilize the system. The grid current plus capacitor current feedback scheme is widely used for its better transient response and high robustness against grid impedance variations, while the weighted average current (WAC) feedback scheme can provide a wider bandwidth at higher frequencies but shows poorer stability.

  5. Power Efficiency Improvements through Peak-to-Average Power Ratio Reduction and Power Amplifier Linearization

    Directory of Open Access Journals (Sweden)

    Zhou G Tong

    2007-01-01

    Full Text Available Many modern communication signal formats, such as orthogonal frequency-division multiplexing (OFDM and code-division multiple access (CDMA, have high peak-to-average power ratios (PARs. A signal with a high PAR not only is vulnerable in the presence of nonlinear components such as power amplifiers (PAs, but also leads to low transmission power efficiency. Selected mapping (SLM and clipping are well-known PAR reduction techniques. We propose to combine SLM with threshold clipping and digital baseband predistortion to improve the overall efficiency of the transmission system. Testbed experiments demonstrate the effectiveness of the proposed approach.
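For illustration, the PAR of a sampled signal and the effect of hard threshold clipping can be computed directly (a toy real-valued example, not the paper's OFDM testbed):

```python
import math

def par_db(samples):
    # peak-to-average power ratio in dB
    powers = [x * x for x in samples]
    peak, avg = max(powers), sum(powers) / len(powers)
    return 10.0 * math.log10(peak / avg)

def clip(samples, threshold):
    # hard clipping: saturate any magnitude above `threshold`
    return [max(-threshold, min(threshold, x)) for x in samples]

# a bursty signal: mostly small samples with one large peak
sig = [0.1] * 63 + [1.0]
```

Clipping the peak at 0.5 lowers the PAR of this signal from about 15.9 dB to about 12.6 dB, at the cost of in-band distortion, which is why the paper pairs clipping with baseband predistortion.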

  6. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with on-off keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. To further enhance the ASE, we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  7. Operational technology for greater confinement disposal

    International Nuclear Information System (INIS)

    Dickman, P.T.; Vollmer, A.T.; Hunter, P.H.

    1984-12-01

    Procedures and methods for the design and operation of a greater confinement disposal facility using large-diameter boreholes are discussed. It is assumed that the facility would be located at an operating low-level waste disposal site and that only a small portion of the wastes received at the site would require greater confinement disposal. The document is organized into sections addressing: facility planning process; facility construction; waste loading and handling; radiological safety planning; operations procedures; and engineering cost studies. While primarily written for low-level waste management site operators and managers, a detailed economic assessment section is included that should assist planners in performing cost analyses. Economic assessments for both commercial and US government greater confinement disposal facilities are included. The estimated disposal costs range from $27 to $104 per cubic foot for a commercial facility and from $17 to $60 per cubic foot for a government facility. These costs are based on average site preparation, construction, and waste loading costs for both contact- and remote-handled wastes. 14 figures, 22 tables

  8. A collaborative project to improve identification and management of patients with chronic kidney disease in a primary care setting in Greater Manchester.

    Science.gov (United States)

    Humphreys, John; Harvey, Gill; Coleiro, Michelle; Butler, Brook; Barclay, Anna; Gwozdziewicz, Maciek; O'Donoghue, Donal; Hegarty, Janet

    2012-08-01

    Research has demonstrated a knowledge and practice gap in the identification and management of chronic kidney disease (CKD). In 2009, published data showed that general practices in Greater Manchester had a low detection rate for CKD. A 12-month improvement collaborative was undertaken, supported by an evidence-informed implementation framework and financial incentives, involving 19 general practices from four primary care trusts within Greater Manchester. Outcome measures were the number of recorded patients with CKD on practice registers and the percentage of patients on registers achieving nationally agreed blood pressure targets. The collaborative commenced in September 2009 and involved three joint learning sessions, interspersed with practice-level rapid improvement cycles, and supported by an implementation team from the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care for Greater Manchester. At baseline, the 19 collaborative practices had 4185 patients on their CKD registers. At final data collection in September 2010, this figure had increased by 1324 to 5509. Blood pressure control improved: the proportion of patients on practice registers with a recorded blood pressure within recommended guidelines rose from 34% to 74%. Evidence-based improvement can be implemented in practice for chronic disease management. A collaborative approach has been successful in enabling teams to test and apply changes to identify patients and improve care. The model proved more successful for some practices than others, suggesting a need to develop more context-sensitive approaches to implementation and to actively manage the factors that influence the success of the collaborative.

  9. An improved procedure for determining grain boundary diffusion coefficients from averaged concentration profiles

    Science.gov (United States)

    Gryaznov, D.; Fleig, J.; Maier, J.

    2008-03-01

    Whipple's solution of the problem of grain boundary diffusion and Le Claire's relation, which is often used to determine grain boundary diffusion coefficients, are examined for a broad range of ratios of grain boundary to bulk diffusivities Δ and diffusion times t. Different reasons leading to errors in determining the grain boundary diffusivity (D_GB) when using Le Claire's relation are discussed. It is shown that nonlinearities of the diffusion profiles in ln C_av versus y^(6/5) plots and deviations from "Le Claire's constant" (-0.78) are the major error sources (C_av = averaged concentration, y = coordinate in the diffusion direction). An improved relation (replacing Le Claire's constant) is suggested for analyzing diffusion profiles, particularly suited for small diffusion lengths (short times) as often required in diffusion experiments on nanocrystalline materials.
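For orientation, the commonly quoted textbook form of Le Claire's relation (a hedged sketch, not the improved relation this paper derives; the segregation factor s and boundary width δ are assumed inputs) extracts the grain boundary diffusivity from the slope of the averaged profile in ln C_av versus y^(6/5) coordinates:

```python
import math

def gb_diffusivity(slope, d_bulk, t, s=1.0, delta=1e-9):
    """Le Claire's relation in its common form (assumed here):
        s * delta * D_gb = 0.661 * slope**(-5/3) * sqrt(4 * d_bulk / t)
    where slope = -d(ln C_av)/d(y**(6/5)), obtained from a linear fit of
    the deep (grain-boundary-dominated) tail of the averaged profile."""
    return 0.661 * slope ** (-5.0 / 3.0) * math.sqrt(4.0 * d_bulk / t) / (s * delta)
```

The paper's point is precisely that the constant entering such a relation (tied to "Le Claire's constant" -0.78) loses accuracy for small Δ and short diffusion times, which is what the improved relation corrects.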

  10. 40 CFR 63.1035 - Quality improvement program for pumps.

    Science.gov (United States)

    2010-07-01

    ...., piston, horizontal or vertical centrifugal, gear, bellows); pump manufacturer; seal type and manufacturer... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Quality improvement program for pumps... improvement program for pumps. (a) Criteria. If, on a 6-month rolling average, at least the greater of either...

  11. 40 CFR 63.176 - Quality improvement program for pumps.

    Science.gov (United States)

    2010-07-01

    ... type (e.g., piston, horizontal or vertical centrifugal, gear, bellows); pump manufacturer; seal type... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Quality improvement program for pumps... improvement program for pumps. (a) In Phase III, if, on a 6-month rolling average, the greater of either 10...

  12. Commercial Integrated Heat Pump with Thermal Storage --Demonstrate Greater than 50% Average Annual Energy Savings, Compared with Baseline Heat Pump and Water Heater (Go/No-Go) FY16 4th Quarter Milestone Report

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Bo [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Baxter, Van D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rice, C. Keith [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Abu-Heiba, Ahmad [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-03-01

    For this study, we authored a new air-source integrated heat pump (AS-IHP) model in EnergyPlus and conducted building energy simulations to demonstrate greater than 50% average energy savings, in comparison to a baseline heat pump with electric water heater, over 10 US cities, based on the EnergyPlus quick-service restaurant template building. We also assessed the water heating energy saving potential of AS-IHPs versus gas heating, and pointed out climate zones where AS-IHPs are promising.

  13. Greater Biopsy Core Number Is Associated With Improved Biochemical Control in Patients Treated With Permanent Prostate Brachytherapy

    International Nuclear Information System (INIS)

    Bittner, Nathan; Merrick, Gregory S.; Galbreath, Robert W.; Butler, Wayne M.; Adamovich, Edward; Wallner, Kent E.

    2010-01-01

    Purpose: Standard prostate biopsy schemes underestimate Gleason score in a significant percentage of cases. Extended biopsy improves diagnostic accuracy and provides more reliable prognostic information. In this study, we tested the hypothesis that greater biopsy core number should result in improved treatment outcome through better tailoring of therapy. Methods and Materials: From April 1995 to May 2006, 1,613 prostate cancer patients were treated with permanent brachytherapy. Patients were divided into five groups stratified by the number of prostate biopsy cores (≤6, 7-9, 10-12, 13-20, and >20 cores). Biochemical progression-free survival (bPFS), cause-specific survival (CSS), and overall survival (OS) were evaluated as a function of core number. Results: The median patient age was 66 years, and the median preimplant prostate-specific antigen was 6.5 ng/mL. The overall 10-year bPFS, CSS, and OS were 95.6%, 98.3%, and 78.6%, respectively. When bPFS was analyzed as a function of core number, the 10-year bPFS for patients with >20, 13-20, 10-12, 7-9 and ≤6 cores was 100%, 100%, 98.3%, 95.8%, and 93.0% (p < 0.001), respectively. When evaluated by treatment era (1995-2000 vs. 2001-2006), the number of biopsy cores remained a statistically significant predictor of bPFS. On multivariate analysis, the number of biopsy cores was predictive of bPFS but did not predict for CSS or OS. Conclusion: Greater biopsy core number was associated with a statistically significant improvement in bPFS. Comprehensive regional sampling of the prostate may enhance diagnostic accuracy compared to a standard biopsy scheme, resulting in better tailoring of therapy.

  14. Butterfly valves: greater use in power plants

    International Nuclear Information System (INIS)

    McCoy, M.

    1975-01-01

    Improvements in butterfly valves, particularly in the areas of automatic control and leak tightness are described. The use of butterfly valves in nuclear power plants is discussed. These uses include service in component cooling, containment cooling, and containment isolation. The outlook for further improvements and greater uses is examined. (U.S.)

  15. The Easterlin Illusion: Economic growth does go with greater happiness

    NARCIS (Netherlands)

    R. Veenhoven (Ruut); F. Vergunst (Floris)

    2014-01-01

    The 'Easterlin Paradox' holds that economic growth in nations does not buy greater happiness for the average citizen. This thesis was advanced in the 1970s on the basis of the then available data on happiness in nations. Later data have disproved most of the empirical

  16. MR Neurography of Greater Occipital Nerve Neuropathy: Initial Experience in Patients with Migraine.

    Science.gov (United States)

    Hwang, L; Dessouky, R; Xi, Y; Amirlak, B; Chhabra, A

    2017-11-01

    MR imaging of peripheral nerves (MR neurography) allows improved assessment of nerve anatomy and pathology. The objective of this study was to evaluate patients with unilateral occipital neuralgia using MR neurography and to assess the differences in greater occipital nerve signal and size between the symptomatic and asymptomatic sides. In this case-control evaluation using MR neurography, bilateral greater occipital nerve caliber, signal intensity, signal-to-noise ratios, and contrast-to-noise ratios were determined by 2 observers. Among 18 subjects with unilateral occipital migraines, the average greater occipital nerve diameter for the symptomatic side was significantly greater at 1.77 ± 0.4 mm than for the asymptomatic side at 1.29 ± 0.25 mm (P = .001). The difference in nerve signal intensity between the symptomatic and asymptomatic sides was statistically significant at 269.06 ± 170.93 and 222.44 ± 170.46, respectively (P = .043). The signal-to-noise ratios on the symptomatic side were higher at 15.79 ± 4.59 compared with the asymptomatic nerve at 14.02 ± 5.23 (P = .009). Contrast-to-noise ratios were significantly higher on the symptomatic side than on the asymptomatic side at 2.57 ± 4.89 and -1.26 ± 5.02, respectively (P = .004). Intraobserver performance was good to excellent (intraclass correlation coefficient, 0.68-0.93), and interobserver performance was fair to excellent (intraclass correlation coefficient, 0.54-0.81). MR neurography can be reliably used for the diagnosis of greater occipital nerve neuropathy in patients with unilateral occipital migraines, with a good correlation of imaging findings to the clinical presentation. © 2017 by American Journal of Neuroradiology.

  17. Assessing the accuracy of weather radar to track intense rain cells in the Greater Lyon area, France

    Science.gov (United States)

    Renard, Florent; Chapon, Pierre-Marie; Comby, Jacques

    2012-01-01

    The Greater Lyon is a dense urban area located in the Rhône Valley in the south east of France. The conurbation counts 1.3 million inhabitants, and the rainfall hazard is a great concern. However, until now, studies on rainfall over the Greater Lyon have only been based on the network of rain gauges, despite the presence of a C-band radar located in the close vicinity. Consequently, the first aim of this study was to investigate the hydrological quality of this radar. This assessment, based on comparison of radar estimations with rain-gauge values, concludes that the radar data have had overall good quality since 2006. Given this good accuracy, the study then investigated the characteristics of the intense rain cells that are responsible for the majority of floods in the Greater Lyon area. Improved knowledge of these rainfall cells is important to anticipate dangerous events and to improve the monitoring of the sewage system. This paper discusses the analysis of the ten most intense rainfall events in the 2001-2010 period. Spatial statistics pointed towards straight and linear movements of intense rainfall cells, independently of the ground surface conditions and the topography underneath. The speed of these cells was found to be nearly constant during a rainfall event, but varies from event to event, ranging on average from 25 to 66 km/h.

  18. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (average B_z = 3.γ) than near midnight (average B_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9·Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  19. Effect of temporal averaging of meteorological data on predictions of groundwater recharge

    Directory of Open Access Journals (Sweden)

    Batalha Marcia S.

    2018-06-01

    Full Text Available Accurate estimates of infiltration and groundwater recharge are critical for many hydrologic, agricultural and environmental applications. Anticipated climate change in many regions of the world, especially in tropical areas, is expected to increase the frequency of high-intensity, short-duration precipitation events, which in turn will affect the groundwater recharge rate. Estimates of recharge are often obtained using monthly or even annually averaged meteorological time series data. In this study we employed the HYDRUS-1D software package to assess the sensitivity of groundwater recharge calculations to using meteorological time series of different temporal resolutions (i.e., hourly, daily, weekly, monthly and yearly averaged precipitation and potential evaporation rates). Calculations were applied to three sites in Brazil having different climatological conditions: a tropical savanna (the Cerrado), a humid subtropical area (the temperate southern part of Brazil), and a very wet tropical area (Amazonia). To simplify our current analysis, we did not consider any land use effects, ignoring root water uptake. Temporal averaging of meteorological data was found to lead to significant bias in predictions of groundwater recharge, with much greater estimated recharge rates in the case of very uneven temporal rainfall distributions during the year involving distinct wet and dry seasons. For example, at the Cerrado site, using daily averaged data produced recharge rates of up to 9 times greater than using yearly averaged data. In all cases, an increase in the averaging time of meteorological data led to lower estimates of groundwater recharge, especially at sites having coarse-textured soils. Our results show that temporal averaging limits the ability of simulations to predict deep penetration of moisture in response to precipitation, so that water remains in the upper part of the vadose zone subject to upward flow and evaporation.
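The direction of this bias can be reproduced with a toy bucket model (purely illustrative, not HYDRUS-1D): recharge occurs only when rainfall in a time step exceeds what the soil can absorb, so smearing a bursty series into its long-term average removes exactly those exceedances.

```python
def recharge(rain, capacity):
    # toy model: rain above the per-step infiltration capacity drains to recharge
    return sum(max(r - capacity, 0.0) for r in rain)

# bursty wet/dry series versus its uniform (time-averaged) equivalent, same total rain
bursty = [30.0, 0.0, 0.0, 0.0, 30.0, 0.0, 0.0, 0.0]
uniform = [sum(bursty) / len(bursty)] * len(bursty)
```

With a hypothetical capacity of 10 mm per step, the bursty series yields 40 mm of recharge while its averaged equivalent (7.5 mm per step) yields none, mirroring the finding that coarser averaging lowers estimated recharge.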

  20. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations over a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance with the oceanic turbulence parameters and the receiver aperture diameter are examined. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.

  1. Brief communication: Using averaged soil moisture estimates to improve the performances of a regional-scale landslide early warning system

    Science.gov (United States)

    Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Fanti, Riccardo; Casagli, Nicola

    2018-03-01

    We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. Another approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.
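The simpler of the two approaches amounts to gating a conventional intensity-duration rainfall threshold with a soil moisture floor. A minimal sketch with entirely hypothetical parameter values (the record does not publish the system's actual thresholds):

```python
def issue_warning(intensity_mm_h, duration_h, soil_moisture, sm_floor=0.25):
    # approach 1: below the soil moisture floor, landslides are not expected,
    # so the rainfall threshold is not even consulted
    if soil_moisture < sm_floor:
        return False
    a, b = 12.0, -0.6  # hypothetical I = a * D**b intensity-duration threshold
    return intensity_mm_h >= a * duration_h ** b
```

The second approach would instead replace the antecedent-rainfall term of the threshold with a soil moisture threshold, changing the model rather than just gating it.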

  2. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
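As a concrete example of the idea (a sketch of the general scheme, not the paper's exact prescription): if each experiment reports a purely relative error sigma_i = r_i * x_i, weighting by the self-reported variances pulls the average low, because low measurements claim small errors. Re-deriving each error at the current average and iterating removes this bias:

```python
def iterated_average(values, rel_errors, iters=20):
    # naive weighted mean using each experiment's self-reported error r_i * x_i
    w = [1.0 / (r * v) ** 2 for v, r in zip(values, rel_errors)]
    xbar = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    for _ in range(iters):
        # re-evaluate every error at the common average instead of the own value
        w = [1.0 / (r * xbar) ** 2 for r in rel_errors]
        xbar = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return xbar
```

For two measurements 9 and 11 with 10% relative errors, the naive weighted mean is about 9.80 (biased low), while the iterated average converges to the unbiased 10.0.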

  3. Global Positioning System Use in the Community to Evaluate Improvements in Walking After Revascularization

    Science.gov (United States)

    Gernigon, Marie; Le Faucheur, Alexis; Fradin, Dominique; Noury-Desvaux, Bénédicte; Landron, Cédric; Mahe, Guillaume; Abraham, Pierre

    2015-01-01

    Revascularization aims at improving walking ability in patients with arterial claudication. The highest measured distance between 2 stops (highest-MDCW), the average walking speed (average-WSCW), and the average stop duration (average-DSCW) can be measured by global positioning system, but their evolution after revascularization is unknown. We included 251 peripheral artery diseased patients with self-reported limiting claudication. The patients performed a 1-hour stroll, recorded by a global positioning system receiver. Patients (n = 172) with confirmed limitation (highest-MDCW the follow-up period were compared with reference patients (ie, with unchanged lifestyle medical or surgical status). Other patients (lost to follow-up or treatment change) were excluded (n = 89). We studied 44 revascularized and 39 reference patients. Changes in highest-MDCW (+442 vs. +13 m) and average-WSCW (+0.3 vs. −0.2 km h−1) were greater in revascularized than in reference patients (both P the groups. Among the revascularized patients, 13 (29.5%) had a change in average-WSCW, but not in highest-MDCW, greater than the mean + 1 standard deviation of the change observed for reference patients. Revascularization may improve highest-MDCW and/or average-WSCW. This first report of changes in community walking ability in revascularized patients suggests that, beyond measuring walking distances, average-WSCW measurement is essential to monitor these changes. Applicability to other surgical populations remains to be evaluated. Registration: http://www.clinicaltrials.gov/ct2/show/NCT01141361 PMID:25950694

  4. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  5. Greater-than-Class-C Low-Level Waste Data Base user's manual

    International Nuclear Information System (INIS)

    1992-07-01

    The Greater-than-Class-C Low-level Waste (GTCC LLW) Data Base characterizes GTCC LLW using low, base, and high cases for three different scenarios: unpackaged, packaged, and concentration averages. The GTCC LLW Data Base can be used to project future volumes and radionuclide activities. This manual provides instructions for users of the GTCC LLW Data Base

  6. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  7. Lower inhibitory control interacts with greater pain catastrophizing to predict greater pain intensity in women with migraine and overweight/obesity.

    Science.gov (United States)

    Galioto, Rachel; O'Leary, Kevin C; Thomas, J Graham; Demos, Kathryn; Lipton, Richard B; Gunstad, John; Pavlović, Jelena M; Roth, Julie; Rathier, Lucille; Bond, Dale S

    2017-12-01

    Pain catastrophizing (PC) is associated with more severe and disabling migraine attacks. However, factors that moderate this relationship are unknown. Failure of inhibitory control (IC), or the ability to suppress automatic or inappropriate responses, may be one such factor given previous research showing a relationship between higher PC and lower IC in non-migraine samples, and research showing reduced IC in migraine. Therefore, we examined whether lower IC interacts with increased PC to predict greater migraine severity as measured by pain intensity, attack frequency, and duration. Women (n = 105) aged 18-50 years old (M = 38.0 ± 1.2) with overweight/obesity and migraine who were seeking behavioral treatment for weight loss and migraine reduction completed a 28-day smartphone-based headache diary assessing migraine headache severity. Participants then completed a modified computerized Stroop task as a measure of IC and self-report measures of PC (Pain Catastrophizing Scale [PCS]), anxiety, and depression. Linear regression was used to examine independent and joint associations of PC and IC with indices of migraine severity after controlling for age, body mass index (BMI) depression, and anxiety. Participants on average had BMI of 35.1 ± 6.5 kg/m 2 and reported 5.3 ± 2.6 migraine attacks (8.3 ± 4.4 migraine days) over 28 days that produced moderate pain intensity (5.9 ± 1.4 out of 10) with duration of 20.0 ± 14.2 h. After adjusting for covariates, higher PCS total (β = .241, SE = .14, p = .03) and magnification subscale (β = .311, SE = .51, p migraine attacks. Future studies are needed to determine whether interventions to improve IC could lead to less painful migraine attacks via improvements in PC.

  8. ON IMPROVEMENT OF METHODOLOGY FOR CALCULATING THE INDICATOR «AVERAGE WAGE»

    Directory of Open Access Journals (Sweden)

    Oksana V. Kuchmaeva

    2015-01-01

    Full Text Available The article describes approaches to the calculation of the indicator of average wages in Russia with the use of several sources of information. The proposed method is based on data collected by Rosstat and the Pension Fund of the Russian Federation. The proposed approach allows capturing data on the wages of almost all groups of employees. Results of experimental calculations using the developed technique are presented in this article.

  9. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a minimum sampling interval of 2.5 μs and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
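    The "stable averaging" the record describes (a correctly scaled average available at every sweep) can be sketched as a running mean, and the quoted 36 dB follows from the 4096-sweep limit, since voltage S/N grows as √N. A minimal numerical sketch (the instrument itself is hardware; the signal and noise level below are made up for illustration):

```python
import math
import random

def stable_average(sweeps):
    """Running ("stable") average: after every sweep the buffer holds a
    correctly scaled average, so it can be displayed at any time."""
    avg = None
    for n, sweep in enumerate(sweeps, start=1):
        if avg is None:
            avg = list(sweep)
        else:
            avg = [a + (x - a) / n for a, x in zip(avg, sweep)]
    return avg

random.seed(0)
# 2**12 noisy repetitions of a one-point "signal" of true amplitude 1.0
sweeps = ([1.0 + random.gauss(0, 0.5)] for _ in range(2 ** 12))
print(abs(stable_average(sweeps)[0] - 1.0) < 0.05)  # True: noise averages down

# Voltage S/N grows as sqrt(N): 20*log10(sqrt(2**12)) ~ 36 dB,
# matching the instrument's quoted maximum improvement.
print(round(20 * math.log10(math.sqrt(2 ** 12)), 1))  # 36.1
```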

  10. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
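    For reference, the conventional TDA that FTDA improves upon is simple synchronous averaging: slice the signal into blocks of one known period and average them sample-by-sample. A minimal sketch of that baseline (not of FTDA itself, which additionally requires per-harmonic adjustment and the CZT; the signal below is simulated):

```python
import math
import random

def time_domain_average(signal, period):
    """Conventional TDA: average consecutive blocks of one known period
    sample-by-sample (equivalent to a comb filter in frequency)."""
    n_blocks = len(signal) // period
    return [sum(signal[k * period + i] for k in range(n_blocks)) / n_blocks
            for i in range(period)]

random.seed(1)
period, n_periods = 64, 200
clean = [math.sin(2 * math.pi * i / period) for i in range(period)]
signal = [clean[i % period] + random.gauss(0, 0.5)
          for i in range(period * n_periods)]

tda = time_domain_average(signal, period)
worst = max(abs(a - c) for a, c in zip(tda, clean))
print(worst < 0.2)  # True: residual noise shrinks roughly as 1/sqrt(200)
```

Note that this baseline assumes the period is an exact integer number of samples; when it is not, the block boundaries drift, which is precisely the period cutting error the abstract discusses.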

  11. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  12. Average radiation weighting factors for specific distributed neutron spectra

    International Nuclear Information System (INIS)

    Ninkovic, M.M.; Raicevic, J.J.

    1993-01-01

    Spectrum-averaged radiation weighting factors for six specific neutron fields in the environment of three categories of neutron sources (fission, spontaneous fission and (α,n)) are determined in this paper. The obtained values of these factors are 1.5 to 2 times greater than the corresponding quality factors used for the same purpose until a few years ago. This fact is important to keep in mind when converting neutron fluence into neutron dose equivalent. (author)
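    The spectrum averaging referred to here is a fluence-weighted mean of the energy-dependent weighting factor, w̄ = Σ w(E)φ(E) / Σ φ(E) over the binned spectrum. A minimal sketch (the step-function w(E) and the four-bin spectrum below are hypothetical illustrations, not ICRP values or the paper's spectra):

```python
def spectrum_averaged_factor(energies, fluence, w_of_e):
    """Fluence-weighted average of an energy-dependent weighting factor:
    w_avg = sum(w(E) * phi(E)) / sum(phi(E)) over the binned spectrum."""
    num = sum(w_of_e(e) * phi for e, phi in zip(energies, fluence))
    return num / sum(fluence)

# Illustrative (hypothetical) step function and 4-bin neutron spectrum
def w_illustrative(e_mev):
    return 5.0 if e_mev < 0.1 else 20.0

energies = [0.01, 0.05, 0.5, 2.0]   # MeV bin centres
fluence = [1.0, 2.0, 4.0, 1.0]      # relative fluence per bin

print(spectrum_averaged_factor(energies, fluence, w_illustrative))  # 14.375
```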

  13. Instructions to "push as hard as you can" improve average chest compression depth in dispatcher-assisted cardiopulmonary resuscitation.

    Science.gov (United States)

    Mirza, Muzna; Brown, Todd B; Saini, Devashish; Pepper, Tracy L; Nandigam, Hari Krishna; Kaza, Niroop; Cofield, Stacey S

    2008-10-01

    Cardiopulmonary resuscitation (CPR) with adequate chest compression depth appears to improve first shock success in cardiac arrest. We evaluate the effect of simplification of chest compression instructions on compression depth in a dispatcher-assisted CPR protocol. Data from two randomized, double-blinded, controlled trials with identical methodology were combined to obtain 332 records for this analysis. Subjects were randomized to either a modified Medical Priority Dispatch System (MPDS) v11.2 protocol or a new simplified protocol. The main difference between the protocols was the instruction to "push as hard as you can" in the simplified protocol, compared to "push down firmly 2 in. (5 cm)" in MPDS. Data were recorded via a Laerdal ResusciAnne SkillReporter manikin. Primary outcome measures included: chest compression depth, proportion of compressions without error, with adequate depth, and with total release. Instructions to "push as hard as you can", compared to "push down firmly 2 in. (5 cm)", resulted in improved chest compression depth (36.4 mm vs. 29.7 mm). Simplifying CPR instructions by changing "push down firmly 2 in. (5 cm)" to "push as hard as you can" achieved improvement in chest compression depth at no cost to total release or average chest compression rate.

  14. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; 
Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; 
Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  15. Improving The Average Session Evaluation Score Of Supervisory Program By Using PDCA Cycle At PT XYZ

    Directory of Open Access Journals (Sweden)

    Jonny Jonny

    2016-09-01

    Full Text Available PT XYZ treats people development as critical to providing capable leaders for its business operations. It runs several leadership programs: a basic management program, supervisory program, managerial program, senior management program, general management program, and executive program. For the basic management and supervisory programs, PT XYZ appointed its ABC division to handle them solely, while for the rest, the ABC division cooperates with external training providers reputable in leadership programs. The aim of this study was to ensure that the appropriate leadership content is delivered to employees according to the guideline, and to improve the average session evaluation score of the supervisory program by using the PDCA (Plan, Do, Check, Act) cycle. The method of this research was to gather quantitative and qualitative data using session and program evaluation forms to assess the current condition. The research finds that the program scored below the 4.10 target for reasons related to new facilitators, the absence of a framework, and teaching aids.

  16. Higher Physiotherapy Frequency Is Associated with Shorter Length of Stay and Greater Functional Recovery in Hospitalized Frail Older Adults: A Retrospective Observational Study.

    Science.gov (United States)

    Hartley, P; Adamson, J; Cunningham, C; Embleton, G; Romero-Ortuno, R

    2016-01-01

    Extra physiotherapy has been associated with better outcomes in hospitalized patients, but this remains an under-researched area in geriatric medicine wards. We retrospectively studied the association between average physiotherapy frequency and outcomes in hospitalized geriatric patients. High frequency physiotherapy (HFP) was defined as ≥0.5 contacts/day. Of 358 eligible patients, 131 (36.6%) received low frequency physiotherapy and 227 (63.4%) HFP. Functional improvement (discharge versus admission) in the modified Rankin scale was greater in the HFP group (1.1 versus 0.7 points), supporting further study of physiotherapy frequency and intensity in geriatric wards.

  17. Mapping grassland productivity with 250-m eMODIS NDVI and SSURGO database over the Greater Platte River Basin, USA

    Science.gov (United States)

    Gu, Yingxin; Wylie, Bruce K.; Bliss, Norman B.

    2013-01-01

    This study assessed and described a relationship between satellite-derived growing season averaged Normalized Difference Vegetation Index (NDVI) and annual productivity for grasslands within the Greater Platte River Basin (GPRB) of the United States. We compared growing season averaged NDVI (GSN) with Soil Survey Geographic (SSURGO) database rangeland productivity and flux tower Gross Primary Productivity (GPP) for grassland areas. The GSN was calculated for each of nine years (2000–2008) using the 7-day composite 250-m eMODIS (expedited Moderate Resolution Imaging Spectroradiometer) NDVI data. Strong correlations exist between the nine-year mean GSN (MGSN) and SSURGO annual productivity for grasslands (R2 = 0.74 for approximately 8000 pixels randomly selected from eight homogeneous regions within the GPRB; R2 = 0.96 for the 14 cluster-averaged points). Results also reveal a strong correlation between GSN and flux tower growing season averaged GPP (R2 = 0.71). Finally, we developed an empirical equation to estimate grassland productivity based on the MGSN. Spatially explicit estimates of grassland productivity over the GPRB were generated, which improved the regional consistency of SSURGO grassland productivity data and can help scientists and land managers to better understand the actual biophysical and ecological characteristics of grassland systems in the GPRB. This final estimated grassland production map can also be used as an input for biogeochemical, ecological, and climate change models.
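    The "empirical equation" relating mean growing-season NDVI (MGSN) to grassland productivity is, per the abstract, a simple regression; an ordinary least-squares fit like the one below could produce such an equation. A minimal sketch (the MGSN/productivity pairs are made up for illustration and are not the study's data or its published coefficients):

```python
def fit_line(x, y):
    """Ordinary least squares for y = a*x + b, without external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical (MGSN, productivity) pairs for illustration only
mgsn = [0.30, 0.40, 0.50, 0.60, 0.70]
prod = [900.0, 1400.0, 1900.0, 2400.0, 2900.0]

a, b = fit_line(mgsn, prod)
print(round(a), round(b))  # 5000 -600, i.e. productivity = 5000*MGSN - 600
```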

  18. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
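    The averaging bias at issue arises from pushing noisy, averaged signals through a non-linear (logarithmic) retrieval: E[log X] ≠ log E[X]. A toy numerical illustration of that interaction (the power levels, noise model, and simple two-wavelength log-ratio retrieval below are assumptions for illustration, not MERLIN's actual processing chain):

```python
import math
import random

random.seed(2)
N = 200_000
# Noisy "on" and "off" wavelength return powers around their true means
p_on = [1.0 + random.gauss(0, 0.1) for _ in range(N)]
p_off = [2.0 + random.gauss(0, 0.1) for _ in range(N)]

tau_true = 0.5 * math.log(2.0 / 1.0)  # differential optical depth

# (a) average the powers first, then apply the logarithm
tau_avg_first = 0.5 * math.log((sum(p_off) / N) / (sum(p_on) / N))
# (b) retrieve shot-by-shot, then average the retrievals
tau_shotwise = sum(0.5 * math.log(o / s) for o, s in zip(p_off, p_on)) / N

# The log non-linearity biases (b) noticeably (Jensen's inequality),
# while (a) stays much closer to the truth in this toy setup.
print(abs(tau_avg_first - tau_true) < abs(tau_shotwise - tau_true))  # True
```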

  19. OrthoANI: An improved algorithm and software for calculating average nucleotide identity.

    Science.gov (United States)

    Lee, Imchang; Ouk Kim, Yeong; Park, Sang-Cheol; Chun, Jongsik

    2016-02-01

    Species demarcation in Bacteria and Archaea is mainly based on overall genome relatedness, which serves as a framework for modern microbiology. Current practice for obtaining these measures between two strains is shifting from experimentally determined similarity obtained by DNA-DNA hybridization (DDH) to genome-sequence-based similarity. Average nucleotide identity (ANI) is a simple algorithm that mimics DDH. Like DDH, ANI values between two genome sequences may differ when reciprocal calculations are compared. We compared 63,690 pairs of genome sequences and found that the differences in reciprocal ANI values can be significant, exceeding 1% in some cases. To resolve this asymmetry, a new algorithm, named OrthoANI, was developed to accommodate the concept of orthology: both genome sequences are fragmented, and only orthologous fragment pairs are taken into consideration when calculating nucleotide identities. OrthoANI is highly correlated with ANI (using BLASTn), and the former showed approximately 0.1% higher values than the latter. In conclusion, OrthoANI provides a more robust and faster means of calculating average nucleotide identity for taxonomic purposes. The standalone software tools are freely available at http://www.ezbiocloud.net/sw/oat.

  20. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating hardware averaging noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
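    The 1/√N figure is the textbook behavior of averaging N channels carrying independent noise; a quick simulation of that idealized case (equal gains, uncorrelated noise, and no common source resistance, which is why the real amplifier achieves "1/√N or less"):

```python
import random
import statistics

random.seed(3)
N_CHANNELS, SAMPLES = 8, 100_000

# N parallel amplifier channels seeing the same (zero) signal plus
# independent unit-variance noise
channels = [[random.gauss(0, 1.0) for _ in range(SAMPLES)]
            for _ in range(N_CHANNELS)]
averaged = [sum(col) / N_CHANNELS for col in zip(*channels)]

ratio = statistics.pstdev(channels[0]) / statistics.pstdev(averaged)
# Ideal independent noise: improvement approaches sqrt(8), about 2.83
print(2.5 < ratio < 3.2)  # True
```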

  1. On the average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1978-03-01

    Over 3000 hours of IMP-6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-minute averages of B_Z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks than near midnight. The tail field projected in the solar magnetospheric equatorial plane deviates from the X axis due to flaring and solar wind aberration by an angle alpha = -0.9 y_SM - 1.7, where y_SM is in earth radii and alpha is in degrees. After removing these effects, the Y component of the tail field is found to depend on interplanetary sector structure. During an away sector the B_Y component of the tail field is on average 0.5 gamma greater than that during a toward sector, a result that is true in both tail lobes and is independent of location across the tail.
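    The empirical fit quoted above is easy to evaluate: with the cross-tail coordinate in earth radii, it gives the local deviation of the projected field from the X axis in degrees, and its constant term is the aberration offset at midnight:

```python
def tail_deviation_deg(y_sm):
    """Empirical IMP-6 fit: deviation (degrees) of the projected tail field
    from the X axis, with y_sm the solar magnetospheric Y in earth radii."""
    return -0.9 * y_sm - 1.7

print(tail_deviation_deg(10.0))   # -10.7 toward the dusk flank
print(tail_deviation_deg(-10.0))  # 7.3 toward the dawn flank
print(tail_deviation_deg(0.0))    # -1.7, the pure aberration term at midnight
```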

  2. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    Full Text Available The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.

  3. Improved Multiscale Entropy Technique with Nearest-Neighbor Moving-Average Kernel for Nonlinear and Nonstationary Short-Time Biomedical Signal Analysis

    Directory of Open Access Journals (Sweden)

    S. P. Arunachalam

    2018-01-01

    Full Text Available Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention, but such signals are often recorded as short time series that challenge existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve the previously developed multiscale entropy (MSE) technique by incorporating a nearest-neighbor moving-average kernel, which can be used for analysis of nonlinear and non-stationary short time series physiological data. The approach was tested for robustness with respect to noise analysis using simulated sinusoidal and ECG waveforms. Feasibility of MSE to discriminate between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to that of SE with various noises, discriminated NSR and AF on single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can provide efficient complexity analysis of a variety of nonlinear and nonstationary short-time biomedical signals.
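    The key difference a moving-average kernel makes is at the coarse-graining step of MSE: standard coarse-graining averages non-overlapping blocks and discards most of a short series at large scales, whereas an overlapping (moving-average) kernel retains far more points. A sketch of that step only (the exact kernel in the paper may differ, and the full MSE pipeline also requires a sample-entropy computation, omitted here):

```python
def block_coarse_grain(x, scale):
    """Standard MSE coarse-graining: means of non-overlapping blocks,
    leaving only len(x)//scale points of a short series."""
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]

def moving_average_grain(x, scale):
    """Moving-average kernel: overlapping window means, retaining
    len(x) - scale + 1 points and so much more data at large scales."""
    return [sum(x[i:i + scale]) / scale
            for i in range(len(x) - scale + 1)]

x = list(range(10))
print(len(block_coarse_grain(x, 3)), len(moving_average_grain(x, 3)))  # 3 8
```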

   4. Affordability Assessment to Implement Light Rail Transit (LRT) for Greater Yogyakarta

    Directory of Open Access Journals (Sweden)

    Anjang Nugroho

    2015-06-01

    Full Text Available The high population density and the increasing number of visitors in Yogyakarta aggravate the traffic congestion problem. The BRT (Bus Rapid Transit) service, Trans Jogja, has not yet managed to solve this problem. Introducing Light Rail Transit (LRT) has been considered as one of the solutions to restrain congestion in Greater Yogyakarta. As a first indication of whether the LRT can be built in Greater Yogyakarta, the transportation affordability index was used to assess whether the LRT tariff would be affordable. That tariff was calculated based on government policy for determining railway tariffs. The forecast of potential passengers and the LRT route were analyzed as preliminary steps to obtain the LRT tariff. Potential passengers were forecast with a gravity model, and the proposed LRT route was chosen using Multi Criteria Decision Analysis (MCDA). The existing transportation affordability index was calculated for comparison using the percentage of monthly household income spent on transportation. The result showed that the LRT for Greater Yogyakarta was the most affordable transport mode compared to the Trans Jogja bus and motorcycles. The affordability index of Tram Jogja for people with average income was 10.66%, while for people with bottom-quartile income it was 13.56%. Keywords: Greater Yogyakarta, LRT, affordability.
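    The affordability index used above is simply transport spending as a share of household income, expressed as a percentage; a minimal sketch (the example cost and income figures are hypothetical, not the study's data):

```python
def affordability_index(monthly_transport_cost, monthly_income):
    """Transport affordability index: percentage of monthly household
    income spent on transportation (lower means more affordable)."""
    return 100.0 * monthly_transport_cost / monthly_income

# Hypothetical household: 320,000 IDR/month on fares, 3,000,000 IDR income
print(round(affordability_index(320_000, 3_000_000), 2))  # 10.67
```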

  5. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
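    The averaging model at the heart of Information Integration Theory combines attribute scale values s_i and weights w_i as R = Σ w_i·s_i / Σ w_i. The sketch below shows the model's response rule only, not the R-Average estimation procedure itself (the attribute values are made up for illustration):

```python
def averaging_response(scale_values, weights):
    """Anderson's averaging rule: R = sum(w_i * s_i) / sum(w_i)."""
    return (sum(w * s for s, w in zip(scale_values, weights))
            / sum(weights))

# Two attributes with scale values 8 and 4, the first weighted 3x:
print(averaging_response([8.0, 4.0], [3.0, 1.0]))  # 7.0

# Adding a mildly positive third attribute can LOWER the judgment, the
# signature that distinguishes averaging from an adding model:
print(averaging_response([8.0, 4.0, 5.0], [3.0, 1.0, 1.0]))  # 6.6
```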

  6. Greater weight loss and hormonal changes after 6 months diet with carbohydrates eaten mostly at dinner.

    Science.gov (United States)

    Sofer, Sigal; Eliraz, Abraham; Kaplan, Sara; Voet, Hillary; Fink, Gershon; Kima, Tzadok; Madar, Zecharia

    2011-10-01

    This study was designed to investigate the effect of a low-calorie diet with carbohydrates eaten mostly at dinner on anthropometric, hunger/satiety, biochemical, and inflammatory parameters. Hormonal secretions were also evaluated. Seventy-eight police officers (BMI >30) were randomly assigned to experimental (carbohydrates eaten mostly at dinner) or control weight loss diets for 6 months. On days 0, 7, 90, and 180 blood samples and hunger scores were collected every 4 h from 0800 to 2000 hours. Anthropometric measurements were collected throughout the study. Greater reductions in weight, abdominal circumference, and body fat mass were observed in the experimental diet in comparison to controls. Hunger scores were lower, and greater improvements in fasting glucose, average daily insulin concentrations, homeostasis model assessment for insulin resistance (HOMA-IR), total cholesterol, low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, C-reactive protein (CRP), tumor necrosis factor-α (TNF-α), and interleukin-6 (IL-6) levels were observed in comparison to controls. The experimental diet modified daily leptin and adiponectin concentrations compared to those observed at baseline and to a control diet. A simple dietary manipulation of carbohydrate distribution appears to have additional benefits when compared to a conventional weight loss diet in individuals suffering from obesity. It might also be beneficial for individuals suffering from insulin resistance and the metabolic syndrome. Further research is required to confirm and clarify the mechanisms by which this relatively simple diet approach enhances satiety, leads to better anthropometric outcomes, and achieves improved metabolic response, compared to a more conventional dietary approach.

  7. An application of commercial data averaging techniques in pulsed photothermal experiments

    International Nuclear Information System (INIS)

    Grozescu, I.V.; Moksin, M.M.; Wahab, Z.A.; Yunus, W.M.M.

    1997-01-01

    We present an application of a data averaging technique commonly implemented in many commercial digital oscilloscopes and waveform digitizers. The technique was used for transient data averaging in pulsed photothermal radiometry experiments. Photothermal signals are accompanied by a substantial amount of noise, which affects the precision of the measurements. The effect of the noise level on a photothermal signal parameter, in our particular case the fitted decay time, is shown. The results of the analysis can be used in choosing the most effective averaging technique and estimating the averaging parameter values. This helps to reduce the data acquisition time while improving the signal-to-noise ratio

  8. System for evaluation of the true average input-pulse rate

    International Nuclear Information System (INIS)

    Eichenlaub, D.P.; Garrett, P.

    1977-01-01

    A digital radiation monitoring system is described that uses current digital circuits and a microprocessor to rapidly process pulse data coming from remote radiation controllers. The system analyses the pulse rates to determine whether a new datum is statistically the same as those previously received, and hence determines the best possible averaging time for itself. As long as the true average pulse rate stays constant, the time over which the average is established can increase until the statistical error falls below the desired level, i.e. 1%. When the digital processing of the pulse data indicates a change in the true average pulse rate, the averaging time is reduced so as to improve the response time of the system at the desired statistical error [fr
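    The adaptive behavior described rests on Poisson counting statistics: the relative error of N accumulated counts is 1/√N, so the averaging time needed to reach a given error at a given rate follows directly. A minimal sketch (the 1% target is the abstract's figure; the count rates are illustrative):

```python
def averaging_time_s(rate_cps, target_rel_error):
    """Poisson counting: relative error = 1/sqrt(N), so the averager must
    accumulate N = 1/error**2 counts, which takes N/rate seconds."""
    counts_needed = round(target_rel_error ** -2)
    return counts_needed / rate_cps

# A 1% statistical error requires 10,000 accumulated counts:
print(averaging_time_s(100.0, 0.01))     # 100.0 s at 100 counts/s
print(averaging_time_s(10_000.0, 0.01))  # 1.0 s at a 100x higher rate
```

This is why the system can shorten its averaging window after a genuine rate change: at higher rates (or with a relaxed error target) the required counts accumulate faster.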

  9. Sonography of greater trochanteric pain syndrome and the rarity of primary bursitis.

    Science.gov (United States)

    Long, Suzanne S; Surrey, David E; Nazarian, Levon N

    2013-11-01

    Greater trochanteric pain syndrome is a common condition with clinical features of pain and tenderness at the lateral aspect of the hip. Diagnosing the origin of greater trochanteric pain is important because the treatment varies depending on the cause. We hypothesized that sonographic evaluation of sources for greater trochanteric pain syndrome would show that bursitis was not the most commonly encountered abnormality. We performed a retrospective review of musculoskeletal sonographic examinations performed at our institution over a 6-year period for greater trochanteric pain syndrome; completed a tabulation of the sonographic findings; and assessed the prevalence of trochanteric bursitis, gluteal tendon abnormalities, iliotibial band abnormalities, or a combination of findings. Prevalence of abnormal findings, associations of bursitis, gluteal tendinosis, gluteal tendon tears, and iliotibial band abnormalities were calculated. The final study population consisted of 877 unique patients (602 women, 275 men; average age, 54 years; age range, 15-87 years). Of the 877 patients with greater trochanteric pain, 700 (79.8%) did not have bursitis on ultrasound. A minority of patients (177, 20.2%) had trochanteric bursitis. Of the 877 patients with greater trochanteric pain, 438 (49.9%) had gluteal tendinosis, four (0.5%) had gluteal tendon tears, and 250 (28.5%) had a thickened iliotibial band. The cause of greater trochanteric pain syndrome is usually some combination of pathology involving the gluteus medius and gluteus minimus tendons as well as the iliotibial band. Bursitis is present in only the minority of patients. These findings have implications for treatment of this common condition.

  10. A systematic comparison of two-equation Reynolds-averaged Navier-Stokes turbulence models applied to shock-cloud interactions

    Science.gov (United States)

    Goodson, Matthew D.; Heitsch, Fabian; Eklund, Karl; Williams, Virginia A.

    2017-07-01

    Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-averaged Navier-Stokes turbulence models, and we implement six models in the athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved.

  11. Human-experienced temperature changes exceed global average climate changes for all income groups

    Science.gov (United States)

    Hsiang, S. M.; Parshall, L.

    2009-12-01

    Global climate change alters local climates everywhere. Many climate change impacts, such as those affecting health, agriculture and labor productivity, depend on these local climatic changes, not global mean change. Traditional, spatially averaged climate change estimates are strongly influenced by the response of icecaps and oceans, providing limited information on human-experienced climatic changes. If used improperly by decision-makers, these estimates distort estimated costs of climate change. We overlay the IPCC’s 20 GCM simulations on the global population distribution to estimate local climatic changes experienced by the world population in the 21st century. The A1B scenario leads to a well-known rise in global average surface temperature of +2.0°C between the periods 2011-2030 and 2080-2099. Projected on the global population distribution in 2000, the median human will experience an annual average rise of +2.3°C (4.1°F) and the average human will experience a rise of +2.4°C (4.3°F). Less than 1% of the population will experience changes smaller than +1.0°C (1.8°F), while 25% and 10% of the population will experience changes greater than +2.9°C (5.2°F) and +3.5°C (6.2°F) respectively. 67% of the world population experiences temperature changes greater than the area-weighted average change of +2.0°C (3.6°F). Using two approaches to characterize the spatial distribution of income, we show that the wealthiest, middle and poorest thirds of the global population experience similar changes, with no group dominating the global average. Calculations for precipitation indicate that there is little change in average precipitation, but redistributions of precipitation occur in all income groups. These results suggest that economists and policy-makers using spatially averaged estimates of climate change to approximate local changes will systematically and significantly underestimate the impacts of climate change on the 21st century population.
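The difference between area-weighted and population-weighted statistics described above can be reproduced with a toy calculation. The numbers below are entirely hypothetical; the sketch only shows the mechanics of overlaying a warming field on population weights and extracting a weighted mean and a population-weighted median:

```python
import numpy as np

# Hypothetical grid cells: local warming (deg C), plus population and area weights.
warming = np.array([1.2, 1.8, 2.3, 2.9, 3.6])      # projected change per cell
population = np.array([0.5, 1.0, 3.0, 2.5, 1.0])   # millions of people per cell
area = np.array([2.0, 2.0, 1.0, 0.5, 0.5])         # cell area (arbitrary units)

area_weighted = np.average(warming, weights=area)       # "global mean" style estimate
pop_weighted = np.average(warming, weights=population)  # average human's experience

# Population-weighted median: sort cells by warming, find where half the people sit.
order = np.argsort(warming)
cum = np.cumsum(population[order]) / population.sum()
pop_median = warming[order[np.searchsorted(cum, 0.5)]]

print(area_weighted, pop_weighted, pop_median)
```

Because people cluster in the cells that warm more than the spatial average in this toy field, the population-weighted mean exceeds the area-weighted one, mirroring the paper's qualitative finding.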

  12. Does Greater Autonomy Improve School Performance? Evidence from a Regression Discontinuity Analysis in Chicago

    Science.gov (United States)

    Steinberg, Matthew P.

    2014-01-01

    School districts throughout the United States are increasingly providing greater autonomy to local public (non-charter) school principals. In 2005-06, Chicago Public Schools initiated the Autonomous Management and Performance Schools program, granting academic, programmatic, and operational freedoms to select principals. This paper provides…

  13. Greater happiness for a greater number: Is that possible in Austria?

    NARCIS (Netherlands)

    R. Veenhoven (Ruut)

    2011-01-01

    textabstractWhat is the final goal of public policy? Jeremy Bentham (1789) would say: greater happiness for a greater number. He thought of happiness as subjective enjoyment of life; in his words as “the sum of pleasures and pains”. In his time the happiness of the great number could not be measured

  14. Greater happiness for a greater number: Is that possible in Germany?

    NARCIS (Netherlands)

    R. Veenhoven (Ruut)

    2009-01-01

    textabstractWhat is the final goal of public policy? Jeremy Bentham (1789) would say: greater happiness for a greater number. He thought of happiness as subjective enjoyment of life; in his words as “the sum of pleasures and pains”. In his time the happiness of the great number could not be measured

  15. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment

    Science.gov (United States)

    Baurle, Robert A.; Edwards, Jack R.

    2010-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomenon under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure

  16. Evaluation of The Surface Ozone Concentrations In Greater Cairo Area With Emphasis On Helwan, Egypt

    International Nuclear Information System (INIS)

    Ramadan, A.; Kandil, A.T.; Abd Elmaged, S.M.; Mubarak, I.

    2011-01-01

    Various biogenic and anthropogenic sources emit huge quantities of surface ozone. The main purpose of this study is to evaluate the surface ozone levels present at the Helwan area in order to improve knowledge and understanding of tropospheric processes. Surface ozone has been measured at 2 sites at Helwan; these sites cover the most populated area in Helwan. Ozone concentration is continuously monitored by UV absorption photometry using the O3 41M UV Photometric Ozone Analyzer. The daily maximum values of the ozone concentration in the greater Cairo area approached but did not exceed the critical levels during the year 2008. Higher ozone concentrations at Helwan are mainly due to the transport of ozone from regions further to the north of greater Cairo and, to a lesser extent, to ozone locally generated by the photochemical smog process. The summer season has the largest diurnal variation, with the daily ozone maxima tending to occur in the late afternoon. The night-time concentration of ozone was significantly higher at Helwan because there are no fast-acting sinks destroying ozone; the average night-time concentration at the site is maintained at 40 ppb. No correlation between the diurnal total suspended particulate (TSP) matter and the diurnal cumulative ozone concentration was observed during the Khamasin period.

  17. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Co-Axial Supersonic Free-Jet Experiment

    Science.gov (United States)

    Baurle, R. A.; Edwards, J. R.

    2009-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The baseline value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was noted when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid simulation results showed the same trends as the baseline Reynolds-averaged predictions. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions are suggested as a remedy to this dilemma. Comparisons between resolved second-order turbulence statistics and their modeled Reynolds-averaged counterparts were also performed.

  18. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  19. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  20. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  1. Reductions in Average Lengths of Stays for Surgical Procedures Between the 2008 and 2014 United States National Inpatient Samples Were Not Associated With Greater Incidences of Use of Postacute Care Facilities.

    Science.gov (United States)

    Dexter, Franklin; Epstein, Richard H

    2018-03-01

    Diagnosis-related group (DRG) based reimbursement creates incentives for reduction in hospital length of stay (LOS). Such reductions might be accomplished by lesser incidences of discharges to home. However, we previously reported that, while controlling for DRG, each 1-day decrease in hospital median LOS was associated with lesser odds of transfer to a postacute care facility (P = .0008). The result, though, was limited to elective admissions, 15 common surgical DRGs, and the 2013 US National Readmission Database. We studied the same potential relationship between decreased LOS and postacute care using different methodology and over 2 different years. The observational study was performed using summary measures from the 2008 and 2014 US National Inpatient Sample, with 3 types of categories (strata): (1) Clinical Classifications Software's classes of procedures (CCS), (2) DRGs including a major operating room procedure during hospitalization, or (3) CCS limiting patients to those with US Medicare as the primary payer. Greater reductions in the mean LOS were associated with smaller percentages of patients with disposition to postacute care. Analyzed using 72 different CCSs, 174 DRGs, or 70 CCSs limited to Medicare patients, each pairwise reduction in the mean LOS by 1 day was associated with an estimated 2.6% ± 0.4%, 2.3% ± 0.3%, or 2.4% ± 0.3% (absolute) pairwise reduction in the mean incidence of use of postacute care, respectively. These 3 results obtained using bivariate weighted least squares linear regression were all P < .0001, as were the corresponding results obtained using unweighted linear regression or the Spearman rank correlation. In the United States, reductions in hospital LOS, averaged over many surgical procedures, are not accomplished through a greater incidence of use of postacute care.
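The study's core analysis, regressing per-category reduction in postacute-care use on reduction in mean LOS with category sizes as weights, can be sketched in a few lines. All values below are invented placeholders, not the study's data; the point is the bivariate weighted least-squares mechanics:

```python
import numpy as np

# Hypothetical per-category summaries (NOT the study's values):
# x = pairwise reduction in mean LOS (days), y = reduction in postacute-care use (%),
# w = discharges per category, used as regression weights.
x = np.array([0.2, 0.5, 0.8, 1.0, 1.5, 2.0])
y = np.array([0.4, 1.1, 2.2, 2.7, 3.8, 5.3])
w = np.array([120, 300, 250, 400, 180, 90], dtype=float)

# Weighted least squares via the normal equations: solve (X' W X) b = X' W y.
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
intercept, slope = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(f"each 1-day LOS reduction ~ {slope:.2f}% reduction in postacute-care use")
```

The slope plays the role of the paper's "2.3%-2.6% per day" estimates; a Spearman rank correlation on the same `(x, y)` pairs would give the distribution-free check the authors also report.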

  2. Greater Sudbury fuel efficient driving handbook

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-12-15

    Reducing the amount of fuel that people use for personal driving saves money, improves local air quality, and reduces personal contributions to climate change. This handbook was developed to be used as a tool for a fuel efficient driving pilot program in Greater Sudbury in 2009-2010. Specifically, the purpose of the handbook was to provide greater Sudbury drivers with information on how to drive and maintain their personal vehicles in order to maximize fuel efficiency. The handbook also provides tips for purchasing fuel efficient vehicles. It outlines the benefits of fuel maximization, with particular reference to reducing contributions to climate change; reducing emissions of air pollutants; safe driving; and money savings. Some tips for efficient driving are to avoid aggressive driving; use cruise control; plan trips; and remove excess weight. Tips for efficient winter driving are to avoid idling to warm up the engine; use a block heater; remove snow and ice; use snow tires; and check tire pressure. The importance of car maintenance and tire pressure was emphasized. The handbook also explains how fuel consumption ratings are developed by vehicle manufacturers. refs., figs.

  3. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    Matsukawa, Isamu

    2008-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  4. Data requirements of GREAT-ER: Modelling and validation using LAS in four UK catchments

    International Nuclear Information System (INIS)

    Price, Oliver R.; Munday, Dawn K.; Whelan, Mick J.; Holt, Martin S.; Fox, Katharine K.; Morris, Gerard; Young, Andrew R.

    2009-01-01

    Higher-tier environmental risk assessments on 'down-the-drain' chemicals in river networks can be conducted using models such as GREAT-ER (Geography-referenced Regional Exposure Assessment Tool for European Rivers). It is important these models are evaluated and their sensitivities to input variables understood. This study had two primary objectives: evaluate GREAT-ER model performance, comparing simulated modelled predictions for LAS (linear alkylbenzene sulphonate) with measured concentrations, for four rivers in the UK, and investigate model sensitivity to input variables. We demonstrate that the GREAT-ER model is very sensitive to variability in river discharges. However it is insensitive to the form of distributions used to describe chemical usage and removal rate in sewage treatment plants (STPs). It is concluded that more effort should be directed towards improving empirical estimates of effluent load and reducing uncertainty associated with usage and removal rates in STPs. Simulations could be improved by incorporating the effect of river depth on dissipation rates. - Validation of GREAT-ER.

  5. Data requirements of GREAT-ER: Modelling and validation using LAS in four UK catchments

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Safety and Environmental Assurance Centre, Unilever, Colworth Science Park, Sharnbrook, Bedfordshire MK44 1LQ (United Kingdom); Munday, Dawn K. [Safety and Environmental Assurance Centre, Unilever, Colworth Science Park, Sharnbrook, Bedfordshire MK44 1LQ (United Kingdom); Whelan, Mick J. [Department of Natural Resources, School of Applied Sciences, Cranfield University, College Road, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Holt, Martin S. [ECETOC, Ave van Nieuwenhuyse 4, Box 6, B-1160 Brussels (Belgium); Fox, Katharine K. [85 Park Road West, Birkenhead, Merseyside CH43 8SQ (United Kingdom); Morris, Gerard [Environment Agency, Phoenix House, Global Avenue, Leeds LS11 8PG (United Kingdom); Young, Andrew R. [Wallingford HydroSolutions Ltd, Maclean building, Crowmarsh Gifford, Wallingford, Oxon OX10 8BB (United Kingdom)

    2009-10-15

    Higher-tier environmental risk assessments on 'down-the-drain' chemicals in river networks can be conducted using models such as GREAT-ER (Geography-referenced Regional Exposure Assessment Tool for European Rivers). It is important these models are evaluated and their sensitivities to input variables understood. This study had two primary objectives: evaluate GREAT-ER model performance, comparing simulated modelled predictions for LAS (linear alkylbenzene sulphonate) with measured concentrations, for four rivers in the UK, and investigate model sensitivity to input variables. We demonstrate that the GREAT-ER model is very sensitive to variability in river discharges. However it is insensitive to the form of distributions used to describe chemical usage and removal rate in sewage treatment plants (STPs). It is concluded that more effort should be directed towards improving empirical estimates of effluent load and reducing uncertainty associated with usage and removal rates in STPs. Simulations could be improved by incorporating the effect of river depth on dissipation rates. - Validation of GREAT-ER.

  6. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
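The combining step described in the abstract, weighting each model's cluster-membership probabilities by the model's posterior support, can be sketched as follows. The membership matrices and log marginal likelihoods are hypothetical, and this is a generic Bayesian-model-averaging recipe rather than the authors' exact procedure:

```python
import numpy as np

# Hypothetical posterior membership probabilities for 4 individuals in 2 phenotype
# clusters, from two clusterings (latent class analysis vs. grade of membership).
p_lca = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.1, 0.9]])
p_gom = np.array([[0.7, 0.3], [0.4, 0.6], [0.8, 0.2], [0.2, 0.8]])

# Model weights proportional to (assumed) marginal likelihoods, computed on the
# log scale for numerical stability.
log_evidence = np.array([-120.0, -121.5])
w = np.exp(log_evidence - log_evidence.max())
w /= w.sum()

p_bma = w[0] * p_lca + w[1] * p_gom   # model-averaged membership probabilities
phenotype = p_bma.argmax(axis=1)      # consensus phenotype per individual
print(p_bma.round(3), phenotype)
```

The averaged matrix `p_bma` still sums to 1 across clusters for each individual, so it can feed directly into a downstream analysis (here, the linkage step) in place of either single model's assignment.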

  7. Planning for greater confinement disposal

    International Nuclear Information System (INIS)

    Gilbert, T.L.; Luner, C.; Meshkov, N.K.; Trevorrow, L.E.; Yu, C.

    1985-01-01

    A report that provides guidance for planning for greater-confinement disposal (GCD) of low-level radioactive waste is being prepared. The report addresses procedures for selecting a GCD technology and provides information for implementing these procedures. The focus is on GCD; planning aspects common to GCD and shallow-land burial are covered by reference. Planning procedure topics covered include regulatory requirements, waste characterization, benefit-cost-risk assessment and pathway analysis methodologies, determination of need, waste-acceptance criteria, performance objectives, and comparative assessment of attributes that support these objectives. The major technologies covered include augered shafts, deep trenches, engineered structures, hydrofracture, improved waste forms, and high-integrity containers. Descriptive information is provided, and attributes that are relevant for risk assessment and operational requirements are given. 10 refs., 3 figs., 2 tabs

  8. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
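The "face-average" representation is, at its simplest, a pixel-wise mean over aligned exemplars of the same person, which suppresses the pose and lighting idiosyncrasies of any single enrollment photo. The following fully synthetic sketch (random arrays stand in for aligned grayscale images) illustrates why the average is a more stable template than one image:

```python
import numpy as np

# Synthetic stand-in for aligned grayscale face images of one user (32x32 each):
# a fixed "identity" pattern plus independent per-image noise (pose, lighting).
rng = np.random.default_rng(0)
true_face = rng.uniform(0.0, 1.0, size=(32, 32))
images = [np.clip(true_face + rng.normal(0.0, 0.2, true_face.shape), 0.0, 1.0)
          for _ in range(20)]

face_average = np.mean(images, axis=0)  # the "face-average" template

# The average lies closer to the underlying identity than any single exemplar,
# so matching against it tolerates everyday viewing-condition variation better.
err_single = np.abs(images[0] - true_face).mean()
err_average = np.abs(face_average - true_face).mean()
print(err_single, err_average)
```

Real systems would align landmarks and warp images before averaging; the mean itself is the cheap part, which is the paper's point about representations mattering as much as matching algorithms.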

  9. Absorption spectrum of DNA for wavelengths greater than 300 nm

    International Nuclear Information System (INIS)

    Sutherland, J.C.; Griffin, K.P.

    1981-01-01

    Although DNA absorption at wavelengths greater than 300 nm is much weaker than that at shorter wavelengths, this absorption seems to be responsible for much of the biological damage caused by solar radiation of wavelengths less than 320 nm. Accurate measurement of the absorption spectrum of DNA above 300 nm is complicated by turbidity characteristic of concentrated solutions of DNA. We have measured the absorption spectra of DNA from calf thymus, Clostridium perfringens, Escherichia coli, Micrococcus luteus, salmon testis, and human placenta using procedures which separate optical density due to true absorption from that due to turbidity. Above 300 nm, the relative absorption of DNA increases as a function of guanine-cytosine content, presumably because the absorption of guanine is much greater than the absorption of adenine at these wavelengths. This result suggests that the photophysical processes which follow absorption of a long-wavelength photon may, on the average, differ from those induced by shorter-wavelength photons. It may also explain the lower quantum yield for the killing of cells by wavelengths above 300 nm compared to that by shorter wavelengths

  10. The value of model averaging and dynamical climate model predictions for improving statistical seasonal streamflow forecasts over Australia

    Science.gov (United States)

    Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.

    2013-10-01

    Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improvement in seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors. This is to take advantage of the direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provide little additional benefit.

  11. Decreasing food fussiness in children with obesity leads to greater weight loss in family-based treatment.

    Science.gov (United States)

    Hayes, Jacqueline F; Altman, Myra; Kolko, Rachel P; Balantekin, Katherine N; Holland, Jodi Cahill; Stein, Richard I; Saelens, Brian E; Welch, R Robinson; Perri, Michael G; Schechtman, Kenneth B; Epstein, Leonard H; Wilfley, Denise E

    2016-10-01

    Food fussiness (FF), or the frequent rejection of both familiar and unfamiliar foods, is common among children and, given its link to poor diet quality, may contribute to the onset and/or maintenance of childhood obesity. This study examined child FF in association with anthropometric variables and diet in children with overweight/obesity participating in family-based behavioral weight loss treatment (FBT). Change in FF was assessed in relation to FBT outcome, including whether change in diet quality mediated the relation between change in FF and change in child weight. Child (N = 170; age = 9.41 ± 1.23) height and weight were measured, and parents completed FF questionnaires and three 24-h recalls of child diet at baseline and post-treatment. Healthy Eating Index-2005 scores were calculated. At baseline, child FF was related to lower vegetable intake. Average child FF decreased from start to end of FBT. Greater decreases in FF were associated with greater reductions in child body mass index and improved overall diet quality. Overall, diet quality change through FBT mediated the relation between child FF change and child body mass index change. Children with high FF can benefit from FBT, and addressing FF may be important in childhood obesity treatment to maximize weight outcomes. © 2016 The Obesity Society.

  12. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
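The key idea, an averaging window that adapts to migration velocity so that fast, narrow early peaks get light smoothing and slow, broad late peaks get heavy smoothing, can be sketched as below. The linear window-growth rule and all parameters are assumptions for illustration, not the authors' calibrated algorithm:

```python
import numpy as np

def velocity_adaptive_moving_average(signal, min_win=3, max_win=25):
    """Smooth an electropherogram with a window that grows along the trace.

    Early-migrating (high-mobility) analytes produce narrow, high-frequency
    peaks, so a small window is used at the start; late, broad peaks tolerate
    a larger window. Linear window growth is an assumption of this sketch.
    """
    n = len(signal)
    out = np.empty(n)
    for i in range(n):
        half = int(round(min_win + (max_win - min_win) * i / max(n - 1, 1))) // 2
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out

# Demo: a narrow early peak and a broad late peak, plus detector noise.
t = np.linspace(0.0, 1.0, 500)
clean = np.exp(-((t - 0.2) / 0.01) ** 2) + 0.6 * np.exp(-((t - 0.7) / 0.04) ** 2)
noisy = clean + np.random.default_rng(1).normal(0.0, 0.05, t.size)
smoothed = velocity_adaptive_moving_average(noisy)
```

An O(n) running-sum variant would suit a microcontroller better than this per-point slice, but the windowing logic is the same.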

  13. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
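The barycentric approach the record contrasts with the Riemannian mean can be sketched in a few lines: sum unit quaternions (after aligning signs, since q and -q encode the same rotation) and project back onto the unit sphere. This is an illustrative implementation of the simple estimate, not the paper's corrected method:

```python
import numpy as np

def average_quaternions(quats):
    """Barycenter-style average of unit quaternions (approximate rotation mean).

    Aligns signs to handle the q/-q double cover, sums, and renormalizes.
    For tightly clustered rotations this approximates the Riemannian
    (geodesic) mean; for widely spread rotations it does not.
    """
    quats = np.asarray(quats, dtype=float)
    ref = quats[0]
    aligned = np.array([q if np.dot(q, ref) >= 0 else -q for q in quats])
    mean = aligned.sum(axis=0)
    return mean / np.linalg.norm(mean)  # project back onto the unit sphere

# Demo: rotations about the z-axis by -10, 0, +10 degrees (w, x, y, z convention).
angles = np.deg2rad([-10.0, 0.0, 10.0])
quats = [np.array([np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)]) for a in angles]
q_mean = average_quaternions(quats)
# Symmetric angles should average to (approximately) the identity rotation.
```

The renormalization step is exactly the "subsequent correction" the abstract refers to: the raw barycenter leaves the rotation manifold, and projecting back is what makes the estimate usable.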

  14. Behavioral correlates of heart rates of free-living Greater White-fronted Geese

    Science.gov (United States)

    Ely, Craig R.; Ward, D.H.; Bollinger, K.S.

    1999-01-01

    We simultaneously monitored the heart rate and behavior of nine free-living Greater White-fronted Geese (Anser albifrons) on their wintering grounds in northern California. Heart rates of wild geese were monitored via abdominally-implanted radio transmitters with electrodes that received electrical impulses of the heart and emitted a radio signal with each ventricular contraction. Post-operative birds appeared to behave normally, readily rejoining flocks and flying up to 15 km daily from night-time roost sites to feed in surrounding agricultural fields. Heart rates varied significantly among individuals and among behaviors, and ranged from less than 100 beats per minute (BPM) during resting, to over 400 BPM during flight. Heart rates varied from 80 to 140 BPM during non-strenuous activities such as walking, feeding, and maintenance activities, to about 180 BPM when birds became alert, and over 400 BPM when birds were startled, even if they did not take flight. Postflight heart rate recovery times were also measured; heart rates during social interactions were context-dependent, and were highest in initial encounters among individuals. Instantaneous measures of physiological parameters, such as heart rate, are often better indicators of the degree of response to external stimuli than visual observations and can be used to improve estimates of energy expenditure based solely on activity data.

  15. Cost Analysis of Total Joint Arthroplasty Readmissions in a Bundled Payment Care Improvement Initiative.

    Science.gov (United States)

    Clair, Andrew J; Evangelista, Perry J; Lajam, Claudette M; Slover, James D; Bosco, Joseph A; Iorio, Richard

    2016-09-01

    The Bundled Payment for Care Improvement (BPCI) Initiative is a Centers for Medicare and Medicaid Services program designed to promote coordinated and efficient care. This study seeks to report costs of readmissions within a 90-day episode of care for BPCI Initiative patients receiving total knee arthroplasty (TKA) or total hip arthroplasty (THA). From January 2013 through December 2013, 1 urban, tertiary, academic orthopedic hospital admitted 664 patients undergoing either primary TKA or THA through the BPCI Initiative. All patients readmitted to our hospital or an outside hospital within 90 days of the index episode were identified. The diagnosis and cost for each readmission were analyzed. Eighty readmissions in 69 of 664 patients (10%) were identified within 90 days. There were 53 readmissions (45 patients) after THA and 27 readmissions (24 patients) after TKA. Surgical complications accounted for 54% of THA readmissions and 44% of TKA readmissions. These complications had an average cost of $36,038 (range, $6375-$60,137) for THA and $38,953 (range, $4790-$104,794) for TKA. Eliminating the TKA outlier of greater than $100,000 yields an average cost of $27,979. Medical complications had an average cost of $22,775 (range, $5678-$82,940) for THA and $24,183 (range, $3306-$186,069) for TKA. Eliminating the TKA outlier of greater than $100,000 yields an average cost of $11,682. Hospital readmissions after THA and TKA are common and costly. Identifying the causes for readmission and assessing the cost will guide quality improvement efforts. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Average years of life lost due to breast and cervical cancer and the association with the marginalization index in Mexico in 2000 and 2010.

    Science.gov (United States)

    Cervantes, Claudio Alberto Dávila; Botero, Marcela Agudelo

    2014-05-01

    The objective of this study was to calculate average years of life lost due to breast and cervical cancer in Mexico in 2000 and 2010. Data on mortality in women aged between 20 and 84 years was obtained from the National Institute for Statistics and Geography. Age-specific mortality rates and average years of life lost, which is an estimate of the number of years that a person would have lived if he or she had not died prematurely, were estimated for both diseases. Data was disaggregated into five-year age groups and socioeconomic status based on the 2010 marginalization index obtained from the National Population Council. A decrease in average years of life lost due to cervical cancer (37.4%) and an increase in average years of life lost due to breast cancer (8.9%) were observed during the period studied. Average years of life lost due to cervical cancer was greater among women living in areas with a high marginalization index, while average years of life lost due to breast cancer was greater in women from areas with a low marginalization index.

  17. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
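    Since memoryless strategies suffice, an optimal play is ultimately periodic: a finite prefix followed by a repeated cycle. For the bounded case the cycle's total weight must be 0, and the average-energy of such a play can then be computed in closed form. The helper below is my own toy illustration of that computation, not the paper's algorithm.

```python
def average_energy(prefix, cycle):
    """Long-run average of the accumulated energy of an ultimately periodic play.

    The play applies the 'prefix' weights once, then repeats 'cycle' forever.
    When the cycle's total weight is 0 (the bounded case), the accumulated
    energy is periodic, and its long-run average is the energy at the cycle
    entry plus the mean of the cycle's prefix sums.
    """
    e = sum(prefix)               # energy when the cycle is entered
    csums, acc = [], 0
    for w in cycle:
        acc += w
        csums.append(acc)         # accumulated energy offsets along the cycle
    if csums[-1] != 0:
        raise ValueError("cycle weight must be 0 for a bounded average-energy")
    return e + sum(csums) / len(csums)
```

    For example, prefix `[2]` with cycle `[1, -1]` oscillates between energies 3 and 2 forever, giving an average-energy of 2.5.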

  18. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  19. Simulated effects of projected ground-water withdrawals in the Floridan aquifer system, greater Orlando metropolitan area, east-central Florida

    Science.gov (United States)

    Murray, Louis C.; Halford, Keith J.

    1999-01-01

    Ground-water levels in the Floridan aquifer system within the greater Orlando metropolitan area are expected to decline because of a projected increase in the average pumpage rate from 410 million gallons per day in 1995 to 576 million gallons per day in 2020. The potential decline in ground-water levels and spring discharge within the area was investigated with a calibrated, steady-state, ground-water flow model. A wetter-than-average condition scenario and a drought-condition scenario were simulated to bracket the range of water-levels and springflow that may occur in 2020 under average rainfall conditions. Pumpage used to represent the drought-condition scenario totaled 865 million gallons per day, about 50 percent greater than the projected average pumpage rate in 2020. Relative to average 1995 steady-state conditions, drawdowns simulated in the Upper Floridan aquifer exceeded 10 and 25 feet for wet and dry conditions, respectively, in parts of central and southwest Orange County and in north Osceola County. In Seminole County, drawdowns of up to 20 feet were simulated for dry conditions, compared with 5 to 10 feet simulated for wet conditions. Computed springflow was reduced by 10 percent for wet conditions and by 38 percent for dry conditions, with the largest reductions (28 and 76 percent) occurring at the Sanlando Springs group. In the Lower Floridan aquifer, drawdowns simulated in southwest Orange County exceeded 20 and 40 feet for wet and dry conditions, respectively.

  20. Greater happiness for a greater number: Is that possible? If so how? (Arabic)

    NARCIS (Netherlands)

    R. Veenhoven (Ruut); E. Samuel (Emad)

    2012-01-01

    textabstractWhat is the final goal of public policy? Jeremy Bentham (1789) would say: greater happiness for a greater number. He thought of happiness as subjective enjoyment of life; in his words as “the sum of pleasures and pains”. In his time, the happiness of the great number could not be

  1. Average size of random polygons with fixed knot topology.

    Science.gov (United States)

    Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo

    2003-07-01

    We have evaluated by numerical simulation the average size R(K) of random polygons of fixed knot topology K = 3₁, 3₁♯4₁, and we have confirmed the scaling law R²(K) ∼ N^(2ν(K)) for the number N of polygonal nodes in a wide range: N = 100-2200. The best fit gives 2ν(K) ≈ 1.11-1.16 with good fitting curves in the whole range of N. The estimate of 2ν(K) is consistent with the exponent of self-avoiding polygons. In a limited range of N (N ≳ 600), however, we have another fit with 2ν(K) ≈ 1.01-1.07, which is close to the exponent of random polygons.
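    The exponent 2ν(K) in a scaling law of this form is the slope of a log-log fit of R²(K) against N. A minimal sketch on synthetic data (the amplitude and exponent below are made up, not the paper's values):

```python
import numpy as np

def fit_scaling_exponent(N, R2):
    """Least-squares slope of log R^2 vs log N, i.e. the exponent in R^2 ~ N^(2*nu)."""
    slope, _ = np.polyfit(np.log(N), np.log(R2), 1)
    return slope

N = np.arange(100, 2300, 100)        # polygon sizes, as in the quoted range
R2 = 3.0 * N ** 1.12                 # synthetic data with a known exponent
```

    Fitting restricted subranges of N, as the abstract describes for N ≳ 600, is just a matter of slicing the arrays before the fit.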

  2. Planning for greater-confinement disposal

    International Nuclear Information System (INIS)

    Gilbert, T.L.; Luner, C.; Meshkov, N.K.; Trevorrow, L.E.; Yu, C.

    1984-01-01

    This contribution is a progress report for preparation of a document that will summarize procedures and technical information needed to plan for and implement greater-confinement disposal (GCD) of low-level radioactive waste. Selection of a site and a facility design (Phase I), and construction, operation, and extended care (Phase II) will be covered in the document. This progress report is limited to Phase I. Phase I includes determination of the need for GCD, design alternatives, and selection of a site and facility design. Alternative designs considered are augered shafts, deep trenches, engineered structures, high-integrity containers, hydrofracture, and improved waste form. Design considerations and specifications, performance elements, cost elements, and comparative advantages and disadvantages of the different designs are covered. Procedures are discussed for establishing overall performance objectives and waste-acceptance criteria, and for comparative assessment of the performance and cost of the different alternatives. 16 references

  3. Planning for greater-confinement disposal

    International Nuclear Information System (INIS)

    Gilbert, T.L.; Luner, C.; Meshkov, N.K.; Trevorrow, L.E.; Yu, C.

    1984-01-01

    This contribution is a progress report for preparation of a document that will summarize procedures and technical information needed to plan for and implement greater-confinement disposal (GCD) of low-level radioactive waste. Selection of a site and a facility design (Phase I), and construction, operation, and extended care (Phase II) will be covered in the document. This progress report is limited to Phase I. Phase I includes determination of the need for GCD, design alternatives, and selection of a site and facility design. Alternative designs considered are augered shafts, deep trenches, engineered structures, high-integrity containers, hydrofracture, and improved waste form. Design considerations and specifications, performance elements, cost elements, and comparative advantages and disadvantages of the different designs are covered. Procedures are discussed for establishing overall performance objecties and waste-acceptance criteria, and for comparative assessment of the performance and cost of the different alternatives. 16 refs

  4. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  5. The Effects of Average Revenue Regulation on Electricity Transmission Investment and Pricing

    OpenAIRE

    Isamu Matsukawa

    2005-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In the case of any linear or log-linear electricity demand function with a positive probability that no congestion occur...

  6. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
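    The stated identity — the difference between two averages computed under alternative weighting functions equals the covariance between the variable and the ratio of the weighting functions, divided by the average of that ratio (all taken under the first weighting) — can be checked numerically. The data below are random placeholders:

```python
import numpy as np

def wmean(x, w):
    """Average of x under weighting function w."""
    return np.sum(w * x) / np.sum(w)

rng = np.random.default_rng(0)
x  = rng.random(50)            # the variable (e.g. an age-specific rate)
w1 = rng.random(50) + 0.1      # first weighting function
w2 = rng.random(50) + 0.1      # second weighting function

r = w2 / w1                    # ratio of the weighting functions
cov = wmean((x - wmean(x, w1)) * (r - wmean(r, w1)), w1)

# identity: A2 - A1 = Cov_w1(x, r) / E_w1[r]
assert np.isclose(wmean(x, w2) - wmean(x, w1), cov / wmean(r, w1))
```

    The identity follows from expanding A2 = E₁[r·x]/E₁[r] and substituting Cov₁(x, r) = E₁[x·r] − E₁[x]·E₁[r].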

  7. Review of the different methods to derive average spacing from resolved resonance parameters sets

    International Nuclear Information System (INIS)

    Fort, E.; Derrien, H.; Lafond, D.

    1979-12-01

    The average spacing of resonances is an important parameter for statistical model calculations, especially concerning non fissile nuclei. The different methods to derive this average value from resonance parameters sets have been reviewed and analyzed in order to tentatively detect their respective weaknesses and propose recommendations. Possible improvements are suggested

  8. [Three-dimensional gait analysis of patients with osteonecrosis of femoral head before and after treatments with vascularized greater trochanter bone flap].

    Science.gov (United States)

    Cui, Daping; Zhao, Dewei

    2011-03-01

    To provide an objective basis for evaluating the results of vascularized greater trochanter bone flap treatment of osteonecrosis of the femoral head (ONFH) by three-dimensional gait analysis. Between March 2006 and March 2007, 35 patients with ONFH were treated with vascularized greater trochanter bone flap, and gait analysis was performed using a three-dimensional gait analysis system before operation and at 1 and 2 years after operation. There were 23 males and 12 females, aged 21-52 years (mean, 35.2 years), including 8 cases of steroid-induced, 7 cases of traumatic, 6 cases of alcoholic, and 14 cases of idiopathic ONFH. The left side was involved in 15 cases, and the right side in 20 cases. According to the Association Research Circulation Osseous (ARCO) classification, all patients were diagnosed as having femoral-head necrosis at stage III. The preoperative Harris hip functional score (HHS) was 56.2 ± 5.6. The disease duration was 1.5-18.6 years (mean, 5.2 years). All incisions healed at stage I without early postoperative complications of deep vein thrombosis or infection. Thirty-five patients were followed up 2-3 years (mean, 2.5 years). At 2 years after operation, the HHS score was 85.8 ± 4.1, showing significant difference when compared with the preoperative score (t = 23.200, P = 0.000). Before operation, patients showed hip-muscle gait, short-step gait, and antalgic (pain-relieving) gait; these pathological gaits significantly improved at 1 year after operation. At 1 and 2 years after operation, step frequency, pace, step length, and hip flexion, hip extension, knee flexion, and ankle flexion were significantly improved (P petronas wave appeared at swing phase; the preoperative situation was three normal phase waves. These results suggest that three-dimensional gait analysis before and after vascularized greater trochanter bone flap treatment for ONFH can precisely evaluate changes in hip dynamics.

  9. Taphonomic trade-offs in tropical marine death assemblages: Differential time averaging, shell loss, and probable bias in siliciclastic vs. carbonate facies

    Science.gov (United States)

    Kidwell, Susan M.; Best, Mairi M. R.; Kaufman, Darrell S.

    2005-09-01

    Radiocarbon-calibrated amino-acid racemization ages of individually dated bivalve mollusk shells from Caribbean reef, nonreefal carbonate, and siliciclastic sediments in Panama indicate that siliciclastic sands and muds contain significantly older shells (median 375 yr, range up to ˜5400 yr) than nearby carbonate seafloors (median 72 yr, range up to ˜2900 yr; maximum shell ages differ significantly at p < 0.02 using extreme-value statistics). The implied difference in shell loss rates is contrary to physicochemical expectations but is consistent with observed differences in shell condition (greater bioerosion and dissolution in carbonates). Higher rates of shell loss in carbonate sediments should lead to greater compositional bias in surviving skeletal material, resulting in taphonomic trade-offs: less time averaging but probably higher taxonomic bias in pure carbonate sediments, and lower bias but greater time averaging in siliciclastic sediments from humid-weathered accretionary arc terrains, which are a widespread setting of tropical sedimentation.

  10. Average years of life lost due to breast and cervical cancer and the association with the marginalization index in Mexico in 2000 and 2010

    Directory of Open Access Journals (Sweden)

    Claudio Alberto Dávila Cervantes

    2014-05-01

    Full Text Available The objective of this study was to calculate average years of life lost due to breast and cervical cancer in Mexico in 2000 and 2010. Data on mortality in women aged between 20 and 84 years was obtained from the National Institute for Statistics and Geography. Age-specific mortality rates and average years of life lost, which is an estimate of the number of years that a person would have lived if he or she had not died prematurely, were estimated for both diseases. Data was disaggregated into five-year age groups and socioeconomic status based on the 2010 marginalization index obtained from the National Population Council. A decrease in average years of life lost due to cervical cancer (37.4%) and an increase in average years of life lost due to breast cancer (8.9%) was observed during the period studied. Average years of life lost due to cervical cancer was greater among women living in areas with a high marginalization index, while average years of life lost due to breast cancer was greater in women from areas with a low marginalization index.

  11. An Experimental Study Related to Planning Abilities of Gifted and Average Students

    Directory of Open Access Journals (Sweden)

    Marilena Z. Leana-Taşcılar

    2016-02-01

    Full Text Available Gifted students differ from their average peers in psychological, social, emotional and cognitive development. One of these differences in the cognitive domain is related to executive functions. One of the most important executive functions is planning and organization ability. The aim of this study was to compare planning abilities of gifted students with those of their average peers and to test the effectiveness of a training program on planning abilities of gifted students and average students. First, students’ intelligence and planning abilities were measured and then assigned to either experimental or control group. The groups were matched by intelligence and planning ability (experimental: 13 gifted and 8 average; control: 14 gifted and 8 average). In total 182 students (79 gifted and 103 average) participated in the study. Then, a training program was implemented in the experimental group to find out if it improved students’ planning ability. Results showed that boys had better planning abilities than girls did, and gifted students had better planning abilities than their average peers did. Significant results were obtained in favor of the experimental group in the posttest scores.

  12. A new probabilistic seismic hazard assessment for greater Tokyo

    Science.gov (United States)

    Stein, R.S.; Toda, S.; Parsons, T.; Grunewald, E.; Blong, R.; Sparks, S.; Shah, H.; Kennedy, J.

    2006-01-01

    Tokyo and its outlying cities are home to one-quarter of Japan's 127 million people. Highly destructive earthquakes struck the capital in 1703, 1855 and 1923, the last of which took 105 000 lives. Fuelled by greater Tokyo's rich seismological record, but challenged by its magnificent complexity, our joint Japanese-US group carried out a new study of the capital's earthquake hazards. We used the prehistoric record of great earthquakes preserved by uplifted marine terraces and tsunami deposits (17 M ≥ 8 shocks in the past 7000 years), a newly digitized dataset of historical shaking (10 000 observations in the past 400 years), the dense modern seismic network (300 000 earthquakes in the past 30 years), and Japan's GeoNet array (150 GPS vectors in the past 10 years) to reinterpret the tectonic structure, identify active faults and their slip rates and estimate their earthquake frequency. We propose that a dislodged fragment of the Pacific plate is jammed between the Pacific, Philippine Sea and Eurasian plates beneath the Kanto plain on which Tokyo sits. We suggest that the Kanto fragment controls much of Tokyo's seismic behaviour for large earthquakes, including the damaging 1855 M ≈ 7.3 Ansei-Edo shock. On the basis of the frequency of earthquakes beneath greater Tokyo, events with magnitude and location similar to the M ≈ 7.3 Ansei-Edo event have a ca 20% likelihood in an average 30 year period. In contrast, our renewal (time-dependent) probability for the great M ≈ 7.9 plate boundary shocks such as struck in 1923 and 1703 is 0.5% for the next 30 years, with a time-averaged 30 year probability of ca 10%. The resulting net likelihood for severe shaking (ca 0.9g peak ground acceleration (PGA)) in Tokyo, Kawasaki and Yokohama for the next 30 years is ca 30%. The long historical record in Kanto also affords a rare opportunity to calculate the probability of shaking in an alternative manner exclusively from intensity observations. This approach permits robust estimates
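    The time-averaged probabilities quoted in the abstract can be related to annual rates with a simple Poisson (time-independent) model — a deliberate simplification of the study's renewal calculation, shown only to make the arithmetic concrete:

```python
import math

def poisson_prob(annual_rate, years=30.0):
    """Probability of at least one event in a window, assuming a Poisson process."""
    return 1.0 - math.exp(-annual_rate * years)

# a ca 20% chance of an Ansei-Edo-size shock in 30 years implies an annual rate
# of about 0.74%; a ca 10% time-averaged chance of an M 7.9 shock implies ~0.35%
rate_m73 = -math.log(1 - 0.20) / 30.0
rate_m79 = -math.log(1 - 0.10) / 30.0
```

    The renewal model in the study departs from this by letting the hazard depend on the time elapsed since the last great earthquake, which is why its 30-year estimate (0.5%) sits well below the time-averaged 10%.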

  13. Effect of force tightening on cable tension and displacement in greater trochanter reattachment.

    Science.gov (United States)

    Canet, Fanny; Duke, Kajsa; Bourgeois, Yan; Laflamme, G-Yves; Brailovski, Vladimir; Petit, Yvan

    2011-01-01

    The purpose of this study was to evaluate cable tension during installation, and during loading similar to walking, in a cable-grip-type greater trochanter (GT) reattachment system. A 4th generation Sawbones composite femur with an osteotomised GT was reattached with four Cable-Ready® systems (Zimmer, Warsaw, IN). Cables were tightened at 3 different target installation forces (178, 356 and 534 N) and retightened once as recommended by the manufacturer. Cable tension was continuously monitored using in-situ load cells. To simulate walking, a custom frame was used to apply quasi-static load on the head of a femoral stem implant (2340 N) and abductor pull (667 N) on the GT. GT displacement (gap and sliding) relative to the femur was measured using a 3D camera system. During installation, a drop in cable tension was observed when tightening subsequent cables: an average 40 ± 12.2% and 11 ± 5.9% tension loss was measured in the first and second cables. Therefore, retightening the cables, as recommended by the manufacturer, is important. During simulated walking, the second cable additionally lost up to 12.2 ± 3.6% of tension. No difference was observed between the GT-femur gaps measured with cables tightened at different installation forces (p = 0.32). The GT sliding, however, was significantly greater (0.9 ± 0.3 mm) when the target installation force was set to only 178 N compared to 356 N (0.2 ± 0.1 mm). Cable tightening force should therefore be as close as possible to that recommended by the manufacturer, because reducing it compromises the stability of the GT fragment, whereas increasing it does not improve this stability, but could lead to cable breakage.

  14. High-average-power diode-pumped Yb: YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods

  15. Practicing more retrieval routes leads to greater memory retention.

    Science.gov (United States)

    Zheng, Jun; Zhang, Wei; Li, Tongtong; Liu, Zhaomin; Luo, Liang

    2016-09-01

    A wealth of research has shown that retrieval practice plays a significant role in improving memory retention. The current study focused on one simple yet rarely examined question: would repeated retrieval using two different retrieval routes or using the same retrieval route twice lead to greater long-term memory retention? Participants elaborately learned 22 Japanese-Chinese translation word pairs using two different mediators. Half an hour after the initial study phase, the participants completed two retrieval sessions using either one mediator (Tm1Tm1) or two different mediators (Tm1Tm2). On the final test, which was performed 1 week after the retrieval practice phase, the participants received only the cue with a request to report the mediator (M1 or M2) followed by the target (Experiment 1) or only the mediator (M1 or M2) with a request to report the target (Experiment 2). The results of Experiment 1 indicated that the participants who practiced under the Tm1Tm2 condition exhibited greater target retention than those who practiced under the Tm1Tm1 condition. This difference in performance was due to the significant disadvantage in mediator retrieval and decoding of the unpracticed mediator under the Tm1Tm1 condition. Although mediators were provided to participants on the final test in Experiment 2, decoding of the unpracticed mediators remained less effective than decoding of the practiced mediators. We conclude that practicing multiple retrieval routes leads to greater memory retention than focusing on a single retrieval route. Thus, increasing retrieval variability during repeated retrieval practice indeed significantly improves long-term retention on a delayed test. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Genetic variance and covariance and breed differences for feed intake and average daily gain to improve feed efficiency in growing cattle.

    Science.gov (United States)

    Retallick, K J; Bormann, J M; Weaber, R L; MacNeil, M D; Bradford, H L; Freetly, H C; Hales, K E; Moser, D W; Snelling, W M; Thallman, R M; Kuehn, L A

    2017-04-01

    Feed costs are a major economic expense in finishing and developing cattle; however, collection of feed intake data is costly. Examining relationships among measures of growth and intake, including breed differences, could facilitate selection for efficient cattle. Objectives of this study were to estimate genetic parameters for growth and intake traits and compare indices for feed efficiency to accelerate selection response. On-test ADFI and on-test ADG (TESTADG) and postweaning ADG (PWADG) records for 5,606 finishing steers and growing heifers were collected at the U.S. Meat Animal Research Center in Clay Center, NE. On-test ADFI and ADG data were recorded over testing periods that ranged from 62 to 148 d. Individual quadratic regressions were fitted for BW on time, and TESTADG was predicted from the resulting equations. We included PWADG in the model to improve estimates of growth and intake parameters; PWADG was derived by dividing gain from weaning weight to yearling weight by the number of days between the weights. Genetic parameters were estimated using multiple-trait REML animal models with TESTADG, ADFI, and PWADG for both sexes as dependent variables. Fixed contemporary groups were cohorts of calves simultaneously tested, and covariates included age on test, age of dam, direct and maternal heterosis, and breed composition. Genetic correlations (SE) between steer TESTADG and ADFI, PWADG and ADFI, and TESTADG and PWADG were 0.33 (0.10), 0.59 (0.06), and 0.50 (0.09), respectively, and corresponding estimates for heifers were 0.66 (0.073), 0.77 (0.05), and 0.88 (0.05), respectively. Indices combining EBV for ADFI with EBV for ADG were developed and evaluated. Greater improvement in feed efficiency can be expected using an unrestricted index versus a restricted index. Heterosis significantly affected each trait contributing to greater ADFI and TESTADG. Breed additive effects were estimated for ADFI, TESTADG, and the efficiency indices.

  17. Global positioning system use in the community to evaluate improvements in walking after revascularization: a prospective multicenter study with 6-month follow-up in patients with peripheral arterial disease.

    Science.gov (United States)

    Gernigon, Marie; Le Faucheur, Alexis; Fradin, Dominique; Noury-Desvaux, Bénédicte; Landron, Cédric; Mahe, Guillaume; Abraham, Pierre

    2015-05-01

Revascularization aims at improving walking ability in patients with arterial claudication. The highest measured distance between 2 stops (highest-MDCW), the average walking speed (average-WSCW), and the average stop duration (average-DSCW) can be measured by global positioning system, but their evolution after revascularization is unknown. We included 251 patients with peripheral artery disease and self-reported limiting claudication. The patients performed a 1-hour stroll, recorded by a global positioning system receiver. Patients (n = 172) with confirmed limitation (highest-MDCW the follow-up period were compared with reference patients (i.e., with unchanged lifestyle, medical, or surgical status). Other patients (lost to follow-up or treatment change) were excluded (n = 89). We studied 44 revascularized and 39 reference patients. Changes in highest-MDCW (+442 vs. +13 m) and average-WSCW (+0.3 vs. -0.2 km/h) were greater in revascularized than in reference patients (both P the groups. Among the revascularized patients, 13 (29.5%) had a change in average-WSCW, but not in highest-MDCW, greater than the mean + 1 standard deviation of the change observed for reference patients. Revascularization may improve highest-MDCW and/or average-WSCW. This first report of changes in community walking ability in revascularized patients suggests that, beyond measuring walking distances, average-WSCW measurement is essential to monitor these changes. Applicability to other surgical populations remains to be evaluated. http://www.clinicaltrials.gov/ct2/show/NCT01141361.

  18. Black breast cancer survivors experience greater upper extremity disability.

    Science.gov (United States)

    Dean, Lorraine T; DeMichele, Angela; LeBlanc, Mously; Stephens-Shields, Alisa; Li, Susan Q; Colameco, Chris; Coursey, Morgan; Mao, Jun J

    2015-11-01

Over one-third of breast cancer survivors experience upper extremity disability. Black women present with factors associated with greater upper extremity disability, including: increased body mass index (BMI), more advanced disease stage at diagnosis, and varying treatment type compared with Whites. No prior research has evaluated the relationship between race and upper extremity disability using validated tools and controlling for these factors. Data were drawn from a survey study among 610 women with stage I-III hormone receptor positive breast cancer. The Disabilities of the Arm, Shoulder and Hand (QuickDASH) is an 11-item self-administered questionnaire that has been validated for breast cancer survivors to assess global upper extremity function over the past 7 days. Linear regression and mediation analysis estimated the relationships between race, BMI and QuickDASH score, adjusting for demographics and treatment types. Black women (n = 98) had 7.3 points higher average QuickDASH scores than White (n = 512) women (p disability by 40 %. Even several years post-treatment, Black breast cancer survivors had greater upper extremity disability, which was partially mediated by higher BMIs. Close monitoring of high BMI Black women may be an important step in reducing disparities in cancer survivorship. More research is needed on the relationship between race, BMI, and upper extremity disability.

  19. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantially.

  20. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.
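The abstract describes Stata commands for exact Bayesian model averaging. As a rough, purely illustrative sketch of the underlying idea (not the `bma` implementation), the following enumerates every subset of two candidate regressors, weights each OLS fit by a BIC-based approximation to its posterior model probability, and reports the model-averaged coefficient. The toy data, variable names, and quartic-free setup are all invented for illustration.

```python
import math
from itertools import combinations

def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ols(X, y):
    """OLS via the normal equations; returns (coefficients, RSS)."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    rss = sum((yi - sum(b * v for b, v in zip(beta, r))) ** 2
              for r, yi in zip(X, y))
    return beta, rss

# Invented toy data: y depends on x1 but not on x2.
x1 = [float(i) for i in range(10)]
x2 = [(-1.0) ** i for i in range(10)]
noise = [0.05, -0.03, 0.02, 0.04, -0.05, 0.01, -0.02, 0.03, -0.04, 0.02]
y = [1.0 + 2.0 * a + e for a, e in zip(x1, noise)]
candidates = {"x1": x1, "x2": x2}
n = len(y)

# Score every subset of regressors (intercept always included) by BIC.
bics, betas = {}, {}
for size in range(len(candidates) + 1):
    for model in combinations(sorted(candidates), size):
        X = [[1.0] + [candidates[v][i] for v in model] for i in range(n)]
        beta, rss = ols(X, y)
        bics[model] = n * math.log(rss / n) + len(beta) * math.log(n)
        betas[model] = dict(zip(("const",) + model, beta))

# Turn BIC scores into approximate posterior model weights.
best = min(bics.values())
raw = {m: math.exp(-0.5 * (b - best)) for m, b in bics.items()}
total = sum(raw.values())
weights = {m: w / total for m, w in raw.items()}

# Model-averaged coefficient on x1 (zero in models that exclude it).
avg_beta_x1 = sum(w * betas[m].get("x1", 0.0) for m, w in weights.items())
```

In this sketch nearly all posterior weight lands on models containing x1, and the averaged coefficient stays close to the true value of 2, which is the basic mechanism the exact estimator formalizes.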

  1. Some implications of batch average burnup calculations on predicted spent fuel compositions

    International Nuclear Information System (INIS)

    Alexander, C.W.; Croff, A.G.

    1984-01-01

    The accuracy of using batch-averaged burnups to determine spent fuel characteristics (such as isotopic composition, activity, etc.) was examined for a typical pressurized-water reactor (PWR) fuel discharge batch by comparing characteristics computed by (a) performing a single depletion calculation using the average burnup of the spent fuel and (b) performing separate depletion calculations based on the relative amounts of spent fuel in each of twelve burnup ranges and summing the results. The computations were done using ORIGEN 2. Procedure (b) showed a significant shift toward a greater quantity of the heavier transuranics, which derive from multiple neutron captures, and a corresponding decrease in the amounts of lower transuranics. Those characteristics which derive primarily from fission products, such as total radioactivity and total thermal power, are essentially identical for the two procedures. Those characteristics that derive primarily from the heavier transuranics, such as spontaneous fission neutrons, are underestimated by procedure (a)
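The underestimate from procedure (a) is a consequence of the nonlinearity of multiple-capture production in burnup: a convex function evaluated at the average burnup is smaller than the average of the function over the burnup distribution (Jensen's inequality). The sketch below is a purely illustrative toy model, not an ORIGEN2 calculation; the burnup values and the quartic production law are invented.

```python
# Hypothetical per-assembly burnups for one discharge batch (GWd/tHM).
burnups = [20.0, 25.0, 30.0, 35.0, 40.0]
mean_burnup = sum(burnups) / len(burnups)

# Toy production laws: a nuclide built up by several successive neutron
# captures grows superlinearly with burnup (here ~burnup**4), while a
# fission-product quantity grows roughly linearly.
def heavy_transuranic(b):
    return (b / 30.0) ** 4

def fission_product(b):
    return b / 30.0

# Procedure (a): one calculation at the batch-average burnup.
heavy_a = heavy_transuranic(mean_burnup)
fp_a = fission_product(mean_burnup)

# Procedure (b): separate calculations per burnup value, then averaged.
heavy_b = sum(heavy_transuranic(b) for b in burnups) / len(burnups)
fp_b = sum(fission_product(b) for b in burnups) / len(burnups)
```

Here heavy_b exceeds heavy_a by roughly a third, while fp_a and fp_b coincide exactly, mirroring the abstract's finding that fission-product-driven characteristics are insensitive to the averaging procedure while heavier transuranics are underestimated by procedure (a).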

  2. Declining average daily census. Part 1: Implications and options.

    Science.gov (United States)

    Weil, T P

    1985-12-01

A national trend toward declining average daily (inpatient) census (ADC) started in late 1982, even before the Medicare prospective payment system began. The decrease in total days will continue despite an increasing number of aged persons in the U.S. population. This decline could have been predicted from trends during 1978 to 1983, such as increasing available beds but decreasing occupancy, 100 percent increases in hospital expenses, and declining lengths of stay. Assuming that health care costs will remain a relatively fixed part of the gross national product and no major medical advances will occur in the next five years, certain implications and options exist for facilities experiencing a declining ADC. This article discusses several considerations: Attempts to improve market share; Reduction of full-time equivalent employees; Impact of greater acuity of illness among remaining inpatients; Implications of increasing the number of physicians on medical staffs; Option of a closed medical staff by clinical specialty; Unbundling with not-for-profit and profit-making corporations; Review of mergers, consolidations, and multihospital systems to decide when this option is most appropriate; Sale of a not-for-profit hospital to an investor-owned chain, with implications facing Catholic hospitals choosing this option; Impact and difficulty of developing meaningful alternative health care systems with the hospital's medical staff; Special problems of teaching hospitals; The social issue of the hospital shifting from the community's health center to a cost center; Increased turnover of hospital CEOs. With these in mind, institutions can then focus on solutions that can sometimes be used in tandem to resolve this problem's impact. The second part of this article will discuss some of them.

  3. Vermicompost Improves Tomato Yield and Quality and the Biochemical Properties of Soils with Different Tomato Planting History in a Greenhouse Study.

    Science.gov (United States)

    Wang, Xin-Xin; Zhao, Fengyan; Zhang, Guoxian; Zhang, Yongyong; Yang, Lijuan

    2017-01-01

    A greenhouse pot test was conducted to study the impacts of replacing mineral fertilizer with organic fertilizers for one full growing period on soil fertility, tomato yield and quality using soils with different tomato planting history. Four types of fertilization regimes were compared: (1) conventional fertilizer with urea, (2) chicken manure compost, (3) vermicompost, and (4) no fertilizer. The effects on plant growth, yield and fruit quality and soil properties (including microbial biomass carbon and nitrogen, [Formula: see text]-N, [Formula: see text]-N, soil water-soluble organic carbon, soil pH and electrical conductivity) were investigated in samples collected from the experimental soils at different tomato growth stages. The main results showed that: (1) vermicompost and chicken manure compost more effectively promoted plant growth, including stem diameter and plant height compared with other fertilizer treatments, in all three types of soil; (2) vermicompost improved fruit quality in each type of soil, and increased the sugar/acid ratio, and decreased nitrate concentration in fresh fruit compared with the CK treatment; (3) vermicompost led to greater improvements in fruit yield (74%), vitamin C (47%), and soluble sugar (71%) in soils with no tomato planting history compared with those in soils with long tomato planting history; and (4) vermicompost led to greater improvements in soil quality than chicken manure compost, including higher pH (averaged 7.37 vs. averaged 7.23) and lower soil electrical conductivity (averaged 204.1 vs. averaged 234.6 μS/cm) at the end of experiment in each type of soil. We conclude that vermicompost can be recommended as a fertilizer to improve tomato fruit quality and yield and soil quality, particularly for soils with no tomato planting history.

  4. Timescale Halo: Average-Speed Targets Elicit More Positive and Less Negative Attributions than Slow or Fast Targets

    Science.gov (United States)

    Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin

    2014-01-01

    Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow or fast moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average-speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow or fast moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average speed people relative to slow or fast moving people. PMID:24421882

  5. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  6. Activation time and material stiffness of sequential removable orthodontic appliances. Part 2: Dental improvements.

    Science.gov (United States)

    Clements, Karen Michelle; Bollen, Anne-Marie; Huang, Greg; King, Greg; Hujoel, Philippe; Ma, Tsun

    2003-11-01

    Fifty-one patients were enrolled in this study to explore the treatment effects of material stiffness and frequency of appliance change when using clear, sequential, removable appliances (aligners). Patients were stratified based on pretreatment peer assessment rating (PAR) scores and need for extractions. They were randomized into 4 treatment protocols: 1-week activation with soft material, 1-week activation with hard material, 2-week activation with soft material, and 2-week activation with hard material. Patients continued with their protocols until either the series of aligners was completed, or until it was determined that the aligner was not fitting well (study end point). Weighted PAR score and average incisor irregularity (AII) indexes were used to measure pretreatment and end-point study models, and average improvement was compared among the 4 groups. In addition to the evaluation of the 4 treatment groups, comparisons were made of the individual components of the PAR score to determine which occlusal components experienced the most correction with the aligners. The percentages and absolute extraction space closures were evaluated, and papillary bleeding scores before and during treatment were compared. Although no statistical difference was observed between the 4 treatment groups, a trend was noted with the 2-week frequency having a larger percentage of reduction in weighted PAR and AII scores, and greater extraction space closure. Anterior alignment was the most improved component, and buccal occlusion was the least improved. When analyzed by type of extraction, incisor extraction sites had a significantly greater percentage of closure than either maxillary or mandibular premolar extraction sites. A statistically significant decrease in mean average papillary bleeding score was found during treatment when compared with pretreatment scores, although this difference was not clinically significant.

  7. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

Calculations of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, are prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
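The gap between linear and logarithmic averaging that the abstract describes can be reproduced in a few lines. This minimal sketch (with invented abundances, not the paper's simulator) averages four hypothetical trace-gas abundances linearly and in log space; by the arithmetic-geometric mean inequality the log-space mean is always the smaller of the two, and the bias grows with the variability of the profile values.

```python
import math

# Hypothetical retrieved abundances for one altitude across four profiles.
abundances = [0.5, 1.0, 2.0, 4.0]

# Linear averaging: mean of the abundances themselves.
linear_mean = sum(abundances) / len(abundances)

# Logarithmic averaging: mean in log space, transformed back,
# i.e. the geometric mean of the abundances.
log_mean = math.exp(sum(math.log(a) for a in abundances) / len(abundances))

relative_bias = (log_mean - linear_mean) / linear_mean
```

For these values the linear mean is 1.875 while the log-space mean is sqrt(2), about 1.414, a relative bias of roughly -25%, comfortably beyond the "ten percent or more" the abstract warns about.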

  8. Similarity-based distortion of visual short-term memory is due to perceptual averaging.

    Science.gov (United States)

    Dubé, Chad; Zhou, Feng; Kahana, Michael J; Sekuler, Robert

    2014-03-01

    A task-irrelevant stimulus can distort recall from visual short-term memory (VSTM). Specifically, reproduction of a task-relevant memory item is biased in the direction of the irrelevant memory item (Huang & Sekuler, 2010a). The present study addresses the hypothesis that such effects reflect the influence of neural averaging under conditions of uncertainty about the contents of VSTM (Alvarez, 2011; Ball & Sekuler, 1980). We manipulated subjects' attention to relevant and irrelevant study items whose similarity relationships were held constant, while varying how similar the study items were to a subsequent recognition probe. On each trial, subjects were shown one or two Gabor patches, followed by the probe; their task was to indicate whether the probe matched one of the study items. A brief cue told subjects which Gabor, first or second, would serve as that trial's target item. Critically, this cue appeared either before, between, or after the study items. A distributional analysis of the resulting mnemometric functions showed an inflation in probability density in the region spanning the spatial frequency of the average of the two memory items. This effect, due to an elevation in false alarms to probes matching the perceptual average, was diminished when cues were presented before both study items. These results suggest that (a) perceptual averages are computed obligatorily and (b) perceptual averages are relied upon to a greater extent when item representations are weakened. Implications of these results for theories of VSTM are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Greater-than-Class C low-level radioactive waste characterization. Appendix E-3: GTCC LLW assumptions matrix

    International Nuclear Information System (INIS)

    1995-01-01

    This study identifies four categories of GTCC LLW: nuclear utility; sealed sources; DOE-held; and other generators. Within each category, inventory and projection data are modeled in three scenarios: (1) Unpackaged volume--this is the unpackaged volume of waste that would exceed Class C limits if the waste calculation methods in 10 CFR 61.55 were applied to the discrete items before concentration averaging methods were applied to the volume; (2) Not-concentration-averaged (NCA) packaged volume--this is the packaged volume of GTCC LLW assuming that no concentration averaging is allowed; and (3) After-concentration-averaging (ACA) packaged volume--this is the packaged volume of GTCC LLW, which, for regulatory or practical reasons, cannot be disposed of in a LLW disposal facility using allowable concentration averaging practices. Three cases are calculated for each of the volumes described above. These values are defined as the low, base, and high cases. The following tables explain the assumptions used to determine low, base, and high case estimates for each scenario, within each generator category. The appendices referred to in these tables are appendices to Greater-Than-Class C Low-Level Radioactive Waste Characterization: Estimated Volumes, Radionuclide Activities, and Other Characteristics (DOE/LLW-114, Revision 1)

  10. Chirally improving Wilson fermions I. O(a) improvement

    International Nuclear Information System (INIS)

    Frezzotti, R.; Rossi, G.C.

    2004-01-01

We show that it is possible to improve the chiral behaviour and the approach to the continuum limit of correlation functions in lattice QCD with Wilson fermions by taking arithmetic averages of correlators computed in theories regularized with Wilson terms of opposite sign. Improved hadronic masses and matrix elements can be obtained by similarly averaging the corresponding physical quantities separately computed within the two regularizations. To deal with the problems related to the spectrum of the Wilson-Dirac operator, which are particularly worrisome when Wilson and mass terms are such as to give contributions of opposite sign to the real part of the eigenvalues, we propose to use twisted-mass lattice QCD for the actual computation of the quantities taking part in the averages. The choice ±π/2 for the twisting angle is particularly interesting, as O(a) improved estimates of physical quantities can be obtained even without averaging data from lattice formulations with opposite Wilson terms. In all cases little or no extra computing power is necessary, compared to simulations with standard Wilson fermions or twisted-mass lattice QCD. (author)

  11. An effective method to improve the robustness of small-world networks under attack

    International Nuclear Information System (INIS)

    Zhang Zheng-Zhen; Xu Wen-Jun; Lin Jia-Ru; Zeng Shang-You

    2014-01-01

    In this study, the robustness of small-world networks to three types of attack is investigated. Global efficiency is introduced as the network coefficient to measure the robustness of a small-world network. The simulation results prove that an increase in rewiring probability or average degree can enhance the robustness of the small-world network under all three types of attack. The effectiveness of simultaneously increasing both rewiring probability and average degree is also studied, and the combined increase is found to significantly improve the robustness of the small-world network. Furthermore, the combined effect of rewiring probability and average degree on network robustness is shown to be several times greater than that of rewiring probability or average degree individually. This means that small-world networks with a relatively high rewiring probability and average degree have advantages both in network communications and in good robustness to attacks. Therefore, simultaneously increasing rewiring probability and average degree is an effective method of constructing realistic networks. Consequently, the proposed method is useful to construct efficient and robust networks in a realistic scenario. (interdisciplinary physics and related areas of science and technology)
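As a self-contained, hypothetical illustration of the quantities involved (not the authors' simulation code), the sketch below builds a small Watts-Strogatz-style network in pure Python, computes global efficiency (the average inverse shortest-path length over all node pairs), and removes the highest-degree node to mimic a degree-based attack. The network parameters are arbitrary.

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to its k nearest neighbours,
    with each clockwise edge rewired to a random node with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (i + j) % n
                new = rng.randrange(n)
                while new == i or new in adj[i]:
                    new = rng.randrange(n)
                adj[i].discard(old)
                adj[old].discard(i)
                adj[i].add(new)
                adj[new].add(i)
    return adj

def global_efficiency(adj):
    """Average of 1/d(u, v) over ordered node pairs (0 for unreachable)."""
    n = len(adj)
    total = 0.0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:  # breadth-first search from s
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / (n * (n - 1))

g = watts_strogatz(100, 4, 0.1)
e_before = global_efficiency(g)

# Degree-based attack: delete the highest-degree node and its edges.
target = max(g, key=lambda v: len(g[v]))
for v in g.pop(target):
    g[v].discard(target)
e_after = global_efficiency(g)
```

Raising the rewiring probability from 0 adds shortcut edges and hence raises global efficiency, which is the mechanism behind the abstract's observation that a higher rewiring probability improves robustness.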

  12. Average chest wall thickness at two anatomic locations in trauma patients.

    Science.gov (United States)

    Schroeder, Elizabeth; Valdez, Carrie; Krauthamer, Andres; Khati, Nadia; Rasmus, Jessica; Amdur, Richard; Brindle, Kathleen; Sarani, Babak

    2013-09-01

Needle thoracostomy is the emergent treatment for tension pneumothorax. This procedure is commonly done using a 4.5cm catheter, and the optimal site for chest wall puncture is controversial. We hypothesize that needle thoracostomy cannot be performed using this catheter length irrespective of the site chosen in either gender. A retrospective review of all chest computed tomography (CT) scans obtained on trauma patients from January 1, 2011 to December 31, 2011 was performed. Patients aged between 18 and 80 years were included, and patients whose chest wall thickness exceeded the boundary of the images acquired were excluded. Chest wall thickness was measured at the 2nd intercostal space (ICS), midclavicular line (MCL) and the 5th ICS, anterior axillary line (AAL). Injury severity score (ISS), chest wall thickness, and body mass index (BMI) were analyzed. 201 patients were included, 54% male. Average (SD) BMI was 26 (7)kg/m(2). The average chest wall thickness in the overall cohort was 4.08 (1.4)cm at the 2nd ICS/MCL and 4.55 (1.7)cm at the 5th ICS/AAL. 29% of the overall cohort (27 male and 32 female) had a chest wall thickness greater than 4.5cm at the 2nd ICS/MCL and 45% (54 male and 36 female) had a chest wall thickness greater than 4.5cm at the 5th ICS/AAL. There was no significant interaction between gender and chest wall thickness at either site. BMI was positively associated with chest wall thickness at both the 2nd and 5th ICS/AAL. A 4.5cm catheter is inadequate for needle thoracostomy in most patients regardless of puncture site or gender. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  14. Greater-confinement disposal of low-level radioactive wastes

    International Nuclear Information System (INIS)

    Trevorrow, L.E.; Gilbert, T.L.; Luner, C.; Merry-Libby, P.A.; Meshkov, N.K.; Yu, C.

    1985-01-01

    Low-level radioactive wastes include a broad spectrum of wastes that have different radionuclide concentrations, half-lives, and physical and chemical properties. Standard shallow-land burial practice can provide adequate protection of public health and safety for most low-level wastes, but a small volume fraction (about 1%) containing most of the activity inventory (approx.90%) requires specific measures known as ''greater-confinement disposal'' (GCD). Different site characteristics and different waste characteristics - such as high radionuclide concentrations, long radionuclide half-lives, high radionuclide mobility, and physical or chemical characteristics that present exceptional hazards - lead to different GCD facility design requirements. Facility design alternatives considered for GCD include the augered shaft, deep trench, engineered structure, hydrofracture, improved waste form, and high-integrity container. Selection of an appropriate design must also consider the interplay between basic risk limits for protection of public health and safety, performance characteristics and objectives, costs, waste-acceptance criteria, waste characteristics, and site characteristics. This paper presents an overview of the factors that must be considered in planning the application of methods proposed for providing greater confinement of low-level wastes. 27 refs

  15. Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns

    DEFF Research Database (Denmark)

    Gonçalves, Sílvia; Hounyo, Ulrich; Meddahi, Nour

The main contribution of this paper is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias ... non-overlapping nature of the pre-averaged returns implies that these are asymptotically independent, but possibly heteroskedastic. This motivates the application of the wild bootstrap in this context. We provide a proof of the first order asymptotic validity of this method for percentile and percentile-t intervals. Our Monte Carlo simulations show that the wild bootstrap can improve the finite sample properties of the existing first order asymptotic theory provided we choose the external random variable appropriately. We use empirical work to illustrate its use in practice.

  16. Greater confinement disposal of radioactive wastes

    International Nuclear Information System (INIS)

    Trevorrow, L.E.; Gilbert, T.L.; Luner, C.; Merry-Libby, P.A.; Meshkov, N.K.; Yu, C.

    1985-01-01

    Low-level radioactive waste (LLW) includes a broad spectrum of different radionuclide concentrations, half-lives, and hazards. Standard shallow-land burial practice can provide adequate protection of public health and safety for most LLW. A small volume fraction (approx. 1%) containing most of the activity inventory (approx. 90%) requires specific measures known as greater-confinement disposal (GCD). Different site characteristics and different waste characteristics - such as high radionuclide concentrations, long radionuclide half-lives, high radionuclide mobility, and physical or chemical characteristics that present exceptional hazards - lead to different GCD facility design requirements. Facility design alternatives considered for GCD include the augered shaft, deep trench, engineered structure, hydrofracture, improved waste form, and high-integrity container. Selection of an appropriate design must also consider the interplay between basic risk limits for protection of public health and safety, performance characteristics and objectives, costs, waste-acceptance criteria, waste characteristics, and site characteristics

  17. Call to action: Better care, better health, and greater value in college health.

    Science.gov (United States)

    Ciotoli, Carlo; Smith, Allison J; Keeling, Richard P

    2018-03-05

    It is time for action by leaders across higher education to strengthen quality improvement (QI) in college health, in pursuit of better care, better health, and increased value - goals closely linked to students' learning and success. The size and importance of the college student population; the connections between wellbeing, and therefore QI, and student success; the need for improved standards and greater accountability; and the positive contributions of QI to employee satisfaction and professionalism all warrant a widespread commitment to building greater capacity and capability for QI in college health. This report aims to inspire, motivate, and challenge college health professionals and their colleagues, campus leaders, and national entities to take both immediate and sustainable steps to bring QI to the forefront of college health practice - and, by doing so, to elevate care, health, and value of college health as a key pathway to advancing student success.

  18. Vapour cloud explosion hazard greater with light feedstocks

    Energy Technology Data Exchange (ETDEWEB)

    Windebank, C.S.

    1980-03-03

Because lighter chemical feedstocks such as propylene and butylenes are more reactive than LPG's, they pose a greater risk of vapor cloud explosion, particularly during their transport. According to C.S. Windebank (Insurance Tech. Bur.), percussive unconfined vapor cloud explosions (PUVCE's) do not usually occur below the ten-ton threshold for saturated hydrocarbons but can occur well below this threshold in the case of unsaturated hydrocarbons such as propylene and butylenes. Boiling liquid expanding vapor explosions (BLEVE's) are more likely to be "hot" (i.e., the original explosion is associated with fire) than "cold" in the case of unsaturated hydrocarbons. No PUVCE or BLEVE incident has been reported in the UK. In the US, 16 out of 20 incidents recorded between 1970 and 1975 were related to chemical feedstocks, including propylene and butylenes, and only 4 were LPG-related. The average losses were $20 million per explosion. Between 1968 and 1978, 8% of LPG pipeline spillages led to explosions.

  19. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  20. Greater physician involvement improves coding outcomes in endobronchial ultrasound-guided transbronchial needle aspiration procedures.

    Science.gov (United States)

    Pillai, Anilkumar; Medford, Andrew R L

    2013-01-01

    Correct coding is essential for accurate reimbursement for clinical activity. Published data confirm that significant aberrations in coding occur, leading to considerable financial inaccuracies especially in interventional procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). Previous data reported a 15% coding error for EBUS-TBNA in a U.K. service. We hypothesised that greater physician involvement with coders would reduce EBUS-TBNA coding errors and financial disparity. The study was done as a prospective cohort study in the tertiary EBUS-TBNA service in Bristol. 165 consecutive patients between October 2009 and March 2012 underwent EBUS-TBNA for evaluation of unexplained mediastinal adenopathy on computed tomography. The chief coder was prospectively electronically informed of all procedures and cross-checked on a prospective database and by Trust Informatics. Cost and coding analysis was performed using the 2010-2011 tariffs. All 165 procedures (100%) were coded correctly as verified by Trust Informatics. This compares favourably with the 14.4% coding inaccuracy rate for EBUS-TBNA in a previous U.K. prospective cohort study [odds ratio 201.1 (1.1-357.5), p = 0.006]. Projected income loss was GBP 40,000 per year in the previous study, compared to a GBP 492,195 income here with no coding-attributable loss in revenue. Greater physician engagement with coders prevents coding errors and financial losses which can be significant especially in interventional specialties. The intervention can be as cheap, quick and simple as a prospective email to the coding team with cross-checks by Trust Informatics and against a procedural database. We suggest that all specialties should engage more with their coders using such a simple intervention to prevent revenue losses. Copyright © 2013 S. Karger AG, Basel.

  1. Participant characteristics associated with greater reductions in waist circumference during a four-month, pedometer-based, workplace health program.

    Science.gov (United States)

    Freak-Poli, Rosanne L A; Wolfe, Rory; Walls, Helen; Backholer, Kathryn; Peeters, Anna

    2011-10-25

    Workplace health programs have demonstrated improvements in a number of risk factors for chronic disease. However, there has been little investigation of participant characteristics that may be associated with change in risk factors during such programs. The aim of this paper is to identify participant characteristics associated with improved waist circumference (WC) following participation in a four-month, pedometer-based, physical activity, workplace health program. 762 adults employed in primarily sedentary occupations and voluntarily enrolled in a four-month workplace program aimed at increasing physical activity were recruited from ten Australian worksites in 2008. Seventy-nine percent returned at the end of the health program. Data included demographic, behavioural, anthropometric and biomedical measurements. WC change (before versus after) was assessed by multivariable linear and logistic regression analyses. Seven groupings of potential associated variables from baseline were sequentially added to build progressively larger regression models. Greater improvement in WC during the program was associated with having completed tertiary education, consuming two or less standard alcoholic beverages in one occasion in the twelve months prior to baseline, undertaking less baseline weekend sitting time and lower baseline total cholesterol. A greater WC at baseline was strongly associated with a greater improvement in WC. A sub-analysis in participants with a 'high-risk' baseline WC revealed that younger age, enrolling for reasons other than appearance, undertaking less weekend sitting time at baseline, eating two or more pieces of fruit per day at baseline, higher baseline physical functioning and lower baseline body mass index were associated with greater odds of moving to 'low risk' WC at the end of the program. While employees with 'high-risk' WC at baseline experienced the greatest improvements in WC, the other variables associated with greater WC improvement

  2. Greater-confinement disposal

    International Nuclear Information System (INIS)

    Trevorrow, L.E.; Schubert, J.P.

    1989-01-01

    Greater-confinement disposal (GCD) is a general term for low-level waste (LLW) disposal technologies that employ natural and/or engineered barriers and provide a degree of confinement greater than that of shallow-land burial (SLB) but possibly less than that of a geologic repository. Thus GCD is associated with lower risk/hazard ratios than SLB. Although any number of disposal technologies might satisfy the definition of GCD, eight have been selected for consideration in this discussion. These technologies include: (1) earth-covered tumuli, (2) concrete structures, both above and below grade, (3) deep trenches, (4) augered shafts, (5) rock cavities, (6) abandoned mines, (7) high-integrity containers, and (8) hydrofracture. Each of these technologies employs several operations that are mature; however, some are at more advanced stages of development and demonstration than others. Each is defined and further described by information on design, advantages and disadvantages, special equipment requirements, and characteristic operations such as construction, waste emplacement, and closure.

  3. Remotely Sensed Estimation of Net Primary Productivity (NPP) and Its Spatial and Temporal Variations in the Greater Khingan Mountain Region, China

    Directory of Open Access Journals (Sweden)

    Qiang Zhu

    2017-07-01

    Full Text Available We improved the CASA model based on differences in the types of land use, the values of the maximum light use efficiency, and the calculation methods of solar radiation. Then, the parameters of the model were examined and recombined into 16 cases. We estimated the net primary productivity (NPP using the NDVI3g dataset, meteorological data, and vegetation classification data from the Greater Khingan Mountain region, China. We assessed the accuracy and temporal-spatial distribution characteristics of NPP in the Greater Khingan Mountain region from 1982 to 2013. Based on a comparison of the results of the 16 cases, we found that different values of maximum light use efficiency affect the estimation more than differences in the fraction of photosynthetically active radiation (FPAR. However, the FPARmax and the constant Tε2 values did not show marked effects. Different schemes were used to assess different model combinations. Models using a combination of parameters established by scholars from China and the United States produced different results and had large errors. These ideas are meaningful references for the estimation of NPP in other regions. The results reveal that the annual average NPP in the Greater Khingan Mountain region was 760 g C/m2·a in 1982–2013 and that the inter-annual fluctuations were not dramatic. The NPP estimation results of the 16 cases exhibit an increasing trend. In terms of the spatial distribution of the changes, the model indicated that the values in 75% of this area seldom or never increased. Prominent growth occurred in the areas of Taipingling, Genhe, and the Oroqen Autonomous Banner. Notably, NPP decreased in the southeastern region of the Greater Khingan Mountains, the Hulunbuir Pasture Land, and Holingol.

  4. Zonally averaged chemical-dynamical model of the lower thermosphere

    International Nuclear Information System (INIS)

    Kasting, J.F.; Roble, R.G.

    1981-01-01

    A zonally averaged numerical model of the thermosphere is used to examine the coupling between neutral composition, including N2, O2, and O, temperature, and winds at solstice for solar minimum conditions. The meridional circulation forced by solar heating results in a summer-to-winter flow, with a winter enhancement in atomic oxygen density that is a factor of about 1.8 greater than that of the summer hemisphere at 160 km. The O2 and N2 variations are associated with a latitudinal gradient in total number density, which is required to achieve pressure balance in the presence of large zonal jets. Latitudinal profiles of OI (5577 Å) green line emission intensity are calculated by using both Chapman and Barth mechanisms. Composition of the lower thermosphere is shown to be strongly influenced by circulation patterns initiated in the stratosphere and lower mesosphere, below the lower boundary used in the model.

  5. Strontium isotopic geochemistry of intrusive rocks, Puerto Rico, Greater Antilles

    International Nuclear Information System (INIS)

    Jones, L.M.; Kesler, S.E.

    1980-01-01

    The strontium isotope geochemistry is given for three Puerto Rican intrusive rocks: the granodioritic Morovis and San Lorenzo plutons and the Rio Blanco stock of quartz dioritic composition. The average calculated initial 87Sr/86Sr ratios are 0.70370, 0.70355, and 0.70408, respectively. In addition, the San Lorenzo data establish a whole-rock isochron of 71 ± 2 m.y., which agrees with the previously reported K-Ar age of 73 m.y. Similarity of most of the intrusive rocks in the Greater Antilles with respect to their strontium isotopic geochemistry, regardless of their major element composition, indicates that intrusive magmas with a wide range of composition can be derived from a single source material. The most likely source material, in view of the available isotopic data, is the mantle wedge overlying the subduction zone. (orig.)

  6. Application of NMR circuit for superconducting magnet using signal averaging

    International Nuclear Information System (INIS)

    Yamada, R.; Ishimoto, H.; Shea, M.F.; Schmidt, E.E.; Borer, K.

    1977-01-01

    An NMR circuit was used to measure the absolute field values of Fermilab Energy Doubler magnets up to 44 kG. A signal averaging method to improve the S/N ratio was implemented by means of a Tektronix Digital Processing Oscilloscope, followed by the development of an inexpensive microprocessor based system contained in a NIM module. Some of the data obtained from measuring two superconducting dipole magnets are presented

  7. Predicting Greater Prairie-Chicken Lek Site Suitability to Inform Conservation Actions.

    Directory of Open Access Journals (Sweden)

    Torre J Hovick

    Full Text Available The demands of a growing human population dictate that expansion of energy infrastructure, roads, and other development frequently takes place in native rangelands. Particularly, transmission lines and roads commonly divide rural landscapes and increase fragmentation. This has direct and indirect consequences on native wildlife that can be mitigated through thoughtful planning and proactive approaches to identifying areas of high conservation priority. We used nine years (2003-2011) of Greater Prairie-Chicken (Tympanuchus cupido) lek locations totaling 870 unique lek sites in Kansas and seven geographic information system (GIS) layers describing land cover, topography, and anthropogenic structures to model habitat suitability across the state. The models obtained had low omission rates (0.81), indicating high model performance and reliability of predicted habitat suitability for Greater Prairie-Chickens. We found that elevation was the most influential in predicting lek locations, contributing three times more predictive power than any other variable. However, models were improved by the addition of land cover and anthropogenic features (transmission lines, roads, and oil and gas structures). Overall, our analysis provides a hierarchal understanding of Greater Prairie-Chicken habitat suitability that is broadly based on geomorphological features followed by land cover suitability. We found that when land features and vegetation cover are suitable for Greater Prairie-Chickens, fragmentation by anthropogenic sources such as roadways and transmission lines is a concern. Therefore, it is our recommendation that future human development in Kansas avoid areas that our models identified as highly suitable for Greater Prairie-Chickens and focus development on land cover types that are of lower conservation concern.

  8. Space-Varying Iterative Restoration of Diffuse Optical Tomograms Reconstructed by the Photon Average Trajectories Method

    Directory of Open Access Journals (Sweden)

    Kravtsenyuk Olga V

    2007-01-01

    Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT method is substantiated. The PAT method recently presented by us is based on a concept of an average statistical trajectory for transfer of light energy, the photon average trajectory (PAT. The inverse problem of diffuse optical tomography is reduced to a solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs the images blurred due to averaging over spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for the different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a gain in spatial resolution can be obtained.

  9. Space-Varying Iterative Restoration of Diffuse Optical Tomograms Reconstructed by the Photon Average Trajectories Method

    Directory of Open Access Journals (Sweden)

    Vladimir V. Lyubimov

    2007-01-01

    Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT method is substantiated. The PAT method recently presented by us is based on a concept of an average statistical trajectory for transfer of light energy, the photon average trajectory (PAT. The inverse problem of diffuse optical tomography is reduced to a solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs the images blurred due to averaging over spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for the different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a 27% gain in spatial resolution can be obtained.

  10. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  11. Weak average persistence and extinction of a predator-prey system in a polluted environment with impulsive toxicant input

    International Nuclear Information System (INIS)

    Yang Xiaofeng; Jin Zhen; Xue Yakui

    2007-01-01

    In this paper, we have investigated a predator-prey system in a polluted environment with impulsive toxicant input at fixed moments. We have obtained two thresholds on the impulsive period by assuming the toxicant amount input to the environment is fixed at each pulse moment. If the impulsive period is greater than the big threshold, then both populations are weak average persistent. If the period lies between the two thresholds, then the prey population will be weak average persistent while the predator population goes extinct. If the period is less than the small threshold, both populations tend to extinction. Finally, our theoretical results are confirmed by our numerical simulations.

  12. Rumination time around calving: an early signal to detect cows at greater risk of disease.

    Science.gov (United States)

    Calamari, L; Soriani, N; Panella, G; Petrera, F; Minuti, A; Trevisi, E

    2014-01-01

    The main objective of this experiment was to evaluate the use of rumination time (RT) during the peripartum period as a tool for early disease detection. The study was carried out in an experimental freestall barn and involved 23 Italian Friesian cows (9 primiparous and 14 multiparous). The RT was continuously recorded by using an automatic system (Hr-Tag, SCR Engineers Ltd., Netanya, Israel), and data were summarized in 2-h intervals. Blood samples were collected from 30 d before calving to 42 d in milk (DIM) to assess biochemical indicators related to energy, protein, and mineral metabolism, as well as markers of inflammation and some enzyme activities. The liver functionality index, which includes some negative acute-phase proteins and related parameters (albumin, cholesterol, and bilirubin), was used to evaluate the severity of inflammatory conditions occurring around calving. The cows were retrospectively categorized according to RT observed between 3 and 6 DIM into those with the lowest (L) and highest (H) RT. The average RT before calving (-20 to -2d) was 479 min/d (range 264 to 599), reached a minimum value at calving (30% of RT before calving), and was nearly stable after 15 DIM (on average 452 min/d). Milk yield in early lactation (on average 26.8 kg/d) was positively correlated with RT (r = 0.33). After calving, compared with H cows, the L cows had higher values of haptoglobin (0.61 and 0.34 g/L at 10 DIM in L and H, respectively) for a longer time, had a greater increase in total bilirubin (9.5 and 5.7 μmol/L at 5 DIM in L and H), had greater reductions of albumin (31.2 and 33.5 g/L at 10 DIM in L and H) and paraoxonase (54 and 76 U/ml at 10 DIM in L and H), and had a slower increase of total cholesterol (2.7 and 3.2 mmol/L at 20 DIM in L and H). Furthermore, a lower average value of liver functionality index was observed in L (-6.97) compared with H (-1.91) cows. These results suggest that severe inflammation around parturition is associated with a

  13. Average cost per person victimized by an intimate partner of the opposite gender: a comparison of men and women.

    Science.gov (United States)

    Arias, Ileana; Corso, Phaedra

    2005-08-01

    Differences in prevalence, injury, and utilization of services between female and male victims of intimate partner violence (IPV) have been noted. However, there are no studies indicating approximate costs of men's IPV victimization. This study explored gender differences in service utilization for physical IPV injuries and average cost per person victimized by an intimate partner of the opposite gender. Significantly more women than men reported physical IPV victimization and related injuries. A greater proportion of women than men reported seeking mental health services and reported more visits on average in response to physical IPV victimization. Women were more likely than men to report using emergency department, inpatient hospital, and physician services, and were more likely than men to take time off from work and from childcare or household duties because of their injuries. The total average per person cost for women experiencing at least one physical IPV victimization was more than twice the average per person cost for men.

  14. Participant characteristics associated with greater reductions in waist circumference during a four-month, pedometer-based, workplace health program

    Directory of Open Access Journals (Sweden)

    Freak-Poli Rosanne LA

    2011-10-01

    Full Text Available Abstract Background Workplace health programs have demonstrated improvements in a number of risk factors for chronic disease. However, there has been little investigation of participant characteristics that may be associated with change in risk factors during such programs. The aim of this paper is to identify participant characteristics associated with improved waist circumference (WC) following participation in a four-month, pedometer-based, physical activity, workplace health program. Methods 762 adults employed in primarily sedentary occupations and voluntarily enrolled in a four-month workplace program aimed at increasing physical activity were recruited from ten Australian worksites in 2008. Seventy-nine percent returned at the end of the health program. Data included demographic, behavioural, anthropometric and biomedical measurements. WC change (before versus after) was assessed by multivariable linear and logistic regression analyses. Seven groupings of potential associated variables from baseline were sequentially added to build progressively larger regression models. Results Greater improvement in WC during the program was associated with having completed tertiary education, consuming two or less standard alcoholic beverages in one occasion in the twelve months prior to baseline, undertaking less baseline weekend sitting time and lower baseline total cholesterol. A greater WC at baseline was strongly associated with a greater improvement in WC. A sub-analysis in participants with a 'high-risk' baseline WC revealed that younger age, enrolling for reasons other than appearance, undertaking less weekend sitting time at baseline, eating two or more pieces of fruit per day at baseline, higher baseline physical functioning and lower baseline body mass index were associated with greater odds of moving to 'low risk' WC at the end of the program. 
Conclusions While employees with 'high-risk' WC at baseline experienced the greatest improvements in

  15. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

    Full Text Available To maintain safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Nonhealthy sensors can badly influence the estimation result of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect any anomaly of sensor readings among the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA is used to weigh redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, so-called trend consistency (TC), to include a consideration of the preserving of any characteristic edge that reflects the behavior of equipment/component measured by the process parameter; the second approach proposes replacing the error bound/accuracy based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third approach proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify a faulty sensor(s) due to a long and continuous missing data range, and (3) identify a healthy sensor. 
Keywords: Nuclear Reactors
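
    As a purely illustrative sketch of consistency-weighted averaging in the spirit of PSA (the overlap test, readings, and error bounds below are hypothetical simplifications, not the article's actual algorithm):

    ```python
    def consistent_weighted_average(readings, bounds):
        """Simplified parity-space-style averaging: each sensor's weight is
        the number of other sensors whose error bands overlap its own
        (a stand-in for the consistency factor C), so a drifted sensor
        whose band overlaps no one else's gets zero influence."""
        n = len(readings)
        weights = []
        for i in range(n):
            c = sum(1 for j in range(n)
                    if i != j and abs(readings[i] - readings[j]) <= bounds[i] + bounds[j])
            weights.append(c)
        total = sum(weights)
        if total == 0:  # no consistent pair: fall back to the plain mean
            return sum(readings) / n
        return sum(w * r for w, r in zip(weights, readings)) / total

    # four redundant pressure transmitters; the last one has drifted
    readings = [10.02, 9.98, 10.01, 12.5]
    bounds = [0.1, 0.1, 0.1, 0.1]
    estimate = consistent_weighted_average(readings, bounds)
    ```

    With these numbers the drifted transmitter receives zero weight, so the estimate is simply the mean of the three consistent readings.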

  16. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single sites' wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single site wind speeds and single site wind power production as input. 
This solves the problem with longer consecutive periods where the input data
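
    The BMA predictive PDF described above — a skill-weighted mixture of the members' PDFs — can be sketched minimally as follows (the three-member ensemble, Gaussian member densities, and weights are hypothetical; the cited work uses more elaborate component distributions):

    ```python
    import math

    def normal_pdf(x, mu, sigma):
        """Density of a Gaussian with mean mu and std dev sigma."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def bma_pdf(x, means, sigmas, weights):
        """BMA predictive density: a weighted mixture of the ensemble
        members' densities; the weights reflect each member's skill
        over a training period and sum to one."""
        return sum(w * normal_pdf(x, mu, s)
                   for w, mu, s in zip(weights, means, sigmas))

    # hypothetical 3-member wind-speed ensemble (m/s)
    means = [6.0, 7.5, 5.0]
    sigmas = [1.0, 1.2, 0.9]
    weights = [0.5, 0.3, 0.2]  # posterior skill weights, summing to 1
    density_at_6 = bma_pdf(6.0, means, sigmas, weights)
    ```

    Because the weights sum to one, the mixture is itself a proper probability density, which is what makes the group forecast probabilistic rather than a point estimate.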

  17. Olympic weightlifting and plyometric training with children provides similar or greater performance improvements than traditional resistance training.

    Science.gov (United States)

    Chaouachi, Anis; Hammami, Raouf; Kaabi, Sofiene; Chamari, Karim; Drinkwater, Eric J; Behm, David G

    2014-06-01

    A number of organizations recommend that advanced resistance training (RT) techniques can be implemented with children. The objective of this study was to evaluate the effectiveness of Olympic-style weightlifting (OWL), plyometrics, and traditional RT programs with children. Sixty-three children (10-12 years) were randomly allocated to a 12-week control, OWL, plyometric, or traditional RT program. Pre- and post-training tests included body mass index (BMI), sum of skinfolds, countermovement jump (CMJ), horizontal jump, balance, 5- and 20-m sprint times, and isokinetic force and power at 60 and 300°·s⁻¹. Magnitude-based inferences were used to analyze the likelihood of an effect having a standardized (Cohen's) effect size exceeding 0.20. All interventions were generally superior to the control group. Olympic weightlifting was >80% likely to provide substantially better improvements than plyometric training for CMJ, horizontal jump, and 5- and 20-m sprint times, whereas it was >75% likely to substantially exceed traditional RT for balance and isokinetic power at 300°·s⁻¹. Plyometric training was >78% likely to elicit substantially better training adaptations than traditional RT for balance, isokinetic force at 60 and 300°·s⁻¹, isokinetic power at 300°·s⁻¹, and 5- and 20-m sprints. Traditional RT only exceeded plyometric training for BMI and isokinetic power at 60°·s⁻¹. Hence, OWL and plyometrics can provide similar or greater performance adaptations for children. It is recommended that any of the 3 training modalities can be implemented under professional supervision with proper training progressions to enhance training adaptations in children.

  18. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  19. Efficacy of spatial averaging of infrasonic pressure in varying wind speeds

    International Nuclear Information System (INIS)

    DeWolf, Scott; Walker, Kristoffer T.; Zumberge, Mark A.; Denis, Stephane

    2013-01-01

    Wind noise reduction (WNR) is important in the measurement of infrasound. Spatial averaging theory led to the development of rosette pipe arrays. The efficacy of rosettes decreases with increasing wind speed and only provides a maximum of 20 dB WNR due to a maximum size limitation. An Optical Fiber Infrasound Sensor (OFIS) reduces wind noise by instantaneously averaging infrasound along the sensor's length. In this study two experiments quantify the WNR achieved by rosettes and OFISs of various sizes and configurations. Specifically, it is shown that the WNR for a circular OFIS 18 m in diameter is the same as that of a collocated 32-inlet pipe array of the same diameter. However, linear OFISs ranging in length from 30 to 270 m provide a WNR of up to 30 dB in winds up to 5 m/s. The measured WNR is a logarithmic function of the OFIS length and depends on the orientation of the OFIS with respect to wind direction. OFISs oriented parallel to the wind direction achieve 4 dB greater WNR than those oriented perpendicular to the wind. Analytical models for the rosette and OFIS are developed that predict the general observed relationships between wind noise reduction, frequency, and wind speed. (authors)

  20. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables
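
    As a trivially simplified illustration of the quantity being evaluated (not of the Porter-Thomas fitting methods the review covers), the naive average spacing of an observed level sequence is just the mean nearest-neighbor gap; the energies below are hypothetical:

    ```python
    def average_level_spacing(levels):
        """Naive estimator: mean nearest-neighbor spacing of a sorted
        sequence of resonance energies. Ignores missed levels and
        parity mixing, which the reviewed methods correct for."""
        s = sorted(levels)
        gaps = [b - a for a, b in zip(s, s[1:])]
        return sum(gaps) / len(gaps)

    # hypothetical resonance energies (eV)
    energies = [1.2, 3.9, 6.1, 9.0, 12.2]
    d_avg = average_level_spacing(energies)
    ```

    The mean gap telescopes to (max − min)/(n − 1), which is exactly why missed levels bias this naive estimate upward and motivate the fitting approaches discussed in the record.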

  1. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
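    For context, the standard pairwise gossip primitive that geographic gossip improves upon can be sketched as follows; the ring topology, node values, and round count are illustrative:

    ```python
    import random

    def randomized_gossip(values, edges, rounds=20000, seed=0):
        """Standard pairwise gossip: repeatedly pick a random edge (i, j) and
        replace both endpoint values with their average.  The sum is preserved
        at every step, so all nodes converge to the global mean."""
        rng = random.Random(seed)
        x = list(values)
        for _ in range(rounds):
            i, j = rng.choice(edges)
            x[i] = x[j] = (x[i] + x[j]) / 2.0
        return x

    # A ring of 20 nodes: a topology on which standard gossip mixes slowly,
    # motivating the geographic variant described in the abstract.
    n = 20
    edges = [(i, (i + 1) % n) for i in range(n)]
    values = [float(i) for i in range(n)]   # true average = 9.5
    result = randomized_gossip(values, edges)
    ```

    Each exchange conserves the sum, so every node drifts toward the true average of 9.5; on a ring this takes many rounds, which is exactly the inefficiency the geographic scheme targets.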

  2. Seasonal Habitat Use by Greater Sage-Grouse (Centrocercus urophasianus) on a Landscape with Low Density Oil and Gas Development.

    Science.gov (United States)

    Rice, Mindy B; Rossi, Liza G; Apa, Anthony D

    2016-01-01

    Fragmentation of the sagebrush (Artemisia spp.) ecosystem has led to concern about a variety of sagebrush obligates, including the greater sage-grouse (Centrocercus urophasianus). Given the increase of energy development within greater sage-grouse habitats, mapping seasonal habitats in pre-development populations is critical. The North Park population in Colorado is one of the largest and most stable in the state and provides a unique case study for investigating resource selection at a relatively low level of energy development compared to other populations both within and outside the state. We used locations from 117 radio-marked female greater sage-grouse in North Park, Colorado to develop seasonal resource selection models. We then added energy development variables to the base models at both a landscape and a local scale to determine if energy variables improved the fit of the seasonal models. The base models for breeding and winter resource selection predicted greater use in large expanses of sagebrush, whereas the base summer model predicted greater use along the edge of riparian areas. Energy development variables did not improve the winter or the summer models at either scale of analysis, but distance to oil/gas roads slightly improved model fit at both scales in the breeding season, albeit in opposite ways: at the landscape scale, greater sage-grouse were closer to oil/gas roads during the breeding season, whereas at the local scale they were farther from them. Although we found limited effects from low-level energy development in the breeding season, the scale of analysis can influence the interpretation of effects. The lack of strong effects may indicate that energy development at current levels is not impacting greater sage-grouse in North Park. Our baseline seasonal resource selection maps can be used for conservation to help identify ways of minimizing the effects of energy development.

  3. The Chicken Soup Effect: The Role of Recreation and Intramural Participation in Boosting Freshman Grade Point Average

    Science.gov (United States)

    Gibbison, Godfrey A.; Henry, Tracyann L.; Perkins-Brown, Jayne

    2011-01-01

    Freshman grade point average, in particular first semester grade point average, is an important predictor of survival and eventual student success in college. As many institutions of higher learning are searching for ways to improve student success, one would hope that policies geared towards the success of freshmen have long term benefits…

  4. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    Science.gov (United States)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer; hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit algebraic stress model.
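    The autocorrelation check mentioned in the abstract can be sketched generically; the biased sample estimator below is a textbook form, not the study's exact implementation:

    ```python
    import math

    def autocorrelation(x, max_lag):
        """Biased sample autocorrelation of a mean-removed signal.  The lag at
        which it decays to zero indicates how many effectively independent
        samples a time average contains, which is how the statistical
        convergence of scale-resolving statistics is commonly judged."""
        n = len(x)
        mu = sum(x) / n
        d = [v - mu for v in x]
        var = sum(v * v for v in d) / n
        return [sum(d[i] * d[i + k] for i in range(n - k)) / (n * var)
                for k in range(max_lag + 1)]

    # A slowly varying signal stays correlated over several lags.
    signal = [math.sin(0.3 * i) for i in range(200)]
    r = autocorrelation(signal, 10)
    ```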

  5. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  6. [Clinical Results of Endoscopic Treatment of Greater Trochanteric Pain Syndrome].

    Science.gov (United States)

    Zeman, P; Rafi, M; Skala, P; Zeman, J; Matějka, J; Pavelka, T

    2017-01-01

    PURPOSE OF THE STUDY This retrospective study aims to present short-term clinical outcomes of endoscopic treatment of patients with greater trochanteric pain syndrome (GTPS). MATERIAL AND METHODS The evaluated study population was composed of a total of 19 patients (16 women, 3 men) with a mean age of 47 years (19-63 years). In twelve cases the right hip joint was affected; in the remaining seven cases, the left. The retrospective evaluation was carried out only in patients with greater trochanteric pain syndrome caused by independent chronic trochanteric bursitis without the presence of an m. gluteus medius tear, not responding to at least 3 months of conservative treatment. In patients from the followed-up study population, endoscopic trochanteric bursectomy was performed alone or in combination with iliotibial band release. The clinical results were evaluated preoperatively and with a minimum follow-up period of 1 year after the surgery (mean 16 months). The Visual Analogue Scale (VAS) for assessment of pain and the WOMAC (Western Ontario and McMaster Universities) score were used. In both the evaluated criteria (VAS and WOMAC score), preoperative and postoperative results were compared. Moreover, duration of surgery and presence of postoperative complications were assessed. Statistical evaluation of clinical results was carried out by an independent statistician. In order to compare the parameters of WOMAC score and VAS pre- and post-operatively, the Mann-Whitney exact test was used. The statistical significance was set at 0.05. RESULTS The preoperative VAS score ranged 5-9 (mean 7.6) and the postoperative VAS ranged 0-5 (mean 2.3). The WOMAC score ranged 56.3-69.7 (mean 64.2) preoperatively and 79.8-98.3 (mean 89.7) postoperatively. When both the evaluated parameters of VAS and WOMAC score were compared over time, a statistically significant improvement (p < 0.05) was found. CONCLUSIONS Endoscopic treatment of greater trochanteric pain syndrome yields statistically significant improvement of clinical results with a concurrent minimum incidence of complications.

  7. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averaging is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions under the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
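    The core arithmetic of ERF (average the forcings once, then run the model once) can be sketched with a toy stand-in for the RCM. The `rcm` function and field values below are hypothetical; a real RCM is nonlinear, which is why ERF only approximates the MME mean:

    ```python
    def average_ibcs(ibc_fields):
        """Grid-point average of initial/boundary-condition fields from
        several GCMs: the single ensemble-averaged forcing used by ERF."""
        n = len(ibc_fields)
        return [sum(vals) / n for vals in zip(*ibc_fields)]

    def rcm(ibc):
        """Hypothetical stand-in for an RCM integration (linear toy dynamics)."""
        return [v * 0.9 + 1.0 for v in ibc]

    gcm_a, gcm_b, gcm_c = [10.0, 12.0], [14.0, 10.0], [12.0, 14.0]
    erf_run = rcm(average_ibcs([gcm_a, gcm_b, gcm_c]))               # 1 RCM run
    mme_run = average_ibcs([rcm(g) for g in (gcm_a, gcm_b, gcm_c)])  # 3 RCM runs
    ```

    With this linear toy the two results coincide exactly while ERF uses a third of the simulation cost; nonlinearity in a real RCM is what makes ERF an approximation rather than an identity.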

  8. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  9. Improving the Q:H strength ratio in women using plyometric exercises.

    Science.gov (United States)

    Tsang, Kavin K W; DiPasquale, Angela A

    2011-10-01

    Plyometric training programs have been implemented in anterior cruciate ligament injury prevention programs. Plyometric exercises are designed to aid in the improvement of muscle strength and neuromuscular control. Our purpose was to examine the effects of plyometric training on lower leg strength in women. Thirty (age = 20.3 ± 1.9 years) recreationally active women were divided into control and experimental groups. The experimental group performed a plyometric training program for 6 weeks, 3 d·wk⁻¹. All subjects attended 4 testing sessions: before the start of the training program and after weeks 2, 4, and 6. Concentric quadriceps and hamstring strength (dominant leg) was assessed using an isokinetic dynamometer at speeds of 60 and 120°·s⁻¹. Peak torque, average peak torque, and average power (AvgPower) were measured. The results revealed significantly (p < 0.05) greater hamstring strength in the plyometric group than in the control group at testing session 4, and AvgPower was greater in the plyometric group than in the control group in testing sessions 2-4. Our results indicate that the plyometric training program increased hamstring strength while maintaining quadriceps strength, thereby improving the Q:H strength ratio.

  10. Average Case Analysis of Java 7's Dual Pivot Quicksort

    OpenAIRE

    Wild, Sebastian; Nebel, Markus E.

    2013-01-01

    Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting method for Oracle's Java 7 runtime library. The decision for the change was based on empirical studies showing that on average, the new algorithm is faster than the formerly used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot approach, an idea that was considered not promising by several theoretical studies in the past. In this paper, we identify the reason for this unexpe...
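    A compact sketch of the dual-pivot partitioning scheme, simplified relative to Yaroslavskiy's tuned Java implementation (which adds insertion-sort cutoffs and pivot selection heuristics): two pivots split the array into three ranges that are sorted recursively.

    ```python
    def dual_pivot_quicksort(a, lo=0, hi=None):
        """Dual-pivot quicksort sketch: pivots p <= q partition a[lo..hi]
        into elements < p, elements in [p, q], and elements > q."""
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return a
        if a[lo] > a[hi]:
            a[lo], a[hi] = a[hi], a[lo]
        p, q = a[lo], a[hi]
        lt, gt, i = lo + 1, hi - 1, lo + 1
        while i <= gt:
            if a[i] < p:                      # grow the "< p" range
                a[i], a[lt] = a[lt], a[i]
                lt += 1
                i += 1
            elif a[i] > q:                    # grow the "> q" range
                a[i], a[gt] = a[gt], a[i]
                gt -= 1                       # swapped-in element not yet examined
            else:
                i += 1
        lt -= 1
        gt += 1
        a[lo], a[lt] = a[lt], a[lo]           # place pivot p
        a[hi], a[gt] = a[gt], a[hi]           # place pivot q
        dual_pivot_quicksort(a, lo, lt - 1)
        dual_pivot_quicksort(a, lt + 1, gt - 1)
        dual_pivot_quicksort(a, gt + 1, hi)
        return a

    print(dual_pivot_quicksort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
    ```

    The surprise the paper investigates is that this three-way split, dismissed by earlier theoretical analyses, outperforms classic single-pivot Quicksort in practice.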

  11. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
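    The "average" in this subpart is a production-weighted harmonic mean, not an arithmetic mean: fleet fuel economy is total production divided by total per-mile fuel consumption. A minimal sketch of that core computation, omitting the regulation's many adjustments and credits:

    ```python
    def fleet_average_mpg(volumes, mpgs):
        """Production-weighted harmonic mean: total vehicles divided by the
        total fuel consumed per mile, summed over models.  Low-mpg models
        drag the fleet average down more than an arithmetic mean would."""
        return sum(volumes) / sum(v / m for v, m in zip(volumes, mpgs))

    # Two models: 1000 cars at 20 mpg and 1000 cars at 40 mpg.
    print(round(fleet_average_mpg([1000, 1000], [20, 40]), 2))  # 26.67, not 30
    ```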

  12. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

    Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (‖x‖₂²) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
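    The paper's improved penalty is not reproduced here, but its motivation can be sketched: for a signal with a stable nonzero average (the DFS-SAV assumption), the classical penalty ‖x‖₂² shrinks the estimate toward zero, whereas penalizing deviation from a moving average of the signal does not. A toy illustration with an identity forward operator; the operators, window, and regularization weight are all illustrative:

    ```python
    def moving_average_matrix(n, w=3):
        """Row-stochastic moving-average operator S (window w, clipped at ends)."""
        S = [[0.0] * n for _ in range(n)]
        h = w // 2
        for i in range(n):
            window = [j for j in range(i - h, i + h + 1) if 0 <= j < n]
            for j in window:
                S[i][j] = 1.0 / len(window)
        return S

    def solve(M, b):
        """Gaussian elimination with partial pivoting for M x = b."""
        n = len(b)
        A = [row[:] + [b[i]] for i, row in enumerate(M)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[p] = A[p], A[c]
            for r in range(c + 1, n):
                f = A[r][c] / A[c][c]
                for k in range(c, n + 1):
                    A[r][k] -= f * A[c][k]
            # forward elimination leaves an upper-triangular system
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
        return x

    def tikhonov_ma(b, lam=5.0, w=3):
        """Minimize ||x - b||^2 + lam * ||(I - S) x||^2 (identity forward
        operator): the penalty pulls x toward its own moving average
        instead of toward zero."""
        n = len(b)
        S = moving_average_matrix(n, w)
        IS = [[(1.0 if i == j else 0.0) - S[i][j] for j in range(n)] for i in range(n)]
        M = [[(1.0 if i == j else 0.0)
              + lam * sum(IS[k][i] * IS[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
        return solve(M, b)

    b = [10.0] * 8                               # force signal with stable average 10
    x_ma = tikhonov_ma(b)                        # stays at 10: (I - S) b = 0
    x_classical = [v / (1.0 + 5.0) for v in b]   # ||x||^2 penalty shrinks toward 0
    ```

    A constant signal is left untouched by the moving-average penalty but is shrunk to 10/6 by the classical one, which is the bias the paper's improved penalty is designed to avoid.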

  13. Averaging, not internal noise, limits the development of coherent motion processing

    Directory of Open Access Journals (Sweden)

    Catherine Manning

    2014-10-01

    The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements) and/or global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s) and fast (6°/s) speeds to children aged 5, 7, 9 and 11 years, and adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5).
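    The equivalent-noise logic can be written down directly: the observed direction-discrimination threshold reflects internal plus external noise variance, divided by the number of local estimates pooled. A sketch with illustrative (not fitted) parameter values:

    ```python
    import math

    def threshold(sigma_int, sigma_ext, n_pooled):
        """Standard equivalent-noise model: the observed threshold combines
        internal and external noise variances, reduced by averaging over
        n_pooled local motion estimates."""
        return math.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_pooled)

    # Increasing pooling (n_pooled) lowers thresholds at every external noise
    # level; reducing internal noise helps mainly when external noise is low.
    child_like = threshold(8.0, 0.0, 2)   # fewer samples averaged
    adult_like = threshold(8.0, 0.0, 8)   # more samples averaged
    ```

    This separation is what lets the regression analyses attribute the developmental improvement to the pooling term rather than the internal-noise term.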

  14. Use of Processed Nerve Allografts to Repair Nerve Injuries Greater Than 25 mm in the Hand.

    Science.gov (United States)

    Rinker, Brian; Zoldos, Jozef; Weber, Renata V; Ko, Jason; Thayer, Wesley; Greenberg, Jeffrey; Leversedge, Fraser J; Safa, Bauback; Buncke, Gregory

    2017-06-01

    Processed nerve allografts (PNAs) have been demonstrated to have improved clinical results compared with hollow conduits for reconstruction of digital nerve gaps less than 25 mm; however, the use of PNAs for longer gaps warrants further clinical investigation. Long nerve gaps have been traditionally hard to study because of low incidence. The advent of the RANGER registry, a large, institutional review board-approved, active database for PNA (Avance Nerve Graft; AxoGen, Inc, Alachua, FL) has allowed evaluation of lower incidence subsets. The RANGER database was queried for digital nerve repairs of 25 mm or greater. Demographics, injury, treatment, and functional outcomes were recorded on standardized forms. Patients younger than 18 years and those lacking quantitative follow-up data were excluded. Recovery was graded according to the Medical Research Council Classification for sensory function, with meaningful recovery defined as the S3 or greater level. Fifty digital nerve injuries in 28 subjects were included. There were 22 male and 6 female subjects, and the mean age was 45. Three patients gave a previous history of diabetes, and there were 6 active smokers. The most commonly reported mechanisms of injury were saw injuries (n = 13), crushing injuries (n = 9), resection of neuroma (n = 9), amputation/avulsions (n = 8), sharp lacerations (n = 7), and blast/gunshots (n = 4). The average gap length was 35 ± 8 mm (range, 25-50 mm). Recovery to the S3 or greater level was reported in 86% of repairs. Static 2-point discrimination (s2PD) and Semmes-Weinstein monofilament (SWF) were the most commonly completed assessments. Mean s2PD in the 24 repairs reporting 2PD data was 9 ± 4 mm. For the 38 repairs with SWF data, protective sensation was reported in 33 repairs, deep pressure in 2, and no recovery in 3. These data compared favorably with historical data for nerve autograft repairs, with reported levels of meaningful recovery of 60% to 88%. There were no reported adverse effects.

  15. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.)

  16. Vibrations in force-and-mass disordered alloys in the average local-information transfer approximation. Application to Al-Ag

    International Nuclear Information System (INIS)

    Czachor, A.

    1979-01-01

    The configuration-averaged displacement-displacement Green's function, derived in the locator-based approximation accounting for average transfer of information on local coupling and mass, has been applied to study the force-and-mass-disorder induced modifications of phonon dispersion relations in substitutional alloys of cubic structures. In this approach the translational invariance condition is obeyed, whereas damping is neglected. Force disorder was found to lead to additional splitting of phonon curves beyond that due to mass disorder, even in the small impurity-concentration case; at larger concentrations the number of splits (frequency gaps) should be even greater. The use of a quasi-locator in the Green's function derivation allows one to partly reconcile the present results with those of the average t-matrix approximation. The experimentally observed splitting in the [100]T phonon dispersion curve for Al-Ag alloys has been interpreted in terms of the above theory and of a quasi-mass of heavy impurity atoms. (Author)

  17. Improving the quality of maternal and neonatal care: the role of standard based participatory assessments.

    Directory of Open Access Journals (Sweden)

    Giorgio Tamburlini

    BACKGROUND: Gaps in quality of care are seriously affecting maternal and neonatal health globally, but reports of successful quality improvement cycles implemented at large scale are scanty. We report the results of a nation-wide program to improve the quality of maternal and neonatal hospital care in a lower-middle income country, focusing on the role played by standard-based participatory assessments. METHODS: Improvements in the quality of maternal and neonatal care following an action-oriented participatory assessment of 19 areas covering the whole continuum from admission to discharge were measured after an average period of 10 months in four busy referral maternity hospitals in Uzbekistan. Information was collected by a multidisciplinary national team with international supervision through visits to hospital services, examination of medical records, direct observation of cases, and interviews with staff and mothers. Scores (range 0 to 3) attributed to over 400 items and combined in average scores for each area were compared with the baseline assessment. RESULTS: Between the first and the second assessment, all four hospitals improved their overall score by an average 0.7 points out of 3 (range 0.4 to 1), i.e. by 22%. The improvements occurred in all main areas of care and were greater in the care of normal labor and delivery (+0.9) and in monitoring, infection control, and mother and baby friendly care (+0.8). The role of the participatory action-oriented approach in determining the observed changes was estimated to be crucial in 6 out of 19 areas and contributory in another 8. Ongoing implementation of the referral system and of a new classification of neonatal deaths impedes the improved process of care from being reflected in current statistics. CONCLUSIONS: Important improvements in the quality of hospital care provided to mothers and newborn babies can be achieved through a standard-based, action-oriented, and participatory assessment and reassessment process.

  18. Gas, Oil, and Water Production from Jonah, Pinedale, Greater Wamsutter, and Stagecoach Draw Fields in the Greater Green River Basin, Wyoming

    Science.gov (United States)

    Nelson, Philip H.; Ewald, Shauna M.; Santus, Stephen L.; Trainor, Patrick K.

    2010-01-01

    Gas, oil, and water production data were compiled from selected wells in four gas fields in rocks of Late Cretaceous age in southwestern Wyoming. This study is one of a series of reports examining fluid production from tight-gas reservoirs, which are characterized by low permeability, low porosity, and the presence of clay minerals in pore space. Production from each well is represented by two samples spaced five years apart, the first sample typically taken two years after commencement of production. For each producing interval, summary diagrams of oil versus gas and water versus gas production show fluid production rates, the change in rates during five years, the water-gas and oil-gas ratios, and the fluid type. These diagrams permit well-to-well and field-to-field comparisons. Fields producing water at low rates (water dissolved in gas in the reservoir) can be distinguished from fields producing water at moderate or high rates, and the water-gas ratios are quantified. The ranges of first-sample gas rates in Pinedale field and Jonah field are quite similar, and the average gas production rate for the second sample, taken five years later, is about one-half that of the first sample for both fields. Water rates are generally substantially higher in Pinedale than in Jonah, and water-gas ratios are roughly a factor of ten greater in Pinedale than in Jonah. Gas and water production rates from each field are fairly well grouped, indicating that Pinedale and Jonah fields are fairly cohesive gas-water systems. Pinedale field appears to be remarkably uniform in its flow behavior with time. Jonah field, which is internally faulted, exhibits a small spread in first-sample production rates. In the Greater Wamsutter field, gas production from the upper part of the Almond Formation is greater than from the main part of the Almond. Some wells in the main and the combined (upper and main parts) Almond show increases in water production with time, whereas increases

  19. Land cover mapping of Greater Mesoamerica using MODIS data

    Science.gov (United States)

    Giri, Chandra; Jenkins, Clinton N.

    2005-01-01

    A new land cover database of Greater Mesoamerica has been prepared using moderate resolution imaging spectroradiometer (MODIS, 500 m resolution) satellite data. Daily surface reflectance MODIS data and a suite of ancillary data were used in preparing the database by employing a decision tree classification approach. The new land cover data are an improvement over traditional advanced very high resolution radiometer (AVHRR) based land cover data in terms of both spatial and thematic details. The dominant land cover type in Greater Mesoamerica is forest (39%), followed by shrubland (30%) and cropland (22%). Country analysis shows forest as the dominant land cover type in Belize (62%), Costa Rica (52%), Guatemala (53%), Honduras (56%), Nicaragua (53%), and Panama (48%), cropland as the dominant land cover type in El Salvador (60.5%), and shrubland as the dominant land cover type in Mexico (37%). A three-step approach was used to assess the quality of the classified land cover data: (i) qualitative assessment provided good insight in identifying and correcting gross errors; (ii) correlation analysis of MODIS- and Landsat-derived land cover data revealed strong positive association for forest (r2 = 0.88), shrubland (r2 = 0.75), and cropland (r2 = 0.97) but weak positive association for grassland (r2 = 0.26); and (iii) an error matrix generated using unseen training data provided an overall accuracy of 77.3% with a Kappa coefficient of 0.73608. Overall, MODIS 500 m data and the methodology used were found to be quite useful for broad-scale land cover mapping of Greater Mesoamerica.

  20. A Divergence Median-based Geometric Detector with A Weighted Averaging Filter

    Science.gov (United States)

    Hua, Xiaoqiang; Cheng, Yongqiang; Li, Yubo; Wang, Hongqiang; Qin, Yuliang

    2018-01-01

    To overcome the performance degradation of the classical fast Fourier transform (FFT)-based constant false alarm rate detector with limited sample data, a divergence median-based geometric detector on the Riemannian manifold of Hermitian positive definite matrices is proposed in this paper. In particular, an autocorrelation matrix is used to model the correlation of the sample data. This modeling avoids the poor Doppler resolution as well as the energy spread of the Doppler filter banks resulting from the FFT. Moreover, a weighted averaging filter, conceived from the philosophy of bilateral filtering in image denoising, is proposed and combined within the geometric detection framework. As the weighted averaging filter acts as clutter suppression, the performance of the geometric detector is improved. Numerical experiments are given to validate the effectiveness of the proposed method.
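    The bilateral-filtering idea borrowed from image denoising can be sketched on scalars; the paper applies it to divergences between Hermitian positive definite matrices, and the Gaussian kernels and parameters here are illustrative:

    ```python
    import math

    def bilateral_weighted_average(values, i, sigma_s=2.0, sigma_r=1.0):
        """Bilateral-style weighted average at index i: neighbors are weighted
        by both spatial proximity and value similarity, so sharp transitions
        (e.g. clutter edges) are preserved while noise is smoothed."""
        num = den = 0.0
        for j, v in enumerate(values):
            w = math.exp(-((j - i) ** 2) / (2 * sigma_s ** 2)
                         - ((v - values[i]) ** 2) / (2 * sigma_r ** 2))
            num += w * v
            den += w
        return num / den

    # A step signal: the filter smooths within each plateau without
    # blurring the step itself, unlike a plain moving average.
    step = [0.0, 0.0, 0.0, 10.0, 10.0, 10.0]
    filtered = [bilateral_weighted_average(step, i) for i in range(len(step))]
    ```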

  1. Greater Confinement Disposal Program at the Savannah River Plant

    International Nuclear Information System (INIS)

    Towler, O.A.; Cook, J.R.; Peterson, D.L.

    1983-01-01

    Plans for improved LLW disposal at the Savannah River Plant include Greater Confinement Disposal (GCD) for the higher activity fractions of this waste. GCD practices will include waste segregation, packaging, emplacement below the root zone, and stabilizing the emplacement with cement. Statistical review of SRP burial records showed that about 95% of the radioactivity is associated with only 5% of the waste volume. Trigger values determined in this study were compared with actual burials in 1982 to determine what GCD facilities would be needed for a demonstration to begin in Fall 1983. Facilities selected include 8-foot-diameter × 30-foot-deep boreholes to contain reactor scrap, tritiated waste, and selected wastes from offsite

  2. Impact of connected vehicle guidance information on network-wide average travel time

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2016-12-01

    With the emergence of connected vehicle technologies, the potential positive impact of connected vehicle guidance on mobility has become a research hotspot, enabled by data exchange among vehicles, infrastructure, and mobile devices. This study is focused on micro-modeling and quantitatively evaluating the impact of connected vehicle guidance on network-wide travel time by introducing various affecting factors. To evaluate the benefits of connected vehicle guidance, a simulation architecture based on one engine is proposed representing the connected vehicle-enabled virtual world, and a connected vehicle route guidance scenario is established through the development of a communication agent and intelligent transportation systems agents using the connected vehicle application programming interface, considering communication properties such as path loss and transmission power. The impact of connected vehicle guidance on network-wide travel time is analyzed by comparison with non-connected vehicle guidance under different market penetration rates, following rates, and congestion levels. The simulation results show that average network-wide travel time with connected vehicle guidance is significantly reduced relative to non-connected vehicle guidance, a decrease of 42.23%, and that average travel time variability (represented by the coefficient of variance) increases as the travel time increases. Other vital findings include that a higher penetration rate and following rate generate bigger savings of average network-wide travel time. The savings of average network-wide travel time increase from 17% to 38% according to different congestion levels, and for the same penetration rate or following rate the savings are more pronounced under more serious congestion.

  3. Greater autonomy at work

    NARCIS (Netherlands)

    Houtman, I.L.D.

    2004-01-01

    In the past 10 years, workers in the Netherlands have increasingly reported greater decision-making power in their work. This is important for an economy in recession, in which workers face greater work demands. Autonomy makes work more interesting, creates a healthier work environment, and provides opportunities

  4. Measurement of average density and relative volumes in a dispersed two-phase fluid

    Science.gov (United States)

    Sreepada, Sastry R.; Rippel, Robert R.

    1992-01-01

    An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density, and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser Doppler anemometry.

  5. Greater vertical spot spacing to improve femtosecond laser capsulotomy quality.

    Science.gov (United States)

    Schultz, Tim; Joachim, Stephanie C; Noristani, Rozina; Scott, Wendell; Dick, H Burkhard

    2017-03-01

    To evaluate the effect of adapted capsulotomy laser settings on cutting quality in femtosecond laser-assisted cataract surgery. Ruhr-University Eye Clinic, Bochum, Germany. Prospective randomized case series. Eyes were treated with 1 of 2 laser settings. In Group 1, the regular standard settings were used (incisional depth 600 μm, pulse energy 4 μJ, horizontal spot spacing 5 μm, vertical spot spacing 10 μm, treatment time 1.2 seconds). In Group 2, vertical spot spacing was increased to 15 μm and the treatment time was 1.0 seconds. Light microscopy was used to evaluate the cut quality of the capsule edge. The size and number of tags (misplaced laser spots, which form a second cut of the capsule with a high tear risk) were evaluated in a blinded manner. Groups were compared using the Mann-Whitney U test. The study comprised 100 eyes (50 eyes in each group). Cataract surgery was successfully completed in all eyes, and no anterior capsule tear occurred during treatment. Histologically, significantly fewer tags were observed with the new capsulotomy laser setting: the mean score for the number and size of free tags was significantly lower in this group than with the standard settings. The new laser settings thus improved cut quality and reduced the number of tags. The modification has the potential to reduce the risk of radial capsule tears in femtosecond laser-assisted cataract surgery. With the new settings, no tags and no capsule tears were observed under the operating microscope in any eye. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  6. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  7. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
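
    One of the families mentioned above, ordered weighted averaging (OWA), is simple to illustrate: the weights attach to positions in the sorted input rather than to particular arguments. A minimal sketch:

    ```python
    def owa(weights, values):
        """Ordered weighted average: the weights are applied to the values
        sorted in descending order, not to specific inputs."""
        if abs(sum(weights) - 1.0) > 1e-9:
            raise ValueError("OWA weights must sum to 1")
        ordered = sorted(values, reverse=True)
        return sum(w * v for w, v in zip(weights, ordered))

    # Special cases: weights (1,0,...,0) recover the maximum, (0,...,0,1)
    # the minimum, and uniform weights the arithmetic mean.
    print(owa([0.5, 0.3, 0.2], [0.2, 0.9, 0.6]))  # 0.5*0.9 + 0.3*0.6 + 0.2*0.2
    ```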

  8. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  9. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    International Nuclear Information System (INIS)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-01-01

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
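
    The structural idea above (the anisotropy tensor as a weighted sum over an invariant tensor basis) can be sketched with the first few of Pope's basis tensors. The combination weights below are placeholder constants standing in for the network outputs, so this illustrates the embedded constraint, not the paper's trained model:

    ```python
    import numpy as np

    # Normalized mean strain-rate (S, symmetric) and rotation-rate (W,
    # antisymmetric) tensors; illustrative values.
    S = np.array([[0.0, 0.5, 0.0], [0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])
    W = np.array([[0.0, 0.2, 0.0], [-0.2, 0.0, 0.0], [0.0, 0.0, 0.0]])

    I = np.eye(3)
    T1 = S                                       # first basis tensors of
    T2 = S @ W - W @ S                           # Pope's expansion
    T3 = S @ S - np.trace(S @ S) * I / 3.0

    g = [0.8, 0.1, 0.05]                         # stand-ins for network outputs
    b = g[0] * T1 + g[1] * T2 + g[2] * T3        # predicted anisotropy tensor

    # Each basis tensor is symmetric and traceless, so any weighted sum is
    # too -- this is the structural constraint the architecture embeds.
    print(np.trace(b))
    ```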

  10. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named a Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
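
    For any concrete network, the APL is simply the mean shortest-path distance over all node pairs. A minimal breadth-first-search sketch on a toy graph (not the generalized dual dendrimer itself):

    ```python
    from collections import deque
    from itertools import combinations

    def average_path_length(adj):
        """Average shortest-path length over all node pairs of an
        unweighted, connected graph given as {node: set(neighbours)}."""
        def bfs_dist(src):
            dist = {src: 0}
            queue = deque([src])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        total, pairs = 0, 0
        for u, v in combinations(adj, 2):
            total += bfs_dist(u)[v]
            pairs += 1
        return total / pairs

    # A toy 4-node star; a cactus-like fragment would be built the same way.
    star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
    print(average_path_length(star))  # (3*1 + 3*2) / 6 = 1.5
    ```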

  11. Updated precision measurement of the average lifetime of B hadrons

    CERN Document Server

    Abreu, P; Adye, T; Agasi, E; Ajinenko, I; Aleksan, Roy; Alekseev, G D; Alemany, R; Allport, P P; Almehed, S; Amaldi, Ugo; Amato, S; Andreazza, A; Andrieux, M L; Antilogus, P; Apel, W D; Arnoud, Y; Åsman, B; Augustin, J E; Augustinus, A; Baillon, Paul; Bambade, P; Barate, R; Barbi, M S; Barbiellini, Guido; Bardin, Dimitri Yuri; Baroncelli, A; Bärring, O; Barrio, J A; Bartl, Walter; Bates, M J; Battaglia, Marco; Baubillier, M; Baudot, J; Becks, K H; Begalli, M; Beillière, P; Belokopytov, Yu A; Benvenuti, Alberto C; Berggren, M; Bertrand, D; Bianchi, F; Bigi, M; Bilenky, S M; Billoir, P; Bloch, D; Blume, M; Blyth, S; Bolognese, T; Bonesini, M; Bonivento, W; Booth, P S L; Borisov, G; Bosio, C; Bosworth, S; Botner, O; Boudinov, E; Bouquet, B; Bourdarios, C; Bowcock, T J V; Bozzo, M; Branchini, P; Brand, K D; Brenke, T; Brenner, R A; Bricman, C; Brillault, L; Brown, R C A; Brückman, P; Brunet, J M; Bugge, L; Buran, T; Burgsmüller, T; Buschmann, P; Buys, A; Cabrera, S; Caccia, M; Calvi, M; Camacho-Rozas, A J; Camporesi, T; Canale, V; Canepa, M; Cankocak, K; Cao, F; Carena, F; Carroll, L; Caso, Carlo; Castillo-Gimenez, M V; Cattai, A; Cavallo, F R; Cerrito, L; Chabaud, V; Charpentier, P; Chaussard, L; Chauveau, J; Checchia, P; Chelkov, G A; Chen, M; Chierici, R; Chliapnikov, P V; Chochula, P; Chorowicz, V; Chudoba, J; Cindro, V; Collins, P; Contreras, J L; Contri, R; Cortina, E; Cosme, G; Cossutti, F; Crawley, H B; Crennell, D J; Crosetti, G; Cuevas-Maestro, J; Czellar, S; Dahl-Jensen, Erik; Dahm, J; D'Almagne, B; Dam, M; Damgaard, G; Dauncey, P D; Davenport, Martyn; Da Silva, W; Defoix, C; Deghorain, A; Della Ricca, G; Delpierre, P A; Demaria, N; De Angelis, A; de Boer, Wim; De Brabandere, S; De Clercq, C; La Vaissière, C de; De Lotto, B; De Min, A; De Paula, L S; De Saint-Jean, C; Dijkstra, H; Di Ciaccio, Lucia; Djama, F; Dolbeau, J; Dönszelmann, M; Doroba, K; Dracos, M; Drees, J; Drees, K A; Dris, M; Dufour, Y; Edsall, D M; Ehret, R; Eigen, G; Ekelöf, T J C; 
Ekspong, Gösta; Elsing, M; Engel, J P; Ershaidat, N; Erzen, B; Espirito-Santo, M C; Falk, E; Fassouliotis, D; Feindt, Michael; Fenyuk, A; Ferrer, A; Filippas-Tassos, A; Firestone, A; Fischer, P A; Föth, H; Fokitis, E; Fontanelli, F; Formenti, F; Franek, B J; Frenkiel, P; Fries, D E C; Frodesen, A G; Frühwirth, R; Fulda-Quenzer, F; Fuster, J A; Galloni, A; Gamba, D; Gandelman, M; García, C; García, J; Gaspar, C; Gasparini, U; Gavillet, P; Gazis, E N; Gelé, D; Gerber, J P; Gibbs, M; Gokieli, R; Golob, B; Gopal, Gian P; Gorn, L; Górski, M; Guz, Yu; Gracco, Valerio; Graziani, E; Grosdidier, G; Grzelak, K; Gumenyuk, S A; Gunnarsson, P; Günther, M; Guy, J; Hahn, F; Hahn, S; Hajduk, Z; Hallgren, A; Hamacher, K; Hao, W; Harris, F J; Hedberg, V; Henriques, R P; Hernández, J J; Herquet, P; Herr, H; Hessing, T L; Higón, E; Hilke, Hans Jürgen; Hill, T S; Holmgren, S O; Holt, P J; Holthuizen, D J; Hoorelbeke, S; Houlden, M A; Hrubec, Josef; Huet, K; Hultqvist, K; Jackson, J N; Jacobsson, R; Jalocha, P; Janik, R; Jarlskog, C; Jarlskog, G; Jarry, P; Jean-Marie, B; Johansson, E K; Jönsson, L B; Jönsson, P E; Joram, Christian; Juillot, P; Kaiser, M; Kapusta, F; Karafasoulis, K; Karlsson, M; Karvelas, E; Katsanevas, S; Katsoufis, E C; Keränen, R; Khokhlov, Yu A; Khomenko, B A; Khovanskii, N N; King, B J; Kjaer, N J; Klein, H; Klovning, A; Kluit, P M; Köne, B; Kokkinias, P; Koratzinos, M; Korcyl, K; Kourkoumelis, C; Kuznetsov, O; Kramer, P H; Krammer, Manfred; Kreuter, C; Kronkvist, I J; Krumshtein, Z; Krupinski, W; Kubinec, P; Kucewicz, W; Kurvinen, K L; Lacasta, C; Laktineh, I; Lamblot, S; Lamsa, J; Lanceri, L; Lane, D W; Langefeld, P; Last, I; Laugier, J P; Lauhakangas, R; Leder, Gerhard; Ledroit, F; Lefébure, V; Legan, C K; Leitner, R; Lemoigne, Y; Lemonne, J; Lenzen, Georg; Lepeltier, V; Lesiak, T; Liko, D; Lindner, R; Lipniacka, A; Lippi, I; Lörstad, B; Loken, J G; López, J M; Loukas, D; Lutz, P; Lyons, L; MacNaughton, J N; Maehlum, G; Maio, A; Malychev, V; Mandl, F; Marco, J; 
Marco, R P; Maréchal, B; Margoni, M; Marin, J C; Mariotti, C; Markou, A; Maron, T; Martínez-Rivero, C; Martínez-Vidal, F; Martí i García, S; Masik, J; Matorras, F; Matteuzzi, C; Matthiae, Giorgio; Mazzucato, M; McCubbin, M L; McKay, R; McNulty, R; Medbo, J; Merk, M; Meroni, C; Meyer, S; Meyer, W T; Michelotto, M; Migliore, E; Mirabito, L; Mitaroff, Winfried A; Mjörnmark, U; Moa, T; Møller, R; Mönig, K; Monge, M R; Morettini, P; Müller, H; Mundim, L M; Murray, W J; Muryn, B; Myatt, Gerald; Naraghi, F; Navarria, Francesco Luigi; Navas, S; Nawrocki, K; Negri, P; Neumann, W; Nicolaidou, R; Nielsen, B S; Nieuwenhuizen, M; Nikolaenko, V; Niss, P; Nomerotski, A; Normand, Ainsley; Novák, M; Oberschulte-Beckmann, W; Obraztsov, V F; Olshevskii, A G; Onofre, A; Orava, Risto; Österberg, K; Ouraou, A; Paganini, P; Paganoni, M; Pagès, P; Palka, H; Papadopoulou, T D; Papageorgiou, K; Pape, L; Parkes, C; Parodi, F; Passeri, A; Pegoraro, M; Peralta, L; Pernegger, H; Pernicka, Manfred; Perrotta, A; Petridou, C; Petrolini, A; Petrovykh, M; Phillips, H T; Piana, G; Pierre, F; Pimenta, M; Pindo, M; Plaszczynski, S; Podobrin, O; Pol, M E; Polok, G; Poropat, P; Pozdnyakov, V; Prest, M; Privitera, P; Pukhaeva, N; Pullia, Antonio; Radojicic, D; Ragazzi, S; Rahmani, H; Ratoff, P N; Read, A L; Reale, M; Rebecchi, P; Redaelli, N G; Regler, Meinhard; Reid, D; Renton, P B; Resvanis, L K; Richard, F; Richardson, J; Rídky, J; Rinaudo, G; Ripp, I; Romero, A; Roncagliolo, I; Ronchese, P; Ronjin, V M; Roos, L; Rosenberg, E I; Rosso, E; Roudeau, Patrick; Rovelli, T; Rückstuhl, W; Ruhlmann-Kleider, V; Ruiz, A; Rybicki, K; Saarikko, H; Sacquin, Yu; Sadovskii, A; Sajot, G; Salt, J; Sánchez, J; Sannino, M; Schimmelpfennig, M; Schneider, H; Schwickerath, U; Schyns, M A E; Sciolla, G; Scuri, F; Seager, P; Sedykh, Yu; Segar, A M; Seitz, A; Sekulin, R L; Shellard, R C; Siccama, I; Siegrist, P; Simonetti, S; Simonetto, F; Sissakian, A N; Sitár, B; Skaali, T B; Smadja, G; Smirnov, N; Smirnova, O G; Smith, G R; 
Solovyanov, O; Sosnowski, R; Souza-Santos, D; Spassoff, Tz; Spiriti, E; Sponholz, P; Squarcia, S; Stanescu, C; Stapnes, Steinar; Stavitski, I; Stichelbaut, F; Stocchi, A; Strauss, J; Strub, R; Stugu, B; Szczekowski, M; Szeptycka, M; Tabarelli de Fatis, T; Tavernet, J P; Chikilev, O G; Tilquin, A; Timmermans, J; Tkatchev, L G; Todorov, T; Toet, D Z; Tomaradze, A G; Tomé, B; Tonazzo, A; Tortora, L; Tranströmer, G; Treille, D; Trischuk, W; Tristram, G; Trombini, A; Troncon, C; Tsirou, A L; Turluer, M L; Tyapkin, I A; Tyndel, M; Tzamarias, S; Überschär, B; Ullaland, O; Uvarov, V; Valenti, G; Vallazza, E; Van der Velde, C; van Apeldoorn, G W; van Dam, P; Van Doninck, W K; Van Eldik, J; Vassilopoulos, N; Vegni, G; Ventura, L; Venus, W A; Verbeure, F; Verlato, M; Vertogradov, L S; Vilanova, D; Vincent, P; Vitale, L; Vlasov, E; Vodopyanov, A S; Vrba, V; Wahlen, H; Walck, C; Weierstall, M; Weilhammer, Peter; Weiser, C; Wetherell, Alan M; Wicke, D; Wickens, J H; Wielers, M; Wilkinson, G R; Williams, W S C; Winter, M; Witek, M; Woschnagg, K; Yip, K; Yushchenko, O P; Zach, F; Zaitsev, A; Zalewska-Bak, A; Zalewski, Piotr; Zavrtanik, D; Zevgolatakos, E; Zimin, N I; Zito, M; Zontar, D; Zuberi, R; Zucchelli, G C; Zumerle, G; Belokopytov, Yu; Charpentier, Ph; Gavillet, Ph; Gouz, Yu; Jarlskog, Ch; Khokhlov, Yu; Papadopoulou, Th D

    1996-01-01

    The measurement of the average lifetime of B hadrons using inclusively reconstructed secondary vertices has been updated using both an improved processing of previous data and additional statistics from new data. This has reduced the statistical and systematic uncertainties and gives τ_B = 1.582 ± 0.011 (stat.) ± 0.027 (syst.) ps. Combining this result with the previous result based on charged particle impact parameter distributions yields τ_B = 1.575 ± 0.010 (stat.) ± 0.026 (syst.) ps.
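
    The combination step for two uncorrelated measurements is the standard inverse-variance weighted average. The sketch below uses the quoted central values with illustrative total uncertainties, and deliberately ignores the correlated systematics that the actual DELPHI combination has to treat:

    ```python
    import math

    def weighted_average(values, sigmas):
        """Inverse-variance weighted average of independent measurements,
        with the uncertainty of the combination."""
        weights = [1.0 / s**2 for s in sigmas]
        mean = sum(w * x for w, x in zip(weights, values)) / sum(weights)
        sigma = math.sqrt(1.0 / sum(weights))
        return mean, sigma

    # Illustrative inputs only: the two lifetime results with rough total
    # errors, treated (incorrectly, for simplicity) as independent.
    mean, sigma = weighted_average([1.582, 1.575], [0.029, 0.028])
    print(f"{mean:.3f} +/- {sigma:.3f} ps")
    ```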

  12. The Importance of Government Effectiveness for Transitions toward Greater Electrification in Developing Countries

    Directory of Open Access Journals (Sweden)

    Rohan Best

    2017-08-01

    Full Text Available Electricity is a vital factor underlying modern living standards, but there are many developing countries with low levels of electricity access and use. We seek to systematically identify the crucial elements underlying transitions toward greater electrification in developing countries. We use a cross-sectional regression approach with national-level data up to 2012 for 135 low- and middle-income countries. The paper finds that the effectiveness of governments is the most important governance attribute for encouraging the transition to increased electrification in developing countries, on average. The results add to the growing evidence on the importance of governance for development outcomes. Donors seeking to make more successful contributions to electrification may wish to target countries with more effective governments.

  13. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  14. Clustering Batik Images using Fuzzy C-Means Algorithm Based on Log-Average Luminance

    Directory of Open Access Journals (Sweden)

    Ahmad Sanmorino

    2012-06-01

    Full Text Available Batik is a fabric or clothing made with a special wax-resist dyeing technique and is a piece of cultural heritage with high artistic value. To improve efficiency and give better semantics to images, some researchers apply clustering algorithms to manage images before they can be retrieved. Image clustering is the process of grouping images based on their similarity. In this paper we provide an alternative method for grouping batik images using the fuzzy c-means (FCM) algorithm based on the log-average luminance of the batik. FCM is a fuzzy clustering algorithm that allows each data point to belong to every cluster with a degree of membership between 0 and 1. Log-average luminance (LAL) is the average lighting value of an image; using LAL, the lighting of one image can be compared with that of another. From the experiments that have been made, it can be concluded that the fuzzy c-means algorithm can be used to cluster batik images based on the log-average luminance of each image.
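
    The log-average luminance feature itself is simple to compute. A minimal sketch; the small delta guard against log(0) is a common convention in the tone-mapping literature and is an assumption here, not a detail taken from the paper:

    ```python
    import numpy as np

    def log_average_luminance(luminance, delta=1e-4):
        """Log-average luminance of an image: exp of the mean log
        luminance. delta guards against log(0) for pure-black pixels."""
        return float(np.exp(np.mean(np.log(delta + luminance))))

    # Two toy "batik images" as luminance maps in [0, 1]; the darker image
    # gets the lower LAL, so the feature separates lighting conditions.
    bright = np.full((4, 4), 0.8)
    dark = np.full((4, 4), 0.2)
    print(log_average_luminance(bright), log_average_luminance(dark))
    ```

    The resulting scalar LAL per image can then be fed to any fuzzy c-means implementation as the clustering feature.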

  15. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
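
    The diverging-mean regime can be illustrated with a Pareto law whose tail exponent is at or below 1: running sample means never settle, and single super-large draws dominate the total. A small sketch, not one of the paper's specific models:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Classical Pareto samples on [1, inf) with tail exponent alpha <= 1,
    # i.e. the infinite-mean regime.
    alpha = 0.8
    x = 1.0 + rng.pareto(alpha, size=100_000)

    # Running sample means keep drifting upward instead of converging.
    running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
    print(running_mean[999], running_mean[-1])

    # A single draw can carry a large share of the whole sum.
    print(x.max() / x.sum())
    ```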

  16. Strategies to improve homing of mesenchymal stem cells for greater efficacy in stem cell therapy.

    Science.gov (United States)

    Naderi-Meshkin, Hojjat; Bahrami, Ahmad Reza; Bidkhori, Hamid Reza; Mirahmadi, Mahdi; Ahmadiankia, Naghmeh

    2015-01-01

    Stem/progenitor cell-based therapeutic approaches in clinical practice have been an elusive dream in medical science, and improvement of stem cell homing is one of the major challenges in cell therapy programs. Stem/progenitor cells home to injured tissues/organs, a response mediated by interactions between chemokine receptors expressed on the cells and chemokines secreted by the injured tissue. To improve directed homing of the cells, many techniques have been developed, either to engineer stem/progenitor cells with a higher number of chemokine receptors (stem cell-based strategies) or to modulate the target tissues to release higher levels of the corresponding chemokines (target tissue-based strategies). This review discusses both of these strategies for improving stem cell homing, focusing on mesenchymal stem cells as the most frequently studied model in cellular therapies. © 2014 International Federation for Cell Biology.

  17. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
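
    The two standard hourly value types discussed above are easy to reproduce from synthetic 1-min data: a spot value takes one sample per hour, while a boxcar value averages all 60. A sketch with a made-up daily variation standing in for observatory data:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # One day of synthetic 1-min geomagnetic variation (nT): a smooth
    # daily wave plus sub-hourly noise.
    minutes = np.arange(24 * 60)
    smooth = 50.0 * np.sin(2 * np.pi * minutes / (24 * 60))
    field = smooth + rng.normal(0, 2, minutes.size)

    per_hour = field.reshape(24, 60)
    boxcar = per_hour.mean(axis=1)    # simple 1-h average hourly value
    spot = per_hour[:, 0]             # instantaneous top-of-hour sample

    # Against the noise-free hourly means, the spot values suffer from the
    # aliasing of sub-hourly variation that the boxcar values suppress.
    truth = smooth.reshape(24, 60).mean(axis=1)
    print(np.std(spot - truth), np.std(boxcar - truth))
    ```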

  18. Mechanisms for greater insulin-stimulated glucose uptake in normal and insulin-resistant skeletal muscle after acute exercise

    Science.gov (United States)

    2015-01-01

    Enhanced skeletal muscle and whole body insulin sensitivity can persist for up to 24–48 h after one exercise session. This review focuses on potential mechanisms for greater postexercise and insulin-stimulated glucose uptake (ISGU) by muscle in individuals with normal or reduced insulin sensitivity. A model is proposed for the processes underlying this improvement; i.e., triggers initiate events that activate subsequent memory elements, which store information that is relayed to mediators, which translate memory into action by controlling an end effector that directly executes increased insulin-stimulated glucose transport. Several candidates are potential triggers or memory elements, but none have been conclusively verified. Regarding potential mediators in both normal and insulin-resistant individuals, elevated postexercise ISGU with a physiological insulin dose coincides with greater Akt substrate of 160 kDa (AS160) phosphorylation without improved proximal insulin signaling at steps from insulin receptor binding to Akt activity. Causality remains to be established between greater AS160 phosphorylation and improved ISGU. The end effector for normal individuals is increased GLUT4 translocation, but this remains untested for insulin-resistant individuals postexercise. Following exercise, insulin-resistant individuals can attain ISGU values similar to nonexercising healthy controls, but after a comparable exercise protocol performed by both groups, ISGU for the insulin-resistant group has been consistently reported to be below postexercise values for the healthy group. Further research is required to fully understand the mechanisms underlying the improved postexercise ISGU in individuals with normal or subnormal insulin sensitivity and to explain the disparity between these groups after similar exercise. PMID:26487009

  19. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  20. Assessing the Resolution Adaptability of the Zhang-McFarlane Cumulus Parameterization With Spatial and Temporal Averaging: RESOLUTION ADAPTABILITY OF ZM SCHEME

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Yuxing [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing China; Fan, Jiwen [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xiao, Heng [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Zhang, Guang J. [Scripps Institution of Oceanography, University of California, San Diego CA USA; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xu, Kuan-Man [NASA Langley Research Center, Hampton VA USA; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Gustafson, William I. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA

    2017-11-01

    Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic, since it requires cumulus schemes to adapt to resolutions higher than those they were originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with cloud-resolving model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved: sub-grid convective precipitation becomes smaller than resolved precipitation for resolutions higher than 8 km, consistent with the results of the CRM simulation. Both the spatial distribution and the time series of precipitation are improved by the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability in other cumulus parameterizations that are based on the quasi-equilibrium assumption.
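
    The temporal-averaging idea (smoothing the large-scale CAPE tendency over recent time steps before the scheme consumes it) can be sketched as a boxcar running mean. The window length and the synthetic tendency below are illustrative assumptions, not the paper's settings:

    ```python
    import numpy as np

    def temporal_average(series, window):
        """Boxcar running mean over the previous `window` steps -- a
        minimal stand-in for temporally averaging a tendency series."""
        out = np.empty_like(series, dtype=float)
        for i in range(series.size):
            out[i] = series[max(0, i - window + 1):i + 1].mean()
        return out

    # A noisy hypothetical CAPE tendency: averaging damps the grid-scale
    # noise that destabilizes the scheme at fine resolution.
    rng = np.random.default_rng(1)
    tendency = 0.5 + rng.normal(0, 0.3, 48)
    smoothed = temporal_average(tendency, window=6)
    print(tendency.std(), smoothed.std())  # smoothed series varies less
    ```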

  1. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Science.gov (United States)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.
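
    The spatial sampling error referred to in the last sentence can be illustrated by sampling a synthetic regional field and comparing the sample mean against the full-region mean; the field statistics here are made up, not ERBE values:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy regional exitance field (W m^-2): the "true" regional average is
    # the mean over every grid cell, but a scanner samples only some cells.
    region = 340.0 + rng.normal(0, 15, size=(50, 50))
    true_mean = region.mean()

    n_samples = 100
    idx = rng.choice(region.size, size=n_samples, replace=False)
    sample = region.ravel()[idx]
    sample_mean = sample.mean()

    # The standard error of the sample mean estimates the sampling error.
    std_err = sample.std(ddof=1) / np.sqrt(n_samples)
    print(abs(sample_mean - true_mean), std_err)
    ```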

  2. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    Science.gov (United States)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging photoplethysmography (IPPG) is an emerging technique for extracting the vital signs of human subjects from video recordings. With advantages such as non-contact measurement, low cost, and easy operation, IPPG has become a research hotspot in the field of biomedicine. However, the noise contributed by non-microarterial areas cannot simply be removed: the uneven distribution of micro-arteries and the differing signal strength of each region result in a low signal-to-noise ratio of IPPG signals and low heart-rate accuracy. In this paper, we propose a method for improving the signal-to-noise ratio of camera-based IPPG signals from each sub-region of the face using a weighted average. First, we obtain the regions of interest (ROI) of a subject's face from the camera. Second, each region of interest is tracked and feature-matched in each frame of the video, and each tracked region of the face is divided into 60 x 60 pixel blocks. Third, the weight of the PPG signal from each sub-region is calculated from that sub-region's signal-to-noise ratio. Finally, we combine the IPPG signals from all tracked ROIs using the weighted average. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart-rate measurement.
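
    The final combination step (weights proportional to each sub-region's signal-to-noise ratio) can be sketched directly; the traces and SNR values below are hypothetical:

    ```python
    import numpy as np

    def snr_weighted_average(signals, snrs):
        """Combine per-subregion PPG traces with weights proportional to
        each trace's signal-to-noise ratio (normalized to sum to 1)."""
        signals = np.asarray(signals, dtype=float)
        weights = np.asarray(snrs, dtype=float)
        weights = weights / weights.sum()
        return weights @ signals

    # Three hypothetical 8-sample sub-region traces; the clean trace gets
    # the largest weight, so it dominates the combined pulse signal.
    t = np.arange(8)
    clean = np.sin(2 * np.pi * t / 8)
    noisy1 = clean + np.array([0.5, -0.4, 0.3, -0.6, 0.2, 0.5, -0.3, 0.4])
    noisy2 = clean + np.array([-0.3, 0.6, -0.5, 0.2, -0.4, 0.3, 0.6, -0.2])

    combined = snr_weighted_average([clean, noisy1, noisy2], snrs=[10.0, 2.0, 1.0])
    print(combined)
    ```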

  3. Strengthened glass for high average power laser applications

    International Nuclear Information System (INIS)

    Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.

    1987-01-01

    Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capabilities for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass' surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to the exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications.

  4. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D face scans, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method consisted of averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques there was no warping or filling-in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
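    The core averaging step, taking the mean of corresponding depth (z) values across pre-registered scans, can be sketched as follows (the grid layout and function name are illustrative assumptions):

```python
def average_depth_maps(scans):
    """Element-wise average of aligned depth (z) grids.

    scans: list of 2-D lists (rows x cols) of z values; all scans are
    assumed to be pre-registered so cell (r, c) corresponds across faces.
    """
    rows, cols = len(scans[0]), len(scans[0][0])
    return [[sum(s[r][c] for s in scans) / len(scans) for c in range(cols)]
            for r in range(rows)]
```

    Unlike landmark-based morphing, no interpolation is involved: each output cell is simply the mean z of that cell over all scans.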

  5. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  6. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^−6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^−8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  7. Stress and Subjective Age: Those With Greater Financial Stress Look Older.

    Science.gov (United States)

    Agrigoroaei, Stefan; Lee-Attardo, Angela; Lachman, Margie E

    2017-12-01

    Subjective indicators of age add to our understanding of the aging process beyond the role of chronological age. We examined whether financial stress contributes to subjective age as rated by others and the self. The participants ( N = 228), aged 26-75, were from a Boston area satellite of the Midlife in the United States (MIDUS) longitudinal study. Participants reported how old they felt and how old they thought they looked, and observers assessed the participants' age based on photographs (other-look age), at two occasions, an average of 10 years apart. Financial stress was measured at Time 1. Controlling for income, general stress, health, and attractiveness, participants who reported higher levels of financial stress were perceived as older than their actual age to a greater extent and showed larger increases in other-look age over time. We consider the results on accelerated aging of appearance with regard to their implications for interpersonal interactions and in relation to health.

  8. Bayesian model averaging using particle filtering and Gaussian mixture modeling : Theory, concepts, and simulation experiments

    NARCIS (Netherlands)

    Rings, J.; Vrugt, J.A.; Schoups, G.; Huisman, J.A.; Vereecken, H.

    2012-01-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive

  9. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. These asymptotic values are compared with the results obtained by simulation. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window; thus trends at different time scales can be obtained from data sets of the same size. These polynomials could be of interest for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
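    A first-order version of the detrending moving average statistic might look like the following sketch (higher-order variants would replace the window mean with a polynomial fit over the same window; names and window convention are illustrative):

```python
def moving_average(y, n):
    """Simple (first-order) moving average over a trailing window of n points."""
    return [sum(y[i - n + 1:i + 1]) / n for i in range(n - 1, len(y))]

def dma_variance(y, n):
    """Variance of the series around its moving-average trend.

    This is the detrending-moving-average statistic whose scaling with n
    is used to estimate the Hurst exponent H.
    """
    trend = moving_average(y, n)
    resid = [y[i + n - 1] - t for i, t in enumerate(trend)]
    return sum(r * r for r in resid) / len(resid)
```

    Estimating H then amounts to fitting the slope of log(dma_variance) against log(n) over a range of window sizes.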

  10. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  11. Mixed artificial grasslands with more roots improved mine soil infiltration capacity

    Science.gov (United States)

    Wu, Gao-Lin; Yang, Zheng; Cui, Zeng; Liu, Yu; Fang, Nu-Fang; Shi, Zhi-Hua

    2016-04-01

    Soil water is one of the critical limiting factors in achieving sustainable revegetation. Soil infiltration capacity plays a vital role in determining the inputs from precipitation and enhancing water storage, which are important for the maintenance and survival of vegetation patches in arid and semi-arid areas. Our study investigated the effects of different artificial grasslands on soil physical properties and soil infiltration capacity. The artificial grasslands were Medicago sativa, Astragalus adsurgens, Agropyron mongolicum, Lespedeza davurica, Bromus inermis, Hedysarum scoparium, A. mongolicum + Artemisia desertorum, A. adsurgens + A. desertorum and M. sativa + B. inermis. The soil infiltration capacity index (SICI), which was based on the average infiltration rate of stage I (AIRSI) and the average infiltration rate of stage III (AIRS III), was higher (indicating that the infiltration capacity was greater) under the artificial grasslands than that of the bare soil. The SICI of the A. adsurgens + A. desertorum grassland had the highest value (1.48) and bare soil (-0.59) had the lowest value. It was evident that artificial grassland could improve soil infiltration capacity. We also used principal component analysis (PCA) to determine that the main factors that affected SICI were the soil water content at a depth of 20 cm (SWC20), the below-ground root biomasses at depths of 10 and 30 cm (BGB10, BGB30), the capillary porosity at a depth of 10 cm (CP10) and the non-capillary porosity at a depth of 20 cm (NCP20). Our study suggests that the use of Legume-poaceae mixtures and Legume-shrub mixtures to create grasslands provided an effective ecological restoration approach to improve soil infiltration properties due to their greater root biomasses. Furthermore, soil water content, below-ground root biomass, soil capillary porosity and soil non-capillary porosity were the main factors that affect the soil infiltration capacity.

  12. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  13. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  14. Local and average structure of Mn- and La-substituted BiFeO3

    Science.gov (United States)

    Jiang, Bo; Selbach, Sverre M.

    2017-06-01

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions.

  15. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    Directory of Open Access Journals (Sweden)

    LiMin Wang

    2014-01-01

    Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of seminaive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
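    The superparent-selection step builds a maximum weighted spanning tree over the attributes; a minimal Kruskal-style sketch (the edge weights, e.g. pairwise mutual information between attributes, are assumed to be precomputed):

```python
def max_spanning_tree(n, edges):
    """Maximum weighted spanning tree via Kruskal on descending weights.

    n:     number of attributes (nodes 0..n-1).
    edges: iterable of (weight, u, v) tuples.
    Returns a list of (u, v, weight) tree edges.
    """
    parent = list(range(n))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:  # joins two components -> keep the edge
            parent[ru] = rv
            tree.append((u, v, w))
    return tree
```

    Attributes joined by heavy tree edges are then the candidates for the superparent role.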

  16. Medical-Legal Partnerships At Veterans Affairs Medical Centers Improved Housing And Psychosocial Outcomes For Vets.

    Science.gov (United States)

    Tsai, Jack; Middleton, Margaret; Villegas, Jennifer; Johnson, Cindy; Retkin, Randye; Seidman, Alison; Sherman, Scott; Rosenheck, Robert A

    2017-12-01

    Medical-legal partnerships-collaborations between legal professionals and health care providers that help patients address civil legal problems that can affect health and well-being-have been implemented at several Veterans Affairs (VA) medical centers to serve homeless and low-income veterans with mental illness. We describe the outcomes of veterans who accessed legal services at four partnership sites in Connecticut and New York in the period 2014-16. The partnerships served 950 veterans, who collectively had 1,384 legal issues; on average, the issues took 5.4 hours' worth of legal services to resolve. The most common problems were related to VA benefits, housing, family issues, and consumer issues. Among a subsample of 148 veterans who were followed for one year, we observed significant improvements in housing, income, and mental health. Veterans who received more partnership services showed greater improvements in housing and mental health than those who received fewer services, and those who achieved their predefined legal goals showed greater improvements in housing status and community integration than those who did not. Medical-legal partnerships represent an opportunity to expand cross-sector, community-based partnerships in the VA health care system to address social determinants of mental health.

  17. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
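    The measure is built from weighted harmonic averages; as a minimal illustration of the building block, and of why it emphasizes the strongest (closest) connections more than a plain arithmetic mean would:

```python
def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / x).

    Dominated by the smallest values in `values`, so one short
    (strong) connection pulls the average down sharply.
    """
    return sum(weights) / sum(w / x for w, x in zip(weights, values))
```

    For values [1, 3] with equal weights, the arithmetic mean is 2.0 but the harmonic mean is 1.5, pulled toward the closer node.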

  18. Greater Vancouver's water supply receives ozone treatment

    Energy Technology Data Exchange (ETDEWEB)

    Crosby, J.; Singh, I.; Reil, D. D.; Neden, G.

    2000-10-01

    To improve the overall quality of the treated water delivered to the member municipalities of the Greater Vancouver Water District (GVWD), the GVWD implemented a phased drinking water quality improvement program. The phased treatment program is directed at attaining effective disinfection while minimizing the formation of chlorinated disinfection by-products. Accordingly, the current primary disinfection method of chlorination was reevaluated and ozone primary disinfection without filtration was authorized. Ozonation provides increased protection against Giardia and Cryptosporidium and a decrease in the formation potential for disinfection by-products (DBPs). This paper describes the design of the ozonation facility at Coquitlam, construction of which began in 1998 and was completed during the summer of 2000. The facility houses the liquid oxygen supply, ozone generation, cooling water, ozone injection and primary off-gas ozone destruct systems, and provides a home for various office, electrical maintenance and diesel generating functions. Construction at the second site, Capilano, is expected to start in the fall of 2000 and be completed late in 2002. With its kilometre-long stainless steel ozone contactor and sidestream injector tower, the Coquitlam Ozonation Facility is the first ozone pressure injection system of its kind in North America. 1 tab., 2 figs.

  19. A 100-year average recurrence interval for the san andreas fault at wrightwood, california.

    Science.gov (United States)

    Fumal, T E; Schwartz, D P; Pezzopane, S K; Weldon, R J

    1993-01-08

    Evidence for five large earthquakes during the past five centuries along the San Andreas fault zone 70 kilometers northeast of Los Angeles, California, indicates that the average recurrence interval and the temporal variability are significantly smaller than previously thought. Rapid sedimentation during the past 5000 years in a 150-meter-wide structural depression has produced a greater than 21-meter-thick sequence of debris flow and stream deposits interbedded with more than 50 datable peat layers. Fault scarps, colluvial wedges, fissure infills, upward termination of ruptures, and tilted and folded deposits above listric faults provide evidence for large earthquakes that occurred in A.D. 1857, 1812, and about 1700, 1610, and 1470.

  20. Theory and analysis of accuracy for the method of characteristics direction probabilities with boundary averaging

    International Nuclear Information System (INIS)

    Liu, Zhouyu; Collins, Benjamin; Kochunas, Brendan; Downar, Thomas; Xu, Yunlin; Wu, Hongchun

    2015-01-01

    Highlights: • The CDP combines the benefits of the CPM's efficiency and the MOC's flexibility. • Boundary averaging reduces the computational effort while losing only minor accuracy. • An analysis model is used to justify the choice of the optimal averaging strategy. • Numerical results show the performance and accuracy. - Abstract: The method of characteristic direction probabilities (CDP) combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC) for the solution of the integral form of the Boltzmann transport equation. By coupling only the fine regions traversed by the characteristic rays in a particular direction, the computational effort required to calculate the probability matrices and to solve the matrix system is considerably reduced compared to the CPM. Furthermore, boundary averaging is performed to reduce storage and computation, but the capability of dealing with complicated geometries is preserved since the same ray tracing information is used as in MOC. An analysis model for the outgoing angular flux is used to analyze a variety of outgoing angular flux averaging methods for the boundary and to justify the choice of the optimal averaging strategy. The boundary-averaged CDP method was then implemented in the Michigan PArallel Characteristic based Transport (MPACT) code to perform 2-D and 3-D transport calculations. Numerical results are given for different cases to show the effect of averaging on the outgoing angular flux, region scalar flux and the eigenvalue. Comparison of the results with the case with no averaging demonstrates that an angle-dependent averaging strategy is possible for the CDP to improve its computational performance without compromising the achievable accuracy

  1. Real-time traffic signal optimization model based on average delay time per person

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2015-10-01

    Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models were formulated as optimization problems with objective functions that minimize vehicle delay time. To improve people's trip efficiency, this article aims instead to minimize delay time per person. Based on time-varying traffic flow data at intersections, the article first fits curves of cumulative vehicle arrivals and departures, as well as the corresponding functions. It then converts vehicle delay time to personal delay time using the average passenger loads of cars and buses, employs this time as the objective function, and proposes a signal timing optimization model for intersections that yields real-time signal parameters, including cycle length and green time. The research further implements a case study based on practical data collected at an intersection in Beijing, China. The average delay time per person and queue length are employed as evaluation indices to show the performance of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
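    The conversion from vehicle delay to person delay can be sketched as follows (the occupancy figures are illustrative assumptions, not values from the study):

```python
def person_delay(vehicle_delays, vehicle_types, occupancy=None):
    """Average delay per person at an intersection.

    Each vehicle's delay is weighted by its assumed average passenger
    load, so one delayed bus counts far more than one delayed car.
    """
    # Illustrative average passenger loads (persons per vehicle).
    occupancy = occupancy or {"car": 2.0, "bus": 18.0}
    total_person_delay = sum(d * occupancy[t]
                             for d, t in zip(vehicle_delays, vehicle_types))
    total_people = sum(occupancy[t] for t in vehicle_types)
    return total_person_delay / total_people
```

    Optimizing cycle length and green splits against this objective shifts priority toward high-occupancy movements.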

  2. Measurement uncertainties of long-term 222Rn averages at environmental levels using alpha track detectors

    International Nuclear Information System (INIS)

    Nelson, R.A.

    1987-01-01

    More than 250 replicate measurements of outdoor Rn concentration integrated over quarterly periods were made to estimate the random component of the measurement uncertainty of Track Etch detectors (type F) under outdoor conditions. The measurements were performed around three U mill tailings piles to provide a range of environmental concentrations. The measurement uncertainty was typically greater than could be accounted for by Poisson counting statistics. Average coefficients of variation of the order of 20% for all measured concentrations were found. It is concluded that alpha track detectors can be successfully used to determine annual average outdoor Rn concentrations through the use of careful quality control procedures. These include rapid deployment and collection of detectors to minimize unintended Rn exposure, careful packaging and shipping to and from the manufacturer, use of direct sunlight shields for all detectors and careful and secure mounting of all detectors in as similar a manner as possible. The use of multiple (at least duplicate) detectors at each monitoring location and an exposure period of no less than one quarter are suggested
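    The coefficient of variation used above to summarize replicate agreement is simply the standard deviation relative to the mean; a minimal sketch:

```python
import math

def coefficient_of_variation(xs):
    """Sample coefficient of variation: stdev / mean.

    A CV of about 0.2 matches the ~20% replicate spread reported
    for the quarterly alpha track measurements.
    """
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
    return math.sqrt(var) / mean
```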

  3. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  4. Greater trochanteric pain syndrome diagnosis and treatment.

    Science.gov (United States)

    Mallow, Michael; Nazarian, Levon N

    2014-05-01

    Lateral hip pain, or greater trochanteric pain syndrome, is a commonly seen condition; in this article, the relevant anatomy, epidemiology, and evaluation strategies of greater trochanteric pain syndrome are reviewed. Specific attention is focused on imaging of this syndrome and treatment techniques, including ultrasound-guided interventions. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  6. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
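    An iterative allocation of this general kind (a sketch of the common weighted-fair-sharing idea, not necessarily the paper's exact formulation) caps each flow at its offered load and re-divides the freed capacity by weight among the remaining flows:

```python
def wfq_bandwidth(link, weights, input_rates):
    """Iterative WFQ-style average bandwidth assignment.

    link:        total link capacity.
    weights:     per-flow scheduler weights.
    input_rates: per-flow offered load (a flow never gets more than this).
    """
    n = len(weights)
    alloc = [None] * n
    remaining, active = link, set(range(n))
    while active:
        wsum = sum(weights[i] for i in active)
        share = {i: remaining * weights[i] / wsum for i in active}
        # Flows whose offered load fits inside their weighted share.
        capped = [i for i in active if input_rates[i] <= share[i]]
        if not capped:
            for i in active:          # everyone is backlogged: split by weight
                alloc[i] = share[i]
            break
        for i in capped:              # cap at offered load, free the rest
            alloc[i] = input_rates[i]
            remaining -= input_rates[i]
            active.remove(i)
    return alloc
```

    On a 10 Mb/s link with equal weights, a flow offering only 2 Mb/s keeps 2, and the backlogged flow inherits the remaining 8.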

  7. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  8. Radio-anatomical Study of the Greater Palatine Canal and the Pterygopalatine Fossa in a Lebanese Population: A Consideration for Maxillary Nerve Block

    Directory of Open Access Journals (Sweden)

    Georges Aoun

    2016-01-01

    Aim: The aim of this study was to describe the morphology of the greater palatine canal-pterygopalatine fossa (GPC-PPF) complex in a Lebanese population using cone-beam computed tomography (CBCT). Materials and Methods: CBCT images of 79 Lebanese adult patients (38 females and 41 males) were included in this study, and a total of 158 cases were evaluated bilaterally. The lengths and paths of the GPCs-PPFs were determined, and the resulting data were analyzed statistically. Results: In the sagittal plane, the average GPC-PPF length was 35.02 mm on the right and 35.01 mm on the left. The most common anatomic path included a curvature resulting in an internal narrowing whose average diameter was 2.4 mm on the right and 2.45 mm on the left. The mean diameter of the upper opening was 5.85 mm on the right and 5.82 mm on the left. As for the lower opening, corresponding to the greater palatine foramen, the right and left average diameters were 6.39 mm and 6.42 mm, respectively. Conclusion: Within the limits of this study, we conclude that in this Lebanese population the GPC-PPF path is variable, with a predominance of curved paths (77.21% [122/158]) on both the right and left sides; however, the GPC-PPF length does not vary significantly with gender or side.

  9. Greater Trochanteric Fixation Using a Cable System for Partial Hip Arthroplasty: A Clinical and Finite Element Analysis

    Science.gov (United States)

    Ozan, Fırat; Koyuncu, Şemmi; Pekedis, Mahmut; Altay, Taşkın; Yıldız, Hasan; Toker, Gökhan

    2014-01-01

    The aim of the study was to investigate the efficacy of greater trochanteric fixation using a multifilament cable to ensure abductor lever arm continuity in patients with a proximal femoral fracture undergoing partial hip arthroplasty. Mean age of the patients (12 men, 20 women) was 84.12 years. Mean follow-up was 13.06 months. Fixation of the dislocated greater trochanter with or without a cable following load application was assessed by finite element analysis (FEA). Radiological evaluation was based on the distance between the fracture and the union site. Harris hip score was used to evaluate final results: outcomes were excellent in 7 patients (21.8%), good in 17 patients (53.1%), average in 5 patients (15.6%), and poor in 1 patient (9.3%). Mean abduction angle was 20.21°. Union was achieved in 14 patients (43.7%), fibrous union in 12 (37.5%), and no union in 6 (18.7%). FEA showed that the maximum total displacement of the greater trochanter decreased when the fractured bone was fixed with a cable. As the force applied to the cable increased, the displacement of the fractured trochanter decreased. This technique ensures continuity of the abductor lever arm in patients with a proximal femoral fracture who are undergoing partial hip arthroplasty surgery. PMID:25177703

  10. Conservation of greater sage-grouse- a synthesis of current trends and future management

    Science.gov (United States)

    Connelly, John W.; Knick, Steven T.; Braun, Clait E.; Baker, William L.; Beever, Erik A.; Christiansen, Thomas J.; Doherty, Kevin E.; Garton, Edward O.; Hagen, Christian A.; Hanser, Steven E.; Johnson, Douglas H.; Leu, Matthias; Miller, Richard F.; Naugle, David E.; Oyler-McCance, Sara J.; Pyke, David A.; Reese, Kerry P.; Schroeder, Michael A.; Stiver, San J.; Walker, Brett L.; Wisdom, Michael J.

    2011-01-01

    Recent analyses of Greater Sage-Grouse (Centrocercus urophasianus) populations indicate substantial declines in many areas but relatively stable populations in other portions of the species' range. Sagebrush (Artemisia spp.) habitats necessary to support sage-grouse are being burned by large wildfires, invaded by nonnative plants, and developed for energy resources (gas, oil, and wind). Management on public lands, which contain 70% of sagebrush habitats, has changed over the last 30 years from large sagebrush control projects directed at enhancing livestock grazing to a greater emphasis on projects that often attempt to improve or restore ecological integrity. Nevertheless, the mandate to manage public lands to provide traditional consumptive uses as well as recreation and wilderness values is not likely to change in the near future. Consequently, demand and use of resources contained in sagebrush landscapes plus the associated infrastructure to support increasing human populations in the western United States will continue to challenge efforts to conserve Greater Sage-Grouse. The continued widespread distribution of sage-grouse, albeit at very low densities in some areas, coupled with large areas of important sagebrush habitat that are relatively unaffected by the human footprint, suggest that Greater Sage-Grouse populations may be able to persist into the future. We summarize the status of sage-grouse populations and habitats, provide a synthesis of major threats and challenges to conservation of sage-grouse, and suggest a roadmap to attaining conservation goals.

  11. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion pathways is expected due to the large variation in the local motifs.
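The configurational Boltzmann averaging behind this kind of study can be sketched in a few lines. This is a generic illustration of the weighting scheme only (function name and eV units are choices made here), not the authors' DFT workflow:

```python
import math

def boltzmann_average(values, energies_ev, temperature_k):
    """Configurational (Boltzmann-weighted) average of a property.

    values: property A_i for each configuration i
    energies_ev: total energy E_i of each configuration (eV)
    temperature_k: temperature in kelvin
    """
    k_b = 8.617333262e-5  # Boltzmann constant in eV/K
    beta = 1.0 / (k_b * temperature_k)
    e_min = min(energies_ev)  # shift energies for numerical stability
    weights = [math.exp(-beta * (e - e_min)) for e in energies_ev]
    z = sum(weights)  # configurational partition sum
    return sum(a * w for a, w in zip(values, weights)) / z
```

At low temperature the average collapses onto the lowest-energy configuration; at high temperature it approaches the plain mean over configurations.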

  12. Efficacy of prophylactic splenectomy for proximal advanced gastric cancer invading greater curvature.

    Science.gov (United States)

    Ohkura, Yu; Haruta, Shusuke; Shindoh, Junichi; Tanaka, Tsuyoshi; Ueno, Masaki; Udagawa, Harushi

    2017-05-25

    For proximal gastric cancer invading the greater curvature, concomitant splenectomy is frequently performed to secure the clearance of lymph node metastases. However, the prognostic impact of prophylactic splenectomy remains unclear. The aim of this study was to clarify the oncological significance of prophylactic splenectomy for advanced proximal gastric cancer invading the greater curvature. A retrospective review of 108 patients who underwent total or subtotal gastrectomy for advanced proximal gastric cancer involving the greater curvature was performed. Short-term and long-term outcomes were compared between the patients who underwent splenectomy (n = 63) and those who did not (n = 45). Patients who underwent splenectomy showed greater blood loss (538 vs. 450 mL, p = 0.016) and a higher morbidity rate (30.2 vs. 13.3%, p = 0.041) compared with those who did not undergo splenectomy. In particular, pancreas-related complications were frequently observed among patients who received splenectomy (17.4 vs. 0%, p = 0.003). However, no significant improvement of long-term outcomes was confirmed in the cases with splenectomy (5-year recurrence-free rate, 60.2 vs. 67.3%; p = 0.609 and 5-year overall survival rates, 63.7 vs. 73.6%; p = 0.769). On the other hand, splenectomy was correlated with marginally better survival in patients with Borrmann type 1 or 2 gastric cancer (p = 0.072). For advanced proximal gastric cancer involving the greater curvature, prophylactic splenectomy may have no significant prognostic impact despite the increased morbidity rate after surgery. Such a surgical procedure should be avoided as long as lymph node involvement is not evident.

  13. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Godfrey, Devon J. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Page McAdams, H. [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Dobbins, James T. III [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Department of Biomedical Engineering, Department of Physics, and Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27705 (United States)

    2013-02-15

    planes must be averaged to sufficiently remove partial-pixel artifacts. MITSa7 does appear to subtly reduce the contrast of high-frequency 'edge' information, but the removal of partial-pixel artifacts makes the appearance of low-contrast, fine-detail anatomy even more conspicuous in MITSa7 slices. MITSa7 also appears to render simulated subtle 5 mm pulmonary nodules with greater visibility than MITS alone, in both the open lung and regions overlying the mediastinum. Finally, the MITSa7 technique reduces stochastic image variance, though the in-plane stochastic SNR (for very thin objects which do not span multiple MITS planes) is only improved at spatial frequencies between 0.05 and 0.20 cycles/mm. Conclusions: The MITSa7 method is an improvement over traditional single-plane MITS for thoracic imaging and the pulmonary nodule detection task, and thus the authors plan to use the MITSa7 approach for all future MITS research at the authors' institution.
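The core idea of averaging adjacent reconstructed planes (seven for MITSa7) can be illustrated with a simple sliding-window mean over a stack of slices. This is a toy sketch only (the function name and edge handling are assumptions), and it ignores the MITS reconstruction itself:

```python
def average_adjacent_planes(planes, n_adjacent=7):
    """Average each reconstructed plane with its neighbours
    (MITSa7-style for n_adjacent=7), clipping the window at the
    edges of the volume.

    planes: list of 2-D slices, each a list of rows of pixel values.
    Returns a new list of averaged slices of the same shape.
    """
    half = n_adjacent // 2
    n = len(planes)
    out = []
    for k in range(n):
        lo, hi = max(0, k - half), min(n, k + half + 1)
        window = planes[lo:hi]
        rows, cols = len(planes[k]), len(planes[k][0])
        # per-pixel mean over the plane window
        avg = [[sum(p[r][c] for p in window) / len(window)
                for c in range(cols)] for r in range(rows)]
        out.append(avg)
    return out
```

Averaging N adjacent planes suppresses uncorrelated per-plane artifacts roughly in proportion to the window size, at the cost of some through-plane resolution, which mirrors the trade-off reported above.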

  14. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    Science.gov (United States)

    Godfrey, Devon J.; Page McAdams, H.; Dobbins, James T.

    2013-01-01

    averaged to sufficiently remove partial-pixel artifacts. MITSa7 does appear to subtly reduce the contrast of high-frequency “edge” information, but the removal of partial-pixel artifacts makes the appearance of low-contrast, fine-detail anatomy even more conspicuous in MITSa7 slices. MITSa7 also appears to render simulated subtle 5 mm pulmonary nodules with greater visibility than MITS alone, in both the open lung and regions overlying the mediastinum. Finally, the MITSa7 technique reduces stochastic image variance, though the in-plane stochastic SNR (for very thin objects which do not span multiple MITS planes) is only improved at spatial frequencies between 0.05 and 0.20 cycles/mm. Conclusions: The MITSa7 method is an improvement over traditional single-plane MITS for thoracic imaging and the pulmonary nodule detection task, and thus the authors plan to use the MITSa7 approach for all future MITS research at the authors’ institution. PMID:23387755

  15. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis.

    Science.gov (United States)

    Godfrey, Devon J; McAdams, H Page; Dobbins, James T

    2013-02-01

    remove partial-pixel artifacts. MITSa7 does appear to subtly reduce the contrast of high-frequency "edge" information, but the removal of partial-pixel artifacts makes the appearance of low-contrast, fine-detail anatomy even more conspicuous in MITSa7 slices. MITSa7 also appears to render simulated subtle 5 mm pulmonary nodules with greater visibility than MITS alone, in both the open lung and regions overlying the mediastinum. Finally, the MITSa7 technique reduces stochastic image variance, though the in-plane stochastic SNR (for very thin objects which do not span multiple MITS planes) is only improved at spatial frequencies between 0.05 and 0.20 cycles∕mm. The MITSa7 method is an improvement over traditional single-plane MITS for thoracic imaging and the pulmonary nodule detection task, and thus the authors plan to use the MITSa7 approach for all future MITS research at the authors' institution.

  16. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    International Nuclear Information System (INIS)

    Godfrey, Devon J.; Page McAdams, H.; Dobbins, James T. III

    2013-01-01

    averaged to sufficiently remove partial-pixel artifacts. MITSa7 does appear to subtly reduce the contrast of high-frequency “edge” information, but the removal of partial-pixel artifacts makes the appearance of low-contrast, fine-detail anatomy even more conspicuous in MITSa7 slices. MITSa7 also appears to render simulated subtle 5 mm pulmonary nodules with greater visibility than MITS alone, in both the open lung and regions overlying the mediastinum. Finally, the MITSa7 technique reduces stochastic image variance, though the in-plane stochastic SNR (for very thin objects which do not span multiple MITS planes) is only improved at spatial frequencies between 0.05 and 0.20 cycles/mm. Conclusions: The MITSa7 method is an improvement over traditional single-plane MITS for thoracic imaging and the pulmonary nodule detection task, and thus the authors plan to use the MITSa7 approach for all future MITS research at the authors’ institution.

  17. A Single Image Dehazing Method Using Average Saturation Prior

    Directory of Open Access Journals (Sweden)

    Zhenfei Gu

    2017-01-01

    Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
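The statistic behind an average-saturation-style prior is straightforward to compute. A minimal sketch, assuming RGB channels normalized to [0, 1] (the paper's scene segmentation and scattering-coefficient estimation steps are not reproduced here):

```python
def average_saturation(rgb_pixels):
    """Average HSV saturation of an image region.

    rgb_pixels: iterable of (r, g, b) tuples with channels in [0, 1].
    Per-pixel saturation: (max - min) / max, defined as 0 for black.
    """
    total, count = 0.0, 0
    for r, g, b in rgb_pixels:
        mx, mn = max(r, g, b), min(r, g, b)
        total += 0.0 if mx == 0 else (mx - mn) / mx
        count += 1
    return total / count if count else 0.0
```

Haze desaturates a scene, so comparing a region's average saturation against the statistic learned from clear outdoor images gives a handle on how much scattering the region has suffered.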

  18. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  19. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  20. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  1. A Correlation Between the Intrinsic Brightness and Average Decay Rate of Gamma-Ray Burst X-Ray Afterglow Light Curves

    Science.gov (United States)

    Racusin, J. L.; Oates, S. R.; De Pasquale, M.; Kocevski, D.

    2016-01-01

    We present a correlation between the average temporal decay rate (α_X,avg, measured at times greater than 200 s) and the early-time luminosity (L_X,200s) of X-ray afterglows of gamma-ray bursts as observed by the Swift X-ray Telescope. Both quantities are measured relative to a rest-frame time of 200 s after the gamma-ray trigger. The luminosity-average decay correlation does not depend on specific temporal behavior and contains one scale-independent quantity, minimizing the role of selection effects. This is a complementary correlation to that discovered by Oates et al. in the optical light curves observed by the Swift Ultraviolet/Optical Telescope. The correlation indicates that, on average, more luminous X-ray afterglows decay faster than less luminous ones, indicating some relative mechanism for energy dissipation. The X-ray and optical correlations are entirely consistent once corrections are applied and contamination is removed. We explore the possible biases introduced by different light-curve morphologies and observational selection effects, and how either geometrical effects or intrinsic properties of the central engine and jet could explain the observed correlation.

  2. Greater trochanteric fracture with occult intertrochanteric extension.

    Science.gov (United States)

    Reiter, Michael; O'Brien, Seth D; Bui-Mansfield, Liem T; Alderete, Joseph

    2013-10-01

    Proximal femoral fractures are frequently encountered in the emergency department (ED). Prompt diagnosis is paramount as delay will exacerbate the already poor outcomes associated with these injuries. In cases where radiography is negative but clinical suspicion remains high, magnetic resonance imaging (MRI) is the study of choice as it has the capability to depict fractures which are occult on other imaging modalities. Awareness of a particular subset of proximal femoral fractures, namely greater trochanteric fractures, is vital for both radiologists and clinicians since it has been well documented that they invariably have an intertrochanteric component which may require surgical management. The detection of intertrochanteric or cervical extension of greater trochanteric fractures has been described utilizing MRI but is underestimated with both computed tomography (CT) and bone scan. Therefore, if MRI is unavailable or contraindicated, the diagnosis of an isolated greater trochanteric fracture should be met with caution. The importance of avoiding this potential pitfall is demonstrated in the following case of an elderly woman with hip pain and CT demonstrating an isolated greater trochanteric fracture who subsequently returned to the ED with a displaced intertrochanteric fracture.

  3. Retrospective cost adaptive Reynolds-averaged Navier-Stokes k-ω model for data-driven unsteady turbulent simulations

    Science.gov (United States)

    Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.

    2018-03-01

    This paper presents a data-driven computational model for simulating unsteady turbulent flows, where sparse measurement data is available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations at which measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from 2 test cases: steady pipe flow, and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean stream-wise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at 2 locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.
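The flavor of tuning a closure coefficient against a retrospective cost over stored measurements can be conveyed with a one-parameter toy model. This gradient scheme is an illustration only; it is neither the RCA algorithm from the paper nor the k-ω closure:

```python
def adapt_coefficient(theta0, data, gain=0.1, sweeps=50):
    """Toy coefficient adaptation in the spirit of data-driven closure
    tuning: repeatedly nudge theta to reduce the retrospective cost
    J(theta) = sum((y_i - theta * x_i)**2) over stored measurements.

    data: list of (x_i, y_i) measurement pairs.
    """
    theta = theta0
    for _ in range(sweeps):
        # dJ/dtheta, averaged over the stored measurement set
        grad = sum(-2.0 * x * (y - theta * x) for x, y in data)
        theta -= gain * grad / max(1, len(data))
    return theta
```

The key structural point carried over from the abstract is that the cost is evaluated retrospectively on past measurements, so the parameter can be adapted online as new sparse data arrive.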

  4. E2,M1 multipole mixing ratios in even--even nuclei, A greater than or equal to 152

    International Nuclear Information System (INIS)

    Krane, K.S.

    1975-01-01

    A survey is presented of E2,M1 mixing ratios of gamma-ray transitions in even-even nuclei with mass numbers A greater than or equal to 152. Angular distribution and correlation data from the literature are analyzed in terms of a consistent choice of the phase relationship between the E2 and M1 matrix elements. The cutoff date for the literature was June 1975. Based on an average of the experimental results from the literature, a recommended value of the E2,M1 mixing ratio for each transition is included

  5. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  6. Dosimetric consequences of planning lung treatments on 4DCT average reconstruction to represent a moving tumour

    International Nuclear Information System (INIS)

    Dunn, L.F.; Taylor, M.L.; Kron, T.; Franich, R.

    2010-01-01

    Full text: Anatomic motion during a radiotherapy treatment is one of the more significant challenges in contemporary radiation therapy. For tumours of the lung, motion due to patient respiration makes both accurate planning and dose delivery difficult. One approach is to use the maximum intensity projection (MIP) obtained from a 4D computed tomography (CT) scan and then use this to determine the treatment volume. The treatment is then planned on a 4DCT average reconstruction, rather than assuming the entire ITV has a uniform tumour density. This raises the question: how well does planning on a 'blurred' distribution of density with CT values greater than lung density but less than tumour density match the true case of a tumour moving within lung tissue? The aim of this study was to answer this question, determining the dosimetric impact of using a 4DCT average reconstruction as the basis for a radiotherapy treatment plan. To achieve this, Monte Carlo simulations were undertaken using GEANT4. The geometry consisted of a tumour (diameter 30 mm) moving with a sinusoidal pattern of amplitude = 20 mm. The tumour's excursion occurs within a lung equivalent volume beyond a chest wall interface. Motion was defined parallel to a 6 MV beam. This was then compared to a single oblate tumour of a magnitude determined by the extremes of the tumour motion. The variable density of the 4DCT average tumour is simulated by a time-weighted average, to achieve the observed density gradient. The generic moving tumour geometry is illustrated in the Figure.
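The time-weighted average can be sketched directly: the 4DCT-average value at a point is the fraction of the motion cycle that point spends inside the tumour, blending tumour and lung densities. A sketch using the 30 mm tumour / 20 mm amplitude geometry of the abstract, with assumed illustrative density values:

```python
import math

def time_averaged_density(x_mm, radius_mm=15.0, amplitude_mm=20.0,
                          rho_tumour=1.0, rho_lung=0.26, n_samples=10000):
    """Time-weighted average density at position x along the motion axis
    for a spherical tumour whose centre moves as
    c(t) = amplitude * sin(2*pi*t/T). The rho values are illustrative.
    """
    inside = 0
    for k in range(n_samples):
        t = k / n_samples
        centre = amplitude_mm * math.sin(2.0 * math.pi * t)
        if abs(x_mm - centre) <= radius_mm:
            inside += 1
    f = inside / n_samples  # fraction of the cycle the point is in tumour
    return f * rho_tumour + (1.0 - f) * rho_lung
```

Points near the centre of the excursion spend most of the cycle inside the tumour and approach tumour density, while points near the extremes see intermediate, 'blurred' values, which is exactly the density gradient the study simulates.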

  7. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of the decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
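The entropy lower bound is concrete for the optimal-prefix-code problem mentioned above: a Huffman code's average codeword length (average tree depth) sits between H(p) and H(p) + 1. A short sketch, exploiting the fact that the average length equals the sum of the probabilities of all merged internal nodes:

```python
import heapq
import math

def entropy(probs):
    """Shannon entropy H(p) in bits: the lower bound on average depth."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_average_depth(probs):
    """Average codeword length of an optimal (Huffman) prefix code.

    Computed as the sum of merged-node probabilities, so no explicit
    tree needs to be built.
    """
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total
```

For p = (0.5, 0.25, 0.25) the bound is tight: H(p) = 1.5 bits and the average depth is exactly 1.5, matching the "exceeds the lower bound by at most one" statement above.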

  8. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate between various processes in studies of plume dispersion.
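The power-law averaging-time adjustment referred to at the end has the form C_target = C_ref * (t_ref / t_target)^p. A one-line sketch; the exponent is empirical and site-dependent, and the 0.17 default here is an assumed, commonly quoted value rather than one taken from this paper:

```python
def adjust_concentration(conc_ref, t_ref_min, t_target_min, exponent=0.17):
    """Power-law adjustment of a time-averaged concentration from a
    reference averaging time to a different averaging time.

    Longer averaging times smooth out concentration peaks, so the
    adjusted value decreases as t_target grows.
    """
    return conc_ref * (t_ref_min / t_target_min) ** exponent
```

For example, a 15 min average rescaled to a 60 min average comes out lower, consistent with longer averaging smearing out peak concentrations.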

  9. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū_P, the average, Ū, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak (k_PPV,kVp) and the average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated (according to the proposed method) PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
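Why the measured quantity matters becomes clear once ripple is nonzero: the average, effective (RMS) and peak values of the same waveform then differ. A sketch using an assumed sawtooth ripple model (the PPV itself is a spectrally weighted quantity defined by the IEC standard referenced above and is not computed here):

```python
import math

def voltage_statistics(ripple, u_max=100.0, n=100_000):
    """Average, RMS ('effective') and peak values of a sawtooth
    tube-voltage waveform falling linearly from u_max to
    u_max * (1 - ripple) over each cycle.
    """
    samples = [u_max * (1.0 - ripple * k / n) for k in range(n)]
    avg = sum(samples) / n
    rms = math.sqrt(sum(s * s for s in samples) / n)
    return avg, rms, u_max
```

At 0% ripple all three quantities coincide, which is why conversion factors between them must be functions of both tube voltage and ripple, as in the regression described above.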

  10. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...

  11. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  12. Analysis of compound parabolic concentrators and aperture averaging to mitigate fading on free-space optical links

    Science.gov (United States)

    Wasiczko, Linda M.; Smolyaninov, Igor I.; Davis, Christopher C.

    2004-01-01

    Free space optics (FSO) is one solution to the bandwidth bottleneck resulting from increased demand for broadband access. It is well known that atmospheric turbulence distorts the wavefront of a laser beam propagating through the atmosphere. This research investigates methods of reducing the effects of intensity scintillation and beam wander on the performance of free space optical communication systems, by characterizing system enhancement using either aperture averaging techniques or nonimaging optics. Compound Parabolic Concentrators, nonimaging optics made famous by Winston and Welford, are inexpensive elements that may be easily integrated into intensity modulation-direct detection receivers to reduce fading caused by beam wander and spot breakup in the focal plane. Aperture averaging provides a methodology to show the improvement of a given receiver aperture diameter in averaging out the optical scintillations over the received wavefront.
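The variance-reduction effect of aperture averaging can be mimicked with a crude Monte Carlo: treat a large aperture as the mean of several independent point-receiver intensities and compare scintillation indices. This is a toy model; a real aperture averages spatially correlated fluctuations, so the actual gain is smaller than this independent-cell sketch suggests:

```python
import random
import statistics

def scintillation_index(samples):
    """Normalized intensity variance, <I^2>/<I>^2 - 1."""
    m = statistics.fmean(samples)
    m2 = statistics.fmean(s * s for s in samples)
    return m2 / (m * m) - 1.0

def aperture_averaged(point_samples, cells):
    """Mean intensity over disjoint groups of `cells` point samples,
    standing in for the collecting area of a larger receiver aperture."""
    return [statistics.fmean(point_samples[i:i + cells])
            for i in range(0, len(point_samples) - cells + 1, cells)]
```

With log-normal point intensities, averaging 16 independent cells drops the scintillation index by roughly the cell count, which is the fading mitigation the receiver designs above aim for.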

  13. The Greater Sekhukhune-CAPABILITY outreach project.

    Science.gov (United States)

    Gregersen, Nerine; Lampret, Julie; Lane, Tony; Christianson, Arnold

    2013-07-01

    The Greater Sekhukhune-CAPABILITY Outreach Project was undertaken in a rural district in Limpopo, South Africa, as part of the European Union-funded CAPABILITY programme to investigate approaches for capacity building for the translation of genetic knowledge into care and prevention of congenital disorders. Based on previous experience of a clinical genetic outreach programme in Limpopo, it aimed to initiate a district clinical genetic service in Greater Sekhukhune to gain knowledge and experience to assist in the implementation and development of medical genetic services in South Africa. Implementing the service in Greater Sekhukhune was impeded by a developing staff shortage in the province and pressure on the health service from the existing HIV/AIDS and TB epidemics. This situation underscores the need for health needs assessment for developing services for the care and prevention of congenital disorders in middle- and low-income countries. However, these impediments stimulated the pioneering of innovative ways to offer medical genetic services in these circumstances, including tele-teaching of nurses and doctors, using cellular phones to enhance clinical care, and adapting and assessing the clinical utility of a laboratory test, QF-PCR, for use in the local circumstances.

  14. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  15. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  16. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
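
    The quantity at issue — the mean yarding distance from a landing to points spread over a cutting unit — can be approximated numerically when the unit boundary is complex. A Monte Carlo sketch for a simple rectangular unit (the geometry and landing position are invented for illustration; this is not the paper's method):

```python
import math
import random

# Average yarding distance: mean straight-line distance from a landing
# point to points distributed over a cutting unit. Here the unit is a
# 100 m x 50 m rectangle with the landing at one corner (made-up geometry).
random.seed(0)
landing = (0.0, 0.0)
samples = [(random.uniform(0, 100), random.uniform(0, 50))
           for _ in range(100_000)]
avg_dist = sum(math.dist(landing, p) for p in samples) / len(samples)
# avg_dist is the Monte Carlo estimate of the average yarding distance [m].
```

    The same sampling approach works for any polygonal unit boundary by rejecting points that fall outside the polygon.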

  17. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  18. 75 FR 25323 - Light-Duty Vehicle Greenhouse Gas Emission Standards and Corporate Average Fuel Economy Standards...

    Science.gov (United States)

    2010-05-07

    ... Greenhouse Gas Emission Standards and Corporate Average Fuel Economy Standards; Final Rule AGENCY: Environmental Protection Agency (EPA) and National Highway... reduce greenhouse gas emissions and improve fuel economy. This joint Final Rule is consistent with the...

  19. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  20. Greater commitment to the domestic violence training is required.

    Science.gov (United States)

    Leppäkoski, Tuija Helena; Flinck, Aune; Paavilainen, Eija

    2015-05-01

    Domestic violence (DV) is a major public health problem with high health and social costs. A solution to this multi-faceted problem requires that various help providers work together in an effective and optimal manner when dealing with the different parties involved in DV. The objective of our research and development project (2008-2013) was to improve the preparedness of the social and healthcare professionals to manage DV. This article focuses on the evaluation of interprofessional education (IPE) to provide knowledge and skills for identifying and intervening in DV and to improve collaboration among social and health care professionals and other help providers at the local and regional level. The evaluation was carried out internally, with data collected from the participants both orally and in written form. The participants were satisfied with the content of the IPE programme itself and the teaching methods used. Participation in the training sessions could have been more active. Moreover, some of the people who had enrolled for the trainings could not attend all of them. IPE is a valuable way to develop skills for intervening in DV. However, greater commitment to the training is required not only from the participants and their superiors but also from the trustees.

  1. Do Self-Regulated Processes such as Study Strategies and Satisfaction Predict Grade Point Averages for First and Second Generation College Students?

    Science.gov (United States)

    DiBenedetto, Maria K.

    2010-01-01

    The current investigation sought to determine whether self-regulatory variables: "study strategies" and "self-satisfaction" correlate with first and second generation college students' grade point averages, and to determine if these two variables would improve the prediction of their averages if used along with high school grades and SAT scores.…

  2. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...

  3. The post-orgasmic prolactin increase following intercourse is greater than following masturbation and suggests greater satiety.

    Science.gov (United States)

    Brody, Stuart; Krüger, Tillmann H C

    2006-03-01

    Research indicates that prolactin increases following orgasm are involved in a feedback loop that serves to decrease arousal through inhibitory central dopaminergic and probably peripheral processes. The magnitude of post-orgasmic prolactin increase is thus a neurohormonal index of sexual satiety. Using data from three studies of men and women engaging in masturbation or penile-vaginal intercourse to orgasm in the laboratory, we report that for both sexes (adjusted for prolactin changes in a non-sexual control condition), the magnitude of prolactin increase following intercourse is 400% greater than that following masturbation. The results are interpreted as an indication of intercourse being more physiologically satisfying than masturbation, and discussed in light of prior research reporting greater physiological and psychological benefits associated with coitus than with any other sexual activities.

  4. Greater Trochanteric Fixation Using a Cable System for Partial Hip Arthroplasty: A Clinical and Finite Element Analysis

    Directory of Open Access Journals (Sweden)

    Fırat Ozan

    2014-01-01

    Full Text Available The aim of the study was to investigate the efficacy of greater trochanteric fixation using a multifilament cable to ensure abductor lever arm continuity in patients with a proximal femoral fracture undergoing partial hip arthroplasty. Mean age of the patients (12 men, 20 women) was 84.12 years. Mean follow-up was 13.06 months. Fixation of the dislocated greater trochanter with or without a cable following load application was assessed by finite element analysis (FEA). Radiological evaluation was based on the distance between the fracture and the union site. Harris hip score was used to evaluate final results: outcomes were excellent in 7 patients (21.8%), good in 17 patients (53.1%), average in 5 patients (15.6%), and poor in 1 patient (9.3%). Mean abduction angle was 20.21°. Union was achieved in 14 patients (43.7%), fibrous union in 12 (37.5%), and no union in 6 (18.7%). FEA showed that the maximum total displacement of the greater trochanter decreased when the fractured bone was fixed with a cable. As the force applied to the cable increased, the displacement of the fractured trochanter decreased. This technique ensures continuity of the abductor lever arm in patients with a proximal femoral fracture who are undergoing partial hip arthroplasty surgery.

  5. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department

  6. Cardiovascular risk assessment: audit findings from a nurse clinic--a quality improvement initiative.

    Science.gov (United States)

    Waldron, Sarah; Horsburgh, Margaret

    2009-09-01

    Evidence has shown the effectiveness of risk factor management in reducing mortality and morbidity from cardiovascular disease (CVD). An audit of a nurse CVD risk assessment programme undertaken between November 2005 and December 2008 in a Northland general practice. A retrospective audit of CVD risk assessment with data for the first entry of 621 patients collected exclusively from PREDICT-CVDTM, along with subsequent data collected from 320 of these patients who had a subsequent assessment recorded at an interval ranging from six months to three years (18 month average). Of the eligible population (71%) with an initial CVD risk assessment, 430 (69.2%) had a five-year absolute risk less than 15%, with 84 (13.5%) having a risk greater than 15% and having not had a cardiovascular event. Of the patients with a follow-up CVD risk assessment, 34 showed improvement. Medication prescribing for patients with absolute CVD risk greater than 15% increased from 71% to 86% for anti-platelet medication and for lipid lowering medication from 65% to 72% in the audit period. The recently available 'heart health' trajectory tool will help patients become more aware of risks that are modifiable, together with community support to engage more patients in the nurse CVD prevention programme. Further medication audits to monitor prescribing trends. Patients who showed an improvement in CVD risk had an improvement in one or more modifiable risk factors and became actively involved in making changes to their health.

  7. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
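
    The element-wise trimmed average at the heart of TGA can be illustrated in isolation: discarding the extreme fraction of values per element before averaging suppresses gross pixel outliers that would corrupt a plain mean. A toy sketch (not the TGA subspace algorithm itself):

```python
import numpy as np

# 100 noisy observations of 5 "pixels"; one pixel column is corrupted
# by gross outliers in every tenth observation.
rng = np.random.default_rng(0)
frames = rng.normal(loc=1.0, scale=0.1, size=(100, 5))
frames[::10, 2] += 50.0          # 10 gross outliers in pixel 2

plain_mean = frames.mean(axis=0)  # dragged toward the outliers

def trimmed_mean(x, frac=0.1):
    """Per-element trimmed average: drop values outside the central
    quantile band, then average what remains."""
    lo = np.quantile(x, frac, axis=0)
    hi = np.quantile(x, 1 - frac, axis=0)
    kept = np.where((x >= lo) & (x <= hi), x, np.nan)
    return np.nanmean(kept, axis=0)

robust = trimmed_mean(frames)     # stays near the true value 1.0
```

    The plain mean of the corrupted pixel is pulled to several times its true value, while the trimmed estimate stays close to 1.0.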

  8. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    Science.gov (United States)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacement are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by ⟨δ²⟩ ~ 2 D_ν t^β Δ^(ν−β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements ⟨x²⟩ ~ t^ν, while β ≥ −1 marks the growth or decline of the kinetic energy ⟨v²⟩ ~ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of ⟨δ²⟩ and ⟨x²⟩ are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
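
    The time-averaged mean-squared displacement referred to here is computed from a single trajectory by sliding a lag window over it. A minimal sketch for ordinary Brownian motion (ν = 1, β = 0), where the time average reproduces the ensemble scaling (illustrative only, not the paper's scale-invariant models):

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Single-trajectory time-averaged MSD:
    TAMSD(Delta) = mean over t of (x[t+Delta] - x[t])^2."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# Ordinary Brownian motion with unit-variance steps: TAMSD(Delta) ~ Delta,
# matching the ensemble-averaged <x^2> ~ t.
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0.0, 1.0, size=200_000))
lags = [10, 20, 40]
tamsd = time_averaged_msd(x, lags)
```

    For anomalous processes the same estimator reveals the t and Δ dependence discussed in the abstract.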

  9. Technical concept for a greater-confinement-disposal test facility

    International Nuclear Information System (INIS)

    Hunter, P.H.

    1982-01-01

    Greater confinement disposal (GCD) has been defined by the National Low-Level Waste Program as the disposal of low-level waste in such a manner as to provide greater containment of radiation, reduce potential for migration or dispersion of radionuclides, and provide greater protection from inadvertent human and biological intrusions in order to protect the public health and safety. This paper discusses: the need for GCD; definition of GCD; advantages and disadvantages of GCD; relative dose impacts of GCD versus shallow land disposal; types of waste compatible with GCD; objectives of GCD borehole demonstration test; engineering and technical issues; and factors affecting performance of the greater confinement disposal facility

  10. DOD Financial Management: Greater Visibility Needed to Better Assess Audit Readiness for Property, Plant, and Equipment

    Science.gov (United States)

    2016-05-01

    with U.S. generally accepted accounting principles and establish and maintain effective internal control over financial reporting and compliance with... Accountability Office Highlights of GAO-16-383, a report to congressional committees May 2016 DOD FINANCIAL MANAGEMENT Greater Visibility... Accounting Standards Advisory Board FIAR Financial Improvement and Audit Readiness IUS internal-use software NDAA National Defense Authorization Act

  11. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
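
    The rule describes a volume-weighted average: S_avg = Σ(V_i · S_i) / Σ V_i over the n batches of the averaging period. A minimal sketch with made-up batch data (the regulation's actual rounding and compliance details are omitted):

```python
# Volume-weighted annual average sulfur level:
#   S_avg = sum(V_i * S_i) / sum(V_i)  over batches i = 1..n
# Batch volumes (gallons) and sulfur contents (ppm) are illustrative values.
batches = [
    (10_000, 25.0),
    (5_000, 40.0),
    (20_000, 10.0),
]

total_volume = sum(v for v, _ in batches)
avg_sulfur = sum(v * s for v, s in batches) / total_volume
# avg_sulfur = (10000*25 + 5000*40 + 20000*10) / 35000 ≈ 18.57 ppm
```

    Note that the big high-volume, low-sulfur batch pulls the average well below the simple mean of the three sulfur levels (25.0).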

  12. Waste management in Greater Vancouver

    Energy Technology Data Exchange (ETDEWEB)

    Carrusca, K. [Greater Vancouver Regional District, Burnaby, BC (Canada); Richter, R. [Montenay Inc., Vancouver, BC (Canada)]|[Veolia Environmental Services, Vancouver, BC (Canada)

    2006-07-01

    An outline of the Greater Vancouver Regional District (GVRD) waste-to-energy program was presented. The GVRD has an annual budget for solid waste management of $90 million. Energy recovery revenues from solid waste currently exceed $10 million. Over 1,660,000 tonnes of GVRD waste is recycled, and another 280,000 tonnes is converted from waste to energy. The GVRD waste-to-energy facility combines state-of-the-art combustion and air pollution control, and has processed over 5 million tonnes of municipal solid waste since it opened in 1988. Its central location minimizes haul distance, and it was originally sited to utilize steam through sales to a recycled paper mill. The facility has won several awards, including the Solid Waste Association of North America award for best facility in 1990. The facility focuses on continual improvement, and has installed a carbon injection system; an ammonia injection system; a flyash stabilization system; and heat capacity upgrades in addition to conducting continuous waste composition studies. Continuous air emissions monitoring is also conducted at the plant, which produces a very small percentage of the total air emissions in metropolitan Vancouver. The GVRD is now seeking options for the management of a further 500,000 tonnes per year of solid waste, and has received 23 submissions from a range of waste-to-energy technologies which are now being evaluated. It was concluded that waste-to-energy plants can be located in densely populated metropolitan areas and provide a local disposal solution as well as a source of renewable energy. Other GVRD waste reduction policies were also reviewed. refs., tabs., figs.

  13. High-Average-Power Diffraction Pulse-Compression Gratings Enabling Next-Generation Ultrafast Laser Systems

    Energy Technology Data Exchange (ETDEWEB)

    Alessi, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-01

    Pulse compressors for ultrafast lasers have been identified as a technology gap in the push towards high peak power systems with high average powers for industrial and scientific applications. Gratings for ultrashort (sub-150fs) pulse compressors are metallic and can absorb a significant percentage of laser energy resulting in up to 40% loss as well as thermal issues which degrade on-target performance. We have developed a next generation gold grating technology which we have scaled to the petawatt size. This resulted in improvements in efficiency, uniformity and processing as compared to previous substrate etched gratings for high average power. This new design has a deposited dielectric material for the grating ridge rather than etching directly into the glass substrate. It has been observed that average powers as low as 1 W in a compressor can cause distortions in the on-target beam. We have developed and tested a method of actively cooling diffraction gratings which, in the case of gold gratings, can support a petawatt peak power laser with up to 600 W average power. We demonstrated thermo-mechanical modeling of a grating in its use environment and benchmarked it against experimental measurements. Multilayer dielectric (MLD) gratings are not yet used for these high peak power, ultrashort pulse durations due to their design challenges. We have designed and fabricated broad bandwidth, low dispersion MLD gratings suitable for delivering 30 fs pulses at high average power. This new grating design requires the use of a novel Out Of Plane (OOP) compressor, which we have modeled, designed, built and tested. This prototype compressor yielded a transmission of 90% for a pulse with 45 nm bandwidth, and was free of spatial and angular chirp. In order to evaluate gratings and compressors built in this project we have commissioned a joule-class ultrafast Ti:Sapphire laser system. Combining the grating cooling and MLD technologies developed here could enable petawatt laser systems to

  14. Fiscal consequences of greater openness: from tax avoidance and tax arbitrage to revenue growth

    OpenAIRE

    Jouko Ylä-Liedenpohja

    2008-01-01

    Revenue from corporation tax and taxes on capital income, net of revenue loss from deductibility of interest, as a percentage of the GDP has tripled in Finland over the past two decades. This is argued to result from greater openness of the economy as well as from simultaneous tax reforms towards neutrality of capital income taxation by combining tax-base broadening with tax-rate reductions. They implied improved efficiency of real investments, elimination of tax avoidance in entrepreneurial ...

  15. [Assessment of risk of burden in construction: improvement interventions and contribution of the competent physician].

    Science.gov (United States)

    Martinelli, R; Tarquini, M

    2012-01-01

    Three construction companies have, over three years, changed their operating modes, making use of innovative carpentry with a small amount of equipment, improved usability of the site, reduced cleaning time, less manual handling and a reduced risk of accidents. The Competent Doctor participated in the review of the risk assessment of manual handling: data on musculoskeletal disorders were acquired to compare average trends and changes in the light of this innovation, with encouraging results in terms of the incidence of musculoskeletal disorders, absenteeism due to illness from these causes, and new cases of lumbar disease. Assessing manual handling risk in construction remains difficult, but the collaboration between the Employer, the Prevention and Protection Service and the Competent Doctor, together with the greater attention that designers now pay to these issues, suggests improvements and further steps to extend to all operational phases of building.

  16. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that atoms are so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists of local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
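
    The difference between the two averages is easy to see numerically: when the particle count n varies between realizations, weighting each realization by n (mass-weighted average) no longer agrees with the plain phasic average. A toy sketch with invented numbers:

```python
import numpy as np

# Two realizations of the same control volume: particle count n and mean
# grain velocity u differ between them (illustrative values only).
n = np.array([10.0, 30.0])   # particles per realization
u = np.array([2.0, 1.0])     # mean velocity per realization

phasic_avg = u.mean()                        # plain ensemble average
mass_weighted_avg = (n * u).sum() / n.sum()  # weighted by particle count

# With constant n the two definitions coincide:
n_const = np.array([20.0, 20.0])
same = (n_const * u).sum() / n_const.sum()
```

    Here the mass-weighted average (1.25) is pulled toward the velocity of the realization carrying more particles, while the phasic average (1.5) treats both realizations equally.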

  17. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  18. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
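
    One widely used model-averaging scheme (named here only as a concrete illustration; the paper compares several) assigns each candidate model an Akaike weight w_i ∝ exp(−Δ_i/2), where Δ_i is that model's AIC minus the smallest AIC, and combines the models' estimates with those weights:

```python
import math

# Akaike-weight model averaging over three candidate models.
# AIC values and per-model estimates of the same parameter are invented.
aics = [100.0, 102.0, 110.0]
estimates = [1.0, 1.4, 3.0]

deltas = [a - min(aics) for a in aics]          # AIC differences
raw = [math.exp(-d / 2) for d in deltas]        # unnormalized weights
weights = [r / sum(raw) for r in raw]           # Akaike weights (sum to 1)
averaged = sum(w * e for w, e in zip(weights, estimates))
# The best-fitting model dominates; the poorly fitting third model
# contributes almost nothing to the averaged estimate.
```

    Contrast this with selection, which is the special case of 0-1 weights mentioned in the abstract: it puts all weight on the single lowest-AIC model.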

  19. Visualization of Radial Peripapillary Capillaries Using Optical Coherence Tomography Angiography: The Effect of Image Averaging.

    Directory of Open Access Journals (Sweden)

    Shelley Mo

    Full Text Available To assess the effect of image registration and averaging on the visualization and quantification of the radial peripapillary capillary (RPC) network on optical coherence tomography angiography (OCTA). Twenty-two healthy controls were imaged with a commercial OCTA system (AngioVue, Optovue, Inc.). Ten 10x10° scans of the optic disc were obtained, and the most superficial layer (a 50-μm slab extending from the inner limiting membrane) was extracted for analysis. Rigid registration was achieved using ImageJ, and averaging of each 2 to 10 frames was performed in five ~2x2° regions of interest (ROI) located 1° from the optic disc margin. The ROI were automatically skeletonized. Signal-to-noise ratio (SNR), number of endpoints and mean capillary length from the skeleton, capillary density, and mean intercapillary distance (ICD) were measured for the reference and each averaged ROI. Repeated measures analysis of variance was used to assess statistical significance. Three patients with primary open angle glaucoma were also imaged to compare RPC density to controls. Qualitatively, vessels appeared smoother and closer to histologic descriptions with increasing number of averaged frames. Quantitatively, number of endpoints decreased by 51%, and SNR, mean capillary length, capillary density, and ICD increased by 44%, 91%, 11%, and 4.5% from single frame to 10-frame averaged, respectively. The 10-frame averaged images from the glaucomatous eyes revealed decreased density correlating to visual field defects and retinal nerve fiber layer thinning. OCTA image registration and averaging is a viable and accessible method to enhance the visualization of RPCs, with significant improvements in image quality and RPC quantitative parameters. With this technique, we will be able to non-invasively and reliably study RPC involvement in diseases such as glaucoma.
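
    The gain from frame averaging follows from uncorrelated noise adding in quadrature: averaging N registered frames leaves the signal unchanged while the noise standard deviation falls as 1/√N, so SNR improves by roughly √N. A minimal synthetic sketch (1-D stand-in data, not OCTA images):

```python
import numpy as np

# A fixed "scene" plus independent noise in each of 10 registered frames.
rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 3, 1000))           # stand-in for the scene
frames = signal + rng.normal(0, 0.5, (10, 1000))   # 10 noisy frames

def snr(img, truth):
    """Simple SNR: signal spread divided by residual-noise spread."""
    return truth.std() / (img - truth).std()

snr_1 = snr(frames[0], signal)            # single frame
snr_10 = snr(frames.mean(axis=0), signal) # 10-frame average
# snr_10 / snr_1 is close to sqrt(10).
```

    Real OCTA averaging first requires the rigid registration step described above, since misaligned frames would blur the capillaries instead of suppressing noise.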

  20. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization
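
    For a fixed set of subshell yields, the dependence on the initial vacancy distribution is just a weighted mean, ω̄ = Σ_i N_i ω_i with Σ_i N_i = 1 (ignoring Coster-Kronig vacancy transfer between subshells, which is what makes the electron yield more sensitive). A sketch with hypothetical values, not Krause's tabulated numbers:

```python
# Average L-shell fluorescence yield for a given initial vacancy
# distribution N_i over subshells L1, L2, L3 (fractions summing to 1).
def average_yield(vacancies, yields):
    """Weighted mean omega_bar = sum(N_i * omega_i)."""
    assert abs(sum(vacancies) - 1.0) < 1e-9
    return sum(n_i * w_i for n_i, w_i in zip(vacancies, yields))

yields = [0.10, 0.30, 0.31]                    # hypothetical subshell yields
flat = average_yield([1/3, 1/3, 1/3], yields)  # uniform vacancy distribution
skewed = average_yield([0.1, 0.2, 0.7], yields)  # mostly L3 vacancies
```

    When the subshell yields are similar in magnitude (as for the L2 and L3 values above), shifting the vacancy distribution changes ω̄ only modestly, consistent with the weak dependence the abstract reports.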

  1. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  2. A Small Decrease in Rubisco Content by Individual Suppression of RBCS Genes Leads to Improvement of Photosynthesis and Greater Biomass Production in Rice Under Conditions of Elevated CO2.

    Science.gov (United States)

    Kanno, Keiichi; Suzuki, Yuji; Makino, Amane

    2017-03-01

    Rubisco limits photosynthesis at low CO2 concentrations ([CO2]), but does not limit it at elevated [CO2]. This means that the amount of Rubisco is excessive for photosynthesis at elevated [CO2]. Therefore, we examined whether a small decrease in Rubisco content by individual suppression of the RBCS multigene family leads to increases in photosynthesis and biomass production at elevated [CO2] in rice (Oryza sativa L.). Our previous studies indicated that the individual suppression of RBCS decreased Rubisco content in rice by 10-25%. Three lines of BC2F2 progeny were selected from transgenic plants with individual suppression of OsRBCS2, 3 and 5. Rubisco content in the selected lines was 71-90% that of wild-type plants. These three transgenic lines showed lower rates of CO2 assimilation at low [CO2] (28 Pa) but higher rates of CO2 assimilation at elevated [CO2] (120 Pa). Similarly, the biomass production and relative growth rate (RGR) of the two lines were also smaller at low [CO2] but greater than that of wild-type plants at elevated [CO2]. This greater RGR was caused by the higher net assimilation rate (NAR). When the nitrogen use efficiency (NUE) for the NAR was estimated by dividing the NAR by whole-plant leaf N content, the NUE for NAR at elevated [CO2] was higher in these two lines. Thus, a small decrease in Rubisco content leads to improvements of photosynthesis and greater biomass production in rice under conditions of elevated CO2. © The Author 2017. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  3. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
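As a simplified illustration of model-averaged point estimates (not the paper's simultaneous-inference method): one common choice is to weight each candidate model's estimate of a derived parameter by its Akaike weight. The function names and numbers below are hypothetical.

```python
import numpy as np

def akaike_weights(aic):
    """Convert AIC scores into Akaike weights (smaller AIC -> larger weight)."""
    d = np.asarray(aic, float) - np.min(aic)
    w = np.exp(-0.5 * d)
    return w / w.sum()

def model_average(estimates, aic):
    """Model-averaged point estimate of a derived parameter."""
    w = akaike_weights(aic)
    return float(np.dot(w, estimates))
```

With two equally supported models estimating 1.0 and 3.0, the averaged estimate is simply 2.0; the paper's contribution concerns valid standard errors and simultaneous confidence intervals for such quantities, which this sketch does not cover.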

  4. Improving Photosynthesis

    Science.gov (United States)

    Evans, John R.

    2013-01-01

    Photosynthesis is the basis of plant growth, and improving photosynthesis can contribute toward greater food security in the coming decades as world population increases. Multiple targets have been identified that could be manipulated to increase crop photosynthesis. The most important target is Rubisco because it catalyses both carboxylation and oxygenation reactions and the majority of responses of photosynthesis to light, CO2, and temperature are reflected in its kinetic properties. Oxygenase activity can be reduced either by concentrating CO2 around Rubisco or by modifying the kinetic properties of Rubisco. The C4 photosynthetic pathway is a CO2-concentrating mechanism that generally enables C4 plants to achieve greater efficiency in their use of light, nitrogen, and water than C3 plants. To capitalize on these advantages, attempts have been made to engineer the C4 pathway into C3 rice (Oryza sativa). A simpler approach is to transfer bicarbonate transporters from cyanobacteria into chloroplasts and prevent CO2 leakage. Recent technological breakthroughs now allow higher plant Rubisco to be engineered and assembled successfully in planta. Novel amino acid sequences can be introduced that have been impossible to reach via normal evolution, potentially enlarging the range of kinetic properties and breaking free from the constraints associated with covariation that have been observed between certain kinetic parameters. Capturing the promise of improved photosynthesis in greater yield potential will require continued efforts to improve carbon allocation within the plant as well as to maintain grain quality and resistance to disease and lodging. PMID:23812345

  5. Inter-comparison of interpolated background nitrogen dioxide concentrations across Greater Manchester, UK

    Science.gov (United States)

    Lindley, S. J.; Walsh, T.

There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with its distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at unsampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, could be recognised in any further usage of the data and also in the assessment of the extent of an exceedance of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques, and the differences in their characteristics result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper provides an examination of uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and annual average concentrations in 2001. Patterns of concentrations demonstrate considerable differences in the estimated spatial pattern of maxima, reflecting the combined effects of chemical processes, topography and meteorology. In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced, but these uncertainties vary widely from area to area.
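One of the simplest interpolators of the kind available in standard GIS packages is inverse distance weighting (IDW). A minimal sketch, assuming monitor coordinates and concentrations as plain arrays (all values hypothetical, not from the Greater Manchester data set):

```python
import numpy as np

def idw(x, y, z, xi, yi, power=2.0):
    """Inverse-distance-weighted estimate at (xi, yi) from monitoring sites
    located at (x, y) with measured concentrations z."""
    d = np.hypot(np.asarray(x, float) - xi, np.asarray(y, float) - yi)
    if np.any(d == 0.0):                  # query point coincides with a monitor
        return float(np.asarray(z, float)[d == 0.0][0])
    w = d ** -power                       # nearer monitors get larger weights
    return float(np.dot(w, z) / w.sum())
```

Different choices of `power` (or a different interpolator altogether, e.g. kriging) produce different surfaces from the same monitors, which is exactly the source of uncertainty the paper examines.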

  6. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  7. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect

    Directory of Open Access Journals (Sweden)

    Dongxu Ren

    2016-04-01

A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect, which periodically superposes the light intensity from different locations of pitches in the mask to produce a consistent energy distribution at a specific wavelength, so that the accuracy of a linear scale can be improved by using the average pitch with different step distances. The method’s theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. Analysis confirmed that increasing the number of repeated exposures of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the multi-repeated photolithography method readily achieves a pitch accuracy of 43 nm at any 10 locations along 1 m, and the whole-length accuracy of the linear scale is better than 1 µm/m.
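The cancellation idea behind the homogenization can be sketched numerically: a periodic (sine-wave) pitch error sampled at uniformly offset exposure positions averages to zero. This is only an illustration of the principle, not the paper's intensity model; the function and amplitude below are hypothetical.

```python
import math

def residual_pitch_error(n_exposures, amplitude=2.0):
    """Mean of a sine-wave pitch error sampled at n uniformly offset exposure
    phases; uniform phase coverage cancels the periodic component exactly."""
    phases = [2 * math.pi * k / n_exposures for k in range(n_exposures)]
    return sum(amplitude * math.sin(p) for p in phases) / n_exposures
```

For any n >= 2 uniformly spaced exposures, the periodic error term averages out (to floating-point precision), which is why repeating exposures improves the averaged pitch.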

  8. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed to computation of time-dependent statistical average gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  9. Environmental characteristics of shallow bottoms used by Greater Flamingo Phoenicopterus roseus in a northern Adriatic lagoon

    Directory of Open Access Journals (Sweden)

    Scarton Francesco

    2017-12-01

Since the beginning of this century, Greater Flamingo Phoenicopterus roseus flocks have been observed regularly feeding in the large extensions of shallow bottoms in the Lagoon of Venice (NE Italy), the largest lagoon along the Mediterranean. Nowadays thousands of flamingos are present throughout the year. Between 2013 and 2017 I collected data on the environmental features of the shallow bottoms used by feeding flocks, along with measurements of the flight initiation distance (FID) of Greater Flamingo in response to the approach of boats and pedestrians. Shallow bottoms were used when covered with approximately 10 to 60 cm of water. All the feeding sites were in open landscapes, with low occurrence of saltmarshes within a radius of 500 m. The bottoms were barely covered with seagrasses (<4% of the surface around the survey points) and were mostly silty. Feeding flocks were on average 1.2 km from the nearest road or dyke, while the mean distance from channels that could be used by boats was about 420 m. The mean FID caused by boats or pedestrians was 241 m ± 117 m (N = 31, ±1 SD), with no significant difference between the two disturbance sources. The use of shallow bottoms by the Greater Flamingo appears governed primarily by the tidal cycle, but boat disturbance probably modifies this effect. According to FID values, a set-back distance of 465 m is suggested to reduce the disturbance caused by boats and pedestrians to flamingo feeding flocks.

  10. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
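The amplitude and phase extraction from a single record can be sketched with the standard FFT construction of the analytic signal (the usual numerical form of the Hilbert transform). The synthetic carrier below is a stand-in for a real fringe pattern, not data from the paper.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal of a real record via the FFT form of the Hilbert transform."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0       # double positive frequencies, zero negative ones
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

# Synthetic fringe-like record: a unit-amplitude carrier with 10 cycles.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
fringe = np.cos(2 * np.pi * 10 * t)

a = analytic_signal(fringe)
envelope = np.abs(a)             # local fringe amplitude
phase = np.unwrap(np.angle(a))   # continuous phase along the record
```

For a pure carrier the recovered envelope is flat and the unwrapped phase grows linearly; on a real Bessel-fringe record the envelope and phase carry the vibration-amplitude information the paper evaluates.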

  11. CPAP Adherence is Associated With Attentional Improvements in a Group of Primarily Male Patients With Moderate to Severe OSA.

    Science.gov (United States)

    Deering, Sean; Liu, Lin; Zamora, Tania; Hamilton, Joanne; Stepnowsky, Carl

    2017-12-15

Obstructive sleep apnea (OSA) is a widespread condition that adversely affects physical health and cognitive functioning. The prevailing treatment for OSA is continuous positive airway pressure (CPAP), but therapeutic benefits are dependent on consistent use. Our goal was to investigate the relationship between CPAP adherence and measures of sustained attention in patients with OSA. Our hypothesis was that the Psychomotor Vigilance Task (PVT) would be sensitive to attention-related improvements resulting from CPAP use. This study was a secondary analysis of a larger clinical trial. Treatment adherence was determined from CPAP use data. Validated sleep-related questionnaires and a sustained-attention and alertness test (PVT) were administered to participants at baseline and at the 6-month time point. Over a 6-month time period, the average CPAP adherence was 3.32 h/night (standard deviation [SD] = 2.53), average improvement in PVT minor lapses was -4.77 (SD = 13.2), and average improvement in PVT reaction time was -73.1 milliseconds (SD = 211). Multiple linear regression analysis showed that higher CPAP adherence was significantly associated with a greater reduction in minor lapses in attention after 6 months of continuous treatment with CPAP therapy (β = -0.72, standard error = 0.34, P = .037). The results of this study showed that higher levels of CPAP adherence were associated with significant improvements in vigilance. Because the PVT is a performance-based measure that is not influenced by prior learning and is not subjective, it may be an important supplement to patient self-reported assessments. Name: Effect of Self-Management on Improving Sleep Apnea Outcomes, URL: https://clinicaltrials.gov/ct2/show/NCT00310310, Identifier: NCT00310310. © 2017 American Academy of Sleep Medicine
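The regression in the abstract has the generic form lapse_change = b0 + b1 · adherence, with a negative slope indicating fewer lapses at higher adherence. A minimal ordinary-least-squares sketch on made-up data (the numbers are hypothetical, not the study's):

```python
import numpy as np

# Hypothetical data: adherence in h/night, and the 6-month change in
# PVT minor lapses (negative = fewer lapses).
adherence = np.array([1.0, 2.0, 3.5, 4.0, 5.5, 6.0])
lapse_change = np.array([-1.0, -2.5, -3.0, -4.5, -5.0, -6.5])

# Ordinary least squares for lapse_change = b0 + b1 * adherence.
X = np.column_stack([np.ones_like(adherence), adherence])
b0, b1 = np.linalg.lstsq(X, lapse_change, rcond=None)[0]
```

With these illustrative values the fitted slope b1 is negative, mirroring the direction (though not the magnitude) of the reported association.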

  12. Is Greater Improvement in Early Self-Regulation Associated with Fewer Behavioral Problems Later in Childhood?

    Science.gov (United States)

    Sawyer, Alyssa C. P.; Miller-Lewis, Lauren R.; Searle, Amelia K.; Sawyer, Michael G.; Lynch, John W.

    2015-01-01

    The aim of this study was to determine whether the extent of improvement in self-regulation achieved between ages 4 and 6 years is associated with the level of behavioral problems later in childhood. Participants were 4-year-old children (n = 510) attending preschools in South Australia. Children's level of self-regulation was assessed using the…

  13. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....

  14. Adaptive radiotherapy with an average anatomy model: Evaluation and quantification of residual deformations in head and neck cancer patients

    International Nuclear Information System (INIS)

    Kranen, Simon van; Mencarelli, Angelo; Beek, Suzanne van; Rasch, Coen; Herk, Marcel van; Sonke, Jan-Jakob

    2013-01-01

Background and purpose: To develop and validate an adaptive intervention strategy for radiotherapy of head-and-neck cancer that accounts for systematic deformations by modifying the planning-CT (pCT) to the average misalignments in daily cone beam CT (CBCT) measured with deformable registration (DR). Methods and materials: Daily CBCT scans (808 scans) for 25 patients were retrospectively registered to the pCT with B-spline DR. The average deformation vector field (DVF) was used to deform the pCT for adaptive intervention. Two strategies were simulated: single intervention after 10 fractions and weekly intervention with an average DVF from the previous week. The model was geometrically validated with the residual misalignment of anatomical landmarks both on bony anatomy (BA; automatically generated) and soft tissue (ST; manually identified). Results: Systematic deformations were 2.5/3.4 mm vector length (BA/ST). Single intervention reduced deformations to 1.5/2.7 mm (BA/ST). Weekly intervention resulted in 1.0/2.2 mm (BA/ST) and accounted better for progressive changes. Fifteen patients had average systematic deformations >2 mm (BA): reductions were 1.1/1.9 mm (single/weekly BA). ST improvements were underestimated due to observer and registration variability. Conclusions: Adaptive intervention with a pCT modified to the average anatomy during treatment successfully reduces systematic deformations. The improved accuracy could possibly be exploited in margin reduction and/or dose escalation.
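The systematic component used to warp the planning CT is simply the per-voxel mean of the daily deformation vector fields. A toy sketch (the array shapes and values are hypothetical; real fields come from B-spline registration of CBCT to pCT):

```python
import numpy as np

# Hypothetical stand-in for daily deformation vector fields:
# shape = (fractions, voxels, 3 displacement components).
dvfs = np.array([
    [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]],   # fraction 1
    [[3.0, 0.0, 0.0], [0.0, 4.0, 0.0]],   # fraction 2
])

# Average over fractions: the systematic deformation applied to the pCT.
mean_dvf = dvfs.mean(axis=0)
```

Random day-to-day deformations tend to cancel in this mean, so the averaged field isolates the systematic anatomy change that the adaptive intervention targets.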

  15. Numerical artifacts in the Generalized Porous Medium Equation: Why harmonic averaging itself is not to blame

    Science.gov (United States)

    Maddix, Danielle C.; Sampaio, Luiz; Gerritsen, Margot

    2018-05-01

The degenerate parabolic Generalized Porous Medium Equation (GPME) poses numerical challenges due to self-sharpening and its sharp corner solutions. For these problems, we show results for two subclasses of the GPME with k(p) differentiable with respect to p, namely the Porous Medium Equation (PME) and the superslow diffusion equation. Spurious temporal oscillations, and nonphysical locking and lagging have been reported in the literature. These issues have been attributed to harmonic averaging of the coefficient k(p) for small p, and arithmetic averaging has been suggested as an alternative. We show that harmonic averaging is not solely responsible and that an improved discretization can mitigate these issues. Here, we investigate the causes of these numerical artifacts using modified equation analysis. The modified equation framework can be used for any type of discretization. We show results for the second order finite volume method. The observed problems with harmonic averaging can be traced to two leading error terms in its modified equation. This is also illustrated numerically through a Modified Harmonic Method (MHM) that can locally modify the critical terms to remove the aforementioned numerical artifacts.
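The contrast between the two averaging choices at a cell face can be seen directly: the harmonic mean vanishes whenever either neighboring coefficient vanishes (shutting off the flux, the behavior linked to locking), while the arithmetic mean does not. A minimal sketch (the function name is hypothetical, not from the paper's MHM):

```python
def face_coeff(k_left, k_right, mode="harmonic"):
    """Interface coefficient between two finite-volume cells."""
    if mode == "harmonic":
        if k_left == 0.0 or k_right == 0.0:
            return 0.0               # a vanishing coefficient blocks the flux
        return 2.0 * k_left * k_right / (k_left + k_right)
    return 0.5 * (k_left + k_right)  # arithmetic alternative
```

At a degenerate face with k = (1, 0), the harmonic value is 0 while the arithmetic value is 0.5; the paper's point is that this difference alone does not explain the artifacts, which trace to leading error terms in the modified equation.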

  16. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  17. Treatment of obstructive sleep apnea syndrome with nasal positive airway pressure improves golf performance.

    Science.gov (United States)

    Benton, Marc L; Friedman, Neil S

    2013-12-15

Obstructive sleep apnea syndrome (OSAS) is associated with impairment of cognitive function, and improvement is often noted with treatment. Golf is a sport that requires a range of cognitive skills. We evaluated the impact of nasal positive airway pressure (PAP) therapy on the handicap index (HI) of golfers with OSAS. Golfers underwent a nocturnal polysomnogram (NPSG) to determine whether they had significant OSAS (respiratory disturbance index > 15). Twelve subjects with a positive NPSG were treated with PAP. The HI, an Epworth Sleepiness Scale (ESS), and a sleep questionnaire (SQ) were submitted upon study entry. After 20 rounds of golf on PAP treatment, the HI was recalculated, and the questionnaires were repeated. A matched control group composed of non-OSAS subjects was studied to assess the impact of the study construct on HI, ESS, and SQ. Statistical comparisons between pre- and post-PAP treatment were calculated. The control subjects demonstrated no significant change in HI, ESS, or SQ during this study, while the OSAS group demonstrated a significant drop in average HI (11.3%, p = 0.01), ESS (p = 0.01), and SQ (p = 0.003). Among the more skilled golfers (defined as HI ≤ 12), the average HI dropped by an even greater degree (31.5%). Average utilization of PAP was 91.4% based on data card reporting. Treatment of OSAS with PAP enhanced performance in golfers with this condition. Treatment adherence was unusually high in this study. Non-medical performance improvement may be a strong motivator for selected subjects with OSAS to seek treatment and maximize adherence.

  18. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both those used by official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research is laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as materials of scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. In the process of conducting the research, the following methods were used: analytical, statistical, calculated-mathematical and graphical. The main result of the research is an option for supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions in a wide range of organizations, where an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, it is common that the average salary at an enterprise is difficult to assess objectively because it is calculated over multiple rates per staff member. In other words, the average salary of
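The distortion the article describes can be illustrated with a toy calculation: when employees hold multiple internal positions ("rates"), the average per rate understates what each person actually earns. Both function names and all numbers below are hypothetical, not the article's correction factor itself.

```python
def average_per_rate(total_payroll, staff_rates):
    """Average salary per staff rate (position), the conventional denominator."""
    return total_payroll / staff_rates

def average_per_person(total_payroll, headcount):
    """Average salary per actual employee, the quantity a correction targets."""
    return total_payroll / headcount
```

With a payroll of 1200 split over 12 rates held by only 10 people, the per-rate average is 100 while each person actually averages 120; a correction factor of the kind proposed would bridge this gap.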

  19. 7 CFR 1437.11 - Average market price and payment factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average market price and payment factors. 1437.11... ASSISTANCE PROGRAM General Provisions § 1437.11 Average market price and payment factors. (a) An average... average market price by the applicable payment factor (i.e., harvested, unharvested, or prevented planting...

  20. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases.
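For reference, the q-average of an observable A over a distribution {p_i} is the escort-weighted mean, sum(p_i^q A_i) / sum(p_i^q), which reduces to the ordinary average at q = 1. A minimal sketch (the function name is an assumption):

```python
def q_average(probs, values, q):
    """Escort-distribution (q-)average used in nonextensive statistical mechanics:
    sum(p**q * a) / sum(p**q); q = 1 recovers the ordinary expectation."""
    num = sum(p ** q * a for p, a in zip(probs, values))
    den = sum(p ** q for p in probs)
    return num / den
```

Because the weights p^q are renormalized, small deformations of {p_i} can shift the q-average disproportionately for q != 1, which is the instability the paper analyzes.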

  1. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
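The "moving average of the previous decade" is a trailing window mean over annual values. A minimal sketch (the function name and series are hypothetical, not the study's misery-index data):

```python
def trailing_average(series, window):
    """Trailing moving average: mean of the current and previous window-1 values.
    The first window-1 positions have no full history and are omitted."""
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]
```

The study's peak goodness of fit at an 11-year window corresponds to `trailing_average(misery_index, 11)` in this notation.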

  2. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
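The classical pairwise gossip scheme the paper builds on can be sketched in a few lines: two randomly chosen nodes repeatedly replace their values by the pair's mean, so the global sum is conserved and every node converges toward the average. This is the baseline scheme, not the paper's asynchronous or reinforcement-learning variants; the function name and parameters are assumptions.

```python
import random

def gossip_average(values, steps=10000, seed=0):
    """Pairwise gossip: two random nodes repeatedly adopt the pair's mean.
    The sum is invariant at each step, so all nodes drift toward the average."""
    rng = random.Random(seed)
    v = list(values)
    for _ in range(steps):
        i, j = rng.sample(range(len(v)), 2)
        v[i] = v[j] = 0.5 * (v[i] + v[j])
    return v
```

The paper's point is that naive asynchronous versions of this update can fail to converge to the desired average, motivating their reinforcement-learning alternative.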

  3. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  4. Comparing daily temperature averaging methods: the role of surface and atmosphere variables in determining spatial and seasonal variability

    Science.gov (United States)

    Bernhardt, Jase; Carleton, Andrew M.

    2018-05-01

    The two main methods for determining the average daily near-surface air temperature, twice-daily averaging (i.e., [Tmax+Tmin]/2) and hourly averaging (i.e., the average of 24 hourly temperature measurements), typically show differences associated with the asymmetry of the daily temperature curve. To quantify the relative influence of several land surface and atmosphere variables on the two temperature averaging methods, we correlate data for 215 weather stations across the Contiguous United States (CONUS) for the period 1981-2010 with the differences between the two temperature-averaging methods. The variables are land use-land cover (LULC) type, soil moisture, snow cover, cloud cover, atmospheric moisture (i.e., specific humidity, dew point temperature), and precipitation. Multiple linear regression models explain the spatial and monthly variations in the difference between the two temperature-averaging methods. We find statistically significant correlations between both the land surface and atmosphere variables studied with the difference between temperature-averaging methods, especially for the extreme (i.e., summer, winter) seasons (adjusted R2 > 0.50). Models considering stations with certain LULC types, particularly forest and developed land, have adjusted R2 values > 0.70, indicating that both surface and atmosphere variables control the daily temperature curve and its asymmetry. This study improves our understanding of the role of surface and near-surface conditions in modifying thermal climates of the CONUS for a wide range of environments, and their likely importance as anthropogenic forcings—notably LULC changes and greenhouse gas emissions—continues.
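The two averaging methods compared in the study can be reproduced directly on a synthetic day: because the daily temperature curve is asymmetric (a short warm afternoon over a long cool night in this made-up example), (Tmax+Tmin)/2 and the 24-hour mean disagree. All numbers below are hypothetical.

```python
import math

# Synthetic, asymmetric daily curve: a daytime sine bump (peaking at 18 degC
# at hour 12) over a flat nighttime temperature of 8 degC.
hourly = [10 + 8 * math.sin(math.pi * (h - 6) / 12) if 6 <= h <= 18 else 8.0
          for h in range(24)]

t_hourly = sum(hourly) / 24                  # hourly averaging
t_minmax = (max(hourly) + min(hourly)) / 2   # twice-daily averaging
```

Here the twice-daily method gives 13.0 degC while the hourly mean is about 11.6 degC; the long flat night pulls the true mean below the midpoint of the extremes, the kind of asymmetry-driven difference the study relates to surface and atmosphere variables.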

  5. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy....

  6. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  7. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
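The Hargreaves model named in the abstract is commonly written as ET0 = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax − Tmin), with Ra the extraterrestrial radiation expressed as an equivalent evaporation depth. A sketch of that standard form (the function name and the example numbers are assumptions, not values from the Bolivian maps):

```python
import math

def hargreaves_et0(t_mean, t_max, t_min, ra_mm_per_day):
    """Hargreaves-Samani reference evapotranspiration (mm/day); temperatures in
    degC, ra_mm_per_day = extraterrestrial radiation as equivalent evaporation."""
    return 0.0023 * ra_mm_per_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)
```

The monthly water balance in the study is then precipitation minus this atmospheric evaporative demand for each 1 km cell.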

  8. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  9. Fractures of the greater trochanter following total hip replacement.

    Science.gov (United States)

    Brun, Ole-Christian L; Maansson, Lukas

    2013-01-01

We studied the incidence of greater trochanteric fractures at our department following THR. In all, we examined 911 patients retrospectively and found the occurrence of a greater trochanteric fracture to be 3%. Patients with fractures had significantly poorer outcomes on the Oxford Hip Score, pain VAS, satisfaction VAS and EQ-5D compared to THR without fractures. Greater trochanteric fracture is one of the most common complications following THR. It has previously been thought to have little impact on the overall outcome, but our study suggests otherwise.

  10. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  11. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated from the averaged statistics.

  12. Control of underactuated driftless systems using higher-order averaging theory

    OpenAIRE

    Vela, Patricio A.; Burdick, Joel W.

    2003-01-01

    This paper applies a recently developed "generalized averaging theory" to construct stabilizing feedback control laws for underactuated driftless systems. These controls exponentially stabilize in the average; the actual system may orbit around the average. Conditions under which the orbit collapses to the averaged trajectory are given. An example validates the theory, demonstrating its utility.

  13. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  14. Greater temperature sensitivity of plant phenology at colder sites

    DEFF Research Database (Denmark)

    Prevey, Janet; Vellend, Mark; Ruger, Nadja

    2017-01-01

    Warmer temperatures are accelerating the phenology of organisms around the world. Temperature sensitivity of phenology might be greater in colder, higher latitude sites than in warmer regions, in part because small changes in temperature constitute greater relative changes in thermal balance...

  15. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
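    The core averaging step described above, sampling each digitized curve along equiangular rays and averaging the radial distances across feet, can be sketched in a few lines. This is a simplified single-arc version under assumed input conventions (the function name and the polar input format are ours, not from the paper; the actual method uses two overlapping arcs about separate ray centres):

    ```python
    import numpy as np

    def average_outline(outlines, n_rays=180):
        """Average closed outlines by sampling each at equiangular rays.

        `outlines` is a list of (theta, r) pairs: polar samples of one
        digitized curve about a common ray centre (hypothetical format).
        Returns the ray angles and the mean radial distance at each ray.
        """
        rays = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
        radii = []
        for theta, r in outlines:
            order = np.argsort(theta)
            # Interpolate the radius at each ray angle, wrapping at 2*pi.
            radii.append(np.interp(rays, theta[order], r[order],
                                   period=2.0 * np.pi))
        return rays, np.mean(radii, axis=0)

    # Sanity check: two concentric circles of radius 1 and 3 average to 2.
    t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    rays, mean_r = average_outline([(t, np.ones_like(t)),
                                    (t, 3.0 * np.ones_like(t))])
    ```

    In the study itself, the averaging is done within groups of foot lengths varying by +/- 2.25 mm, so each group yields one mean curve per arc.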

  16. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
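    The two quantities involved here are simple to compute: the misery index is the sum of inflation and unemployment rates, and the comparison series is a trailing moving average of it. A minimal sketch (function names and the toy series are ours; the paper reports the best fit at an 11-year window):

    ```python
    import numpy as np

    def misery_index(inflation, unemployment):
        # Economic misery index: sum of inflation and unemployment rates.
        return np.asarray(inflation, dtype=float) + np.asarray(unemployment, dtype=float)

    def trailing_average(x, window=11):
        # Moving average over the previous `window` years.
        x = np.asarray(x, dtype=float)
        return np.convolve(x, np.ones(window) / window, mode="valid")

    # Toy series: constant 2% inflation and 5% unemployment give misery 7,
    # so every 11-year trailing average is also 7.
    m = misery_index([2.0] * 20, [5.0] * 20)
    avg = trailing_average(m, window=11)
    ```

    A 20-year series with an 11-year window yields 10 averaged values, one per year with a complete preceding decade.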

  17. Greater Somalia, the never-ending dream?

    DEFF Research Database (Denmark)

    Zoppi, Marco

    2015-01-01

    This paper provides an historical analysis of the concept of Greater Somalia, the nationalist project that advocates the political union of all Somali-speaking people, including those inhabiting areas in current Djibouti, Ethiopia and Kenya. The Somali territorial unification project of “lost...

  18. What about improving the productivity of electric power plants

    International Nuclear Information System (INIS)

    Lawroski, H.; Knecht, P.D.; Prideaux, D.L.; Zahner, R.R.

    1976-01-01

    The FEA in April of 1974 established an Interagency Task Group on Power Plant Reliability, charged with the broad objective of improving the productivity of existing and planned large fossil-fueled and nuclear power plants. It took approximately 11 months for the task group to publish a report, ''Report on Improving the Productivity of Electrical Power Plants'' (FEA-263-G), a detailed analysis and comparison of successful and below-average-performance power plants. The Nuclear Service Corp. portion of this study examined four large central-station power plants: two fossil (coal) and two nuclear plants. Only plants with electrical generation capacities greater than 400 MWe were considered. The study covered staff technical skill, engineering support, the QA program, plant/corporate coordination, operating philosophy, maintenance programs, federal/state regulations, network control, and equipment problems. Personnel were interviewed, and checklists providing input from 21 or more plant and corporate personnel at each utility were utilized. Reports and other documentation were also reviewed. It was recognized early that productivity is closely allied to technical skills and positive motivation; for this reason, considerable attention was given to people in this study.

  19. Improved drought monitoring in the Greater Horn of Africa by combining meteorological and remote sensing based indicators

    DEFF Research Database (Denmark)

    Horion, Stéphanie Marie Anne F; Kurnik, Blaz; Barbosa, Paulo

    2010-01-01

    ..., and therefore to better trigger timely and appropriate actions in the field. In this study, meteorological and remote sensing based drought indicators were compared over the Greater Horn of Africa in order to better understand: (i) how they depict historical drought events; (ii) if they could be combined...... distribution. Two remote sensing based indicators were tested: the Normalized Difference Water Index (NDWI) derived from SPOT-VEGETATION and the Global Vegetation Index (VGI) derived from MERIS. The first index is sensitive to change in the leaf water content of vegetation canopies, while the second is a proxy...... of the amount and vigour of vegetation. For both indexes, anomalies were estimated using the available satellite archives. Cross-correlations between remote sensing based anomalies and SPI were analysed for five land covers (forest, shrubland, grassland, sparse grassland, cropland and bare soil) over different......

  20. Self-mastery among Chinese Older Adults in the Greater Chicago Area

    Directory of Open Access Journals (Sweden)

    Xinqi Dong

    2014-09-01

    Background: Self-mastery is an important psychological resource to cope with stressful situations. However, we have limited understanding of self-mastery among minority aging populations. Objective: This study aims to examine the presence and levels of self-mastery among U.S. Chinese older adults. Methods: Data were drawn from the PINE study, a population-based survey of U.S. Chinese older adults in the Greater Chicago area. Guided by a community-based participatory research approach, a total of 3,159 Chinese older adults aged 60 and above were surveyed. A Chinese version of the Self-Mastery Scale was used to assess self-mastery. Results: Out of the 7-item Chinese Self-Mastery Scale, approximately 42.8% to 87.5% of Chinese older adults experienced some degree of self-mastery in their lives. Older adults with no formal education and the oldest-old aged 85 and over had the lowest level of self-mastery in our study. A higher mastery level was associated with being married, having fewer children, better self-reported health status, better quality of life, and positive health changes. Conclusion: Although self-mastery is commonly experienced among the Chinese aging population in the Greater Chicago area, specific subgroups are still vulnerable. Future longitudinal studies are needed to improve the understanding of risk factors and outcomes associated with self-mastery among Chinese older adults.

  1. Average Albedos of Close-in Super-Earths and Super-Neptunes from Statistical Analysis of Long-cadence Kepler Secondary Eclipse Data

    Science.gov (United States)

    Sheets, Holly A.; Deming, Drake

    2017-10-01

    We present the results of our work to determine the average albedo for small, close-in planets in the Kepler candidate catalog. We have adapted our method of averaging short-cadence light curves of multiple Kepler planet candidates to long-cadence data, in order to detect an average albedo for the group of candidates. Long-cadence data exist for many more candidates than the short-cadence data, and so we separate the candidates into smaller radius bins than in our previous work: 1–2 R⊕, 2–4 R⊕, and 4–6 R⊕. We find that, on average, all three groups appear darker than suggested by the short-cadence results, but not as dark as many hot Jupiters. The average geometric albedos for the three groups are 0.11 ± 0.06, 0.05 ± 0.04, and 0.23 ± 0.11, respectively, for the case where heat is uniformly distributed about the planet. If heat redistribution is inefficient, the albedos are even lower, since there will be a greater thermal contribution to the total light from the planet. We confirm that newly identified false-positive Kepler Object of Interest (KOI) 1662.01 is indeed an eclipsing binary at twice the period listed in the planet candidate catalog. We also newly identify planet candidate KOI 4351.01 as an eclipsing binary, and we report a secondary eclipse measurement for Kepler-4b (KOI 7.01) of ~7.50 ppm at a phase of ~0.7, indicating that the planet is on an eccentric orbit.

  2. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua

    2012-06-01

    As the electromagnetic spectrum becomes increasingly scarce, improving spectral efficiency is extremely important for the sustainable development of wireless communication systems and services. Integrating cooperative relaying techniques into spectrum-sharing cognitive radio systems sheds new light on higher spectral efficiency. In this paper, we analyze the end-to-end performance of cooperative amplify-and-forward (AF) relaying in spectrum-sharing systems. In order to achieve the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-m fading parameters of interference channels (for mathematical tractability, the desired channels from secondary source to relay and from relay to secondary destination are assumed to be subject to Rayleigh fading). Also, both partial and opportunistic relay-selection strategies are exploited to further enhance system performance. Based on the exact distribution functions of the end-to-end signal-to-noise ratio (SNR) obtained herein, the outage probability, average symbol error probability, diversity order, and ergodic capacity of the system under study are analytically investigated. Our results show that system performance is dominated by the resource constraints and improves slowly with increasing average SNR. Furthermore, a larger Nakagami-m fading parameter on the interference channels deteriorates system performance slightly. On the other hand, when interference power constraints are stringent, opportunistic relay selection can be exploited to improve system performance significantly. All analytical results are corroborated by simulation results and they are shown to be efficient tools for exact evaluation of system performance.

  3. Assessing Human Impacts on the Greater Akaki River, Ethiopia ...

    African Journals Online (AJOL)

    We assessed the impacts of human activities on the Greater Akaki River using physicochemical parameters and macroinvertebrate metrics. Physicochemical samples and macroinvertebrates were collected bimonthly from eight sites established on the Greater Akaki River from February 2006 to April 2006. Eleven metrics ...

  4. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.

  5. Greater Emphasis on Female Attractiveness in Homo Sapiens: A Revised Solution to an Old Evolutionary Riddle

    Directory of Open Access Journals (Sweden)

    Jonathan Gottschall

    2007-04-01

    Substantial evidence from psychology and cross-cultural anthropology supports a general rule of greater emphasis on female physical attractiveness in Homo sapiens. As sensed by Darwin (1871) and clarified by Trivers (1972), generally higher female parental investment is a key determinant of a common pattern of sexual selection in which male animals are more competitive, more eager sexually and more conspicuous in courtship display, ornamentation, and coloration. Therefore, given the larger minimal and average parental investment of human females, keener physical attractiveness pressure among women has long been considered an evolutionary riddle. This paper briefly surveys previous thinking on the question, before offering a revised explanation for why we should expect humans to sharply depart from the general zoological pattern of greater emphasis on male attractiveness. This contribution hinges on the argument that humans have been seen as anomalies mainly because we have been held up to the wrong zoological comparison groups. I argue that humans are a partially sex-role reversed species, and more emphasis on female physical attractiveness is relatively common in such species. This solution to the riddle, like those of other evolutionists, is based on peculiarities in human mating behavior, so this paper is also presented as a refinement of current thinking about the evolution of human mating preferences.

  6. Molecular insights into the biology of Greater Sage-Grouse

    Science.gov (United States)

    Oyler-McCance, Sara J.; Quinn, Thomas W.

    2011-01-01

    Recent research on Greater Sage-Grouse (Centrocercus urophasianus) genetics has revealed some important findings. First, multiple paternity in broods is more prevalent than previously thought, and leks do not comprise kin groups. Second, the Greater Sage-Grouse is genetically distinct from the congeneric Gunnison sage-grouse (C. minimus). Third, the Lyon-Mono population in the Mono Basin, spanning the border between Nevada and California, has unique genetic characteristics. Fourth, the previous delineation of western (C. u. phaios) and eastern Greater Sage-Grouse (C. u. urophasianus) is not supported genetically. Fifth, two isolated populations in Washington show indications that genetic diversity has been lost due to population declines and isolation. This chapter examines the use of molecular genetics to understand the biology of Greater Sage-Grouse for the conservation and management of this species and put it into the context of avian ecology based on selected molecular studies.

  7. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent, with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure......

  8. Mitigation effectiveness for improving nesting success of greater sage-grouse influenced by energy development

    Science.gov (United States)

    Kirol, Christopher P.; Sutphin, Andrew L.; Bond, Laura S.; Fuller, Mark R.; Maechtle, Thomas L.

    2015-01-01

    Sagebrush Artemisia spp. habitats being developed for oil and gas reserves are inhabited by sagebrush obligate species — including the greater sage-grouse Centrocercus urophasianus (sage-grouse) that is currently being considered for protection under the U.S. Endangered Species Act. Numerous studies suggest increasing oil and gas development may exacerbate species extinction risks. Therefore, there is a great need for effective on-site mitigation to reduce impacts to co-occurring wildlife such as sage-grouse. Nesting success is a primary factor in avian productivity and declines in nesting success are also thought to be an important contributor to population declines in sage-grouse. From 2008 to 2011 we monitored 296 nests of radio-marked female sage-grouse in a natural gas (NG) field in the Powder River Basin, Wyoming, USA, and compared nest survival in mitigated and non-mitigated development areas and relatively unaltered areas to determine if specific mitigation practices were enhancing nest survival. Nest survival was highest in relatively unaltered habitats followed by mitigated, and then non-mitigated NG areas. Reservoirs used for holding NG discharge water had the greatest support as having a direct relationship to nest survival. Within a 5-km² area surrounding a nest, the probability of nest failure increased by about 15% for every 1.5 km increase in reservoir water edge. Reducing reservoirs was a mitigation focus and sage-grouse nesting in mitigated areas were exposed to almost half of the amount of water edge compared to those in non-mitigated areas. Further, we found that an increase in sagebrush cover was positively related to nest survival. Consequently, mitigation efforts focused on reducing reservoir construction and reducing surface disturbance, especially when the surface disturbance results in sagebrush removal, are important to enhancing sage-grouse nesting success.

  9. INDUSTRIAL LAND DEVELOPMENT AND MANUFACTURING DECONCENTRATION IN GREATER JAKARTA

    NARCIS (Netherlands)

    Hudalah, Delik; Viantari, Dimitra; Firman, Tommy; Woltjer, Johan

    2013-01-01

    Industrial land development has become a key feature of urbanization in Greater Jakarta, one of the largest metropolitan areas in Southeast Asia. Following Suharto's market-oriented policy measures in the late 1980s, private developers have dominated the land development projects in Greater Jakarta.

  10. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  11. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  12. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  13. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
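    The note's central observation can be illustrated directly: regressing a suitably transformed sample on a constant recovers each kind of average as the fitted coefficient. A minimal sketch under assumed transformations (variable names are ours; the note itself works within a classroom regression framework):

    ```python
    import numpy as np

    y = np.array([2.0, 4.0, 8.0])
    X = np.ones((len(y), 1))  # intercept-only design matrix

    # Arithmetic mean: OLS of y on a constant.
    arith = np.linalg.lstsq(X, y, rcond=None)[0][0]

    # Geometric mean: OLS of log(y) on a constant, then exponentiate.
    geom = np.exp(np.linalg.lstsq(X, np.log(y), rcond=None)[0][0])

    # Harmonic mean: OLS of 1/y on a constant, then take the reciprocal.
    harm = 1.0 / np.linalg.lstsq(X, 1.0 / y, rcond=None)[0][0]
    ```

    For the sample {2, 4, 8} this gives an arithmetic mean of 14/3, a geometric mean of 4, and a harmonic mean of 24/7; a weighted average follows the same pattern with weighted least squares.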

  14. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  15. The Effect of Cumulus Cloud Field Anisotropy on Domain-Averaged Solar Fluxes and Atmospheric Heating Rates

    Science.gov (United States)

    Hinkelman, Laura M.; Evans, K. Franklin; Clothiaux, Eugene E.; Ackerman, Thomas P.; Stackhouse, Paul W., Jr.

    2006-01-01

    Cumulus clouds can become tilted or elongated in the presence of wind shear. Nevertheless, most studies of the interaction of cumulus clouds and radiation have assumed these clouds to be isotropic. This paper describes an investigation of the effect of fair-weather cumulus cloud field anisotropy on domain-averaged solar fluxes and atmospheric heating rate profiles. A stochastic field generation algorithm was used to produce twenty three-dimensional liquid water content fields based on the statistical properties of cloud scenes from a large eddy simulation. Progressively greater degrees of x-z plane tilting and horizontal stretching were imposed on each of these scenes, so that an ensemble of scenes was produced for each level of distortion. The resulting scenes were used as input to a three-dimensional Monte Carlo radiative transfer model. Domain-average transmission, reflection, and absorption of broadband solar radiation were computed for each scene along with the average heating rate profile. Both tilt and horizontal stretching were found to significantly affect calculated fluxes, with the amount and sign of flux differences depending strongly on sun position relative to cloud distortion geometry. The mechanisms by which anisotropy interacts with solar fluxes were investigated by comparisons to independent pixel approximation and tilted independent pixel approximation computations for the same scenes. Cumulus anisotropy was found to most strongly impact solar radiative transfer by changing the effective cloud fraction, i.e., the cloud fraction when the field is projected on a surface perpendicular to the direction of the incident solar beam.

  16. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects.

  17. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  18. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  19. Serpent-COREDAX analysis of CANDU-6 time-average model

    Energy Technology Data Exchange (ETDEWEB)

    Motalab, M.A.; Cho, B.; Kim, W.; Cho, N.Z.; Kim, Y., E-mail: yongheekim@kaist.ac.kr [Korea Advanced Inst. of Science and Technology (KAIST), Dept. of Nuclear and Quantum Engineering Daejeon (Korea, Republic of)

    2015-07-01

    COREDAX-2 is a nuclear core analysis code that has adopted the Analytic Function Expansion Nodal (AFEN) methodology developed in Korea. The AFEN method outperforms other conventional nodal methods in terms of accuracy. To evaluate the possibility of CANDU-type core analysis with COREDAX-2, a time-average analysis code system was developed. The two-group homogenized cross-sections were calculated using the Monte Carlo code Serpent2. A stand-alone time-average module was developed to determine the time-average burnup distribution in the core for a given fuel management strategy. The coupled Serpent-COREDAX-2 calculation converges to an equilibrium time-average model for the CANDU-6 core. (author)

  20. Improving employee productivity through improved health.

    Science.gov (United States)

    Mitchell, Rebecca J; Ozminkowski, Ronald J; Serxner, Seth

    2013-10-01

    The objective of this study was to estimate productivity-related savings associated with employee participation in health promotion programs. Propensity score weighting and multiple regression techniques were used to estimate savings, adjusting for demographic and health status differences between participants who engaged in one or more telephonic health management programs and nonparticipants who were eligible for but did not engage in these programs. Employees who participated in a program and successfully improved their health care or lifestyle showed significant reductions in lost work time. These employees saved an average of $353 per person per year, reflecting about 10.3 hours of additional productive time annually compared with similar but nonparticipating employees. Participating in health promotion programs can help improve productivity levels among employees and save money for their employers.

  1. Transitions of Care from Child and Adolescent Mental Health Services to Adult Mental Health Services (TRACK Study: A study of protocols in Greater London

    Directory of Open Access Journals (Sweden)

    Ford Tamsin

    2008-06-01

    Abstract Background Although young people's transition from Child and Adolescent Mental Health Services (CAMHS) to Adult Mental Health Services (AMHS) in England is a significant health issue for service users, commissioners and providers, there is little evidence available to guide service development. The TRACK study aims to identify factors which facilitate or impede effective transition from CAMHS to AMHS. This paper presents findings from a survey of transition protocols in Greater London. Methods A questionnaire survey (Jan-April 2005) of Greater London CAMHS to identify transition protocols and collect data on team size, structure, transition protocols, population served and referral rates to AMHS. Identified transition protocols were subjected to content analysis. Results Forty-two of the 65 teams contacted (65%) responded to the survey. Teams varied in type (generic/targeted/in-patient), catchment area (locality-based, wider or national) and transition boundaries with AMHS. The estimated annual average number of cases considered suitable for transfer to AMHS per CAMHS team (mean 12.3, range 0–70, SD 14.5, n = 37) was greater than the annual average number of cases actually accepted by AMHS (mean 8.3, range 0–50, SD 9.5, n = 33). In April 2005, there were 13 active and 2 draft protocols in Greater London. Protocols were largely similar in stated aims and policies, but differed in key procedural details, such as joint working between CAMHS and AMHS and whether protocols were shared at Trust or locality level. While the centrality of service users' involvement in the transition process was identified, no protocol specified how users should be prepared for transition. A major omission from protocols was procedures to ensure continuity of care for patients not accepted by AMHS. Conclusion At least 13 transition protocols were in operation in Greater London in April 2005. Not all protocols meet all requirements set by government policy. Variation in

  2. Coping Strategies Applied to Comprehend Multistep Arithmetic Word Problems by Students with Above-Average Numeracy Skills and Below-Average Reading Skills

    Science.gov (United States)

    Nortvedt, Guri A.

    2011-01-01

    This article discusses how 13-year-old students with above-average numeracy skills and below-average reading skills cope with comprehending word problems. Compared to other students who are proficient in numeracy and are skilled readers, these students are more disadvantaged when solving single-step and multistep arithmetic word problems. The…

  3. Autoregressive moving average fitting for real standard deviation in Monte Carlo power distribution calculation

    International Nuclear Information System (INIS)

    Ueki, Taro

    2010-01-01

    The noise propagation of tallies in the Monte Carlo power method can be represented by the autoregressive moving average process of orders p and p-1 (ARMA(p,p-1)), where p is an integer larger than or equal to two. The formula for the autocorrelation of ARMA(p,q), p≥q+1, indicates that ARMA(3,2) fitting is equivalent to lumping the eigenmodes of fluctuation propagation into three modes, namely the slow, intermediate and fast attenuation modes. Therefore, ARMA(3,2) fitting was applied to the real standard deviation estimation of fuel assemblies at particular heights. The numerical results show that straightforward ARMA(3,2) fitting is promising, but a stability issue must be resolved before incorporation into the distributed version of production Monte Carlo codes. The same numerical results reveal that the average performance of ARMA(3,2) fitting is equivalent to that of the batch method in MCNP with a batch size larger than one hundred and smaller than two hundred cycles for a 1100 MWe pressurized water reactor. The bias correction of low lag autocovariances in MVP/GMVP is demonstrated to have the potential of improving the average performance of ARMA(3,2) fitting. (author)
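
    The batch-method baseline mentioned in the abstract can be sketched in a few lines (an illustrative toy with a synthetic AR(1) tally series, not code from the paper; the batch size of 200 cycles follows the range quoted above):

    ```python
    import random

    def batch_std_of_mean(series, batch_size):
        """Estimate the "real" standard deviation of the mean of a correlated
        cycle-wise tally series from batch means: grouping cycles into batches
        de-correlates the batch means, unlike naive per-cycle statistics."""
        n_batches = len(series) // batch_size
        means = [sum(series[i * batch_size:(i + 1) * batch_size]) / batch_size
                 for i in range(n_batches)]
        grand = sum(means) / n_batches
        var = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
        return (var / n_batches) ** 0.5

    # synthetic AR(1) series mimicking cycle-to-cycle correlation of a tally
    random.seed(1)
    x, series = 0.0, []
    for _ in range(20000):
        x = 0.9 * x + random.gauss(0.0, 1.0)
        series.append(x)

    mean = sum(series) / len(series)
    naive = (sum((v - mean) ** 2 for v in series)
             / (len(series) - 1) / len(series)) ** 0.5
    batched = batch_std_of_mean(series, batch_size=200)
    # batched exceeds naive: positive autocorrelation inflates the true
    # uncertainty of the mean, which per-cycle statistics miss
    ```

    The same underestimation is what motivates replacing per-cycle statistics with either batching or the ARMA(3,2) fit discussed in the abstract.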

  4. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
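
    A minimal sketch of the average-of-delta idea (illustrative only; the paper's spreadsheet model also accounts for biological variation, and the data and window size here are invented):

    ```python
    def average_of_delta(results_by_patient, window):
        """Average of the most recent `window` delta values across patients.

        Each delta is (current result - previous result of the same patient);
        with a stable assay the deltas scatter around zero, so a shift in this
        running average flags a change in assay performance.
        """
        deltas = []
        last = {}
        for patient, value in results_by_patient:  # assumed in time order
            if patient in last:
                deltas.append(value - last[patient])
            last[patient] = value
        recent = deltas[-window:]
        return sum(recent) / len(recent) if recent else None

    # stable period: repeat tests reproduce earlier values closely
    stream = [("p1", 5.0), ("p2", 7.0), ("p1", 5.1), ("p2", 6.9),
              ("p1", 5.0), ("p2", 7.1)]
    baseline = average_of_delta(stream, window=4)   # near zero

    # after an added assay bias of +1.0, every new delta carries the bias
    biased = stream + [("p1", 6.0), ("p2", 8.1)]
    shifted = average_of_delta(biased, window=4)    # pulled well above zero
    ```

    As in the paper's modelling, a larger window tightens the baseline scatter but slows detection of an added bias.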

  5. Strategies for Talent Management: Greater Philadelphia Companies in Action

    Science.gov (United States)

    Council for Adult and Experiential Learning (NJ1), 2008

    2008-01-01

    Human capital is one of the critical issues that impacts the Greater Philadelphia region's ability to grow and prosper. The CEO Council for Growth (CEO Council) is committed to ensuring a steady and talented supply of quality workers for this region. "Strategies for Talent Management: Greater Philadelphia Companies in Action" provides…

  6. Greater oil investment opportunities

    International Nuclear Information System (INIS)

    Arenas, Ismael Enrique

    1997-01-01

    Geologically speaking, Colombia is a very attractive country for the world oil community, and in line with this view new and important steps are being taken to reinforce the oil sector: expansion of the exploratory frontier by including a larger number of sedimentary areas, and the adoption of innovative contracting instruments. Colombia has to offer: greater economic incentives for the exploration of new areas to expand the exploratory frontier; stimulation of exploration in areas with prospectivity for small fields, where companies may offer Ecopetrol a participation in production over and above royalties, without its participating in the investments and costs of these fields; and more favorable conditions for natural gas seeking projects, in comparison with those governing the terms for oil

  7. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  8. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
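
    The exponential averaging recurrence analyzed in the abstract can be sketched as follows (a toy illustration, assuming the common form S_k = (1-α)·S_{k-1} + α·P_k with smoothing factor α as the inverse time constant; the χ²-with-2-dof periodogram values are simulated as unit-mean exponentials, to which that distribution reduces after scaling):

    ```python
    import random

    def exponential_average_psd(periodograms, alpha):
        """Exponentially average successive periodograms into a PSD estimate.

        S_k = (1 - alpha) * S_{k-1} + alpha * P_k: older periodograms are
        down-weighted geometrically, and a small alpha (long time constant)
        drives the estimate toward the near-Gaussian limit noted in the
        abstract.
        """
        s = periodograms[0][:]
        for p in periodograms[1:]:
            s = [(1 - alpha) * si + alpha * pi for si, pi in zip(s, p)]
        return s

    # toy periodograms of white noise: scaled chi^2_2 estimates with mean 1.0
    random.seed(0)
    n_bins = 64
    pgrams = [[random.expovariate(1.0) for _ in range(n_bins)]
              for _ in range(400)]
    psd = exponential_average_psd(pgrams, alpha=0.05)
    # each averaged bin now scatters tightly around the true PSD level of 1.0
    ```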

  9. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  10. Average rainwater pH, concepts of atmospheric acidity, and buffering in open systems

    Energy Technology Data Exchange (ETDEWEB)

    Liljestrand, H.M.

    1985-01-01

    The system of water equilibrated with a constant partial pressure of CO2, as a reference point for pH acidity-alkalinity relationships, has nonvolatile acidity and alkalinity components as conservative quantities, but not [H+]. Simple algorithms are presented for the determination of the average pH for combinations of samples both above and below pH 5.6. Averaging the nonconservative quantity [H+] yields erroneously low mean pH values. To extend the open CO2 system to include other volatile atmospheric acids and bases distributed among the gas, liquid and particulate matter phases, a theoretical framework for atmospheric acidity is presented. Within certain oxidation-reduction limitations, the total atmospheric acidity (but not free acidity) is a conservative quantity. The concept of atmospheric acidity is applied to air-water systems approximating aerosols, fogwater, cloudwater and rainwater. The buffer intensity in hydrometeors is described as a function of net strong acidity, partial pressures of acid and base gases and the water to air ratio. For high liquid to air volume ratios, the equilibrium partial pressures of trace acid and base gases are set by the pH or net acidity controlled by the nonvolatile acid and base concentrations. For low water to air volume ratios as well as stationary state systems such as precipitation scavenging with continuous emissions, the partial pressures of trace gases (NH3, HCl, HNO3, SO2, and CH3COOH) appear to be of greater or equal importance as carbonate species as buffers in the aqueous phase.

  11. Average rainwater pH, concepts of atmospheric acidity, and buffering in open systems

    Science.gov (United States)

    Liljestrand, Howard M.

    The system of water equilibrated with a constant partial pressure of CO 2, as a reference point for pH acidity-alkalinity relationships, has nonvolatile acidity and alkalinity components as conservative quantities, but not [H +]. Simple algorithms are presented for the determination of the average pH for combinations of samples both above and below pH 5.6. Averaging the nonconservative quantity [H +] yields erroneously low mean pH values. To extend the open CO 2 system to include other volatile atmospheric acids and bases distributed among the gas, liquid and particulate matter phases, a theoretical framework for atmospheric acidity is presented. Within certain oxidation-reduction limitations, the total atmospheric acidity (but not free acidity) is a conservative quantity. The concept of atmospheric acidity is applied to air-water systems approximating aerosols, fogwater, cloudwater and rainwater. The buffer intensity in hydrometeors is described as a function of net strong acidity, partial pressures of acid and base gases and the water to air ratio. For high liquid to air volume ratios, the equilibrium partial pressures of trace acid and base gases are set by the pH or net acidity controlled by the nonvolatile acid and base concentrations. For low water to air volume ratios as well as stationary state systems such as precipitation scavenging with continuous emissions, the partial pressures of trace gases (NH 3, HCl, HNO 3, SO 2 and CH 3COOH) appear to be of greater or equal importance as carbonate species as buffers in the aqueous phase.
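
    The averaging pitfall described above is easy to demonstrate numerically (an illustrative sketch only; the paper's actual algorithms average the conservative nonvolatile acidity and alkalinity components relative to the pH 5.6 CO2 reference, not the pH values themselves):

    ```python
    import math

    def mean_ph_of_concentrations(ph_values):
        """pH of the arithmetic mean [H+]: because [H+] is nonconservative,
        this is dominated by the most acidic sample and biases the mean low."""
        h = [10.0 ** (-ph) for ph in ph_values]
        return -math.log10(sum(h) / len(h))

    def mean_of_ph_values(ph_values):
        """Plain arithmetic mean of the pH values, for contrast."""
        return sum(ph_values) / len(ph_values)

    # invented rainwater samples spanning the pH 5.6 reference point
    samples = [4.0, 5.0, 6.0]
    low = mean_ph_of_concentrations(samples)  # ~4.43, pulled toward pH 4 sample
    avg = mean_of_ph_values(samples)          # 5.0
    ```

    The gap between the two numbers is the "erroneously low mean pH" the abstract warns about when [H+] is averaged directly.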

  12. Local and average structure of Mn- and La-substituted BiFeO3

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Bo; Selbach, Sverre M., E-mail: selbach@ntnu.no

    2017-06-15

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions. - Graphical abstract: The experimental and simulated partial pair distribution functions (PDF) for BiFeO3, BiFe0.875Mn0.125O3, BiFe0.75Mn0.25O3 and Bi0.9La0.1FeO3.

  13. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software....

  14. Short-Term Effects of Methylphenidate on Math Productivity in Children With Attention-Deficit/Hyperactivity Disorder are Mediated by Symptom Improvements: Evidence From a Placebo-Controlled Trial.

    Science.gov (United States)

    Kortekaas-Rijlaarsdam, Anne Fleur; Luman, Marjolein; Sonuga-Barke, Edmund; Bet, Pierre M; Oosterlaan, Jaap

    2017-04-01

    Although numerous studies report positive effects of methylphenidate on academic performance, the mechanism behind these improvements remains unclear. This study investigates the effects of methylphenidate on academic performance in children with attention-deficit/hyperactivity disorder (ADHD) and the mediating and moderating influence of ADHD severity, academic performance, and ADHD symptom improvement. Sixty-three children with ADHD participated in a double-blind placebo-controlled crossover study comparing the effects of long-acting methylphenidate and placebo. Dependent variables were math, reading, and spelling performance. The ADHD group performance was compared with a group of 67 typically developing children. Methylphenidate improved math productivity and accuracy in children with ADHD. The effect of methylphenidate on math productivity was partly explained by parent-rated symptom improvement, with greater efficacy for children showing more symptom improvement. Further, children showing below-average math performance while on placebo profited more from methylphenidate than children showing above-average math performance. The results from this study indicate positive effects of methylphenidate on academic performance, although these were limited to math abilities. In light of these results, expectations of parents, teachers, and treating physicians about the immediate effects of methylphenidate on academic improvement should be tempered. Moreover, our results implicate that positive effects of methylphenidate on math performance are in part due directly to effects on math ability and in part due to reductions in ADHD symptoms.

  15. Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)

    2009-08-15

    The heating systems are conventionally controlled by open-loop control systems because of the absence of practical methods for estimating average air temperature in the built environment. An inferential sensor model, based on adaptive neuro-fuzzy inference system modeling, for estimating the average air temperature in multi-zone space heating systems is developed. This modeling technique has the advantage of expert knowledge of fuzzy inference systems (FISs) and learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-square method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive network based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems, in terms of energy efficiency and thermal comfort. The average air temperature results estimated by using the developed model are strongly in agreement with the experimental results. (author)

  16. Influence of Averaging Method on the Evaluation of a Coastal Ocean Color Event on the U.S. Northeast Coast

    Science.gov (United States)

    Acker, James G.; Uz, Stephanie Schollaert; Shen, Suhung; Leptoukh, Gregory G.

    2010-01-01

    Application of appropriate spatial averaging techniques is crucial to correct evaluation of ocean color radiometric data, due to the common log-normal or mixed log-normal distribution of these data. Averaging method is particularly crucial for data acquired in coastal regions. The effect of averaging method was markedly demonstrated for a precipitation-driven event on the U.S. Northeast coast in October-November 2005, which resulted in export of high concentrations of riverine colored dissolved organic matter (CDOM) to New York and New Jersey coastal waters over a period of several days. Use of the arithmetic mean averaging method created an inaccurate representation of the magnitude of this event in SeaWiFS global mapped chl a data, causing it to be visualized as a very large chl a anomaly. The apparent chl a anomaly was enhanced by the known incomplete discrimination of CDOM and phytoplankton chlorophyll in SeaWiFS data; other data sources enable an improved characterization. Analysis using the geometric mean averaging method did not indicate this event to be statistically anomalous. Our results predicate the necessity of providing the geometric mean averaging method for ocean color radiometric data in the Goddard Earth Sciences DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni).
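
    The difference between the two averaging methods is easy to reproduce for log-normal-like data (a toy sketch with invented chlorophyll values, not Giovanni's implementation):

    ```python
    import math

    def arithmetic_mean(values):
        return sum(values) / len(values)

    def geometric_mean(values):
        """exp of the mean log: the appropriate central tendency for
        log-normally distributed ocean color data."""
        return math.exp(sum(math.log(v) for v in values) / len(values))

    # background chl a ~0.5 mg m^-3 with one brief CDOM-contaminated spike,
    # mimicking the transient riverine export event described above
    chl = [0.5, 0.4, 0.6, 0.5, 0.45, 0.55, 20.0]
    am = arithmetic_mean(chl)   # pulled far above background by the spike
    gm = geometric_mean(chl)    # stays near the background level
    ```

    This is why the arithmetic mean rendered the 2005 event as a large apparent chl a anomaly while the geometric mean did not flag it as statistically anomalous.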

  17. Correlations between PANCE performance, physician assistant program grade point average, and selection criteria.

    Science.gov (United States)

    Brown, Gina; Imel, Brittany; Nelson, Alyssa; Hale, LaDonna S; Jansen, Nick

    2013-01-01

    The purpose of this study was to examine correlations between first-time Physician Assistant National Certifying Exam (PANCE) scores and pass/fail status, physician assistant (PA) program didactic grade point average (GPA), and specific selection criteria. This retrospective study evaluated graduating classes from 2007, 2008, and 2009 at a single program (N = 119). There was no correlation between PANCE performance and undergraduate grade point average (GPA), science prerequisite GPA, or health care experience. There was a moderate correlation between PANCE pass/fail and where students took science prerequisites (r = 0.27, P = .003) but not with the PANCE score. PANCE scores were correlated with overall PA program GPA (r = 0.67), PA pharmacology grade (r = 0.68), and PA anatomy grade (r = 0.41) but not with PANCE pass/fail. Correlations between selection criteria and PANCE performance were limited, but further research regarding the influence of prerequisite institution type may be warranted and may improve admission decisions. PANCE scores and PA program GPA correlations may guide academic advising and remediation decisions for current students.

  18. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
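
    The per-pixel water-content step can be sketched from Beer-Lambert's law (an illustrative sketch: the attenuation coefficient and intensities are invented, and the beam-hardening, geometric and scattering corrections applied in the paper are omitted):

    ```python
    import math

    def water_content(i_wet, i_dry, mu_w, thickness):
        """Volumetric water content of one pixel from Beer-Lambert's law.

        I_wet = I_dry * exp(-mu_w * theta * d)
          =>  theta = ln(I_dry / I_wet) / (mu_w * d),
        where mu_w is the attenuation coefficient of water (cm^-1) and d the
        column thickness along the beam (cm).
        """
        return math.log(i_dry / i_wet) / (mu_w * thickness)

    def relative_saturation(theta, theta_sat):
        """Normalize by the saturated-image value, as in the paper, to
        cancel scattering effects at high water contents."""
        return theta / theta_sat

    mu_w = 3.5   # cm^-1, illustrative value for thermal neutrons in water
    d = 1.0      # cm, assumed column thickness
    theta = water_content(i_wet=0.35, i_dry=0.5, mu_w=mu_w, thickness=d)
    sat = relative_saturation(theta, theta_sat=0.30)
    ```

    Repeating this pixel by pixel at each imposed matric potential, then averaging, yields the average retention curves described above.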

  19. Measurement of average radon gas concentration at workplaces

    International Nuclear Information System (INIS)

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

    In this paper, results of measurements of average radon gas concentration at workplaces (schools, kindergartens and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailing pond). Consequently, at workplaces where considerable seasonal changes of radon concentration are expected, measurements should last 12 months. If that is not possible, the chosen six-month period should contain summer and winter months as well. The average radon concentration during working hours can differ considerably from the average over the whole time in cases of frequent opening of doors and windows or use of artificial ventilation. (authors)

  20. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  1. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  2. Depth-averaged instantaneous currents in a tidally dominated shelf sea from glider observations

    Science.gov (United States)

    Merckelbach, Lucas

    2016-12-01

    Ocean gliders have become ubiquitous observation platforms in the ocean in recent years. They are also increasingly used in coastal environments. The coastal observatory system COSYNA has pioneered the use of gliders in the North Sea, a shallow tidally energetic shelf sea. For operational reasons, the gliders operated in the North Sea are programmed to resurface every 3-5 h. The glider's dead-reckoning algorithm yields depth-averaged currents, averaged in time over each subsurface interval. Under operational conditions these averaged currents are a poor approximation of the instantaneous tidal current. In this work an algorithm is developed that estimates the instantaneous current (tidal and residual) from glider observations only. The algorithm uses a first-order Butterworth low pass filter to estimate the residual current component, and a Kalman filter based on the linear shallow water equations for the tidal component. A comparison of data from a glider experiment with current data from an acoustic Doppler current profiler deployed nearby shows that the standard deviations for the east and north current components are better than 7 cm s-1 in near-real-time mode and improve to better than 6 cm s-1 in delayed mode, where the filters can be run forward and backward. In the near-real-time mode the algorithm provides estimates of the currents that the glider is expected to encounter during its next few dives. Combined with a behavioural and dynamic model of the glider, this yields predicted trajectories, the information of which is incorporated in warning messages issued to ships by the (German) authorities. In delayed mode the algorithm produces useful estimates of the depth-averaged currents, which can be used in (process-based) analyses in case no other source of measured current information is available.
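
    The residual (low-pass) part of such an algorithm can be sketched as follows (a simple first-order IIR smoother standing in for the paper's first-order Butterworth filter, run forward and backward as in delayed mode; the Kalman-filtered tidal component is omitted and all values are invented):

    ```python
    import math

    def lowpass(signal, alpha):
        """First-order IIR low-pass (exponential smoothing)."""
        out, y = [], signal[0]
        for x in signal:
            y = y + alpha * (x - y)
            out.append(y)
        return out

    def lowpass_forward_backward(signal, alpha):
        """Run the filter forward then backward ("delayed mode"), which
        cancels the phase lag, analogous to offline filtering of
        glider-derived depth-averaged currents."""
        fwd = lowpass(signal, alpha)
        return lowpass(fwd[::-1], alpha)[::-1]

    # synthetic depth-averaged east current: 0.1 m/s residual + M2-like tide
    n = 1000
    t = [i * 0.5 for i in range(n)]                 # half-hour time steps
    u = [0.1 + 0.4 * math.sin(2 * math.pi * ti / 12.42) for ti in t]

    # cutoff well below the M2 frequency recovers the residual component
    residual = lowpass_forward_backward(u, alpha=0.02)
    ```

    Subtracting the recovered residual from the observations would leave the tidal signal for the Kalman-filter stage described in the abstract.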

  3. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. There are calculations of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls depending on their "height : radius" ratio, as well as the average productivity, degree, and filling time of a horizontally ribbed tank with volume 6·10⁻² m³ as the central hole diameter of the ribs is changed. It has been shown that the growth of the "height/radius" ratio in tanks with smooth inner walls up to the limiting values allows significantly increasing tank average productivity and reducing filling time. Growth of the H/R ratio of a tank with volume 1.0 m³ to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and a minimum filling time are reached for the tank with volume 6·10⁻² m³ having a central hole diameter of the horizontal ribs of 6.4·10⁻² m.

  4. An effective approach using blended learning to assist the average students to catch up with the talented ones

    Directory of Open Access Journals (Sweden)

    Baijie Yang

    2013-03-01

    Full Text Available Because average students are the prevailing part of the student population, it is important but difficult for educators to help average students by improving their learning efficiency and learning outcomes in school tests. We conducted a quasi-experiment with two English classes taught by one teacher in the second term of the first year of a junior high school. The experimental class was composed of average students (N=37), while the control class comprised talented students (N=34). Accordingly, the two classes performed differently in English, with a mean difference of 13.48 that is statistically significant based on an independent-samples T-test. We tailored the web-based intelligent English instruction system, called Computer Simulation in Educational Communication (CSIEC) and featured with instant feedback, to the learning content of the experiment term, and the experimental class used it one school hour per week throughout the term. This blended learning setting, focused on vocabulary and dialogue acquisition, helped the students in the experimental class improve their learning performance gradually. The mean difference on the final test between the two classes decreased to 3.78, while the mean difference on the test designed for the specially drilled vocabulary knowledge decreased to 2.38 and was statistically not significant. The student interview and survey also demonstrated the students' favorable attitude toward the blended learning system. We conclude that the long-term integration of this content-oriented blended learning system featured with instant feedback into ordinary classes is an effective approach to assist average students in catching up with talented ones.

  5. Improving a Dental School's Clinic Operations Using Lean Process Improvement.

    Science.gov (United States)

    Robinson, Fonda G; Cunningham, Larry L; Turner, Sharon P; Lindroth, John; Ray, Deborah; Khan, Talib; Yates, Audrey

    2016-10-01

    The term "lean production," also known as "Lean," describes a process of operations management pioneered at the Toyota Motor Company that contributed significantly to the success of the company. Although developed by Toyota, the Lean process has been implemented at many other organizations, including those in health care, and should be considered by dental schools in evaluating their clinical operations. Lean combines engineering principles with operations management and improvement tools to optimize business and operating processes. One of the core concepts is relentless elimination of waste (non-value-added components of a process). Another key concept is utilization of individuals closest to the actual work to analyze and improve the process. When the medical center of the University of Kentucky adopted the Lean process for improving clinical operations, members of the College of Dentistry trained in the process applied the techniques to improve inefficient operations at the Walk-In Dental Clinic. The purpose of this project was to reduce patients' average in-the-door-to-out-the-door time from over four hours to three hours within 90 days. Achievement of this goal was realized by streamlining patient flow and strategically relocating key phases of the process. This initiative resulted in patient benefits such as shortening average in-the-door-to-out-the-door time by over an hour, improving satisfaction by 21%, and reducing negative comments by 24%, as well as providing opportunity to implement the electronic health record, improving teamwork, and enhancing educational experiences for students. These benefits were achieved while maintaining high-quality patient care with zero adverse outcomes during and two years following the process improvement project.

  6. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws and afterwards the average is performed. This scheme is more economical in terms of time and algebraic calculations than the usual procedure of Bogolyubov's method. (Author)

  7. Influence of coma aberration on aperture averaged scintillations in oceanic turbulence

    Science.gov (United States)

    Luo, Yujuan; Ji, Xiaoling; Yu, Hong

    2018-01-01

    The influence of coma aberration on aperture averaged scintillations in oceanic turbulence is studied in detail by using the numerical simulation method. In general, in weak oceanic turbulence, the aperture averaged scintillation can be effectively suppressed by means of the coma aberration, and the aperture averaged scintillation decreases as the coma aberration coefficient increases. However, in moderate and strong oceanic turbulence the influence of coma aberration on aperture averaged scintillations can be ignored. In addition, the aperture averaged scintillation dominated by salinity-induced turbulence is larger than that dominated by temperature-induced turbulence. In particular, it is shown that for coma-aberrated Gaussian beams, the behavior of aperture averaged scintillation index is quite different from the behavior of point scintillation index, and the aperture averaged scintillation index is more suitable for characterizing scintillations in practice.
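The point and aperture-averaged scintillation indices contrasted above have standard definitions: σ_I² = ⟨I²⟩/⟨I⟩² − 1, computed either from point intensities or from intensities first averaged over the receiving aperture. A minimal numerical sketch of why the aperture-averaged index is the smaller of the two, using a toy log-normal intensity model with an assumed number of independent patches across the aperture (not the paper's oceanic-turbulence simulation):

```python
import numpy as np

def scintillation_index(intensity):
    """Point scintillation index: sigma_I^2 = <I^2>/<I>^2 - 1."""
    i = np.asarray(intensity, dtype=float)
    return i.var() / i.mean() ** 2

def aperture_averaged_index(realizations):
    """Average the intensity over the aperture (axis 1) within each
    realization first, then compute the index across realizations."""
    r = np.asarray(realizations, dtype=float)
    return scintillation_index(r.mean(axis=1))

# Toy model (an assumption, not the paper's setup): log-normal
# intensity fluctuations, 2000 realizations, 64 effectively
# independent patches across the receiving aperture.
rng = np.random.default_rng(0)
frames = rng.lognormal(mean=0.0, sigma=0.3, size=(2000, 64))
point = scintillation_index(frames.ravel())
averaged = aperture_averaged_index(frames)
# Averaging over the aperture suppresses the measured scintillation.
```

Because the aperture averages over partially independent fluctuations, the averaged index falls well below the point index, which is why the two can behave quite differently as the abstract notes.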

  8. Self-consistent field theory of collisions: Orbital equations with asymptotic sources and self-averaged potentials

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, Y.K., E-mail: ykhahn22@verizon.net

    2014-12-15

The self-consistent field theory of collisions is formulated, incorporating the unique dynamics generated by the self-averaged potentials. The bound state Hartree–Fock approach is extended for the first time to scattering states, by properly resolving the principal difficulties of non-integrable continuum orbitals and imposing complex asymptotic conditions. The recently developed asymptotic source theory provides the natural theoretical basis, as the asymptotic conditions are completely transferred to the source terms and the new scattering function is made fully integrable. The scattering solutions can then be directly expressed in terms of bound state HF configurations, establishing the relationship between the bound and scattering state solutions. Alternatively, the integrable spin orbitals are generated by constructing the individual orbital equations that contain asymptotic sources and self-averaged potentials. However, the orbital energies are not determined by the equations, and a special channel energy fixing procedure is developed to secure the solutions. It is also shown that the variational construction of the orbital equations has intrinsic ambiguities that are generally associated with the self-consistent approach. On the other hand, when a small subset of open channels is included in the source term, the solutions are only partially integrable, but the individual open channels can then be treated more simply by properly selecting the orbital energies. The configuration mixing and channel coupling are then necessary to complete the solution. The new theory improves the earlier continuum HF model. - Highlights: • First extension of HF to scattering states, with proper asymptotic conditions. • Orbital equations with asymptotic sources and integrable orbital solutions. • Construction of self-averaged potentials, and orbital energy fixing. • Channel coupling and configuration mixing, involving the new orbitals. • Critical evaluation of the

  9. Locking plate fixation provides superior fixation of humerus split type greater tuberosity fractures than tension bands and double row suture bridges.

    Science.gov (United States)

    Gaudelli, Cinzia; Ménard, Jérémie; Mutch, Jennifer; Laflamme, G-Yves; Petit, Yvan; Rouleau, Dominique M

    2014-11-01

This paper aims to determine the strongest fixation method for split type greater tuberosity fractures of the proximal humerus by testing and comparing three fixation methods: a tension band with No. 2 wire suture, a double-row suture bridge with suture anchors, and a manually contoured calcaneal locking plate. Each method was tested on eight porcine humeri. An osteotomy of the greater tuberosity was performed at 50° to the humeral shaft and then fixed according to one of three methods. The humeri were then placed in a testing apparatus and tension was applied along the supraspinatus tendon using a thermoelectric cooling clamp. The load required to produce 3mm and 5mm of displacement, as well as complete failure, was recorded using an axial load cell. The average load required to produce 3mm and 5mm of displacement was 658N and 1112N for the locking plate, 199N and 247N for the double row, and 75N and 105N for the tension band. The difference between the three groups was significant (Prow (456N) and tension band (279N) (Prow (71N/mm) and tension band (33N/mm) (Pbiomechanical fixation for split type greater tuberosity fractures. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Forecasting probabilistic seismic shaking for greater Tokyo from 400 years of intensity observations (Invited)

    Science.gov (United States)

    Bozkurt, S.; Stein, R. S.; Toda, S.

    2009-12-01

The long recorded history of earthquakes in Japan affords an opportunity to forecast seismic shaking exclusively from past shaking. We calculate the time-averaged (Poisson) probability of severe shaking by using more than 10,000 intensity observations recorded since AD 1600 in a 350-km-wide box centered on Tokyo. Unlike other hazard assessment methods, source and site effects are included without modeling, and we do not need to know the size or location of any earthquake or the location and slip rate of any fault. The two key assumptions are that the slope of the observed frequency-intensity relation at every site is the same; and that the 400-year record is long enough to encompass the full range of seismic behavior. Tests we conduct here suggest that both assumptions are sound. The resulting 30-year probability of IJMA≥6 shaking (~PGA≥0.9 g or MMI≥IX) is 30-40% in Tokyo, Kawasaki, and Yokohama, and 10-15% in Chiba and Tsukuba. This result means that there is a 30% chance that 4 million people would be subjected to IJMA≥6 shaking during an average 30-year period. We also produce exceedance maps of peak ground acceleration for building code regulations, and calculate short-term hazard associated with a hypothetical catastrophe bond. Our results resemble an independent assessment developed from conventional seismic hazard analysis for greater Tokyo.
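The time-averaged (Poisson) probabilities quoted above follow from the standard relation P = 1 − exp(−λT) between a long-term event rate λ and the chance of at least one event in T years. A small sketch; the 30%-in-30-years Tokyo figure is taken from the abstract, while the implied recurrence interval is our own back-of-envelope illustration:

```python
import math

def poisson_prob(annual_rate, years):
    """P(at least one event in `years`) for a Poisson process with a
    constant long-term `annual_rate`."""
    return 1.0 - math.exp(-annual_rate * years)

def rate_from_prob(prob, years):
    """Long-term annual rate implied by an exceedance probability
    over a time window (inverse of the above)."""
    return -math.log(1.0 - prob) / years

# A 30% chance of IJMA>=6 shaking in 30 years (the Tokyo figure from
# the abstract) corresponds to a mean recurrence of roughly 84 years:
tokyo_rate = rate_from_prob(0.30, 30)
recurrence = 1.0 / tokyo_rate
```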

  11. Improving Banking Supervision

    OpenAIRE

    Mayes, David G.

    1998-01-01

This paper explains how banking supervision within the EU, and in Finland in particular, can be improved by the implementation of greater market discipline and related changes. Although existing EU law, institutions, market structures and practices of corporate governance restrict the scope for change, substantial improvements can be introduced now while there is a window of opportunity for change. The economy is growing strongly and the consequences of the banking crises of the early 1990s have ...

  12. The use of difference spectra with a filtered rolling average background in mobile gamma spectrometry measurements

    International Nuclear Information System (INIS)

    Cresswell, A.J.; Sanderson, D.C.W.

    2009-01-01

    The use of difference spectra, with a filtering of a rolling average background, as a variation of the more common rainbow plots to aid in the visual identification of radiation anomalies in mobile gamma spectrometry systems is presented. This method requires minimal assumptions about the radiation environment, and is not computationally intensive. Some case studies are presented to illustrate the method. It is shown that difference spectra produced in this manner can improve signal to background, estimate shielding or mass depth using scattered spectral components, and locate point sources. This approach could be a useful addition to the methods available for locating point sources and mapping dispersed activity in real time. Further possible developments of the procedure utilising more intelligent filters and spatial averaging of the background are identified.
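As a rough illustration of the idea, the following sketch maintains a rolling average of recent spectra as the background, subtracts it from each new spectrum, and excludes spectra flagged as anomalous from the background update. The anomaly test on gross counts is a simplifying assumption of ours; the paper's actual filtering is not reproduced here:

```python
import numpy as np

def difference_spectra(spectra, window=20, threshold=3.0):
    """Subtract a rolling-average background from each spectrum.
    Spectra whose gross counts deviate from the background by more
    than `threshold` Poisson standard deviations are flagged as
    anomalies and excluded from the background update (a simplified
    stand-in for the paper's filtering)."""
    kept = [np.asarray(s, dtype=float) for s in spectra[:window]]
    background = np.mean(kept, axis=0)
    diffs, anomalies = [], []
    for i in range(window, len(spectra)):
        spec = np.asarray(spectra[i], dtype=float)
        diffs.append(spec - background)
        if abs(spec.sum() - background.sum()) > threshold * np.sqrt(background.sum()):
            anomalies.append(i)  # anomaly: do not pollute the background
        else:
            kept = (kept + [spec])[-window:]
            background = np.mean(kept, axis=0)
    return np.array(diffs), anomalies

# Synthetic survey: flat 128-channel spectra of ~50 counts/channel,
# with a weak source appearing in spectrum 40 (channels 60-70).
rng = np.random.default_rng(1)
spectra = rng.poisson(50.0, size=(60, 128)).astype(float)
spectra[40, 60:70] += 80.0
diffs, anomalies = difference_spectra(spectra)
```

Keeping flagged spectra out of the background update is what preserves the signal-to-background gain while the detector passes over a source.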

  13. Population Aging at Cross-Roads: Diverging Secular Trends in Average Cognitive Functioning and Physical Health in the Older Population of Germany

    Science.gov (United States)

    Steiber, Nadia

    2015-01-01

    This paper uses individual-level data from the German Socio-Economic Panel to model trends in population health in terms of cognition, physical fitness, and mental health between 2006 and 2012. The focus is on the population aged 50–90. We use a repeated population-based cross-sectional design. As outcome measures, we use SF-12 measures of physical and mental health and the Symbol-Digit Test (SDT) that captures cognitive processing speed. In line with previous research we find a highly significant Flynn effect on cognition; i.e., SDT scores are higher among those who were tested more recently (at the same age). This result holds for men and women, all age groups, and across all levels of education. While we observe a secular improvement in terms of cognitive functioning, at the same time, average physical and mental health has declined. The decline in average physical health is shown to be stronger for men than for women and found to be strongest for low-educated, young-old men aged 50–64: the decline over the 6-year interval in average physical health is estimated to amount to about 0.37 SD, whereas average fluid cognition improved by about 0.29 SD. This pattern of results at the population-level (trends in average population health) stands in interesting contrast to the positive association of physical health and cognitive functioning at the individual-level. The findings underscore the multi-dimensionality of health and the aging process. PMID:26323093

  14. Effect of the average soft-segment length on the morphology and properties of segmented polyurethane nanocomposites

    International Nuclear Information System (INIS)

    Finnigan, Bradley; Halley, Peter; Jack, Kevin; McDowell, Alasdair; Truss, Rowan; Casey, Phil; Knott, Robert; Martin, Darren

    2006-01-01

    Two organically modified layered silicates (with small and large diameters) were incorporated into three segmented polyurethanes with various degrees of microphase separation. Microphase separation increased with the molecular weight of the poly(hexamethylene oxide) soft segment. The molecular weight of the soft segment did not influence the amount of polyurethane intercalating the interlayer spacing. Small-angle neutron scattering and differential scanning calorimetry data indicated that the layered silicates did not affect the microphase morphology of any host polymer, regardless of the particle diameter. The stiffness enhancement on filler addition increased as the microphase separation of the polyurethane decreased, presumably because a greater number of urethane linkages were available to interact with the filler. For comparison, the small nanofiller was introduced into a polyurethane with a poly(tetramethylene oxide) soft segment, and a significant increase in the tensile strength and a sharper upturn in the stress-strain curve resulted. No such improvement occurred in the host polymers with poly(hexamethylene oxide) soft segments. It is proposed that the nanocomposite containing the more hydrophilic and mobile poly(tetramethylene oxide) soft segment is capable of greater secondary bonding between the polyurethane chains and the organosilicate surface, resulting in improved stress transfer to the filler and reduced molecular slippage.

  15. Room for improvement

    DEFF Research Database (Denmark)

    Sandal, Louise F; Thorlund, Jonas B; Moore, Andrew J

    2018-01-01

... significance (p=0.07). Waitlist group reported no improvement (-0.05, 95% CI -0.5 to 0.4). In interviews, participants from the standard environment expressed greater social cohesion and feeling at home. Qualitative themes identified: reflection, sense of fellowship and transition. Secondary patient-reported outcomes and qualitative findings supported the primary finding, while improvements in muscle strength and aerobic capacity did not differ between exercise groups. CONCLUSION: Results suggest that the physical environment contributes to treatment response. Matching patients' preferences to treatment rooms may improve patient-reported outcomes. TRIAL REGISTRATION NUMBER: ClinicalTrials.gov identifier: NCT02043613.

  16. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  17. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  18. Greater self-enhancement in Western than Eastern Ukraine, but failure to replicate the Muhammad Ali effect.

    Science.gov (United States)

    Kemmelmeier, Markus; Malanchuk, Oksana

    2016-02-01

Based on the cross-cultural research linking individualism-collectivism and self-enhancement, this research examines regional patterns of self-enhancement in Ukraine. Broadly speaking, the western part of Ukraine is mainly Ukrainian speaking and historically oriented towards Europe, whereas Eastern Ukraine is mainly Russian speaking and historically oriented towards the Russian cultural sphere. We found self-enhancement on a "better than average" task to be higher in a Western Ukrainian sample compared to an Eastern Ukrainian sample, with differences in independent self-construals supporting the assumed regional variation in individualism. However, the Muhammad Ali effect, the finding that self-enhancement is greater in the domain of morality than intelligence, was not replicated. The discussion focuses on the specific sources of this regional difference in self-enhancement, and reasons for why the Muhammad Ali effect was not found. © 2015 International Union of Psychological Science.

  19. GREATER OMENTUM: MORPHOFUNCTIONAL CHARACTERISTICS AND CLINICAL SIGNIFICANCE IN PEDIATRICS

    Directory of Open Access Journals (Sweden)

    A.V. Nekrutov

    2007-01-01

The review analyzes the structural organization and age-specific pathophysiology of the greater omentum, which determine its uniqueness and functional diversity in a child's organism. The article discusses the protective functions of the organ, its role in the development of postoperative complications in children, and its use in children's reconstructive plastic surgery. Key words: greater omentum, omentitis, postoperative complications, children.

  20. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  1. More features, greater connectivity.

    Science.gov (United States)

    Hunt, Sarah

    2015-09-01

Changes in our political infrastructure, the continuing frailties of our economy, and a stark growth in population, have greatly impacted upon the perceived stability of the NHS. Healthcare teams have had to adapt to these changes, and so too have the technologies upon which they rely to deliver first-class patient care. Here Sarah Hunt, marketing co-ordinator at Aid Call, assesses how the changing healthcare environment has affected one of its fundamental technologies - the nurse call system, argues the case for such wireless systems in terms of what the company claims is greater adaptability to changing needs, and considers the ever-wider range of features and functions available from today's nurse call equipment, particularly via connectivity with both mobile devices, and ancillaries ranging from enuresis sensors to staff attack alert 'badges'.

  2. The measurement of power losses at high magnetic field densities or at small cross-section of test specimen using the averaging

    CERN Document Server

    Gorican, V; Hamler, A; Nakata, T

    2000-01-01

It is difficult to achieve sufficient accuracy in power loss measurement at high magnetic field densities, where the magnetic field strength becomes more and more distorted, or in cases where the influence of noise increases (small specimen cross-section). The influence of averaging on the accuracy of power loss measurement was studied on the cast amorphous magnetic material Metglas 2605-TCA. The results show that the accuracy of power loss measurements can be improved by averaging the data acquisition points.
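The benefit of averaging data-acquisition points rests on the standard statistical fact that averaging N independent acquisitions of a repetitive waveform reduces random noise by a factor of about √N. A toy sketch with a synthetic sine-plus-noise waveform, not the Metglas measurement itself:

```python
import numpy as np

rng = np.random.default_rng(42)

# One period of an idealized induced-voltage waveform plus white noise.
t = np.arange(1000) / 1000.0
clean = np.sin(2 * np.pi * t)

def noisy_acquisition():
    return clean + rng.normal(0.0, 0.5, t.size)

single = noisy_acquisition()
# Average 100 repeated acquisitions of the same periodic waveform.
averaged = np.mean([noisy_acquisition() for _ in range(100)], axis=0)

rms = lambda x: np.sqrt(np.mean(x ** 2))
# Residual noise drops by roughly sqrt(100) = 10.
gain = rms(single - clean) / rms(averaged - clean)
```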

  3. On carrots and curiosity: eating fruit and vegetables is associated with greater flourishing in daily life.

    Science.gov (United States)

    Conner, Tamlin S; Brookie, Kate L; Richardson, Aimee C; Polak, Maria A

    2015-05-01

Our aim was to determine whether eating fruit and vegetables (FV) is associated with other markers of well-being beyond happiness and life satisfaction. Towards this aim, we tested whether FV consumption is associated with greater eudaemonic well-being - a state of flourishing characterized by feelings of engagement, meaning, and purpose in life. We also tested associations with two eudaemonic behaviours - curiosity and creativity. Daily diary study across 13 days (micro-longitudinal, correlational design). A sample of 405 young adults (67% women; mean age 19.9 [SD 1.6] years) completed an Internet daily diary for 13 consecutive days. Each day, participants reported on their consumption of fruit, vegetables, sweets, and chips, as well as their eudaemonic well-being, curiosity, creativity, positive affect (PA), and negative affect. Between-person associations were analysed on aggregated data. Within-person associations were analysed using multilevel models controlling for weekday and weekend patterns. Fruit and vegetables consumption predicted greater eudaemonic well-being, curiosity, and creativity at the between- and within-person levels. Young adults who ate more FV reported higher average eudaemonic well-being, more intense feelings of curiosity, and greater creativity compared with young adults who ate less FV. On days when young adults ate more FV, they reported greater eudaemonic well-being, curiosity, and creativity compared with days when they ate less FV. FV consumption also predicted higher PA, which mostly did not account for the associations between FV and the other well-being variables. Few unhealthy foods (sweets, chips) were related to well-being except that consumption of sweets was associated with greater curiosity and PA at the within-person level. Lagged data analyses showed no carry-over effects of FV consumption onto next-day well-being (or vice versa). Although these patterns are strictly correlational, this study provides the first evidence

  4. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  5. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  6. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...

  7. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging -- a property established only in the macroscopic limit. By enumerating the states and energies of compact 18, 27, and 36mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations
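The self-averaging property being tested can be illustrated with a random-energy toy model (i.i.d. Gaussian state energies, a loud simplification of the paper's lattice heteropolymer): the sequence-to-sequence spread of the free energy per monomer shrinks as the chain grows.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 2.0  # temperature well above the toy model's freezing point

def free_energy(n_states, n_monomers):
    # i.i.d. Gaussian state energies whose scale grows with chain length.
    E = rng.normal(0.0, np.sqrt(n_monomers), size=n_states)
    m = (-E / T).max()  # log-sum-exp for numerical stability
    return -T * (m + np.log(np.sum(np.exp(-E / T - m))))

def rel_fluct(n_monomers, n_sequences=200):
    """Relative sequence-to-sequence fluctuation of the free energy
    per monomer, with 2**n_monomers states per sequence (toy scaling
    reminiscent of exhaustive lattice enumeration)."""
    f = np.array([free_energy(2 ** n_monomers, n_monomers) / n_monomers
                  for _ in range(n_sequences)])
    return f.std() / abs(f.mean())

small_chain, large_chain = rel_fluct(6), rel_fluct(16)
# Fluctuations shrink markedly as the chain grows: self-averaging.
```

The toy model only makes the statistical mechanism visible; the paper's contribution is that the same suppression of fluctuations already holds at protein-sized chain lengths for realistic compact conformations.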

  8. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. As the energy of the final hadron state increases, the leading contribution to the average multiplicity comes from a parton subprocess due to the production of massive quark and gluon jets and their further fragmentation, while the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e + e - -annihilation at high energies tends to unity

  9. Average combination difference morphological filters for fault feature extraction of bearing

    Science.gov (United States)

    Lv, Jingxiang; Yu, Jianbo

    2018-02-01

In order to extract impulse components from vibration signals with much noise and harmonics, a new morphological filter called the average combination difference morphological filter (ACDIF) is proposed in this paper. ACDIF first constructs several new combination difference (CDIF) operators, and then integrates the best two CDIFs as the final morphological filter. This design scheme enables ACDIF to extract the positive and negative impulses existing in vibration signals to enhance the accuracy of bearing fault diagnosis. The length of the structure element (SE) that affects the performance of ACDIF is determined adaptively by a new indicator called Teager energy kurtosis (TEK). TEK further improves the effectiveness of ACDIF for fault feature extraction. Experimental results on simulation and bearing vibration signals demonstrate that ACDIF can effectively suppress noise and extract periodic impulses from bearing vibration signals.
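The generic ingredient behind such difference operators is the pair of top-hat transforms from grey-scale morphology: the white top-hat (signal minus opening) responds to positive impulses and the black top-hat (closing minus signal) to negative ones. A pure-NumPy sketch of this textbook building block; it is not the ACDIF filter itself, whose CDIF construction and TEK-based SE selection are specific to the paper:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _erode(x, k):
    return sliding_window_view(np.pad(x, k // 2, mode="edge"), k).min(axis=1)

def _dilate(x, k):
    return sliding_window_view(np.pad(x, k // 2, mode="edge"), k).max(axis=1)

def top_hat_difference(x, se_length=9):
    """Sum of the white top-hat (x - opening, positive impulses) and
    the black top-hat (closing - x, negative impulses); algebraically
    this equals closing minus opening."""
    opening = _dilate(_erode(x, se_length), se_length)
    closing = _erode(_dilate(x, se_length), se_length)
    return closing - opening

# Harmonic interference plus periodic positive impulses every 200 samples.
t = np.arange(2000) / 2000.0
signal = np.sin(2 * np.pi * 50 * t)
signal[::200] += 3.0
out = top_hat_difference(signal)
# The filter output peaks at the impulse positions, not on the harmonic.
```

Because a narrow structure element removes the one-sample impulses during opening/closing but barely deforms the slowly varying harmonic, the difference output is dominated by the impulse train.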

  10. The impact of intermediate structure on the average fission cross sections

    International Nuclear Information System (INIS)

    Bouland, O.; Lynn, J.E.; Talou, P.

    2014-01-01

    This paper discusses two common approximations used to calculate average fission cross sections over the compound energy range: the disregard of the W II factor and the Porter-Thomas hypothesis made on the double barrier fission width distribution. By reference to a Monte Carlo-type calculation of formal R-matrix fission widths, this work estimates an overall error ranging from 12% to 20% on the fission cross section in the case of the 239 Pu fissile isotope in the energy domain from 1 to 100 keV with very significant impact on the competing capture cross section. This work is part of a recent and very comprehensive formal R-matrix study over the Pu isotope series and is able to give some hints for significant accuracy improvements in the treatment of the fission channel. (authors)

  11. Interseasonal movements of greater sage-grouse, migratory behavior, and an assessment of the core regions concept in Wyoming

    Science.gov (United States)

    Fedy, Bradley C.; Aldridge, Cameron L.; Doherty, Kevin E.; O'Donnell, Michael S.; Beck, Jeffrey L.; Bedrosian, Bryan; Holloran, Matthew J.; Johnson, Gregory D.; Kaczor, Nicholas W.; Kirol, Christopher P.; Mandich, Cheryl A.; Marshall, David; McKee, Gwyn; Olson, Chad; Swanson, Christopher C.; Walker, Brett L.

    2012-01-01

Animals can require different habitat types throughout their annual cycles. When considering habitat prioritization, we need to explicitly consider habitat requirements throughout the annual cycle, particularly for species of conservation concern. Understanding annual habitat requirements begins with quantifying how far individuals move across landscapes between key life stages to access required habitats. We quantified individual interseasonal movements for greater sage-grouse (Centrocercus urophasianus; hereafter sage-grouse) using radio-telemetry spanning the majority of the species distribution in Wyoming. Sage-grouse are currently a candidate for listing under the United States Endangered Species Act and Wyoming is predicted to remain a stronghold for the species. Sage-grouse use distinct seasonal habitats throughout their annual cycle for breeding, brood rearing, and wintering. Average movement distances in Wyoming from nest sites to summer-late brood-rearing locations were 8.1 km (SE = 0.3 km; n = 828 individuals) and the average subsequent distances moved from summer sites to winter locations were 17.3 km (SE = 0.5 km; n = 607 individuals). Average nest-to-winter movements were 14.4 km (SE = 0.6 km; n = 434 individuals). We documented remarkable variation in the extent of movement distances both within and among sites across Wyoming, with some individuals remaining year-round in the same vicinity and others moving over 50 km between life stages. Our results suggest defining any of our populations as migratory or non-migratory is inappropriate, as individual strategies vary widely. We compared movement distances of birds marked using Global Positioning System (GPS) and very high frequency (VHF) radio marking techniques and found no evidence that the heavier GPS radios limited movement. Furthermore, we examined the capacity of the sage-grouse core regions concept to capture seasonal locations. As expected, we found the core regions approach, which was

  12. The mortality effect of ship-related fine particulate matter in the Sydney greater metropolitan region of NSW, Australia.

    Science.gov (United States)

    Broome, Richard A; Cope, Martin E; Goldsworthy, Brett; Goldsworthy, Laurie; Emmerson, Kathryn; Jegasothy, Edward; Morgan, Geoffrey G

    2016-02-01

    This study investigates the mortality effect of primary and secondary PM2.5 related to ship exhaust in the Sydney greater metropolitan region of Australia. A detailed inventory of ship exhaust emissions was used to model a) the 2010/11 concentration of ship-related PM2.5 across the region, and b) the reduction in PM2.5 concentration that would occur if ships used distillate fuel with a 0.1% sulfur content at berth or within 300 km of Sydney. The annual loss of life attributable to 2010/11 levels of ship-related PM2.5 and the improvement in survival associated with use of low-sulfur fuel were estimated from the modelled concentrations. In 2010/11, approximately 1.9% of the region-wide annual average population weighted-mean concentration of all natural and human-made PM2.5 was attributable to ship exhaust, and up to 9.4% at suburbs close to ports. An estimated 220 years of life were lost by people who died in 2010/11 as a result of ship exhaust-related exposure (95% CIβ: 140-290, where CIβ is the uncertainty in the concentration-response coefficient only). Use of 0.1% sulfur fuel at berth would reduce the population weighted-mean concentration of PM2.5 related to ship exhaust by 25% and result in a gain of 390 life-years over a twenty year period (95% CIβ: 260-520). Use of 0.1% sulfur fuel within 300 km of Sydney would reduce the concentration by 56% and result in a gain of 920 life-years over twenty years (95% CIβ: 600-1200). Ship exhaust is an important source of human exposure to PM2.5 in the Sydney greater metropolitan region. This assessment supports intervention to reduce ship emissions in the GMR. Local strategies to limit the sulfur content of fuel would reduce exposure and will become increasingly beneficial as the shipping industry expands. A requirement for use of 0.1% sulfur fuel by ships within 300 km of Sydney would provide more than twice the mortality benefit of a requirement for ships to use 0.1% sulfur fuel at berth. Copyright © 2015 Elsevier

  13. 12 CFR 702.105 - Weighted-average life of investments.

    Science.gov (United States)

    2010-01-01

    ... investment funds. (1) For investments in registered investment companies (e.g., mutual funds) and collective investment funds, the weighted-average life is defined as the maximum weighted-average life disclosed, directly or indirectly, in the prospectus or trust instrument; (2) For investments in money market funds...

  14. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. It is therefore a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by 'exact' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
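The spectrum averaging described above can be sketched numerically. The block below is an illustrative stand-in, not the paper's method: it averages a simple allowed beta-spectrum shape and deliberately omits the Fermi (Coulomb) correction that the paper approximates, and the endpoint energy is a made-up example.

```python
import numpy as np

# Allowed beta spectrum shape (Fermi function omitted, so this is only a
# rough sketch): dN/dT ~ p * W * (Q - T)^2, with W = T + m_e c^2 the total
# electron energy and p = sqrt(W^2 - (m_e c^2)^2) its momentum, in MeV units.
MC2 = 0.511  # electron rest energy, MeV

def mean_beta_energy(q_mev, n=100_000):
    """Spectrum-averaged kinetic energy for an endpoint Q, by direct weighting."""
    t = np.linspace(1e-6, q_mev, n)       # kinetic-energy grid
    w = t + MC2                           # total electron energy
    p = np.sqrt(w**2 - MC2**2)            # electron momentum
    shape = p * w * (q_mev - t)**2        # spectrum weight at each energy
    return float((t * shape).sum() / shape.sum())

print(mean_beta_energy(2.0))  # average kinetic energy for a 2 MeV endpoint
```

For MeV-scale endpoints this crude shape puts the average near 40-45% of Q; including the Fermi function shifts the result by a few percent, which is the discrepancy range the record quotes.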

  15. The Greater Phenotypic Homeostasis of the Allopolyploid Coffea arabica Improved the Transcriptional Homeostasis Over that of Both Diploid Parents.

    Science.gov (United States)

    Bertrand, Benoît; Bardil, Amélie; Baraille, Hélène; Dussert, Stéphane; Doulbeau, Sylvie; Dubois, Emeric; Severac, Dany; Dereeper, Alexis; Etienne, Hervé

    2015-10-01

    Polyploidy impacts the diversity of plant species, giving rise to novel phenotypes and leading to ecological diversification. In order to observe the adaptive and evolutionary capacities of polyploids, we compared growth, primary metabolism and transcriptomic expression levels in the leaves of the newly formed allotetraploid Coffea arabica with those of its two diploid parental species (Coffea eugenioides and Coffea canephora), exposed to four thermal regimes (TRs; 18-14, 23-19, 28-24 and 33-29°C). The growth rate of the allopolyploid C. arabica was similar to that of C. canephora under the hottest TR and that of C. eugenioides under the coldest TR. For metabolite contents measured at the hottest TR, the allopolyploid showed similar behavior to C. canephora, the parent which tolerates higher growth temperatures in the natural environment. However, at the coldest TR, the allopolyploid displayed higher sucrose, raffinose and ABA contents than those of its two parents and similar linolenic acid leaf composition and Chl content to those of C. eugenioides. At the gene expression level, few differences between the allopolyploid and its parents were observed for studied genes linked to photosynthesis, respiration and the circadian clock, whereas genes linked to redox activity showed a greater capacity of the allopolyploid for homeostasis. Finally, we found that the overall transcriptional response to TRs of the allopolyploid was more homeostatic compared with its parents. This better transcriptional homeostasis of the allopolyploid C. arabica afforded a greater phenotypic homeostasis when faced with environments that are unsuited to the diploid parental species. © The Author 2015. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  16. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
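The quoted optimum 620160/8! ≈ 15.38 comparisons can be sanity-checked against the information-theoretic lower bound on the average depth of any decision tree with 8! leaves. This check is ours, not the paper's:

```python
from math import factorial, floor, log2

n = factorial(8)                  # 40320 distinct orderings = leaves of the tree
k = floor(log2(n))                # 2**15 = 32768 <= n < 2**16
min_epl = n * k + 2 * (n - 2**k)  # minimum external path length of a binary tree with n leaves
bound = min_epl / n               # lower bound on average depth of any comparison tree
optimum = 620160 / n              # the minimum average depth proved in the record

print(round(bound, 4), round(optimum, 4))
```

The bound comes out to 15.3746, just below the achievable optimum 15.3810; the gap is exactly 256/8!, i.e., sorting 8 elements by comparisons cannot quite reach the information-theoretic floor.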

  17. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    International Nuclear Information System (INIS)

    Craft, David; Papp, Dávid; Unkelbach, Jan

    2014-01-01

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step
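For unidirectional sliding-window delivery, the identity behind the method is that fluence at a point equals the trailing-leaf arrival time minus the leading-leaf arrival time, so averaging the arrival-time trajectories averages the fluence maps exactly, by linearity. A toy numerical check (hypothetical constant-speed leaf trajectories, not a clinical plan):

```python
import numpy as np

# Toy 1-D sliding-window sketch (illustration only, not the authors' code).
# For unidirectional delivery, fluence(x) = t_trail(x) - t_lead(x), where
# t_lead/t_trail are the times the leading/trailing leaf edges cross x.
x = np.linspace(0.0, 10.0, 101)

def plan(speed_lead, speed_trail):
    """Arrival-time curves for a leaf pair moving at constant (made-up) speeds."""
    return x / speed_lead, x / speed_trail

lead1, trail1 = plan(2.0, 1.0)
lead2, trail2 = plan(4.0, 1.5)

fluence1 = trail1 - lead1
fluence2 = trail2 - lead2

# Average the leaf trajectories, then compute the fluence of the averaged plan.
lead_avg, trail_avg = (lead1 + lead2) / 2, (trail1 + trail2) / 2
fluence_avg = trail_avg - lead_avg

# Exactly the average of the individual fluence maps, by linearity.
assert np.allclose(fluence_avg, (fluence1 + fluence2) / 2)
```

The linearity makes the static-beam IMRT claim exact; the VMAT case is only approximate because gantry motion breaks the simple time-difference picture.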

  18. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT.

    Science.gov (United States)

    Craft, David; Papp, Dávid; Unkelbach, Jan

    2014-02-01

    To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  19. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Craft, David, E-mail: dcraft@partners.org; Papp, Dávid; Unkelbach, Jan [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2014-02-15

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  20. Three-dimensional topography of the gingival line of young adult maxillary teeth: curve averaging using reverse-engineering methods.

    Science.gov (United States)

    Park, Young-Seok; Chang, Mi-Sook; Lee, Seung-Pyo

    2011-01-01

    This study attempted to establish three-dimensional average curves of the gingival line of maxillary teeth using reconstructed virtual models to utilize as guides for dental implant restorations. Virtual models from 100 full-mouth dental stone cast sets were prepared with a three-dimensional scanner and special reconstruction software. Marginal gingival lines were defined by transforming the boundary points to the NURBS (nonuniform rational B-spline) curve. Using an iterative closest point algorithm, the sample models were aligned and the gingival curves were isolated. Each curve was tessellated by 200 points using a uniform interval. The 200 tessellated points of each sample model were averaged according to the index of each model. In a pilot experiment, regression and fitting analysis of one obtained average curve was performed to depict it as mathematical formulae. The three-dimensional average curves of six maxillary anterior teeth, two maxillary right premolars, and a maxillary right first molar were obtained, and their dimensions were measured. Average curves of the gingival lines of young people were investigated. It is proposed that dentists apply these data to implant platforms or abutment designs to achieve ideal esthetics. The curves obtained in the present study may be incorporated as a basis for implant component design to improve the biologic nature and related esthetics of restorations.
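The tessellation-and-averaging step described above (resample each aligned curve to 200 points at uniform arc-length intervals, then average point-wise by index) can be sketched as follows; the two input curves here are made-up stand-ins for scanned gingival lines:

```python
import numpy as np

def resample(curve, n=200):
    """Resample a polyline (m x 3 array) to n points at uniform arc-length spacing."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    t = np.linspace(0.0, s[-1], n)                   # uniform arc-length stations
    return np.column_stack([np.interp(t, s, curve[:, k]) for k in range(curve.shape[1])])

def average_curve(curves, n=200):
    """Point-wise (index-wise) mean of already-aligned curves."""
    return np.mean([resample(c, n) for c in curves], axis=0)

# Two hypothetical aligned curves (after an ICP-style registration step).
theta = np.linspace(0, np.pi, 50)
c1 = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
c2 = np.column_stack([np.cos(theta), 1.2 * np.sin(theta), np.zeros_like(theta)])

avg = average_curve([c1, c2])
print(avg.shape)  # one averaged curve of 200 3-D points
```

The study's pipeline additionally fits NURBS curves and registers 100 casts; this sketch only shows the final tessellate-and-average operation.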

  1. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)
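A minimal numerical illustration of the time-average idea (our toy sketch using linear consensus dynamics, far simpler than the nonlinear systems treated in the paper): two graphs that are each disconnected on their own, but whose time-averaged Laplacian is connected, still drive the network to synchronization.

```python
import numpy as np

# Two 3-node graphs, each disconnected alone; their time average is connected.
L_a = np.array([[1., -1., 0.], [-1., 1., 0.], [0., 0., 0.]])   # edge (0,1) only
L_b = np.array([[0., 0., 0.], [0., 1., -1.], [0., -1., 1.]])   # edge (1,2) only

eps = 0.5
x = np.array([1.0, 0.0, -1.0])          # initial node states
for k in range(100):
    L = L_a if k % 2 == 0 else L_b      # periodic switching law sigma(k)
    x = x - eps * (L @ x)               # x_{k+1} = (I - eps * L_sigma(k)) x_k

print(np.ptp(x))  # disagreement between nodes, decaying toward zero
```

Neither graph alone can synchronize all three nodes, yet the switched iteration does, which is the flavor of result the average contraction analysis formalizes.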

  2. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  3. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  4. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...

  5. Accelerated Distributed Dual Averaging Over Evolving Networks of Growing Connectivity

    Science.gov (United States)

    Liu, Sijia; Chen, Pin-Yu; Hero, Alfred O.

    2018-04-01

    We consider the problem of accelerating distributed optimization in multi-agent networks by sequentially adding edges. Specifically, we extend the distributed dual averaging (DDA) subgradient algorithm to evolving networks of growing connectivity and analyze the corresponding improvement in convergence rate. It is known that the convergence rate of DDA is influenced by the algebraic connectivity of the underlying network, where better connectivity leads to faster convergence. However, the impact of network topology design on the convergence rate of DDA has not been fully understood. In this paper, we begin by designing network topologies via edge selection and scheduling. For edge selection, we determine the best set of candidate edges that achieves the optimal tradeoff between the growth of network connectivity and the usage of network resources. The dynamics of network evolution is then incurred by edge scheduling. Further, we provide a tractable approach to analyze the improvement in the convergence rate of DDA induced by the growth of network connectivity. Our analysis reveals the connection between network topology design and the convergence rate of DDA, and provides quantitative evaluation of DDA acceleration for distributed optimization that is absent in the existing analysis. Lastly, numerical experiments show that DDA can be significantly accelerated using a sequence of well-designed networks, and our theoretical predictions are well matched to its empirical convergence behavior.
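The driver of the speed-up, growth of algebraic connectivity under edge addition, is easy to compute directly. A small sketch with a hypothetical 6-node ring (not an example from the paper):

```python
import numpy as np

def fiedler(edges, n):
    """Algebraic connectivity: second-smallest eigenvalue of the graph Laplacian."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return np.sort(np.linalg.eigvalsh(L))[1]

# 6-node ring; adding a chord raises the algebraic connectivity, which the
# paper links to a faster DDA convergence rate.
ring = [(i, (i + 1) % 6) for i in range(6)]
print(fiedler(ring, 6), fiedler(ring + [(0, 3)], 6))
```

Edge selection in the paper amounts to picking, under resource constraints, the candidate edges that raise this quantity the most.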

  6. The Effects of Cinnamomum cassia on Blood Glucose Values are Greater than those of Dietary Changes Alone

    Directory of Open Access Journals (Sweden)

    Ashley N. Hoehn

    2012-01-01

    Eighteen type II diabetics (9 women and 9 men) participated in a 12-week trial consisting of two parts: a 3-week control phase followed by a 9-week experimental phase, during which half of the subjects received 1000 mg of Cinnamomum cassia while the other half received 1000 mg of a placebo pill. All of the subjects in the cinnamon group had a statistically significant decrease in their blood sugar levels, with a P-value of 3.915 × 10⁻¹⁰. The subjects in the cinnamon group had an average overall decrease in their blood sugar levels of about 30 mg/dL, which is comparable to oral medications available for diabetes. All subjects were educated on appropriate diabetic diets and maintained that diet for the entire 12-week study. Greater decreases in blood glucose values were observed in patients using cinnamon compared to those using dietary changes alone.

  7. Alpine plant distribution and thermic vegetation indicator on Gloria summits in the central Greater Caucasus

    International Nuclear Information System (INIS)

    Gigauri, K.; Abdaladze, O.; Nakhutsrishvili, G

    2016-01-01

    The distribution of plant species within alpine areas is often directly related to climate or climate-influenced ecological factors. To detect changes in plant species, cover and composition on the GLORIA summits in the Central Caucasus, an extensive setup of 1 m × 1 m permanent plots was established at the treeline-alpine zones and nival ecotone (between 2240 and 3024 m a.s.l.) on the main watershed range of the Central Greater Caucasus near the Cross Pass, Kazbegi region, Georgia. Recording was repeated in a representative selection of 64 quadrats in 2008. The local climatic factors, average soil temperature (°C) and growing degree days (GDD), did not show significant increasing trends. For detection of climate warming we used two indices: the thermic vegetation indicator S and the thermophilization indicator D. They varied along altitudinal and exposition gradients. The thermic vegetation indicator decreased on all monitored summits. The abundance rank of the dominant and endemic species did not change during the monitoring period. (author)

  8. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    textabstractWhile the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  9. Time-varying cycle average and daily variation in ambient air pollution and fecundability.

    Science.gov (United States)

    Nobles, Carrie J; Schisterman, Enrique F; Ha, Sandie; Buck Louis, Germaine M; Sherman, Seth; Mendola, Pauline

    2018-01-01

    Does ambient air pollution affect fecundability? While cycle-average air pollution exposure was not associated with fecundability, we observed some associations between acute exposure around ovulation and implantation and fecundability. Ambient air pollution exposure has been associated with adverse pregnancy outcomes and decrements in semen quality. The LIFE study (2005-2009), a prospective time-to-pregnancy study, enrolled 501 couples who were followed for up to one year of attempting pregnancy. Average air pollutant exposure was assessed for the menstrual cycle before and during the proliferative phase of each observed cycle (n = 500 couples; n = 2360 cycles) and daily acute exposure was assessed for sensitive windows of each observed cycle (n = 440 couples; n = 1897 cycles). Discrete-time survival analysis modeled the association between fecundability and an interquartile range increase in each pollutant, adjusting for co-pollutants, site, age, race/ethnicity, parity, body mass index, smoking, income and education. Cycle-average air pollutant exposure was not associated with fecundability. In acute models, fecundability was diminished with exposure to ozone the day before ovulation and nitrogen oxides 8 days post ovulation (fecundability odds ratio [FOR] 0.83, 95% confidence interval [CI]: 0.72, 0.96 and FOR 0.84, 95% CI: 0.71, 0.99, respectively). However, particulate matter ≤10 microns 6 days post ovulation was associated with greater fecundability (FOR 1.25, 95% CI: 1.01, 1.54). Although our study was unlikely to be biased due to confounding, misclassification of air pollution exposure and the moderate study size may have limited our ability to detect an association between ambient air pollution and fecundability. While no associations were observed for cycle-average ambient air pollution exposure, consistent with past research in the United States, exposure during critical windows of hormonal variability was associated with prospectively measured couple

  10. Visualizing the uncertainty in the relationship between seasonal average climate and malaria risk.

    Science.gov (United States)

    MacLeod, D A; Morse, A P

    2014-12-02

    Around $1.6 billion per year is spent financing anti-malaria initiatives, and though malaria morbidity is falling, the impact of annual epidemics remains significant. Whilst malaria risk may increase with climate change, projections are highly uncertain and to sidestep this intractable uncertainty, adaptation efforts should improve societal ability to anticipate and mitigate individual events. Anticipation of climate-related events is made possible by seasonal climate forecasting, from which warnings of anomalous seasonal average temperature and rainfall, months in advance are possible. Seasonal climate hindcasts have been used to drive climate-based models for malaria, showing significant skill for observed malaria incidence. However, the relationship between seasonal average climate and malaria risk remains unquantified. Here we explore this relationship, using a dynamic weather-driven malaria model. We also quantify key uncertainty in the malaria model, by introducing variability in one of the first order uncertainties in model formulation. Results are visualized as location-specific impact surfaces: easily integrated with ensemble seasonal climate forecasts, and intuitively communicating quantified uncertainty. Methods are demonstrated for two epidemic regions, and are not limited to malaria modeling; the visualization method could be applied to any climate impact.

  11. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
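The ARMA(1) building block of such recursions can be sketched as follows. This is a generic illustration with made-up coefficients and a 4-node path graph as the shift operator; the paper's actual design procedure for matching a desired frequency response is more involved.

```python
import numpy as np

# ARMA(1) graph-filter sketch: the recursion y_{t+1} = psi * S @ y_t + phi * x
# on a graph shift S (here the Laplacian of a 4-node path graph).
L = np.array([[1., -1., 0., 0.],
              [-1., 2., -1., 0.],
              [0., -1., 2., -1.],
              [0., 0., -1., 1.]])

psi, phi = 0.2, 1.0                   # |psi| * ||L|| < 1 guarantees convergence
x = np.array([1.0, 0.0, 0.0, 0.0])    # input graph signal

y = np.zeros_like(x)
for _ in range(200):
    y = psi * (L @ y) + phi * x       # distributed: one neighbor exchange per step

# Steady state matches the closed-form rational response phi * (I - psi*L)^-1 @ x,
# i.e., frequency response phi / (1 - psi * mu) at each Laplacian eigenvalue mu.
y_exact = np.linalg.solve(np.eye(4) - psi * L, phi * x)
assert np.allclose(y, y_exact)
```

Each iteration uses only neighbor values, which is what makes the filter distributable over the graph.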

  12. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This model is specifically applicable for handling heterogeneity in the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are considered.

  13. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality, the cross-sectional average length of life (CAL), seen as less sensitive to period changes, has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  14. Encounters with Pinyon-Juniper influence riskier movements in Greater Sage-Grouse across the Great Basin

    Science.gov (United States)

    Prochazka, Brian; Coates, Peter S.; Ricca, Mark; Casazza, Michael L.; Gustafson, K. Ben; Hull, Josh M.

    2016-01-01

    Fine-scale spatiotemporal studies can better identify relationships between individual survival and habitat fragmentation so that mechanistic interpretations can be made at the population level. Recent advances in Global Positioning System (GPS) technology and statistical models capable of deconstructing high-frequency location data have facilitated interpretation of animal movement within a behaviorally mechanistic framework. Habitat fragmentation due to singleleaf pinyon (Pinus monophylla; hereafter pinyon) and Utah juniper (Juniperus osteosperma; hereafter juniper) encroachment into sagebrush (Artemisia spp.) communities is a commonly implicated perturbation that can adversely influence greater sage-grouse (Centrocercus urophasianus; hereafter sage-grouse) demographic rates. Using an extensive GPS data set (233 birds and 282,954 locations) across 12 study sites within the Great Basin, we conducted a behavioral change point analysis and subsequently constructed Brownian bridge movement models from each behaviorally homogenous section. We found a positive relationship between modeled movement rate and probability of encountering pinyon-juniper with significant variation among age classes. The probability of encountering pinyon-juniper among adults was two and three times greater than that of yearlings and juveniles, respectively. However, the movement rate in response to the probability of encountering pinyon-juniper trees was 1.5 times greater for juveniles. We then assessed the risk of mortality associated with an interaction between movement rate and the probability of encountering pinyon-juniper using shared frailty models. During pinyon-juniper encounters, on average, juvenile, yearling, and adult birds experienced a 10.4%, 0.2%, and 0.3% reduction in annual survival probabilities. Populations that used pinyon-juniper habitats with a frequency ≥ 3.8 times the overall mean experienced decreases in annual survival probabilities of 71.1%, 0.9%, and 0.9%. This

  15. Importance of regional variation in conservation planning: A rangewide example of the Greater Sage-Grouse

    Science.gov (United States)

    Doherty, Kevin E.; Evans, Jeffrey S.; Coates, Peter S.; Juliusson, Lara; Fedy, Bradley C.

    2016-01-01

    We developed rangewide population and habitat models for Greater Sage-Grouse (Centrocercus urophasianus) that account for regional variation in habitat selection and relative densities of birds for use in conservation planning and risk assessments. We developed a probabilistic model of occupied breeding habitat by statistically linking habitat characteristics within 4 miles of an occupied lek using a nonlinear machine learning technique (Random Forests). Habitat characteristics used were quantified in GIS and represent standard abiotic and biotic variables related to sage-grouse biology. Statistical model fit was high (mean correctly classified = 82.0%, range = 75.4–88.0%) as were cross-validation statistics (mean = 80.9%, range = 75.1–85.8%). We also developed a spatially explicit model to quantify the relative density of breeding birds across each Greater Sage-Grouse management zone. The models demonstrate distinct clustering of relative abundance of sage-grouse populations across all management zones. On average, approximately half of the breeding population is predicted to be within 10% of the occupied range. We also found that 80% of sage-grouse populations were contained in 25–34% of the occupied range within each management zone. Our rangewide population and habitat models account for regional variation in habitat selection and the relative densities of birds, and thus, they can serve as a consistent and common currency to assess how sage-grouse habitat and populations overlap with conservation actions or threats over the entire sage-grouse range. We also quantified differences in functional habitat responses and disturbance thresholds across the Western Association of Fish and Wildlife Agencies (WAFWA) management zones using statistical relationships identified during habitat modeling. Even for a species as specialized as Greater Sage-Grouse, our results show that ecological context matters in both the strength of habitat selection (i

  16. Simultaneous administration of glucose and hyperoxic gas achieves greater improvement in tumor oxygenation than hyperoxic gas alone

    International Nuclear Information System (INIS)

    Snyder, Stacey A.; Lanzen, Jennifer L.; Braun, Rod D.; Rosner, Gary; Secomb, Timothy W.; Biaglow, John; Brizel, David M.; Dewhirst, Mark W.

    2001-01-01

    Purpose: To test the feasibility of hyperglycemic reduction of oxygen consumption combined with oxygen breathing (O2), to improve tumor oxygenation. Methods and Materials: Fischer-344 rats bearing 1 cm R3230Ac flank tumors were anesthetized with Nembutal. Mean arterial pressure, heart rate, tumor blood flow (TBF; laser Doppler flowmetry), pH, and pO2 were measured before, during, and after glucose (1 or 4 g/kg) and/or O2. Results: Mean arterial pressure and heart rate were unaffected by treatment. Glucose at 1 g/kg yielded a maximum blood glucose of 400 mg/dL, no change in TBF, reduced tumor pH (0.17 unit), and a 3 mm Hg pO2 rise. Glucose at 4 g/kg yielded a maximum blood glucose of 900 mg/dL, a pH drop of 0.6 unit, no pO2 change, and reduced TBF (31%). Oxygen tension increased by 5 mm Hg with O2. Glucose (1 g/kg) + O2 yielded the largest change in pO2 (27 mm Hg); this is highly significant relative to baseline or either treatment alone. The effect was positively correlated with baseline pO2, but 6 of 7 experiments with baseline pO2 2 to improve tumor oxygenation. However, some cell lines are not susceptible to the Crabtree effect, and the magnitude is dependent on baseline pO2. Additional or alternative manipulations may be necessary to achieve more uniform improvement in pO2

  17. Economic trade-offs between genetic improvement and longevity in dairy cattle.

    Science.gov (United States)

    De Vries, A

    2017-05-01

    Genetic improvement in sires used for artificial insemination (AI) is increasing faster compared with a decade ago. The genetic merit of replacement heifers is also increasing faster and the genetic lag with older cows in the herd increases. This may trigger greater cow culling to capture this genetic improvement. On the other hand, lower culling rates are often viewed favorably because the costs and environmental effects of maintaining herd size are generally lower. Thus, there is an economic trade-off between genetic improvement and longevity in dairy cattle. The objective of this study was to investigate the principles, literature, and magnitude of these trade-offs. Data from the Council on Dairy Cattle Breeding show that the estimated breeding value of the trait productive life has increased for 50 yr but the actual time cows spend in the herd has not increased. The average annual herd cull rate remains at approximately 36% and cow longevity is approximately 59 mo. The annual increase in average estimated breeding value of the economic index lifetime net merit of Holstein sires is accelerating from $40/yr when the sire entered AI around 2002 to $171/yr for sires that entered AI around 2012. The expectation is therefore that heifers born in 2015 are approximately $50 more profitable per lactation than heifers born in 2014. Asset replacement theory shows that assets should be replaced sooner when the challenging asset is technically improved. Few studies have investigated the direct effects of genetic improvement on optimal cull rates. A 35-yr-old study found that the economically optimal cull rates were in the range of 25 to 27%, compared with the lowest possible involuntary cull rate of 20%. Only a small effect was observed of using the best surviving dams to generate the replacement heifer calves. Genetic improvement from sires had little effect on the optimal cull rate. Another study that optimized culling decisions for individual cows also showed that the

  18. Greater general startle reflex is associated with greater anxiety levels: a correlational study on 111 young women

    Directory of Open Access Journals (Sweden)

    Eleonora ePoli

    2015-02-01

    The startle eyeblink reflex is a valid non-invasive tool for studying attention, emotion, and psychiatric disorders. In the absence of any experimental manipulation, the general (or baseline) startle reflex shows high inter-individual variability, which is often considered task-irrelevant and therefore normalized across participants. Contrary to the above view, we hypothesized that greater general startle magnitude is related to participants' higher anxiety levels. After completing the State-Trait Anxiety Inventory, 111 healthy young women were randomly administered 10 acoustic white-noise probes (50 ms, 100 dBA acoustic level) while integrated EMG from the left and right orbicularis oculi was recorded. Results showed that participants with greater state anxiety levels exhibited larger startle reflex magnitude in the left eye (r(109) = 0.23, p < 0.05). Furthermore, individuals who perceived the acoustic probe as more aversive reported the largest anxiety scores (r(109) = 0.28, p < 0.05) and had the largest eyeblinks, especially in the left eye (r(109) = 0.34, p < 0.001). These results suggest that the general startle reflex may represent a valid tool for studying the neural excitability underlying anxiety and emotional dysfunction in neurological and mental disorders.

  19. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  20. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L2-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.
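    The record does not reproduce the paper's scheme; as a rough illustration of the underlying idea, a minimal 1-D sketch of reconstructing a function from noisy window averages via Tikhonov regularization (all function names and parameter choices here are hypothetical, not taken from the paper) might look like:

```python
import numpy as np

def local_average_matrix(n, w):
    """Each row averages w consecutive samples of an n-sample signal."""
    m = n - w + 1
    A = np.zeros((m, n))
    for i in range(m):
        A[i, i:i + w] = 1.0 / w
    return A

def reconstruct(averages, n, w, alpha):
    """Tikhonov-regularized least squares: min ||A f - b||^2 + alpha ||f||^2."""
    A = local_average_matrix(n, w)
    lhs = A.T @ A + alpha * np.eye(n)
    return np.linalg.solve(lhs, A.T @ averages)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
f_true = np.sin(2 * np.pi * x)
A = local_average_matrix(200, 9)
b = A @ f_true + 0.01 * rng.standard_normal(A.shape[0])  # noisy local averages
f_hat = reconstruct(b, 200, 9, alpha=1e-3)
err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)  # relative L2 error
```

The regularization parameter alpha trades data fidelity against solution size; the strategies the paper compares for choosing it are not shown here.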

  1. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on the average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, quadratic in the source strengths or, equivalently, in the rms β-beating. Random errors, however, have no systematic effect on the tune.

  2. [Evaluation of nutritional status of school-age children after implementation of "Nutrition Improvement Program" in rural area in Hunan, China].

    Science.gov (United States)

    Deng, Zhu-Juan; Mao, Guang-Xu; Wang, Yu-Jun; Liu, Li; Chen, Yan

    2016-09-01

    To investigate the nutritional status of school-age children in rural areas of Hunan, China from 2012 to 2015 and to evaluate the effectiveness of the "Nutrition Improvement Program for Compulsory Education Students in Rural Area" (hereinafter referred to as "Nutrition Improvement Program"). The nutritional status of school-age children aged 6-14 years was evaluated after the implementation of the "Nutrition Improvement Program", and the changing trend of the children's nutritional status was analyzed. The statistical analysis was performed on monitoring data for school-age children aged 6-14 years in rural Hunan from 2012 to 2015, drawn from "The Nutrition and Health Status Monitoring and Evaluation System of Nutrition Improvement Program for Compulsory Education Students in Rural Area". In 2015, female students aged 6-7 years in rural Hunan had a significantly greater body height than the rural average in China. After the implementation of the "Nutrition Improvement Program", the prevalence rate of growth retardation decreased. The "Nutrition Improvement Program" has achieved some success, but the nutritional status of school-age children has not improved significantly. Overweight/obesity and malnutrition are still present. Therefore, to promote the nutritional status of school-age children, it is recommended to improve the measures of the "Nutrition Improvement Program".

  3. Observations of territorial breeding common ravens caching eggs of greater sage-grouse

    Science.gov (United States)

    Howe, Kristy B.; Coates, Peter S.

    2015-01-01

    Previous investigations using continuous video monitoring of greater sage-grouse Centrocercus urophasianus nests have unambiguously identified common ravens Corvus corax as an important egg predator within the western United States. The quantity of greater sage-grouse eggs an individual common raven consumes during the nesting period and the extent to which common ravens actively hunt greater sage-grouse nests are largely unknown. However, some evidence suggests that territorial breeding common ravens, rather than nonbreeding transients, are most likely responsible for nest depredations. We describe greater sage-grouse egg depredation observations obtained opportunistically from three common raven nests located in Idaho and Nevada where depredated greater sage-grouse eggs were found at or in the immediate vicinity of the nest site, including the caching of eggs in nearby rock crevices. We opportunistically monitored these nests by counting and removing depredated eggs and shell fragments from the nest sites during each visit to determine the extent to which the common raven pairs preyed on greater sage-grouse eggs. To our knowledge, our observations represent the first evidence that breeding, territorial pairs of common ravens cache greater sage-grouse eggs and are capable of depredating multiple greater sage-grouse nests.

  4. Comparative Education in Greater China: Contexts, Characteristics, Contrasts and Contributions.

    Science.gov (United States)

    Bray, Mark; Qin, Gui

    2001-01-01

    The evolution of comparative education in Greater China (mainland China, Taiwan, Hong Kong, and Macau) has been influenced by size, culture, political ideologies, standard of living, and colonialism. Similarities and differences in conceptions of comparative education are identified among the four components and between Greater China and other…

  5. Velocity Drives Greater Power Observed During Back Squat Using Cluster Sets.

    Science.gov (United States)

    Oliver, Jonathan M; Kreutzer, Andreas; Jenke, Shane C; Phillips, Melody D; Mitchell, Joel B; Jones, Margaret T

    2016-01-01

    This investigation compared the kinetics and kinematics of cluster sets (CLU) and traditional sets (TRD) during the back squat in resistance-trained (RT) and untrained (UT) men. Twenty-four participants (RT = 12, 25 ± 1 year, 179.1 ± 2.2 cm, 84.6 ± 2.1 kg; UT = 12, 25 ± 1 year, 180.1 ± 1.8 cm, 85.4 ± 3.8 kg) performed TRD (4 × 10, 120-second rest) and CLU (4 × (2 × 5), 30 seconds between clusters, 90 seconds between sets) at 70% of one-repetition maximum, in randomized order. Kinematics and kinetics were sampled with a force plate and linear position transducers. Resistance-trained men produced greater overall force, velocity, and power; however, similar patterns were observed in all variables when comparing conditions. Cluster sets produced significantly greater force in isolated repetitions in sets 1-3, and consistently greater force in set 4 owing to a required reduction in load, resulting in greater total volume load (CLU, 3302.4 ± 102.7 kg; TRD, 3274.8 ± 102.8 kg). Velocity loss was lessened in CLU, resulting in significantly higher velocities in sets 2 through 4. Furthermore, higher velocities were produced by CLU during the later repetitions of each set. Cluster sets produced greater power output for an increasing number of repetitions in each set (set 1, 5 repetitions; sets 2 and 3, 6 repetitions; set 4, 8 repetitions), and the difference between conditions increased over subsequent sets. Time under tension increased over each set and was greater in TRD. This study demonstrates that the greater power output during CLU back squatting is driven by greater velocity; therefore, velocity may be a useful measure by which to assess power.

  6. Active convergence between the Lesser and Greater Caucasus in Georgia: Constraints on the tectonic evolution of the Lesser-Greater Caucasus continental collision

    Science.gov (United States)

    Sokhadze, G.; Floyd, M.; Godoladze, T.; King, R.; Cowgill, E. S.; Javakhishvili, Z.; Hahubia, G.; Reilinger, R.

    2018-01-01

    We present and interpret newly determined site motions derived from GPS observations made from 2008 through 2016 in the Republic of Georgia, which constrain the rate and locus of active shortening in the Lesser-Greater Caucasus continental collision zone. Observation sites are located along two ∼160 km-long profiles crossing the Lesser-Greater Caucasus boundary zone: one crossing the Rioni Basin in western Georgia and the other crossing further east near the longitude of Tbilisi. Convergence across the Rioni Basin Profile occurs along the southern margin of the Greater Caucasus, near the surface trace of the north-dipping Main Caucasus Thrust Fault (MCTF) system, and is consistent with strain accumulation on the fault that generated the 1991 MW6.9 Racha earthquake. In contrast, convergence along the Tbilisi Profile occurs near Tbilisi and the northern boundary of the Lesser Caucasus (near the south-dipping Lesser Caucasus Thrust Fault), approximately 50-70 km south of the MCTF, which is inactive within the resolution of geodetic observations (< ± 0.5 mm/yr) at the location of the Tbilisi Profile. We suggest that the southward offset of convergence along strike of the range is related to the incipient collision of the Lesser-Greater Caucasus, and closing of the intervening Kura Basin, which is most advanced along this segment of the collision zone. The identification of active shortening near Tbilisi requires a reevaluation of seismic hazards in this area.

  7. IMPROVING PATIENT SAFETY:

    DEFF Research Database (Denmark)

    Bagger, Bettan; Taylor Kelly, Hélène; Hørdam, Britta

    Improving patient safety is both a national and an international priority, as millions of patients worldwide suffer injury or death every year due to unsafe care. University College Zealand employs innovative pedagogical approaches in educational design. Regional challenges related to geographic, social, and cultural factors have resulted in a greater emphasis upon digital technology. Attempts to improve patient safety by optimizing students' competencies in relation to the reporting of clinical errors have resulted in the development of an interdisciplinary e-learning concept. The program makes...

  8. Improved cell viability and hydroxyapatite growth on nitrogen ion-implanted surfaces

    Science.gov (United States)

    Shafique, Muhammad Ahsan; Murtaza, G.; Saadat, Shahzad; Uddin, Muhammad K. H.; Ahmad, Riaz

    2017-08-01

    Stainless steel 306 was implanted with various doses of nitrogen ions using a 2 MV pelletron accelerator to improve its surface biomedical properties. Raman spectroscopy reveals incubation of hydroxyapatite (HA) on all the samples, and the growth of incubated HA is greater in the samples with higher ion doses. SEM profiles depict uniform growth and greater spread of HA with higher ion implantation. The human oral fibroblast response is consistent with the Raman spectroscopy and SEM results; cell viability is found to be highest in the samples treated with the highest (more than 300%) dose. XRD profiles show greater HA peak intensity with ion implantation; a contact-angle study revealed hydrophilic behavior of all the samples, although the treated samples were less hydrophilic than the control samples. Nitrogen implantation yields greater bioactivity, improved surface affinity for HA incubation, and improved surface hardness.

  9. Studies concerning average volume flow and waterpacking anomalies in thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Lyczkowski, R.W.; Ching, J.T.; Mecham, D.C.

    1977-01-01

    One-dimensional hydrodynamic codes have been observed to exhibit anomalous behavior in the form of non-physical pressure oscillations and spikes. In our experience this anomalous behavior can sometimes result in mass depletion, steam table failure, and, in severe cases, problem abortion. In addition, these non-physical pressure spikes can result in long running times when small time steps are needed in an attempt to cope with the anomalous solution behavior. The source of these pressure spikes has been conjectured to be nonuniform enthalpy distribution, wave reflection off the closed end of a pipe, or abrupt changes in pressure history when the fluid changes from subcooled to two-phase conditions. It is demonstrated in this paper that many of the faults can be attributed to inadequate modeling of the average volume flow and of the sharp fluid density front crossing a junction. General corrective models are difficult to devise, since the causes of the problems touch on the very theoretical bases of the differential field equations and the associated solution scheme. For example, the fluid homogeneity assumption and the numerical extrapolation scheme place severe restrictions on the capability of a code to adequately model certain physical phenomena involving fluid discontinuities. The need for accurate junction and local properties to describe phenomena internal to a control volume often points to additional lengthy computations that are difficult to justify in terms of computational efficiency. Corrective models that are economical to implement and use are developed. When incorporated into RELAP4, a one-dimensional, homogeneous, transient thermal-hydraulic analysis computer code, they help mitigate many of the code's difficulties related to average volume flow and water-packing anomalies. An average volume flow model and a critical density model are presented. Computational improvements due to these models are also demonstrated.

  10. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases the validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  11. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  12. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    Science.gov (United States)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    The forecasting skill of complex weather and climate models can be improved by tuning the sensitive parameters that exert the greatest impact on simulated results using effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific problem deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary data sets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters in WRF simulations with different boundary conditions and spatial resolutions. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area, because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which shows that the ASMO method is highly efficient for optimizing WRF

  13. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic, and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average labour productivity in agriculture, forestry, and fishing. The analysis takes into account data concerning the economically active population and the gross value added in agriculture, forestry, and fishing in Romania during 2008-2011. The decomposition of average labour productivity into the factors affecting it is conducted by means of the u-substitution method.
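    The record does not spell out the u-substitution procedure itself; a minimal sketch of the standard substitution idea for a two-factor ratio W = Q/L (gross value added over economically active population), using made-up figures rather than the paper's Romanian data, could be:

```python
def substitution_decomposition(q0, l0, q1, l1):
    """Split the change in average productivity W = Q/L between two periods
    into an output effect (Q changes, L held at its base level) and a labour
    effect (L changes, Q already updated); the two effects sum exactly to
    the total change."""
    total = q1 / l1 - q0 / l0
    effect_q = q1 / l0 - q0 / l0   # output effect
    effect_l = q1 / l1 - q1 / l0   # labour effect
    return total, effect_q, effect_l

# Hypothetical figures: gross value added (billions) and active population (millions).
total, dq, dl = substitution_decomposition(20.0, 2.5, 24.0, 2.3)
```

The decomposition is exact but order-dependent: substituting L before Q would attribute the interaction term differently, which is one reason chained-substitution methods fix a substitution order.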

  14. Category structure determines the relative attractiveness of global versus local averages.

    Science.gov (United States)

    Vogel, Tobias; Carr, Evan W; Davis, Tyler; Winkielman, Piotr

    2018-02-01

    Stimuli that capture the central tendency of presented exemplars are often preferred, a phenomenon also known as the classic beauty-in-averageness effect. However, recent studies have shown that this effect can reverse under certain conditions. We propose that a key variable for such ugliness-in-averageness effects is the category structure of the presented exemplars. When exemplars cluster into multiple subcategories, the global average should no longer reflect the underlying stimulus distributions, and will thereby become unattractive. In contrast, the subcategory averages (i.e., local averages) should better reflect the stimulus distributions, and become more attractive. In 3 studies, we presented participants with dot patterns belonging to 2 different subcategories. Importantly, across studies, we also manipulated the distinctiveness of the subcategories. We found that participants preferred the local averages over the global average when they first learned to classify the patterns into 2 different subcategories in a contrastive categorization paradigm (Experiment 1). Moreover, participants still preferred local averages when first classifying patterns into a single category (Experiment 2) or when not classifying patterns at all during incidental learning (Experiment 3), as long as the subcategories were sufficiently distinct. Finally, as a proof of concept, we mapped our empirical results onto predictions generated by a well-known computational model of category learning, the Generalized Context Model (GCM). Overall, our findings emphasize the key role of categorization in understanding the nature of preferences, including any effects that emerge from stimulus averaging. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    Science.gov (United States)

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
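    The matrix-based averaging the authors advocate can be illustrated with a small sketch. This is not their exact method: it uses the chordal (Euclidean) mean, i.e. averaging rotation matrices elementwise and projecting back onto SO(3) with an SVD, and contrasts it with naive Euler-angle averaging on orientations that straddle the ±180° wrap:

```python
import numpy as np

def rot_z(a):
    """Rotation by angle a (radians) about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def matrix_mean(rotations):
    """Chordal mean: average the matrices elementwise, then project back onto
    SO(3) via SVD (the nearest rotation in the Frobenius sense)."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # guard against an improper (reflected) result
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

# Two headings straddling the +-180 degree wrap: +170 and -170 degrees about z.
angles = np.deg2rad([170.0, -170.0])
R_mean = matrix_mean([rot_z(a) for a in angles])
matrix_deg = np.rad2deg(np.arctan2(R_mean[1, 0], R_mean[0, 0]))  # geodesic midpoint, +-180
naive_deg = np.rad2deg(np.mean(angles))                          # 0: far from both inputs
```

The matrix mean lands at the 180° midpoint between the two headings, while arithmetic averaging of the angle parameters returns 0°, an orientation 170° away from both inputs.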

  16. Blood transfusion sampling and a greater role for error recovery.

    Science.gov (United States)

    Oldham, Jane

    Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.

  17. Greater bottoms upgrading with Albemarle's e-bed catalysts

    Energy Technology Data Exchange (ETDEWEB)

    Toshima, H.; Sedlacek, Z.; Backhouse, D.; Mayo, S.; Plantenga, F. [Albemarle Catalysts, Houston, TX (United States)

    2006-07-01

    The E-bed process is a heavy oil upgrading technology that provides near-isothermal reactor conditions at constant catalytic activity. However, E-bed conversion optimization is limited by reactor and downstream fouling problems caused by asphaltene precipitation. While asphaltene precipitation can be controlled by reducing hydrogenation, high hydrogenation activity is needed for the removal of sulfur and heavy metals. This presentation described an asphaltene molecule management concept to reduce the fouling of E-bed units. Sediment-reduction and high-hydrogenation catalysts were used in a modified E-bed process with a variety of feeds and operating conditions. It was observed that the KF1312 catalyst achieved much higher sediment-reduction capability along with satisfactory hydrogenation activity across the different crude oil sources tested. The catalyst hydrocracked the asphaltenes into smaller molecules, which created greater asphaltene solubility. The sediment-reduction capacity of the catalyst-staging technology is now being optimized. It was concluded that the technology will help to reduce fouling in E-bed processes and lead to improved conversion rates for refineries. refs., tabs., figs.

  18. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  19. Breeding of Greater and Lesser Flamingos at Sua Pan, Botswana ...

    African Journals Online (AJOL)

    to fledging was unknown owing to the rapid drying of the pan in late March 1999. No Greater Flamingo breeding was seen that season. Exceptional flooding during 1999–2000 produced highly favourable breeding conditions, with numbers of Greater and Lesser Flamingos breeding estimated to be 23 869 and 64 287 pairs, ...

  20. Surgical anatomy of greater occipital nerve and its relation to ...

    African Journals Online (AJOL)

    Introduction: The knowledge of the anatomy of greater occipital nerve and its relation to occipital artery is important for the surgeon. Blockage or surgical release of greater occipital nerve is clinically effective in reducing or eliminating chronic migraine symptoms. Aim: The aim of this research was to study the anatomy of ...

  1. Surgical anatomy of greater occipital nerve and its relation to ...

    African Journals Online (AJOL)

    Nancy Mohamed El Sekily

    2014-08-19

    Introduction: The knowledge of the anatomy of greater occipital nerve and its relation to occipital artery is important for the surgeon. Blockage or surgical release of greater occipital nerve is clinically effective in reducing or eliminating chronic migraine symptoms. Aim: The aim of this research was to ...

  2. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at the daily and hourly levels; the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was also able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Most American Academy of Orthopaedic Surgeons' online patient education material exceeds average patient reading level.

    Science.gov (United States)

    Eltorai, Adam E M; Sharma, Pranav; Wang, Jing; Daniels, Alan H

    2015-04-01

    Advancing health literacy has the potential to improve patient outcomes. The American Academy of Orthopaedic Surgeons' (AAOS) online patient education materials serve as a tool to improve health literacy for orthopaedic patients; however, it is unknown whether the materials currently meet the National Institutes of Health/American Medical Association's recommended sixth grade readability guidelines for health information or the mean US adult reading level of eighth grade. The purposes of this study were (1) to evaluate the mean grade level readability of online AAOS patient education materials; and (2) to determine what proportion of the online materials exceeded recommended (sixth grade) and mean US (eighth grade) reading level. Reading grade levels for 99.6% (260 of 261) of the online patient education entries from the AAOS were analyzed using the Flesch-Kincaid formula built into Microsoft Word software. Mean grade level readability of the AAOS patient education materials was 9.2 (SD ± 1.6). Two hundred fifty-one of the 260 articles (97%) had a readability score above the sixth grade level. The readability of the AAOS articles exceeded the sixth grade level by an average of 3.2 grade levels. Of the 260 articles, 210 (81%) had a readability score above the eighth grade level, which is the average reading level of US adults. Most of the online patient education materials from the AAOS had readability levels that are far too advanced for many patients to comprehend. Efforts to adjust the readability of online education materials to the needs of the audience may improve the health literacy of orthopaedic patients. Patient education materials can be made more comprehensible through use of simpler terms, shorter sentences, and the addition of pictures. More broadly, all health websites, not just those of the AAOS, should aspire to be comprehensible to the typical reader.
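    The Flesch-Kincaid grade computation the authors ran through Microsoft Word follows a published formula: grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59. A minimal re-implementation sketch is below; the crude vowel-group syllable counter means scores will not match Word exactly, and the function names are illustrative:

```python
import re

def syllables(word):
    """Crude syllable estimate: count contiguous vowel groups, minimum 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syl / len(words) - 15.59

# Short, monosyllabic sentences score well below sixth grade;
# a long sentence of polysyllabic jargon scores far above it.
simple = flesch_kincaid_grade("The cat sat on the mat. The dog ran.")
dense = flesch_kincaid_grade(
    "Postoperative rehabilitation necessitates individualized "
    "physiotherapeutic interventions.")
```

Longer sentences and more syllables per word both push the grade up, which is why the article's advice (simpler terms, shorter sentences) directly lowers the score.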

  4. Utility of T-DNA insertion mutagenesis in arabidopsis for crop improvement

    Energy Technology Data Exchange (ETDEWEB)

    Feldmann, K A [Arizona Univ., Tucson, AZ (United States). Dept. of Plant Sciences

    1995-11-01

    T-DNA insertion mutagenesis in Arabidopsis is an efficient and expedient method for isolating genes that may have agronomic importance in crop plants. More than 14,000 transformants, with an average of 1.5 inserts per transformant, have been generated in the laboratory at the University of Arizona, Tucson, United States of America. Assuming that the genome of Arabidopsis is 100 Mb and that insertion is random, there is a greater than 50% probability that any particular gene has been tagged in this population. These transformed lines have been screened for any visible alteration in phenotype. In addition, they have been screened under numerous selective regimes such as cold tolerance, auxin and ethylene resistance or sensitivity, and nitrate utilization, among many others. Twenty per cent of these transformants segregate for some type of mutation. Approximately 40% of these are due to T-DNA insertion. Genes have already been cloned from various developmental and biochemical pathways, including flower, root and trichome morphology, light and ethylene regulated growth, fatty acid desaturation and epicuticular wax (EW) production. Some of the isolated genes are being introduced into agronomic species in an attempt to improve specific traits. For example, two genes important in EW production have been introduced into Brassica oleracea (broccoli) to modify the nature of the EW such that engineered plants will show greater resistance to herbivorous insects. Similarly, genes involved in fatty acid desaturation, male sterility, height or nitrogen metabolism, to mention only a few, could also be utilized to improve certain crop traits via genetic engineering. Several of these examples are described. (author). 57 refs, 1 fig., 2 tabs.
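The "greater than 50% probability" claim above follows from a standard saturation-mutagenesis calculation: with N independent random inserts in a genome of G bases, the chance that a target of size g is hit at least once is 1 - (1 - g/G)^N. The ~3.5 kb target size below is an illustrative assumption (roughly a typical Arabidopsis gene plus flanking region), not a figure from the abstract:

```python
# Probability that a given gene is tagged at least once by random T-DNA insertion.
transformants = 14_000        # as stated in the abstract
inserts_per_line = 1.5        # average inserts per transformant (abstract)
genome_bp = 100e6             # 100 Mb Arabidopsis genome (abstract)
target_bp = 3_500             # assumed effective target size per gene (illustrative)

n_inserts = transformants * inserts_per_line
p_tagged = 1 - (1 - target_bp / genome_bp) ** n_inserts
print(f"P(gene tagged) = {p_tagged:.2f}")
```

With these assumptions the probability comes out just above one half, consistent with the abstract's ">50%" statement.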

  5. Greater Melbourne.

    Science.gov (United States)

    Wulff, M; Burke, T; Newton, P

    1986-03-01

With more than a quarter of its population born overseas, Melbourne, Australia, is rapidly changing from an all-white British outpost to a multicultural, multilingual community. Since the "white" Australian policy was abandoned after World War II, 3 million immigrants from 100 different countries have moved to Australia. Most of the immigrants come from New Zealand, Rhodesia, South Africa, Britain, Ireland, Greece, Turkey, Yugoslavia, Poland, and Indochina. Melbourne is Australia's 2nd largest city and houses 1 out of 5 Australians. Its 1984 population was 2,888,400. Melbourne's housing pattern consists of subdivisions; 75% of the population live in detached houses. Between 1954 and 1961 Melbourne grew at an annual rate of 3.5%; its growth rate between 1961 and 1971 still averaged 2.5%. In the 1970s the growth rate slowed to 1.4%. Metropolitan Melbourne has no central government but is divided into 56 councils and 8 regions. Both Australia's and Melbourne's fertility rates are high compared to the rest of the developed world, partly because of their younger age structure. 41% of Melbourne's population was under age 24 in 1981. Single-person households are growing faster than any other type. 71% of the housing is owner-occupied; in 1981 the median-sized dwelling had 5.2 rooms. Public housing only accounts for 2.6% of all dwellings. Fewer students graduate from high school in Australia than in other developed countries, and fewer graduates pursue higher education. Melbourne's suburban sprawl promotes private car travel. In 1980 Melbourne contained more than 28,000 retail establishments and 4200 restaurants and hotels. Industry accounts for 30% of employment, and services account for another 30%. Its largest industries are motor vehicles, clothing, and footwear. Although unemployment reached 10% after the 1973 energy crisis, by 1985 it was down to 6%.

  6. Are passive smoking, air pollution and obesity a greater mortality risk than major radiation incidents?

    Directory of Open Access Journals (Sweden)

    Smith Jim T

    2007-04-01

Full Text Available Abstract Background Following a nuclear incident, the communication and perception of radiation risk becomes a (perhaps the) major public health issue. In response to such incidents it is therefore crucial to communicate radiation health risks in the context of other more common environmental and lifestyle risk factors. This study compares the risk of mortality from past radiation exposures (to people who survived the Hiroshima and Nagasaki atomic bombs and those exposed after the Chernobyl accident) with risks arising from air pollution, obesity and passive and active smoking. Methods A comparative assessment of mortality risks from ionising radiation was carried out by estimating radiation risks for realistic exposure scenarios and assessing those risks in comparison with risks from air pollution, obesity and passive and active smoking. Results The mortality risk to populations exposed to radiation from the Chernobyl accident may be no higher than that for other more common risk factors such as air pollution or passive smoking. Radiation exposures experienced by the most exposed group of survivors of Hiroshima and Nagasaki led to an average loss of life expectancy significantly lower than that caused by severe obesity or active smoking. Conclusion Population-averaged risks from exposures following major radiation incidents are clearly significant, but may be no greater than those from other much more common environmental and lifestyle factors. This comparative analysis, whilst highlighting inevitable uncertainties in risk quantification and comparison, helps place the potential consequences of radiation exposures in the context of other public health risks.

  7. The effect of wind shielding and pen position on the average daily weight gain and feed conversion rate of grower/finisher pigs

    DEFF Research Database (Denmark)

    Jensen, Dan B.; Toft, Nils; Cornou, Cécile

    2014-01-01

… of the effects of wind shielding, linear mixed models were fitted to describe the average daily weight gain and feed conversion rate of 1271 groups (14 individuals per group) of purebred Duroc, Yorkshire and Danish Landrace boars, as a function of shielding (yes/no), insert season (winter, spring, summer, autumn) … The effect could not be tested for Yorkshire and Danish Landrace due to lack of data on these breeds. For groups of pigs above the average start weight, a clear tendency of higher growth rates at greater distances from the central corridor was observed, with the most significant differences being between groups placed in the 1st and 4th pen (p=0.0001). A similar effect was not seen on smaller pigs. Pen placement appears to have no effect on feed conversion rate. No interaction effects between shielding and distance to the corridor could be demonstrated. Furthermore, in models including both factors …

  8. Technical concept for a Greater Confinement Disposal test facility

    International Nuclear Information System (INIS)

    Hunter, P.H.

    1982-01-01

For the past two years, Ford, Bacon and Davis has been performing technical services for the Department of Energy at the Nevada Test Site, specifically developing defense low-level waste management concepts for greater confinement disposal, with particular application to arid sites. The investigations have included the development of Criteria for Greater Confinement Disposal, NVO-234, which was published in May of 1981, and the draft of the technical concept for Greater Confinement Disposal, with the latest draft published in November 1981. The final draft of the technical concept and design specifications is expected to be published imminently. The document is a prerequisite to the actual construction and implementation of the demonstration facility this fiscal year. The GCD Criteria Document, NVO-234, is considered to contain information complementary and compatible with that being developed for the reserved section 10 CFR 61.51b of the NRC's proposed licensing rule for low-level waste disposal facilities.

  9. High-resolution quantification of atmospheric CO2 mixing ratios in the Greater Toronto Area, Canada

    Science.gov (United States)

    Pugliese, Stephanie C.; Murphy, Jennifer G.; Vogel, Felix R.; Moran, Michael D.; Zhang, Junhua; Zheng, Qiong; Stroud, Craig A.; Ren, Shuzhan; Worthy, Douglas; Broquet, Gregoire

    2018-03-01

Many stakeholders are seeking methods to reduce carbon dioxide (CO2) emissions in urban areas, but reliable, high-resolution inventories are required to guide these efforts. We present the development of a high-resolution CO2 inventory available for the Greater Toronto Area and surrounding region in Southern Ontario, Canada (area of ~2.8 × 10^5 km^2, 26% of the province of Ontario). The new SOCE (Southern Ontario CO2 Emissions) inventory is available at 2.5 × 2.5 km spatial and hourly temporal resolution and characterizes emissions from seven sectors: area, residential natural-gas combustion, commercial natural-gas combustion, point, marine, on-road, and off-road. To assess the accuracy of the SOCE inventory, we developed an observation-model framework using the GEM-MACH chemistry-transport model run on a high-resolution grid with 2.5 km grid spacing coupled to the Fossil Fuel Data Assimilation System (FFDAS) v2 inventories for anthropogenic CO2 emissions and the European Centre for Medium-Range Weather Forecasts (ECMWF) land carbon model C-TESSEL for biogenic fluxes. A run using FFDAS for the Southern Ontario region was compared to a run in which its emissions were replaced by the SOCE inventory. Simulated CO2 mixing ratios were compared against in situ measurements made at four sites in Southern Ontario - Downsview, Hanlan's Point, Egbert and Turkey Point - in 3 winter months, January-March 2016. Model simulations had better agreement with measurements when using the SOCE inventory emissions versus other inventories, quantified using a variety of statistics such as correlation coefficient, root-mean-square error, and mean bias. Furthermore, when run with the SOCE inventory, the model had improved ability to capture the typical diurnal pattern of CO2 mixing ratios, particularly at the Downsview, Hanlan's Point, and Egbert sites. In addition to improved model-measurement agreement, the SOCE inventory offers a sectoral breakdown of emissions

  10. High-resolution quantification of atmospheric CO2 mixing ratios in the Greater Toronto Area, Canada

    Directory of Open Access Journals (Sweden)

    S. C. Pugliese

    2018-03-01

Full Text Available Many stakeholders are seeking methods to reduce carbon dioxide (CO2) emissions in urban areas, but reliable, high-resolution inventories are required to guide these efforts. We present the development of a high-resolution CO2 inventory available for the Greater Toronto Area and surrounding region in Southern Ontario, Canada (area of ~2.8 × 10^5 km^2, 26 % of the province of Ontario). The new SOCE (Southern Ontario CO2 Emissions) inventory is available at the 2.5 × 2.5 km spatial and hourly temporal resolution and characterizes emissions from seven sectors: area, residential natural-gas combustion, commercial natural-gas combustion, point, marine, on-road, and off-road. To assess the accuracy of the SOCE inventory, we developed an observation-model framework using the GEM-MACH chemistry-transport model run on a high-resolution grid with 2.5 km grid spacing coupled to the Fossil Fuel Data Assimilation System (FFDAS) v2 inventories for anthropogenic CO2 emissions and the European Centre for Medium-Range Weather Forecasts (ECMWF) land carbon model C-TESSEL for biogenic fluxes. A run using FFDAS for the Southern Ontario region was compared to a run in which its emissions were replaced by the SOCE inventory. Simulated CO2 mixing ratios were compared against in situ measurements made at four sites in Southern Ontario - Downsview, Hanlan's Point, Egbert and Turkey Point - in 3 winter months, January-March 2016. Model simulations had better agreement with measurements when using the SOCE inventory emissions versus other inventories, quantified using a variety of statistics such as correlation coefficient, root-mean-square error, and mean bias. Furthermore, when run with the SOCE inventory, the model had improved ability to capture the typical diurnal pattern of CO2 mixing ratios, particularly at the Downsview, Hanlan's Point, and Egbert sites. In addition to improved model-measurement agreement, the SOCE inventory offers a sectoral breakdown of emissions

  11. Asymptotic behaviour of time averages for non-ergodic Gaussian processes

    Science.gov (United States)

    Ślęzak, Jakub

    2017-08-01

In this work, we study the behaviour of time averages for stationary (non-ageing) but ergodicity-breaking Gaussian processes using their representation in Fourier space. We provide explicit formulae for various time-averaged quantities, such as the mean square displacement and density, and analyse the behaviour of the time-averaged characteristic function, which gives insight into the rich memory structure of the studied processes. Moreover, we show applications of the ergodic criteria in Fourier space, determining the ergodicity of the generalised Langevin equation's solutions.
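Time-averaged quantities of the kind analysed in this record, such as the time-averaged mean square displacement, are estimated from a single trajectory rather than an ensemble. A minimal sketch of that estimator (the sliding-window form commonly used in the single-trajectory literature, not code from the paper):

```python
def tamsd(x, lag):
    """Time-averaged mean squared displacement of a trajectory x at a given lag."""
    n = len(x)
    if not 0 < lag < n:
        raise ValueError("lag must satisfy 0 < lag < len(x)")
    # Average the squared displacement over all windows of length `lag`.
    return sum((x[t + lag] - x[t]) ** 2 for t in range(n - lag)) / (n - lag)

# Sanity check: for ballistic motion x(t) = v*t the TAMSD is exactly (v*lag)^2.
trajectory = [0.5 * t for t in range(100)]  # v = 0.5
print(tamsd(trajectory, 4))
```

For an ergodic process this time average converges to the ensemble MSD as the trajectory length grows; for the ergodicity-breaking processes studied here the two generally differ, which is what makes the explicit formulae in the paper useful.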

  12. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

Application of functional integration methods in the equilibrium statistical mechanics of quantum Bose systems is considered. We show that Gibbs equilibrium averages of Bose operators can be represented as path integrals over a special Gaussian measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure.

  13. Hyaluronic acid microneedle patch for the improvement of crow's feet wrinkles.

    Science.gov (United States)

    Choi, Sun Young; Kwon, Hyun Jung; Ahn, Ga Ram; Ko, Eun Jung; Yoo, Kwang Ho; Kim, Beom Joon; Lee, Changjin; Kim, Daegun

    2017-11-01

Hyaluronic acid (HA) has an immediate volumizing effect, due to its strong water-binding potential, and stimulates fibroblasts, causing collagen synthesis, with short- and long-term effects on wrinkle improvement. We investigated the efficacy and safety of HA microneedle patches for crow's feet wrinkles. Using a randomized split-face design, we compared microneedle patches with a topical application containing the same active ingredients. We enrolled 34 Korean female subjects with mild to moderate crow's feet wrinkles. The wrinkle on each side of the subject's face was randomly assigned to a HA microneedle patch or HA essence application twice a week for 8 weeks. Efficacy was evaluated at weeks 2, 4, and 8. Skin wrinkles were measured as average roughness using replica and PRIMOS. Skin elasticity was assessed using a cutometer. Two independent blinded dermatologists evaluated the changes after treatment using the global visual wrinkle assessment score. Subjects assessed wrinkles using the subject global assessment score. Skin wrinkles were significantly reduced and skin elasticity significantly increased in both groups, although improvement was greater in the patch group at week 8 after treatment. In the primary and cumulative skin irritation tests, the HA microneedle patch did not induce any skin irritation. The HA microneedle patch is more effective than the HA essence for wrinkle improvement and is safe and convenient, causing no skin irritation. © 2017 Wiley Periodicals, Inc.

  14. Reserves in western basins: Part 1, Greater Green River basin

    Energy Technology Data Exchange (ETDEWEB)

    1993-10-01

    This study characterizes an extremely large gas resource located in low permeability, overpressured sandstone reservoirs located below 8,000 feet drill depth in the Greater Green River basin, Wyoming. Total in place resource is estimated at 1,968 Tcf. Via application of geologic, engineering and economic criteria, the portion of this resource potentially recoverable as reserves is estimated. Those volumes estimated include probable, possible and potential categories and total 33 Tcf as a mean estimate of recoverable gas for all plays considered in the basin. Five plays (formations) were included in this study and each was separately analyzed in terms of its overpressured, tight gas resource, established productive characteristics and future reserves potential based on a constant $2/Mcf wellhead gas price scenario. A scheme has been developed to break the overall resource estimate down into components that can be considered as differing technical and economic challenges that must be overcome in order to exploit such resources: in other words, to convert those resources to economically recoverable reserves. Total recoverable reserves estimates of 33 Tcf do not include the existing production from overpressured tight reservoirs in the basin. These have estimated ultimate recovery of approximately 1.6 Tcf, or a per well average recovery of 2.3 Bcf. Due to the fact that considerable pay thicknesses can be present, wells can be economic despite limited drainage areas. It is typical for significant bypassed gas to be present at inter-well locations because drainage areas are commonly less than regulatory well spacing requirements.

  15. Urban-rural solar radiation loss in the atmosphere of Greater Cairo region, Egypt

    International Nuclear Information System (INIS)

    Robaa, S.M.

    2009-01-01

A comparative study of measured global solar radiation, G, during the period 1969-2006 and the corresponding global radiation loss in the atmosphere, R_L%, over urban and rural districts in the Greater Cairo region has been performed. The climatic variabilities of G radiation at the urban and rural sites are also investigated and discussed. Monthly, seasonal and annual mean values of extraterrestrial radiation, Go, and R_L% during four successive periods, (1969-1978), (1979-1988), (1989-1998) and (1999-2006), at the above two sites have been calculated and investigated. The results revealed that the urban area always received a lower amount of solar radiation due to urbanization factors. The yearly mean values of G radiation decreased distinctly from maximum values of 21.93 and 22.62 MJ m^-2 during 1970 to minimum values of 17.57 and 17.87 MJ m^-2 during 2004 and 2006, with average decrease rates of 0.09 and 0.10 MJ m^-2 per year for the urban and rural areas, respectively. The seasonal and annual mean anomalies of G radiation also gradually decreased from maximum values during the earliest period (1969-1978) to minimum values during the recent period (1999-2006). R_L% over the urban area was always higher than over the rural area. The urban-rural R_L% differences range from 0.61% in 1999 to 4.19% in 2002, with an average value of 2.20%. The yearly mean R_L% values gradually increased from minimum values of 29.47% and 27.28% during 1970 to maximum values of 43.50% and 42.60% during 2004 and 2006, with average increase rates of 0.28% and 0.32% per year for the urban and rural areas, respectively. The minimum value of R_L% (26.88%) occurred at the rural area during the summer season of the earliest period (1969-1978), while the maximum value of R_L% (51.27%) occurred at the urban area during the winter season of the most recent, urbanized period (1999-2006). The linear trend of the yearly variations of R_L% revealed that G values will reach zero
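The atmospheric loss quantity R_L% in this record is simply the share of extraterrestrial radiation Go that fails to reach the surface as measured global radiation G; a one-line sketch with illustrative values (not figures from the study's tables):

```python
def radiation_loss_percent(g_surface, g_extraterrestrial):
    """R_L% = 100 * (Go - G) / Go: share of extraterrestrial radiation lost in the atmosphere."""
    return 100.0 * (g_extraterrestrial - g_surface) / g_extraterrestrial

# Illustrative: 21 MJ m^-2 measured at the surface out of 30 MJ m^-2 at the top of atmosphere.
print(radiation_loss_percent(21.0, 30.0))
```

Because Go depends only on geometry, a falling G with a roughly constant Go directly produces the rising R_L% trend reported above.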

  16. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method.

    Science.gov (United States)

    Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu; Park, So Yeon

    2017-01-01

In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot methods of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.
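The two counting schemes compared in this record reduce to a mean versus a maximum over the scored fields. A minimal sketch computing both, plus the ΔKi-67 and H/A statistics the authors report (the field values are made up for illustration, not data from the study):

```python
def ki67_statistics(field_lis):
    """Compare the average and hot-spot Ki-67 labeling indices over scored fields."""
    average = sum(field_lis) / len(field_lis)  # average method: mean of all fields
    hot_spot = max(field_lis)                  # hot spot method: highest-labeling field
    return {
        "average": average,
        "hot_spot": hot_spot,
        "delta": hot_spot - average,           # ΔKi-67
        "ha_ratio": hot_spot / average,        # H/A ratio
    }

# Three representative areas, mirroring the study's counting design.
stats = ki67_statistics([10.0, 20.0, 60.0])
print(stats)
```

The maximum is inherently noisier than the mean under recounting, which is the intuition behind the paper's reproducibility argument for the average method.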

  17. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method.

    Directory of Open Access Journals (Sweden)

    Min Hye Jang

Full Text Available In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot methods of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.

  18. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  19. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

-, No. 304 (2006), pp. 1-65. ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords: tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  20. Application of reversible denoising and lifting steps with step skipping to color space transforms for improved lossless compression

    Science.gov (United States)

    Starosolski, Roman

    2016-07-01

    Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.

  1. Comparison of power pulses from homogeneous and time-average-equivalent models

    International Nuclear Information System (INIS)

    De, T.K.; Rouben, B.

    1995-01-01

The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three-dimensional power distribution as that generated by a time-average model. However, it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time-dependent calculations, simulations of the power pulse following a hypothetical large-loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show that there is a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed to the fact that voiding is not complete, but also to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs

  2. Greater Trochanteric Pain Syndrome: Percutaneous Tendon Fenestration Versus Platelet-Rich Plasma Injection for Treatment of Gluteal Tendinosis.

    Science.gov (United States)

    Jacobson, Jon A; Yablon, Corrie M; Henning, P Troy; Kazmers, Irene S; Urquhart, Andrew; Hallstrom, Brian; Bedi, Asheesh; Parameswaran, Aishwarya

    2016-11-01

The purpose of this study was to compare ultrasound-guided percutaneous tendon fenestration to platelet-rich plasma (PRP) injection for treatment of greater trochanteric pain syndrome. After Institutional Review Board approval was obtained, patients with symptoms of greater trochanteric pain syndrome and ultrasound findings of gluteal tendinosis or a partial tear were included. Pain scores were recorded at baseline, week 1, and week 2 after treatment. Retrospective clinic record review assessed patient symptoms. The study group consisted of 30 patients (24 female), of whom 50% were treated with fenestration and 50% were treated with PRP. The gluteus medius was treated in 73% and 67% in the fenestration and PRP groups, respectively. Tendinosis was present in all patients. In the fenestration group, mean pain scores were 32.4 at baseline, 16.8 at time point 1, and 15.2 at time point 2. In the PRP group, mean pain scores were 31.4 at baseline, 25.5 at time point 1, and 19.4 at time point 2. Retrospective follow-up showed significant pain score improvement from baseline to time points 1 and 2 (P.99). Our study shows that both ultrasound-guided tendon fenestration and PRP injection are effective for treatment of gluteal tendinosis, showing symptom improvement in both treatment groups. © 2016 by the American Institute of Ultrasound in Medicine.

  3. The Value of Multivariate Model Sophistication: An Application to pricing Dow Jones Industrial Average options

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

We assess the predictive accuracy of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 248 multivariate models that differ … innovation for a Laplace innovation assumption improves the pricing in a smaller way. Apart from investigating directly the value of model sophistication in terms of dollar losses, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performance.

  4. Method to improve reliability of a fuel cell system using low performance cell detection at low power operation

    Science.gov (United States)

    Choi, Tayoung; Ganapathy, Sriram; Jung, Jaehak; Savage, David R.; Lakshmanan, Balasubramanian; Vecasey, Pamela M.

    2013-04-16

    A system and method for detecting a low performing cell in a fuel cell stack using measured cell voltages. The method includes determining that the fuel cell stack is running, the stack coolant temperature is above a certain temperature and the stack current density is within a relatively low power range. The method further includes calculating the average cell voltage, and determining whether the difference between the average cell voltage and the minimum cell voltage is greater than a predetermined threshold. If the difference between the average cell voltage and the minimum cell voltage is greater than the predetermined threshold and the minimum cell voltage is less than another predetermined threshold, then the method increments a low performing cell timer. A ratio of the low performing cell timer and a system run timer is calculated to identify a low performing cell.
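The detection logic described in this patent record can be sketched as a simple per-step monitor. The thresholds, minimum coolant temperature, and low-power current-density window below are placeholder values for illustration, not figures from the patent:

```python
def update_low_cell_timer(cell_voltages, coolant_temp_c, current_density,
                          low_cell_timer, run_timer,
                          min_temp_c=40.0, low_power_range=(0.05, 0.2),
                          spread_threshold=0.15, min_voltage_threshold=0.60):
    """One monitoring step: increment the low-performing-cell timer when all conditions hold."""
    run_timer += 1
    in_low_power = low_power_range[0] <= current_density <= low_power_range[1]
    if coolant_temp_c > min_temp_c and in_low_power:
        avg_v = sum(cell_voltages) / len(cell_voltages)
        min_v = min(cell_voltages)
        # Flag criterion: large spread between average and minimum cell voltage,
        # AND an absolutely low minimum cell voltage.
        if (avg_v - min_v) > spread_threshold and min_v < min_voltage_threshold:
            low_cell_timer += 1
    ratio = low_cell_timer / run_timer  # a persistently large ratio indicates a low performing cell
    return low_cell_timer, run_timer, ratio

# Healthy stack: uniform voltages, timer does not increment.
lt_ok, rt_ok, ratio_ok = update_low_cell_timer([0.75] * 10, 60.0, 0.1, 0, 0)
# One weak cell: spread and minimum-voltage criteria both trip, timer increments.
lt_bad, rt_bad, ratio_bad = update_low_cell_timer([0.75] * 9 + [0.50], 60.0, 0.1, 0, 0)
print(ratio_ok, ratio_bad)
```

Using the ratio of the two timers, rather than the raw counter, keeps the diagnostic meaningful across drive cycles of different lengths.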

  5. Combined application of tenuigenin and β-asarone improved the efficacy of memantine in treating moderate-to-severe Alzheimer’s disease

    Directory of Open Access Journals (Sweden)

    Chang W

    2018-03-01

Full Text Available Wenguang Chang,1 Junfang Teng2 1Department of Neurology, Xinxiang Central Hospital, Xinxiang, Henan, People's Republic of China; 2Department of Neurology, The First Affiliated Hospital of Zhengzhou University, Henan, People's Republic of China Background: Alzheimer's disease (AD) is a slowly progressive neurodegenerative disease which cannot be cured at present. The aim of this study was to assess whether the combined application of β-asarone and tenuigenin could improve the efficacy of memantine in treating moderate-to-severe AD. Patients and methods: One hundred and fifty-two patients with moderate-to-severe AD were recruited and assigned to two groups. Patients in the experiment group received β-asarone 10 mg/d, tenuigenin 10 mg/d, and memantine 5–20 mg/d. Patients in the control group only received memantine 5–20 mg/d. The Mini Mental State Examination (MMSE), Clinical Dementia Rating Scale (CDR), and Activities of Daily Living (ADL) were used to assess the therapeutic effects. The drug-related adverse events were used to assess the safety and acceptability. Treatment was continued for 12 weeks. Results: After 12 weeks of treatment, the average MMSE scores, ADL scores, and CDR scores in the two groups were significantly improved. But, compared to the control group, the experimental group had a significantly higher average MMSE score (p<0.00001), lower average ADL score (p=0.00002), and lower average CDR score (p=0.030). Meanwhile, the rates of adverse events were similar between the two groups. Subgroup analysis indicated that the most likely candidates to benefit from this novel method might be 60–74-year-old male patients with moderate AD. Conclusion: These results demonstrated that the combined application of β-asarone and tenuigenin could improve the efficacy of memantine in treating moderate-to-severe AD. The clinical applicability of this novel method showed great promise and should be further explored.

  6. The Average Temporal and Spectral Evolution of Gamma-Ray Bursts

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1999-01-01

    We have averaged bright BATSE bursts to uncover the average overall temporal and spectral evolution of gamma-ray bursts (GRBs). We align the temporal structure of each burst by setting its duration to a standard duration, which we call T⟨Dur⟩. The observed average "aligned T⟨Dur⟩" profile for 32 bright bursts with intermediate durations (16–40 s) has a sharp rise (within the first 20% of T⟨Dur⟩) and then a linear decay. Exponentials and power laws do not fit this decay. In particular, the power law seen in the X-ray afterglow (∝ T^-1.4) is not observed during the bursts, implying that the X-ray afterglow is not just an extension of the average temporal evolution seen during the gamma-ray phase. The average burst spectrum has a low-energy slope of -1.03, a high-energy slope of -3.31, and a peak in the νFν distribution at 390 keV. We determine the average spectral evolution. Remarkably, it is also a linear function, with the peak of the νFν distribution given by ∼680 − 600(T/T⟨Dur⟩) keV. Since both the temporal profile and the peak energy are linear functions, on average, the peak energy is linearly proportional to the intensity. This behavior is inconsistent with the external shock model. The observed temporal and spectral evolution is also inconsistent with that expected from variations in just a Lorentz factor. Previously, trends have been reported for GRB evolution, but our results are quantitative relationships that models should attempt to explain. © 1999 The American Astronomical Society
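
    The linear relationships quoted in this abstract can be sketched numerically. Only the 20% rise fraction, the linear decay, and E_peak ∼ 680 − 600(T/T⟨Dur⟩) keV come from the abstract; the linear rise shape and the unit normalization of the profile are assumptions for illustration.

    ```python
    # Numerical sketch of the average GRB evolution quoted above (rise shape
    # and normalization are assumptions; the rest follows the abstract).
    def avg_profile(t_frac):
        """Normalized average intensity vs t/T<Dur>: rise over the first 20%
        of the duration, then linear decay to zero at t = T<Dur>."""
        if t_frac < 0.2:
            return t_frac / 0.2       # assumed linear rise to the peak
        return (1.0 - t_frac) / 0.8   # linear decay reported in the abstract

    def peak_energy_keV(t_frac):
        """Peak of the nu*F_nu distribution: ~680 - 600*(t/T<Dur>) keV."""
        return 680.0 - 600.0 * t_frac

    # Both quantities are linear in t/T<Dur>, so on the decay segment E_peak
    # is a linear function of intensity: E_peak = 80 + 480 * I.
    i_mid, ep_mid = avg_profile(0.6), peak_energy_keV(0.6)
    ```

    Eliminating t/T⟨Dur⟩ between the two linear functions is what makes the peak energy directly proportional to intensity, the property the abstract uses against the external shock model.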

  7. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  8. Is average daily travel time expenditure constant? In search of explanations for an increase in average travel time.

    NARCIS (Netherlands)

    van Wee, B.; Rietveld, P.; Meurs, H.

    2006-01-01

    Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably

  9. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and a standard error of estimate of 0.092. The data were also analyzed for a seasonal dependence, and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficients of determination are 0.93, 0.81, 0.94 and 0.93, and the standard errors of estimate are 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed; its coefficient of determination is 0.92 and its standard error of estimate is 0.083. A seasonal monthly average model was also developed, with a 0.91 coefficient of determination and a 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs
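
    The abstract reports only fit statistics, not the regression form. As a hedged sketch, a daily diffuse-vs-global radiation regression of this kind can be fitted by ordinary least squares; the linear form and all data values below are hypothetical, not the paper's.

    ```python
    # Hedged sketch: OLS fit of daily diffuse radiation H_d against daily
    # global radiation H. The linear form and the data are hypothetical.
    def ols_fit(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        b = sxy / sxx                # slope
        a = my - b * mx              # intercept
        ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        return a, b, 1.0 - ss_res / ss_tot   # intercept, slope, R^2

    # Hypothetical daily global (H) and diffuse (H_d) radiation, MJ/m^2/day:
    H  = [10.0, 14.0, 18.0, 22.0, 26.0]
    Hd = [4.1, 5.0, 5.8, 6.9, 7.8]
    a, b, r2 = ols_fit(H, Hd)
    ```

    The R² returned here is the "determination coefficient" the abstract cites for each of its models.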

  10. Parameterization of Time-Averaged Suspended Sediment Concentration in the Nearshore

    Directory of Open Access Journals (Sweden)

    Hyun-Doug Yoon

    2015-11-01

    Full Text Available To quantify the effect of wave breaking turbulence on sediment transport in the nearshore, the vertical distribution of time-averaged suspended sediment concentration (SSC in the surf zone was parameterized in terms of the turbulent kinetic energy (TKE at different cross-shore locations, including the bar crest, bar trough, and inner surf zone. Using data from a large-scale laboratory experiment, a simple relationship was developed between the time-averaged SSC and the time-averaged TKE. The vertical variation of the time-averaged SSC was fitted to an equation analogous to the turbulent dissipation rate term. At the bar crest, the proposed equation was slightly modified to incorporate the effect of near-bed sediment processes and yielded reasonable agreement. This parameterization yielded the best agreement at the bar trough, with a coefficient of determination R2 ≥ 0.72 above the bottom boundary layer. The time-averaged SSC in the inner surf zone showed good agreement near the bed but poor agreement near the water surface, suggesting that there is a different sedimentation mechanism that controls the SSC in the inner surf zone.

  11. 42 CFR 100.2 - Average cost of a health insurance policy.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Average cost of a health insurance policy. 100.2... VACCINE INJURY COMPENSATION § 100.2 Average cost of a health insurance policy. For purposes of determining..., less certain deductions. One of the deductions is the average cost of a health insurance policy, as...

  12. Annual average equivalent dose of workers form health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    Personnel monitoring data collected during 1985 and 1991 for workers in the health area were studied, providing a general overview of changes in the annual average equivalent dose. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses across the same sectors in different hospitals. (C.G.C.)

  13. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
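
    The time-averaged MSD central to these strategies can be sketched as follows. The geometric Brownian motion parameters and series here are illustrative, not the historical Dow Jones data analyzed in the paper.

    ```python
    import math
    import random

    # Sketch of the time-averaged MSD applied to a simulated geometric
    # Brownian motion (drift, volatility and length are illustrative).
    def ta_msd(x, lag):
        """Time-averaged MSD at a lag: mean of (x[t+lag]-x[t])**2 over start times t."""
        return sum((x[t + lag] - x[t]) ** 2 for t in range(len(x) - lag)) / (len(x) - lag)

    random.seed(0)
    mu, sigma = 0.0005, 0.01
    series = [1.0]
    for _ in range(5000):
        # GBM step: S_{t+1} = S_t * exp(mu - sigma^2/2 + sigma * N(0,1))
        series.append(series[-1] * math.exp(mu - sigma ** 2 / 2 + sigma * random.gauss(0, 1)))

    msd = {lag: ta_msd(series, lag) for lag in (1, 10, 100)}
    ```

    Repeating the computation over growing fractions of the series, or with a delay before the averaging window, gives the ageing and delay-time variants the abstract describes.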

  14. 40 CFR 63.1332 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... control technology or standard had been applied instead of the pollution prevention measure. (d) The... technology with an approved nominal efficiency greater than 98 percent or a pollution prevention measure... Section 63.1332 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  15. Expatriate job performance in Greater China: Does age matter?

    DEFF Research Database (Denmark)

    Selmer, Jan; Lauring, Jakob; Feng, Yunxia

    to expatriates in Chinese societies. It is possible that older business expatriates will receive more respect and be treated with more deference in a Chinese cultural context than their apparently younger colleagues. This may have a positive impact on expatriates’ job performance. To empirically test...... this presumption, business expatriates in Greater China were targeted by a survey. Controlling for the potential bias of a number of background variables, results indicate that contextual/managerial performance, including general managerial functions applied to the subsidiary in Greater China, had a positive...

  16. Absenteeism movement in Greater Poland in 1840–1902

    OpenAIRE

    Izabela Krasińska

    2013-01-01

    The article presents the origins and development of the idea of absenteeism in Greater Poland in the 19th century. The start date for the research is 1840, which is considered to be a breakthrough year in the history of an organized absenteeism movement in Greater Poland. It was due to the Association for the Suppression of the Use of Vodka (Towarzystwo ku Przytłumieniu Używania Wódki) in the Great Duchy of Posen that was then established in Kórnik. It was a secular organization that came int...

  17. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
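
    Maxwell constraint counting, the mean-field baseline described above, reduces to simple arithmetic: each rigid body contributes 6 degrees of freedom, each bar removes at most one, and 6 global rigid-body motions are discounted. A minimal sketch (the network sizes below are hypothetical):

    ```python
    # Sketch of Maxwell constraint counting (MCC) for a body-bar network:
    # a rigorous lower bound on internal degrees of freedom, ignoring the
    # spatial distribution of the constraints.
    def maxwell_count(n_bodies, n_bars):
        """Lower bound on internal DOF: max(0, 6*N - bars - 6)."""
        return max(0, 6 * n_bodies - n_bars - 6)

    dof_under = maxwell_count(10, 40)   # 60 - 40 - 6 = 14: under-constrained
    dof_over  = maxwell_count(10, 60)   # floored at 0: globally over-constrained
    ```

    The VPG refines exactly this count by letting fractional "pebbles" (constraint probabilities) flow through the actual network topology instead of assuming a uniform constraint density.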

  18. Improved core monitoring for improved plant operations

    International Nuclear Information System (INIS)

    Mueller, N.P.

    1987-01-01

    Westinghouse has recently installed a core on-line surveillance, monitoring and operations system (COSMOS), which uses only currently available core and plant data to accurately reconstruct the core average axial and radial power distributions. This information is provided to the operator in an immediately usable, human-engineered format and is accumulated for use in application programs that provide improved core performance predictive tools and a data base for improved fuel management. Dynamic on-line real-time axial and radial core monitoring supports a variety of plant operations to provide a favorable cost/benefit ratio for such a system. Benefits include: (1) relaxation or elimination of certain technical specifications to reduce surveillance and reporting requirements and allow higher availability factors, (2) improved information displays, predictive tools, and control strategies to support more efficient core control and reduce effluent production, and (3) an expanded burnup data base for improved fuel management. Such systems can be backfit into operating plants without changing the existing instrumentation and control system and can frequently be implemented on existing plant computer capacity

  19. Gender differences in commuting behavior: Women's greater sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Olmo Sanchez, M.I.; Maeso Gonzalez, E.

    2016-07-01

    Women's greater sensitivity to changes in their environment is one of the most distinguishing features between the genders. This article examines women's greater sensitivity to the different variables that influence their commuting modal choice. To do this, the gender gaps detected in the choice of means of transport for commuting trips are quantified with respect to decision factors such as age, education level, driver's license, access to private transport, location, household size and net income. The results show a greater female sensitivity to the different variables that affect modal choice, which helps to better understand the differing mobility patterns and is useful for planning measures favoring sustainable mobility policies and equity. (Author)

  20. Spatiotemporal distribution and variation of GPP in the Greater Khingan Mountains from 1982 to 2015

    Science.gov (United States)

    Hu, L.; Fan, W.; Liu, S.; Ren, H.; Xu, X.

    2017-12-01

    GPP (Gross Primary Productivity) is an important index of plant productivity because it measures the organic matter accumulated by green plants on land through the assimilation of atmospheric carbon dioxide by photosynthesis and a series of physiological processes. GPP therefore plays a significant role in studying the carbon sink of terrestrial ecosystems and plants' reaction to global climate change. Remote sensing provides an efficient way to estimate GPP at regional and global scales, and its products can be used to monitor the spatiotemporal variation of terrestrial ecosystems. The Greater Khingan Mountains contain the only bright coniferous forest of the cool temperate zone in China and account for about 30% of the forest in China. The region is sensitive to climate change, but its forest coverage has varied significantly due to fire disasters, excessive deforestation and other causes. Here, we studied the variation pattern of GPP in the Greater Khingan Mountains and sought the factors driving the change, in order to improve the understanding of what has happened, and will happen, to plants and the carbon cycle under climate change. Based on GPP products from the GLASS program, we first studied the spatial distribution of plants in the Greater Khingan Mountains from 1982 to 2015. With a linear regression model, seasonal and inter-annual GPP variability were explored at pixel and regional scales. We analyzed climatic factors (e.g., temperature and precipitation) and terrain in order to find the driving factors for the GPP variations. The Growing Season Length (GSL) was also considered as a factor and was retrieved from GIMMS 3g NDVI datasets using a dynamic threshold method. We found that GPP in the study area decreased linearly with increasing elevation. Both annual accumulated GPP (AAG) and maximum daily GPP (during mid-June to mid-July) increased markedly over the past 34 years under climate warming and drying (Fig. 1 and Fig. 2).
Further

  1. 47 CFR 64.1801 - Geographic rate averaging and rate integration.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Geographic rate averaging and rate integration. 64.1801 Section 64.1801 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Geographic Rate Averaging and...

  2. [Autoerotic fatalities in Greater Dusseldorf].

    Science.gov (United States)

    Hartung, Benno; Hellen, Florence; Borchard, Nora; Huckenbeck, Wolfgang

    2011-01-01

    Autoerotic fatalities in the Greater Dusseldorf area correspond to the relevant medicolegal literature. Our results included exclusively young to middle-aged, usually single men who were found dead in their city apartments. The clothing and devices used showed great variety. Women's or fetish clothing and complex shackling or hanging devices were disproportionately frequent. In most cases, death occurred due to hanging or ligature strangulation. There was no increased incidence of underlying psychiatric disorders. In most of the deceased, no alcohol intoxication, or at least none of note, was found. Occasionally, it may be difficult to reliably differentiate autoerotic accidents and accidents occurring in connection with practices of bondage & discipline, dominance & submission (BDSM) from natural death, suicide or homicide.

  3. The Economic Value of the Greater Montreal Blue Network (Quebec, Canada): A Contingent Choice Study Using Real Projects to Estimate Non-Market Aquatic Ecosystem Services Benefits.

    Directory of Open Access Journals (Sweden)

    Thomas G Poder

    Full Text Available This study used a contingent choice method to determine the economic value of improving various ecosystem services (ESs) of the Blue Network of Greater Montreal (Quebec, Canada). Three real projects were used and the evaluation focused on six ESs that are related to freshwater aquatic ecosystems: biodiversity, water quality, carbon sequestration, recreational activities, landscape aesthetics and education services. We also estimated the value associated with the superficies of restored sites. We calculated the monetary value that a household would be willing to pay for each additional qualitative or quantitative unit of different ESs, and these marginal values range from $0.11 to $15.39 per household per unit. Thus, under certain assumptions, we determined the monetary values that all Quebec households would allocate to improve each ES in Greater Montreal by one unit. The most valued ES was water quality ($13.5 million), followed by education services ($10.7 million), recreational activities ($8.9 million), landscape aesthetics ($4.1 million), biodiversity ($1.2 million), and carbon sequestration ($0.1 million). Our results ascribe monetary values to improved (or degraded) aquatic ecosystems in the Blue Network of Greater Montreal, but can also enhance economic analyses of various aquatic ecosystem restoration and management projects.

  4. The Economic Value of the Greater Montreal Blue Network (Quebec, Canada): A Contingent Choice Study Using Real Projects to Estimate Non-Market Aquatic Ecosystem Services Benefits.

    Science.gov (United States)

    Poder, Thomas G; Dupras, Jérôme; Fetue Ndefo, Franck; He, Jie

    2016-01-01

    This study used a contingent choice method to determine the economic value of improving various ecosystem services (ESs) of the Blue Network of Greater Montreal (Quebec, Canada). Three real projects were used and the evaluation focused on six ESs that are related to freshwater aquatic ecosystems: biodiversity, water quality, carbon sequestration, recreational activities, landscape aesthetics and education services. We also estimated the value associated with the superficies of restored sites. We calculated the monetary value that a household would be willing to pay for each additional qualitative or quantitative unit of different ESs, and these marginal values range from $0.11 to $15.39 per household per unit. Thus, under certain assumptions, we determined the monetary values that all Quebec households would allocate to improve each ES in Greater Montreal by one unit. The most valued ES was water quality ($13.5 million), followed by education services ($10.7 million), recreational activities ($8.9 million), landscape aesthetics ($4.1 million), biodiversity ($1.2 million), and carbon sequestration ($0.1 million). Our results ascribe monetary values to improved (or degraded) aquatic ecosystems in the Blue Network of Greater Montreal, but can also enhance economic analyses of various aquatic ecosystem restoration and management projects.

  5. Stingless bees further improve apple pollination and production

    Directory of Open Access Journals (Sweden)

    Blandina Felipe Viana

    2014-10-01

    Full Text Available The use of Africanised honeybee (Apis mellifera scutellata Lepeletier) hives to increase pollination success in apple orchards is a widespread practice. However, this study is the first to investigate the number of honeybee hives ha-1 required to increase the production of fruits and seeds as well as the potential contribution of the stingless bee Mandaçaia (Melipona quadrifasciata anthidioides Lepeletier). We performed tests in a 43-ha apple orchard located in the municipality of Ibicoara (13º24’50.7’’S, 41º17’7.4’’W) in Chapada Diamantina, State of Bahia, Brazil. In 2011, fruits of the Eva variety set six seeds on average, and neither a greater number of hives (from 7 to 11 hives ha-1) nor a greater number of pollen collectors at the honeybee hives had a general effect on seed number. Without wild pollinators, seven Africanised honeybee hives ha-1 with pollen collectors is currently the best option for apple producers, because no further increase in seed number was observed at higher hive densities. In 2012, supplementation with both stingless bees (12 hives ha-1) and Africanised honeybees (7 hives ha-1) provided higher seed and fruit production than supplementation with honeybees (7 hives ha-1) alone. Therefore, the stingless bee can improve the performance of the honeybee as a pollinator of apple flowers, since the presence of both of these bees results in increases in apple fruit and seed number.

  6. Radiographic features of tuberculous osteitis in greater trochanter and lschium

    International Nuclear Information System (INIS)

    Hahm, So Hee; Lee, Ye Ri; Kim, Dong Jin; Sung, Ki Jun; Lim, Jong Nam

    1996-01-01

    To evaluate, if possible, the radiographic features of tuberculous osteitis in the greater trochanter and ischium, and to determine the cause of the lesions. We retrospectively reviewed the plain radiographic findings of 14 patients with histologically proven tuberculous osteitis involving the greater trochanter and ischium. In each case, the following were analyzed: morphology of bone destruction, including cortical erosion; periosteal reaction; presence or absence of calcific shadows in adjacent soft tissue. On the basis of an analysis of radiographic features and correlation of the anatomy with adjacent structures, we attempted to determine causes. Of the 14 cases evaluated, 12 showed various degrees of extrinsic erosion of the outer cortical bone of the greater trochanter and ischium; in two cases, bone destruction was so severe that the radiographic features of advanced perforated osteomyelitis were simulated. In addition to findings of bone destruction in these twelve cases, the presence of sequestrum or calcific shadows was seen in adjacent soft tissue. Tuberculous osteitis in the greater trochanter and ischium showed the characteristic findings of chronic extrinsic erosion. On the basis of these findings, we suggest that these lesions result from an extrinsic pathophysiologic cause such as adjacent bursitis

  7. Radiographic features of tuberculous osteitis in greater trochanter and lschium

    Energy Technology Data Exchange (ETDEWEB)

    Hahm, So Hee; Lee, Ye Ri [Hanil Hospital Affiliated to KEPCO, Seoul (Korea, Republic of); Kim, Dong Jin; Sung, Ki Jun [Yonsei Univ. Wonju College of Medicine, Wonju (Korea, Republic of); Lim, Jong Nam [Konkuk Univ. College of Medicine, Seoul (Korea, Republic of)

    1996-11-01

    To evaluate, if possible, the radiographic features of tuberculous osteitis in the greater trochanter and ischium, and to determine the cause of the lesions. We retrospectively reviewed the plain radiographic findings of 14 patients with histologically proven tuberculous osteitis involving the greater trochanter and ischium. In each case, the following were analyzed: morphology of bone destruction, including cortical erosion; periosteal reaction; presence or absence of calcific shadows in adjacent soft tissue. On the basis of an analysis of radiographic features and correlation of the anatomy with adjacent structures, we attempted to determine causes. Of the 14 cases evaluated, 12 showed various degrees of extrinsic erosion of the outer cortical bone of the greater trochanter and ischium; in two cases, bone destruction was so severe that the radiographic features of advanced perforated osteomyelitis were simulated. In addition to findings of bone destruction in these twelve cases, the presence of sequestrum or calcific shadows was seen in adjacent soft tissue. Tuberculous osteitis in the greater trochanter and ischium showed the characteristic findings of chronic extrinsic erosion. On the basis of these findings, we suggest that these lesions result from an extrinsic pathophysiologic cause such as adjacent bursitis.

  8. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is solved with complex mathematical models based on a mean square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and is not dependent on the Earth magnetic model; it is, however, dependent on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. The solution given by this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
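
    The b-dot control law whose damping effect the paper exploits is conventionally m = −k dB/dt, computed from time-differenced body-frame magnetometer samples. A minimal sketch under that assumption; the gain and sample values are illustrative, and the paper's averaging technique itself is not reproduced here.

    ```python
    # Minimal sketch of a b-dot detumbling law: command a magnetic dipole
    # opposing the measured field's rate of change (gain k is illustrative).
    def bdot_dipole(b_now, b_prev, dt, k=1e4):
        """Commanded dipole moment (A*m^2): m = -k * dB/dt, per axis."""
        return [-k * (bn - bp) / dt for bn, bp in zip(b_now, b_prev)]

    # Two consecutive body-frame magnetometer readings (tesla), 1 s apart:
    b_prev = [2.0e-5, -1.0e-5, 4.0e-5]
    b_now  = [2.1e-5, -1.2e-5, 4.0e-5]
    m = bdot_dipole(b_now, b_prev, dt=1.0)
    ```

    Because the commanded dipole always opposes the apparent field rotation in the body frame, the law removes rotational kinetic energy regardless of which Earth-field model, if any, is assumed.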

  9. Improving the risk assessment of lipophilic persistent environmental chemicals in breast milk

    Science.gov (United States)

    BACKGROUND: A breastfeeding infant’s intake of persistent organic pollutants (POPs) may be much greater than his/her mother’s average daily POP exposure. In many cases, current human health risk assessment methods do not account for differences between maternal and infant POP exp...

  10. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process {X(t), t≥0} (in a stochastic setting, a fixed realization, i.e., sample path, of the underlying stochastic process) with state space S=(−∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
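
    For a finite discrete-time sample path, the equality between the time average and the expectation under the path's empirical frequency distribution holds as an identity, which a few lines of code can illustrate (the path and the function f below are arbitrary):

    ```python
    from collections import Counter

    # Time average of f(X(t)) along a fixed sample path equals the expectation
    # of f under the path's empirical (frequency) distribution.
    path = [1, 2, 2, 3, 1, 2, 3, 3, 3, 1]
    f = lambda x: x * x

    time_avg = sum(f(x) for x in path) / len(path)

    freq = Counter(path)  # empirical frequency distribution of the path
    expectation = sum(f(x) * n / len(path) for x, n in freq.items())
    ```

    The substance of the paper is the infinite-horizon case, where the limit exchanging time averaging and the frequency distribution requires the conditions the abstract describes.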

  11. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  12. Aquifer restoration system improvement using an acid fluid purge

    International Nuclear Information System (INIS)

    Hodder, E.A.; Peck, C.A.

    1992-01-01

    The implementation of a water pump acid purge procedure at a free-phase liquid hydrocarbon recovery site has increased water pump operational run times and improved the effectiveness of the aquifer restoration effort. Before introduction of this technique, pumps at some locations would fail within 14 days of operation due to CaSO₄·2H₂O (calcium sulfate) precipitate fouling. After acid purge implementation at these locations, pump operational life improved to an average of over 110 days. Other locations, where pump failures would occur within one month, were improved to approximately six months of operation. The increase in water pump run time has also improved the liquid hydrocarbon recovery rate by 2,000 gallons per day, representing a 20% increase for the aquifer restoration system. Other concepts tested in attempts to prolong pump life included: specially designed electric submersible pumps, submersible pump shrouds intended to reduce the fluid pressure shear that enhances CaSO₄·2H₂O precipitation, and high volume pneumatic gas lift pumps. Due to marginal pump life improvement or other undesirable operational features, these concepts were primarily ineffective. The purge apparatus utilizes an acid pump, hose, and discharge piping to deliver the solution directly into the inlet of an operating water pump. The water pumps used for this activity require stainless steel construction with Teflon or other acid-resistant bearings and seals. Purges are typically conducted before sudden discharge pressure drops (greater than 15 psig) occur for the operating water pump. Depending on the volume of precipitate accumulation and pump type, discharge pressure is restored after introduction of 10 to 40 gallons of hydrochloric acid solution. The acid purge procedure outlined herein eliminates operational downtime and does not require well head pump removal and the associated costs of industry cleaning procedures

  13. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees
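
    As a quick sanity check on the figure above (not from the paper itself): 8! = 40320, so the stated minimum average depth is 620160/40320 ≈ 15.381 comparisons, just above the information-theoretic lower bound log2(8!) ≈ 15.299.

```python
import math

# Verify the arithmetic behind the paper's headline figure: the minimum
# average depth 620160/8! versus the entropy lower bound log2(8!).
factorial_8 = math.factorial(8)       # 40320 permutations of 8 elements
avg_depth = 620160 / factorial_8      # minimum average depth stated above
lower_bound = math.log2(factorial_8)  # information-theoretic lower bound

print(factorial_8)            # 40320
print(round(avg_depth, 3))    # 15.381
print(round(lower_bound, 3))  # 15.299
```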

  14. Search for greater stability in nuclear regulation

    International Nuclear Information System (INIS)

    Asselstine, J.K.

    1985-01-01

    The need for greater stability in nuclear regulation is discussed. Two possible approaches for dealing with the problems of new and rapidly changing regulatory requirements are discussed. The first approach relies on the more traditional licensing reform initiatives that have been considered off and on for the past decade. The second approach considers a new regulatory philosophy aimed at the root causes of the proliferation of new safety requirements that have been imposed in recent years. For the past few years, the concepts of deregulation and regulatory reform have been in fashion in Washington, and the commercial nuclear power program has not remained unaffected. Many look to these concepts to provide greater stability in the regulatory program. The NRC, the nuclear industry and the administration have all been avidly pursuing regulatory reform initiatives, which take the form of both legislative and administrative proposals. Many of these proposals look to the future, and, if adopted, would have little impact on currently operating nuclear power plants or plants now under construction

  15. Deblurring of class-averaged images in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Park, Wooram; Chirikjian, Gregory S; Madden, Dean R; Rockmore, Daniel N

    2010-01-01

    This paper proposes a method for the deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. A class average that is inaccurate due to alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate of the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre–Fourier expansions, and both the Hermite and Laguerre–Fourier expansions retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method
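
    The core idea the abstract relies on, that convolution becomes multiplication under a Fourier transform so deblurring reduces to division in Fourier space, can be demonstrated in one dimension. This is an illustrative sketch only, not the paper's SE(2) machinery; the signal and blurring kernel are invented toy values.

```python
import cmath

# Toy 1-D analogue of Fourier-domain deblurring: circularly convolve a
# "clear" signal with a known blurring function, then recover it by
# dividing the spectra and inverse-transforming.

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circular_convolve(x, h):
    n = len(x)
    return [sum(x[k] * h[(i - k) % n] for k in range(n)) for i in range(n)]

clear = [1.0, 3.0, 2.0, 4.0]   # "underlying clear image"
blur = [0.6, 0.2, 0.0, 0.2]    # known blurring function (sums to 1)
blurred = circular_convolve(clear, blur)

# Deconvolve: divide the spectra, then transform back.
restored = idft([b / h for b, h in zip(dft(blurred), dft(blur))])
print([round(r.real, 6) for r in restored])  # [1.0, 3.0, 2.0, 4.0]
```

In the paper the transform is over SE(2) rather than the circle, so the "division" step becomes a matrix inversion, but the structure of the computation is the same.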

  16. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
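
    The dose calculation described above can be sketched in two steps: map each pixel value to a glandular rate through a calibration curve, then average over the breast region. The linear calibration below is an invented placeholder (the paper fits its conversion curve to breast-equivalent phantoms with a neural network), and the pixel values are toy data.

```python
# Hypothetical sketch: per-pixel conversion of mammogram pixel values to
# glandular rate, then averaging to support an individual average
# glandular dose estimate. The calibration here is a placeholder.

def pixel_to_glandular_rate(pixel):
    """Placeholder linear calibration, clamped to 0-100 percent."""
    return max(0.0, min(100.0, 0.05 * pixel - 10.0))

def average_glandular_rate(pixels):
    rates = [pixel_to_glandular_rate(p) for p in pixels]
    return sum(rates) / len(rates)

breast_pixels = [900, 1000, 1100, 1200]  # toy pixel values in the breast region
print(round(average_glandular_rate(breast_pixels), 1))  # 42.5
```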

  17. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the output of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.
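
    The final averaging step, combining the class posteriors of the two component models, can be sketched generically. The posterior values below are illustrative placeholders, not actual KDB or local KDB output.

```python
# Generic sketch of the paper's final step: average the class-posterior
# distributions of two component classifiers and predict the argmax class.

def average_posteriors(p1, p2):
    """Element-wise average of two class-probability dicts."""
    return {c: (p1[c] + p2[c]) / 2 for c in p1}

kdb_posterior = {"spam": 0.70, "ham": 0.30}    # placeholder "global KDB" estimate
local_posterior = {"spam": 0.40, "ham": 0.60}  # placeholder "local KDB" estimate

combined = average_posteriors(kdb_posterior, local_posterior)
prediction = max(combined, key=combined.get)
print({c: round(p, 2) for c, p in combined.items()})  # {'spam': 0.55, 'ham': 0.45}
print(prediction)                                     # spam
```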

  18. Non-invasive assessment of distribution volume ratios and binding potential: tissue heterogeneity and interindividually averaged time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Reimold, M.; Mueller-Schauenburg, W.; Dohmen, B.M.; Bares, R. [Department of Nuclear Medicine, University of Tuebingen, Otfried-Mueller-Strasse 14, 72076, Tuebingen (Germany); Becker, G.A. [Nuclear Medicine, University of Leipzig, Leipzig (Germany); Reischl, G. [Radiopharmacy, University of Tuebingen, Tuebingen (Germany)

    2004-04-01

    Due to the stochastic nature of radioactive decay, any measurement of radioactivity concentration requires spatial averaging. In pharmacokinetic analysis of time-activity curves (TAC), such averaging over heterogeneous tissues may introduce a systematic error (heterogeneity error) but may also improve the accuracy and precision of parameter estimation. In addition to spatial averaging (inevitable due to limited scanner resolution and intended in ROI analysis), interindividual averaging may theoretically be beneficial, too. The aim of this study was to investigate the effect of such averaging on the binding potential (BP) calculated with Logan's non-invasive graphical analysis and the "simplified reference tissue method" (SRTM) proposed by Lammertsma and Hume, on the basis of simulated and measured positron emission tomography data: [11C]d-threo-methylphenidate (dMP) and [11C]raclopride (RAC) PET. dMP was not quantified with SRTM since the low k2 (washout rate constant from the first tissue compartment) introduced a high noise sensitivity. Even for considerably different shapes of TAC (dMP PET in parkinsonian patients and healthy controls, [11C]raclopride in patients with and without haloperidol medication) and a high variance in the rate constants (e.g. simulated standard deviation of K1 = 25%), the BP obtained from the average TAC was close to the mean BP (<5%). However, unfavourably distributed parameters, especially a correlated large variance in two or more parameters, may lead to larger errors. In Monte Carlo simulations, interindividual averaging before quantification reduced the variance from the SRTM (beyond a critical signal-to-noise ratio) and the bias in Logan's method. Interindividual averaging may further increase accuracy when there is an error term in the reference tissue assumption E = DV2 - DV' (DV2 = distribution volume of the first tissue compartment, DV'
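
    Logan's non-invasive graphical analysis mentioned above plots the running integral of the target TAC divided by the target activity against the running integral of the reference TAC divided by the target activity; once the points become linear, the slope estimates the distribution volume ratio (DVR), and BP = DVR - 1. A sketch on synthetic curves follows; the TAC values are placeholders constructed so the target is exactly 2.5 times the reference, which makes the true DVR (2.5) known in advance.

```python
# Sketch of Logan non-invasive graphical analysis on synthetic TACs.
# Target TAC = 2.5 x reference TAC by construction, so the Logan slope
# (DVR) should recover 2.5 and BP = DVR - 1 should be 1.5.

def cumtrapz(t, y):
    """Running trapezoidal integral of y(t)."""
    out, acc = [0.0], 0.0
    for i in range(1, len(t)):
        acc += 0.5 * (y[i] + y[i - 1]) * (t[i] - t[i - 1])
        out.append(acc)
    return out

def logan_dvr(t, target, ref, start=1):
    """Slope of the Logan plot (DVR); `start` skips early pre-linear frames."""
    int_target, int_ref = cumtrapz(t, target), cumtrapz(t, ref)
    x = [int_ref[i] / target[i] for i in range(start, len(t))]
    y = [int_target[i] / target[i] for i in range(start, len(t))]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

times = [0, 5, 10, 20, 40, 60, 90]               # minutes (toy frame times)
reference = [0.0, 8.0, 6.0, 4.0, 2.5, 1.8, 1.2]  # reference-region TAC
target = [2.5 * c for c in reference]            # proportional target TAC

dvr = logan_dvr(times, target, reference)
print(round(dvr, 3))      # 2.5
print(round(dvr - 1, 3))  # 1.5 (binding potential)
```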

  19. Does Lifestyle Exercise After a Cardiac Event Improve Metabolic Syndrome Profile in Older Adults?

    Science.gov (United States)

    Wright, Kathy D; Moore-Schiltz, Laura; Sattar, Abdus; Josephson, Richard; Moore, Shirley M

    Exercise is a common recommendation to reduce the risk factors of metabolic syndrome, yet there are limited data on the influence of lifestyle exercise after cardiac events on metabolic syndrome factors. The purpose of this study was to determine whether lifestyle exercise improves the metabolic syndrome profile in older adults after a cardiac event. Participants were from a post-cardiac-event lifestyle exercise study. Five metabolic syndrome factors were assessed: waist circumference, triglycerides, high-density lipoprotein, glucose, and blood pressure (systolic and diastolic). Objective measures of exercise were obtained from heart rate monitors over a year. Logistic regression was used to determine whether participants who engaged in the minimum recommendation of 130 hours of exercise or greater during the 12-month period improved their metabolic syndrome profile by improving at least 1 metabolic syndrome factor. In the sample of 116 participants (74% men; average age, 67.5 years), 43% exercised at the recommended amount (≥130 h/y) and 28% (n = 33) improved their metabolic syndrome profile. After controlling for the confounding factors of age, gender, race, diabetes, functional ability, and employment, subjects who exercised at least 130 hours a year were 3.6 times more likely to improve at least 1 metabolic syndrome factor (95% confidence interval, 1.24-10.49). Of the 28% who improved their metabolic syndrome profile, 72% increased their high-density lipoprotein and 60.6% reduced their waist circumference and glucose. After a cardiac event, older patients who engage in lifestyle exercise at the recommended amount have improvement in their metabolic syndrome profile.

  20. Runoff and leaching of metolachlor from Mississippi River alluvial soil during seasons of average and below-average rainfall.

    Science.gov (United States)

    Southwick, Lloyd M; Appelboom, Timothy W; Fouss, James L

    2009-02-25

    The movement of the herbicide metolachlor [2-chloro-N-(2-ethyl-6-methylphenyl)-N-(2-methoxy-1-methylethyl)acetamide] via runoff and leaching from 0.21 ha plots planted to corn on Mississippi River alluvial soil (Commerce silt loam) was measured for a 6-year period, 1995-2000. The first three years received normal rainfall (30-year average); the second three years experienced reduced rainfall. The 4-month periods prior to application plus the following 4 months after application were characterized by 1039 ± 148 mm of rainfall for 1995-1997 and by 674 ± 108 mm for 1998-2000. During the normal rainfall years, 216 ± 150 mm of runoff occurred during the study seasons (4 months following herbicide application), accompanied by 76.9 ± 38.9 mm of leachate. For the low-rainfall years these amounts were 16.2 ± 18.2 mm of runoff (92% less than the normal years) and 45.1 ± 25.5 mm of leachate (41% less than the normal seasons). Runoff of metolachlor during the normal-rainfall seasons was 4.5-6.1% of application, whereas leaching was 0.10-0.18%. For the below-normal periods, these losses were 0.07-0.37% of application in runoff and 0.22-0.27% in leachate. When averages over the three normal and the three less-than-normal seasons were taken, a 35% reduction in rainfall was characterized by a 97% reduction in runoff loss and a 71% increase in leachate loss of metolachlor on a percent-of-application basis. The data indicate an increase in preferential flow in the leaching movement of metolachlor from the surface soil layer during the reduced rainfall periods. Even with increased preferential flow through the soil during the below-average rainfall seasons, leachate loss (percent of application) of the herbicide remained below 0.3%. Compared to the average rainfall seasons of 1995-1997, the below-normal seasons of 1998-2000 were characterized by a 79% reduction in total runoff and leachate flow and by a 93% reduction in corresponding metolachlor movement via these routes.