WorldWideScience

Sample records for greater average improvement

  1. Regional correlations of VS30 and velocities averaged over depths less than and greater than 30 meters

    Science.gov (United States)

    Boore, David M.; Thompson, Eric M.; Cadet, Héloïse

    2011-01-01

    Using velocity profiles from sites in Japan, California, Turkey, and Europe, we find that the time-averaged shear-wave velocity to 30 m (VS30), used as a proxy for site amplification in recent ground-motion prediction equations (GMPEs) and building codes, is strongly correlated with average velocities to depths less than 30 m (VSz, with z being the averaging depth). The correlations for sites in Japan (corresponding to the KiK-net network) show that VS30 is systematically larger for a given VSz than for profiles from the other regions. The difference largely results from the placement of the KiK-net station locations on rock and rocklike sites, whereas stations in the other regions are generally placed in urban areas underlain by sediments. Using the KiK-net velocity profiles, we provide equations relating VS30 to VSz for z ranging from 5 to 29 m in 1-m increments. These equations (and those for California velocity profiles given in Boore, 2004b) can be used to estimate VS30 from VSz for sites in which velocity profiles do not extend to 30 m. The scatter of the residuals decreases with depth, but, even for an averaging depth of 5 m, a variation in logVS30 of ±1 standard deviation maps into less than a 20% uncertainty in ground motions given by recent GMPEs at short periods. The sensitivity of the ground motions to VS30 uncertainty is considerably larger at long periods (but is less than a factor of 1.2 for averaging depths greater than about 20 m). We also find that VS30 is correlated with VSz for z as great as 400 m for sites of the KiK-net network, providing some justification for using VS30 as a site-response variable for predicting ground motions at periods for which the wavelengths far exceed 30 m.
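    The VSz used throughout is the travel-time-averaged velocity to depth z, VSz = z / Σ(h_i / V_i), where h_i and V_i are the layer thicknesses and shear-wave velocities over the top z meters. A minimal sketch of that definition (the regression equations relating VS30 to VSz are given in the paper itself; the profile below is illustrative, not from the study):

    def time_averaged_velocity(layers, z):
        """Travel-time-averaged shear-wave velocity to depth z, in m/s.

        layers: list of (thickness_m, vs_m_per_s) tuples from the surface down.
        VSz = z / sum(h_i / Vs_i), where the h_i partition the top z meters.
        """
        depth = 0.0
        travel_time = 0.0
        for thickness, vs in layers:
            h = min(thickness, z - depth)  # clip the last layer at depth z
            travel_time += h / vs
            depth += h
            if depth >= z:
                return z / travel_time
        raise ValueError("profile is shallower than the requested depth z")

    # Illustrative profile: 4 m of soft soil over progressively stiffer material.
    profile = [(4.0, 180.0), (10.0, 350.0), (30.0, 760.0)]
    vs10 = time_averaged_velocity(profile, 10.0)   # VS10, about 254 m/s here
    vs30 = time_averaged_velocity(profile, 30.0)   # VS30, about 418 m/s here
    print(round(vs10, 1), round(vs30, 1))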

  2. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA 1, our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary prediction 2, which
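    The refinement described is a Metropolis Monte Carlo search under a harmonic pseudo-energy that pulls the model toward the averaged coordinates while penalizing steric clashes. A minimal sketch under those assumptions (the force constant, clash cutoff, penalty, and move size below are illustrative, not the paper's values):

    import math, random

    def pseudo_energy(coords, target, k=1.0, clash_dist=3.8, clash_penalty=100.0):
        """Harmonic restraint toward the averaged structure plus a steric term.

        coords, target: lists of (x, y, z) C-alpha positions. The constants
        are illustrative, not values from the paper.
        """
        e = sum(k * sum((c - t) ** 2 for c, t in zip(ci, ti))
                for ci, ti in zip(coords, target))
        for i in range(len(coords)):
            for j in range(i + 2, len(coords)):  # skip directly bonded neighbours
                d = math.dist(coords[i], coords[j])
                if d < clash_dist:               # penalize steric clashes
                    e += clash_penalty * (clash_dist - d) ** 2
        return e

    def refine(start, target, steps=20000, step_size=0.2, temperature=1.0):
        """Metropolis Monte Carlo: drive `start` toward `target` without clashes."""
        coords = [list(p) for p in start]
        e = pseudo_energy(coords, target)
        for _ in range(steps):
            i = random.randrange(len(coords))
            old = coords[i][:]
            coords[i] = [x + random.uniform(-step_size, step_size) for x in old]
            e_new = pseudo_energy(coords, target)
            if e_new < e or random.random() < math.exp(-(e_new - e) / temperature):
                e = e_new        # accept the move
            else:
                coords[i] = old  # reject it and restore the old position
        return coords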

  3. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
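    A pixel-wise robust average along these lines can be sketched as follows: reject whole maps whose large-area deviation from the stack median exceeds a threshold, then sigma-clip the surviving pixels before averaging. The thresholds below are illustrative stand-ins for the algorithm's run-time adjustable rejection criteria, and the alignment-drift removal step is omitted:

    import numpy as np

    def robust_phase_average(maps, map_reject_frac=0.05, pixel_sigma=3.0):
        """Average a stack of phase maps while rejecting defective data.

        maps: array of shape (n_maps, ny, nx); NaN marks voids. Thresholds are
        illustrative, not the algorithm's tuned values.
        """
        stack = np.asarray(maps, dtype=float)
        median_map = np.nanmedian(stack, axis=0)
        scale = np.nanstd(stack, axis=0) + 1e-12

        # 1) Reject whole maps with a large-area defect (unwrapping artifact or
        #    void): too large a fraction of pixels far from the per-pixel median.
        outlier_frac = np.mean(np.abs(stack - median_map) > pixel_sigma * scale,
                               axis=(1, 2))
        keep = stack[outlier_frac < map_reject_frac]

        # 2) Prune small-area outliers pixel by pixel (sigma clipping), then
        #    average what survives; also return the per-pixel variability map.
        resid = np.abs(keep - np.nanmedian(keep, axis=0))
        clipped = np.where(resid > pixel_sigma * np.nanstd(keep, axis=0), np.nan, keep)
        return np.nanmean(clipped, axis=0), np.nanstd(clipped, axis=0)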

  4. A group's physical attractiveness is greater than the average attractiveness of its members : The group attractiveness effect

    NARCIS (Netherlands)

    van Osch, Y.M.J.; Blanken, Irene; Meijs, Maartje H. J.; van Wolferen, Job

    2015-01-01

    We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of

  5. A group's physical attractiveness is greater than the average attractiveness of its members: the group attractiveness effect.

    Science.gov (United States)

    van Osch, Yvette; Blanken, Irene; Meijs, Maartje H J; van Wolferen, Job

    2015-04-01

    We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of physical attractiveness are more positive than the average ratings of the group members. A meta-analysis on 33 comparisons reveals that the effect is medium to large (Cohen's d = 0.60) and moderated by group size. We explored two explanations for the GA-effect: (a) selective attention to attractive group members, and (b) the Gestalt principle of similarity. The results of our studies are in favor of the selective attention account: People selectively attend to the most attractive members of a group and their attractiveness has a greater influence on the evaluation of the group. © 2015 by the Society for Personality and Social Psychology, Inc.

  6. Improving greater trochanteric reattachment with a novel cable plate system.

    Science.gov (United States)

    Baril, Yannick; Bourgeois, Yan; Brailovski, Vladimir; Duke, Kajsa; Laflamme, G Yves; Petit, Yvan

    2013-03-01

    Cable-grip systems are commonly used for greater trochanteric reattachment because they have provided the best fixation performance to date, even though they have a rather high complication rate. A novel reattachment system is proposed with the aim of improving fixation stability. It consists of a Y-shaped fixation plate combined with locking screws and superelastic cables to reduce cable loosening and limit greater trochanter movement. The novel system is compared with a commercially available reattachment system in terms of greater trochanter movement and cable tensions under different greater trochanteric abductor application angles. A factorial design of experiments was used including four independent variables: plate system, cable type, abductor application angle, and femur model. The test procedure included 50 cycles of simultaneous application of an abductor force on the greater trochanter and a hip force on the femoral head. The novel plate reduces the movements of a greater trochanter fragment within a single loading cycle up to 26%. Permanent degradation of the fixation (accumulated movement based on 50-cycle testing) is reduced up to 46%. The use of superelastic cables reduces tension loosening up to 24%. However, this last improvement did not result in a significant reduction of the greater trochanter movement. The novel plate and cables present advantages over the commercially available greater trochanter reattachment system. The plate reduces movements generated by the hip abductor. The superelastic cables reduce cable loosening during cycling. Both of these positive effects could decrease the risks related to greater trochanter non-union. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  7. Regional correlations of VS30 and velocities averaged over depths less than and greater than 30 meters

    Science.gov (United States)

    Boore, D.M.; Thompson, E.M.; Cadet, H.

    2011-01-01

    Using velocity profiles from sites in Japan, California, Turkey, and Europe, we find that the time-averaged shear-wave velocity to 30 m (VS30), used as a proxy for site amplification in recent ground-motion prediction equations (GMPEs) and building codes, is strongly correlated with average velocities to depths less than 30 m (VSz, with z being the averaging depth). The correlations for sites in Japan (corresponding to the KiK-net network) show that VS30 is systematically larger for a given VSz than for profiles from the other regions. The difference largely results from the placement of the KiK-net station locations on rock and rocklike sites, whereas stations in the other regions are generally placed in urban areas underlain by sediments. Using the KiK-net velocity profiles, we provide equations relating VS30 to VSz for z ranging from 5 to 29 m in 1-m increments. These equations (and those for California velocity profiles given in Boore, 2004b) can be used to estimate VS30 from VSz for sites in which velocity profiles do not extend to 30 m. The scatter of the residuals decreases with depth, but, even for an averaging depth of 5 m, a variation in log VS30 of ±1 standard deviation maps into less than a 20% uncertainty in ground motions given by recent GMPEs at short periods. The sensitivity of the ground motions to VS30 uncertainty is considerably larger at long periods (but is less than a factor of 1.2 for averaging depths greater than about 20 m). We also find that VS30 is correlated with VSz for z as great as 400 m for sites of the KiK-net network, providing some justification for using VS30 as a site-response variable for predicting ground motions at periods for which the wavelengths far exceed 30 m.

  8. Greater utilization of wood residue fuels through improved financial planning

    International Nuclear Information System (INIS)

    Billings, C.D.; Ziemke, M.C.; Stanford, R.

    1991-01-01

    Recent events have focused attention on the promotion of greater utilization of biomass fuel. Considerations include the need to reduce increases in global warming and also to improve ground level air quality by limiting the use of fossil fuels. However, despite all these important environmentally related considerations, economics remains the most important factor in the decision process used to determine the feasibility of using available renewable fuels instead of more convenient fossil fuels. In many areas of the Southeast, this decision process involves choosing between wood residue fuels such as bark, sawdust and shavings and presently plentiful natural gas. The primary candidate users of wood residue fuels are industries that use large amounts of heat and electric power and are located near centers of activity in the forest products industry such as sawmills, veneer mills and furniture factories. Given that such facilities both produce wood residues and need large amounts of heat and electricity, it is understandable that these firms are often major users of wood-fired furnaces and boilers. The authors have observed that poor or incomplete financial planning by the subject firms is a major barrier to economic utilization of inexpensive and widely available renewable fuels. In this paper, the authors suggest that wider usage of improved financial planning could double the present modest annual incidence of new commercial wood-fueled installation

  9. Average chewing pattern improvements following Disclusion Time reduction.

    Science.gov (United States)

    Kerstein, Robert B; Radke, John

    2017-05-01

    Studies involving electrognathographic (EGN) recordings of chewing improvements obtained following occlusal adjustment therapy are rare, as most studies lack 'chewing' within the research. The objectives of this study were to determine if reducing a long Disclusion Time to a short Disclusion Time with the immediate complete anterior guidance development (ICAGD) coronoplasty in symptomatic subjects altered their average chewing pattern (ACP) and their muscle function. Twenty-nine muscularly symptomatic subjects underwent simultaneous EMG and EGN recordings of right and left gum chewing, before and after the ICAGD coronoplasty. Statistical differences in the mean Disclusion Time, the mean muscle contraction cycle, and the mean ACP resulting from ICAGD were assessed with Student's paired t-test (α = 0.05). Disclusion Time reductions from ICAGD were significant (2.11 to 0.45 s, p = 0.0000). Post-ICAGD muscle changes were significant in the mean area (p = 0.000001), the peak amplitude (p = 0.00005), and the time to peak contraction; the chewing position became significantly closer to centric occlusion, and chewing velocities increased significantly. Average chewing pattern (ACP) shape, speed, consistency, muscular coordination, and vertical opening can be significantly improved in muscularly dysfunctional TMD patients within one week of undergoing the ICAGD enameloplasty. Computer-measured and guided occlusal adjustments quickly and physiologically improved chewing, without requiring the patients to wear pre- or post-treatment appliances.

  10. Exploring JLA supernova data with improved flux-averaging technique

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Shuang; Wen, Sixiang; Li, Miao, E-mail: wangshuang@mail.sysu.edu.cn, E-mail: wensx@mail2.sysu.edu.cn, E-mail: limiao9@mail.sysu.edu.cn [School of Physics and Astronomy, Sun Yat-Sen University, University Road (No. 2), Zhuhai (China)

    2017-03-01

    In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the criterion of figure of merit (FoM) and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) the best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to a specific DE parameterization; (2) flux-averaging JLA samples at z_cut ≥ 0.4 will yield tighter DE constraints than the case without using FA; (3) using FA can significantly reduce the redshift-evolution of β; (4) the best FA recipe favors a larger fractional matter density Ω_m. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
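    In outline, flux averaging converts distance moduli to fluxes, bins the high-redshift SNe in intervals of width Δz above z_cut, and replaces each bin with one effective data point. A bare-bones sketch of that recipe (the published method also propagates the covariance matrix, which is omitted here):

    import numpy as np

    def flux_average(z, mu, z_cut=0.6, dz=0.06):
        """Flux-average SNe above z_cut in redshift bins of width dz.

        z, mu: redshifts and distance moduli. Returns the low-z points
        unchanged plus one effective point per high-z bin.
        """
        z, mu = np.asarray(z, dtype=float), np.asarray(mu, dtype=float)
        low = z < z_cut
        z_out, mu_out = list(z[low]), list(mu[low])
        flux = 10 ** (-0.4 * mu)  # flux is proportional to 10^(-0.4 * mu)
        edges = np.arange(z_cut, z.max() + dz, dz)
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (z >= lo) & (z < hi)
            if in_bin.any():
                z_out.append(z[in_bin].mean())                       # bin mean redshift
                mu_out.append(-2.5 * np.log10(flux[in_bin].mean()))  # back to a modulus
        return np.array(z_out), np.array(mu_out)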

  11. Greater vertical spot spacing to improve femtosecond laser capsulotomy quality.

    Science.gov (United States)

    Schultz, Tim; Joachim, Stephanie C; Noristani, Rozina; Scott, Wendell; Dick, H Burkhard

    2017-03-01

    To evaluate the effect of adapted capsulotomy laser settings on the cutting quality in femtosecond laser-assisted cataract surgery. Ruhr-University Eye Clinic, Bochum, Germany. Prospective randomized case series. Eyes were treated with 1 of 2 laser settings. In Group 1, the regular standard settings were used (incisional depth 600 μm, pulse energy 4 μJ, horizontal spot spacing 5 μm, vertical spot spacing 10 μm, treatment time 1.2 seconds). In Group 2, vertical spot spacing was increased to 15 μm and the treatment time was 1.0 seconds. Light microscopy was used to evaluate the cut quality of the capsule edge. The size and number of tags (misplaced laser spots, which form a second cut of the capsule with high tear risk) were evaluated in a blinded manner. Groups were compared using the Mann-Whitney U test. The study comprised 100 eyes (50 eyes in each group). Cataract surgery was successfully completed in all eyes, and no anterior capsule tear occurred during the treatment. Histologically, significantly fewer tags were observed with the new capsulotomy laser setting: the mean score for the number and size of free tags was significantly lower in this group than with the standard settings. The new laser settings improved cut quality and reduced the number of tags. The modification has the potential to reduce the risk for radial capsule tears in femtosecond laser-assisted cataract surgery. With the new settings, no tags and no capsule tears were observed under the operating microscope in any eye. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  12. Greater-than-Class C low-level radioactive waste characterization. Appendix E-5: Impact of the 1993 NRC draft Branch Technical Position on concentration averaging of greater-than-Class C low-level radioactive waste

    International Nuclear Information System (INIS)

    Tuite, P.; Tuite, K.; Harris, G.

    1994-09-01

    This report evaluates the effects of concentration averaging practices on the disposal of greater-than-Class C low-level radioactive waste (GTCC LLW) generated by the nuclear utility industry and sealed sources. Using estimates of the number of waste components that individually exceed Class C limits, this report calculates the proportion that would be classified as GTCC LLW after applying concentration averaging; this proportion is called the concentration averaging factor. The report uses the guidance outlined in the 1993 Nuclear Regulatory Commission (NRC) draft Branch Technical Position on concentration averaging, as well as waste disposal experience at nuclear utilities, to calculate the concentration averaging factors for nuclear utility wastes. The report uses the 1993 NRC draft Branch Technical Position and the criteria from the Barnwell, South Carolina, LLW disposal site to calculate concentration averaging factors for sealed sources. The report addresses three waste groups: activated metals from light water reactors, process wastes from light-water reactors, and sealed sources. For each waste group, three concentration averaging cases are considered: high, base, and low. The base case, which is the most likely case to occur, assumes using the specific guidance given in the 1993 NRC draft Branch Technical Position on concentration averaging. To project future GTCC LLW generation, each waste category is assigned a concentration averaging factor for the high, base, and low cases

  13. Greater-than-Class C low-level waste characterization. Appendix I: Impact of concentration averaging low-level radioactive waste volume projections

    International Nuclear Information System (INIS)

    Tuite, P.; Tuite, K.; O'Kelley, M.; Ely, P.

    1991-08-01

    This study provides a quantitative framework for bounding unpackaged greater-than-Class C low-level radioactive waste types as a function of concentration averaging. The study defines the three concentration averaging scenarios that lead to base, high, and low volumetric projections; identifies those waste types that could be greater-than-Class C under the high volume, or worst case, concentration averaging scenario; and quantifies the impact of these scenarios on identified waste types relative to the base case scenario. The base volume scenario was assumed to reflect current requirements at the disposal sites as well as the regulatory views. The high volume scenario was assumed to reflect the most conservative criteria as incorporated in some compact host state requirements. The low volume scenario was assumed to reflect the 10 CFR Part 61 criteria as applicable to both shallow land burial facilities and to practices that could be employed to reduce the generation of Class C waste types

  14. Improved performance of high average power semiconductor arrays for applications in diode pumped solid state lasers

    International Nuclear Information System (INIS)

    Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

    1994-01-01

    The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSL's). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSL's which are appropriate for material processing applications, low and intermediate average power DPSSL's are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications

  15. Improving sensitivity in micro-free flow electrophoresis using signal averaging

    Science.gov (United States)

    Turgeon, Ryan T.; Bowser, Michael T.

    2009-01-01

    Microfluidic free-flow electrophoresis (μFFE) is a separation technique that separates continuous streams of analytes as they travel through an electric field in a planar flow channel. The continuous nature of the μFFE separation suggests that approaches more commonly applied in spectroscopy and imaging may be effective in improving sensitivity. The current paper describes the S/N improvements that can be achieved by simply averaging multiple images of a μFFE separation; 20–24-fold improvements in S/N were observed by averaging the signal from 500 images recorded for over 2 min. Up to an 80-fold improvement in S/N was observed by averaging 6500 images. Detection limits as low as 14 pM were achieved for fluorescein, which is impressive considering the non-ideal optical set-up used in these experiments. The limitation to this signal averaging approach was the stability of the μFFE separation. At separation times longer than 20 min bubbles began to form at the electrodes, which disrupted the flow profile through the device, giving rise to erratic peak positions. PMID:19319908
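    For uncorrelated noise, averaging N images improves S/N by roughly √N, which is consistent with the figures reported here: √500 ≈ 22 (the observed 20-24-fold gain over 2 min of acquisition) and √6500 ≈ 81 (the observed 80-fold gain).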

  16. Laser properties of an improved average-power Nd-doped phosphate glass

    International Nuclear Information System (INIS)

    Payne, S.A.; Marshall, C.D.; Bayramian, A.J.

    1995-01-01

    The Nd-doped phosphate laser glass described herein can withstand 2.3 times greater thermal loading without fracture, compared to APG-1 (commercially-available average-power glass from Schott Glass Technologies). The enhanced thermal loading capability is established on the basis of the intrinsic thermomechanical properties (expansion, conduction, fracture toughness, and Young's modulus), and by direct thermally-induced fracture experiments using Ar-ion laser heating of the samples. This Nd-doped phosphate glass (referred to as APG-t) is found to be characterized by a 29% lower gain cross section and a 25% longer low-concentration emission lifetime

  17. Greater use of wood residue fuels through improved financial planning: a case study in Alabama

    Energy Technology Data Exchange (ETDEWEB)

    Billings, C.D.; Ziemke, M.C. (Alabama Univ., Huntsville, AL (United States). Coll. of Administrative Science); Stanford, R. (Alabama Dept. of Economic and Community Affairs, Montgomery, AL (United States))

    1993-01-01

    As the world reacts to environmental concerns relating to fossil energy usage, emphasis is again placed on greater use of renewable fuels such as wood residues. Realistically, however, decisions to utilize such fuels are based on economic factors, rather than desires to improve US energy independence and/or protect the environment. Because Alabama has a large forest products industry, state authorities have long sought to assist potential users of wood residue fuels to better use biomass fuels instead of the usual alternative: natural gas. State agency experience in promoting commercial and industrial use of wood residue fuels has shown that inadequate financial planning has often resulted in rejection of viable projects or acceptance of non-optimum projects. This paper discusses the reasons for this situation and suggests remedies for its improvement. (author)

  18. Greater Proptosis Is Not Associated With Improved Compressive Optic Neuropathy in Thyroid Eye Disease.

    Science.gov (United States)

    Nanda, Tavish; Dunbar, Kristen E; Campbell, Ashley A; Bathras, Ryan M; Kazim, Michael

    2018-05-18

    Despite the paucity of supporting data, it has generally been held that proptosis in thyroid eye disease (TED) may provide relative protection from compressive optic neuropathy (CON) by producing spontaneous decompression. The objective of this study was to investigate this phenomenon in patients with bilateral TED-CON. We retrospectively reviewed the charts of 67 patients (134 orbits) with bilateral TED-CON at Columbia-Presbyterian Medical Center. Significant asymmetric proptosis (Hertel) was defined as ≥ 2 mm. Significant asymmetric CON was defined, first, as the presence of a relative afferent pupillary defect. Those without a relative afferent pupillary defect were evaluated according to the TED-CON formula y = -0.69 - 0.31 × (motility) - 0.2 × (mean deviation) - 0.02 × (color vision), as previously established for the diagnosis of TED-CON. A difference in the formula result ≥ 1.0 between eyes was considered significant. Patients were then divided into 4 groups. Forty-one of 67 patients demonstrated asymmetric CON (29 by relative afferent pupillary defect, 12 by formula). Twenty-one of 67 patients demonstrated asymmetric proptosis. Only 5 of 12 (41.6%) of the patients who had both asymmetric proptosis and asymmetric CON (group 1) showed greater proptosis in the eye with less CON. Twenty-nine patients (group 2) showed that asymmetric CON occurred despite symmetrical proptosis. Seventeen patients (group 3) showed the inverse, that asymmetric differences in proptosis occurred with symmetrical CON. Despite commonly held assumptions, our results suggest that greater proptosis is not associated with improved TED-CON. Combining groups 1 to 3, all of which demonstrated asymmetry of either proptosis, CON, or both, 91.4% of patients did not show a relationship between greater proptosis and improved CON.
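    Applying the quoted formula and asymmetry rule is mechanical; the sketch below does exactly that, with invented inputs for illustration only (units and scales follow the cited formula's conventions):

    def ted_con_score(motility, mean_deviation, color_vision):
        """TED-CON formula quoted above:
        y = -0.69 - 0.31*(motility) - 0.2*(mean deviation) - 0.02*(color vision)."""
        return -0.69 - 0.31 * motility - 0.2 * mean_deviation - 0.02 * color_vision

    # Asymmetric CON (in eyes without an afferent pupillary defect) means the
    # formula results differ by >= 1.0 between the two eyes.
    right = ted_con_score(motility=2, mean_deviation=-4.0, color_vision=10)
    left = ted_con_score(motility=0, mean_deviation=-1.0, color_vision=14)
    print(abs(right - left) >= 1.0)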

  19. Power Efficiency Improvements through Peak-to-Average Power Ratio Reduction and Power Amplifier Linearization

    Directory of Open Access Journals (Sweden)

    Zhou G Tong

    2007-01-01

    Full Text Available Many modern communication signal formats, such as orthogonal frequency-division multiplexing (OFDM and code-division multiple access (CDMA, have high peak-to-average power ratios (PARs. A signal with a high PAR not only is vulnerable in the presence of nonlinear components such as power amplifiers (PAs, but also leads to low transmission power efficiency. Selected mapping (SLM and clipping are well-known PAR reduction techniques. We propose to combine SLM with threshold clipping and digital baseband predistortion to improve the overall efficiency of the transmission system. Testbed experiments demonstrate the effectiveness of the proposed approach.
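    As a concrete illustration of the SLM stage, the sketch below generates a few candidate OFDM symbols under random phase sequences and keeps the one with the lowest PAR. The subcarrier count, candidate count, and QPSK phase alphabet are illustrative assumptions; a real transmitter must also convey the chosen phase sequence as side information, and the clipping and predistortion stages are not shown:

    import numpy as np

    def par_db(x):
        """Peak-to-average power ratio of a complex baseband signal, in dB."""
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    def slm(symbols, n_candidates=8, seed=0):
        """Selected mapping: try n_candidates random phase sequences on the
        subcarrier symbols; keep the time-domain candidate with lowest PAR."""
        rng = np.random.default_rng(seed)
        best_x, best_par = None, float("inf")
        for _ in range(n_candidates):
            phases = rng.choice([1, 1j, -1, -1j], size=len(symbols))
            x = np.fft.ifft(symbols * phases)  # time-domain OFDM symbol
            p = par_db(x)
            if p < best_par:
                best_x, best_par = x, p
        return best_x, best_par

    # Usage: QPSK data on 256 subcarriers.
    rng = np.random.default_rng(1)
    data = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, size=256))
    x, par = slm(data)
    print(f"PAR after SLM: {par:.2f} dB")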

  20. Improve Gear Fault Diagnosis and Severity Indexes Determinations via Time Synchronous Average

    Directory of Open Access Journals (Sweden)

    Mohamed El Morsy

    2016-11-01

    Full Text Available In order to reduce operation and maintenance costs, prognostics and health management (PHM) of geared systems needs effective gearbox fault detection tools. A PHM system allows less costly maintenance because it can inform operators of needed repairs before a fault causes collateral damage to the gearbox. In this article, the time synchronous average (TSA) technique and complex continuous wavelet analysis are used as the gear fault detection approach. In the first step, the periodic waveform is extracted from the noisy measured signal; this is the main value of TSA for gearbox signal analysis, as it allows the vibration signature of the gear under analysis to be separated from other gears and noise sources in the gearbox that are not synchronous with the faulty gear. In the second step, complex wavelet analysis is used in the case of multiple faults in the same gear, with the signal phase-locked to the angular position of a shaft within the system. The main aim of this research is to improve gear fault diagnosis and severity index determination based on the TSA of signals measured on a passenger vehicle gearbox under different operating conditions. In addition, correcting the variations in shaft speed, so that spectral energy does not spread into adjacent gear mesh bins, helps in detecting the gear fault position (faulty tooth or teeth) and improves the Root Mean Square (RMS), Kurtosis, and Peak Pulse severity indexes used for maintenance, prognostics and health management (PHM) purposes. The open-loop test stand is equipped with two dynamometers and the investigated gearbox of a mid-size passenger car; the total power is taken off from one side only. Reference Number: www.asrongo.org/doi:4.2016.1.1.6
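    A minimal sketch of the TSA step, assuming a once-per-revolution tachometer signal is available (the function names and the fixed angular grid size are illustrative, not the authors' exact pipeline):

    import numpy as np

    def time_synchronous_average(signal, fs, tach_times, samples_per_rev=1024):
        """Time synchronous average of a vibration signal over shaft revolutions.

        signal: vibration samples at rate fs (Hz); tach_times: once-per-
        revolution trigger instants in seconds. Each revolution is resampled
        onto a fixed angular grid and the revolutions are averaged, so
        components not synchronous with this shaft average toward zero.
        """
        t = np.arange(len(signal)) / fs
        revs = []
        for start, end in zip(tach_times[:-1], tach_times[1:]):
            grid = np.linspace(start, end, samples_per_rev, endpoint=False)
            revs.append(np.interp(grid, t, signal))  # angle-domain resampling
        return np.mean(revs, axis=0)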

  1. NDVI saturation adjustment: a new approach for improving cropland performance estimates in the Greater Platte River Basin, USA

    Science.gov (United States)

    Gu, Yingxin; Wylie, Bruce K.; Howard, Daniel M.; Phuyal, Khem P.; Ji, Lei

    2013-01-01

    In this study, we developed a new approach that adjusted normalized difference vegetation index (NDVI) pixel values that were near saturation to better characterize the cropland performance (CP) in the Greater Platte River Basin (GPRB), USA. The relationship between NDVI and the ratio vegetation index (RVI) at high NDVI values was investigated, and an empirical equation for estimating saturation-adjusted NDVI (NDVIsat_adjust) based on RVI was developed. A 10-year (2000–2009) NDVIsat_adjust data set was developed using 250-m 7-day composite historical eMODIS (expedited Moderate Resolution Imaging Spectroradiometer) NDVI data. The growing season averaged NDVI (GSN), which is a proxy for ecosystem performance, was estimated and long-term NDVI non-saturation- and saturation-adjusted cropland performance (CPnon_sat_adjust, CPsat_adjust) maps were produced over the GPRB. The final CP maps were validated using National Agricultural Statistics Service (NASS) crop yield data. The relationship between CPsat_adjust and the NASS average corn yield data (r = 0.78, 113 samples) is stronger than the relationship between CPnon_sat_adjust and the NASS average corn yield data (r = 0.67, 113 samples), indicating that the new CPsat_adjust map reduces the NDVI saturation effects and is in good agreement with the corn yield ground observations. Results demonstrate that the NDVI saturation adjustment approach improves the quality of the original GSN map and better depicts the actual vegetation conditions of the GPRB cropland systems.
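    The abstract does not reproduce the paper's empirical NDVIsat_adjust equation, but the NDVI-RVI link that motivates it is exact algebra: with RVI = NIR/Red and NDVI = (NIR - Red)/(NIR + Red), it follows that RVI = (1 + NDVI)/(1 - NDVI). The sketch below shows why RVI retains sensitivity where NDVI saturates:

    def rvi_from_ndvi(ndvi):
        """Exact algebraic link: RVI = NIR/Red = (1 + NDVI) / (1 - NDVI)."""
        return (1.0 + ndvi) / (1.0 - ndvi)

    def ndvi_from_rvi(rvi):
        """Inverse relation: NDVI = (RVI - 1) / (RVI + 1)."""
        return (rvi - 1.0) / (rvi + 1.0)

    # Near saturation, equal NDVI steps hide large RVI (canopy) differences:
    for ndvi in (0.80, 0.85, 0.90, 0.95):
        print(f"NDVI {ndvi:.2f} -> RVI {rvi_from_ndvi(ndvi):5.1f}")
    # NDVI 0.80 -> RVI 9.0, ..., NDVI 0.95 -> RVI 39.0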

  2. Improved contrast deep optoacoustic imaging using displacement-compensated averaging: breast tumour phantom studies

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, M; Preisser, S; Kitz, M; Frenz, M [Institute of Applied Physics, University of Bern, Sidlerstrasse 5, CH-3012 Bern (Switzerland); Ferrara, D; Senegas, S; Schweizer, D, E-mail: frenz@iap.unibe.ch [Fukuda Denshi Switzerland AG, Reinacherstrasse 131, CH-4002 Basel (Switzerland)

    2011-09-21

    For real-time optoacoustic (OA) imaging of the human body, a linear array transducer and reflection mode optical irradiation is usually preferred. Such a setup, however, results in significant image background, which prevents imaging structures at the ultimate depth determined by the light distribution and the signal noise level. Therefore, we previously proposed a method for image background reduction, based on displacement-compensated averaging (DCA) of image series obtained when the tissue sample under investigation is gradually deformed. OA signals and background signals are differently affected by the deformation and can thus be distinguished. The proposed method is now experimentally applied to image artificial tumours embedded inside breast phantoms. OA images are acquired alternately with pulse-echo images using a combined OA/echo-ultrasound device. Tissue deformation is accessed via speckle tracking in pulse echo images, and used to compensate in the OA images for the local tissue displacement. In that way, OA sources are highly correlated between subsequent images, while background is decorrelated and can therefore be reduced by averaging. We show that image contrast in breast phantoms is strongly improved and detectability of embedded tumours significantly increased, using the DCA method.

  3. An improved procedure for determining grain boundary diffusion coefficients from averaged concentration profiles

    Science.gov (United States)

    Gryaznov, D.; Fleig, J.; Maier, J.

    2008-03-01

    Whipple's solution of the problem of grain boundary diffusion and Le Claire's relation, which is often used to determine grain boundary diffusion coefficients, are examined for a broad range of ratios of grain boundary to bulk diffusivities Δ and diffusion times t. Different reasons leading to errors in determining the grain boundary diffusivity (D_GB) when using Le Claire's relation are discussed. It is shown that nonlinearities of the diffusion profiles in ln C_av versus y^(6/5) plots and deviations from "Le Claire's constant" (-0.78) are the major error sources (C_av = averaged concentration, y = coordinate in the diffusion direction). An improved relation (replacing Le Claire's constant) is suggested for analyzing diffusion profiles, particularly suited for small diffusion lengths (short times) as often required in diffusion experiments on nanocrystalline materials.
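    For context, the relation in question is commonly written as follows (standard notation, not taken from this paper: s is the segregation factor, δ the grain boundary width, D the bulk diffusivity, and t the diffusion time):

    s\,\delta\,D_{\mathrm{GB}} = 0.661\,\left(-\frac{\partial \ln \bar{C}}{\partial y^{6/5}}\right)^{-5/3}\left(\frac{4D}{t}\right)^{1/2}

    The numerical factor 0.661 = (0.78)^{5/3} is where Le Claire's constant (-0.78) enters; the improved relation proposed in the paper effectively replaces that constant with a value appropriate to the actual Δ and t.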

  4. Improving The Average Session Evaluation Score Of Supervisory Program by Using PDCA Cycle At PT XYZ

    Directory of Open Access Journals (Sweden)

    Jonny Jonny

    2016-09-01

    Full Text Available PT XYZ treats people development as an important task in order to provide great leaders for its business operations. It has several leadership programs: a basic management program, a supervisory program, a managerial program, a senior management program, a general management program, and an executive program. PT XYZ appointed the ABC division to handle the basic management and supervisory programs on its own, while for the rest the ABC division cooperates with training providers reputable in leadership development. The aim of this study was to ensure that the appropriate leadership style has been delivered to employees by the ABC division according to the guideline, and to improve the average session evaluation score of the supervisory program by using the PDCA (Plan, Do, Check, Act) cycle. The method of this research was to gather quantitative and qualitative data through session and program evaluation forms to assess the current condition. The research finds the reasons why the program scores below the 4.10 target: they relate to new facilitators, the lack of a framework, and the teaching aids.

  5. Does Greater Autonomy Improve School Performance? Evidence from a Regression Discontinuity Analysis in Chicago

    Science.gov (United States)

    Steinberg, Matthew P.

    2014-01-01

    School districts throughout the United States are increasingly providing greater autonomy to local public (non-charter) school principals. In 2005-06, Chicago Public Schools initiated the Autonomous Management and Performance Schools program, granting academic, programmatic, and operational freedoms to select principals. This paper provides…

  6. Greater physician involvement improves coding outcomes in endobronchial ultrasound-guided transbronchial needle aspiration procedures.

    Science.gov (United States)

    Pillai, Anilkumar; Medford, Andrew R L

    2013-01-01

    Correct coding is essential for accurate reimbursement for clinical activity. Published data confirm that significant aberrations in coding occur, leading to considerable financial inaccuracies especially in interventional procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). Previous data reported a 15% coding error for EBUS-TBNA in a U.K. service. We hypothesised that greater physician involvement with coders would reduce EBUS-TBNA coding errors and financial disparity. The study was done as a prospective cohort study in the tertiary EBUS-TBNA service in Bristol. 165 consecutive patients between October 2009 and March 2012 underwent EBUS-TBNA for evaluation of unexplained mediastinal adenopathy on computed tomography. The chief coder was prospectively electronically informed of all procedures and cross-checked on a prospective database and by Trust Informatics. Cost and coding analysis was performed using the 2010-2011 tariffs. All 165 procedures (100%) were coded correctly as verified by Trust Informatics. This compares favourably with the 14.4% coding inaccuracy rate for EBUS-TBNA in a previous U.K. prospective cohort study [odds ratio 201.1 (1.1-357.5), p = 0.006]. Projected income loss was GBP 40,000 per year in the previous study, compared to a GBP 492,195 income here with no coding-attributable loss in revenue. Greater physician engagement with coders prevents coding errors and financial losses which can be significant especially in interventional specialties. The intervention can be as cheap, quick and simple as a prospective email to the coding team with cross-checks by Trust Informatics and against a procedural database. We suggest that all specialties should engage more with their coders using such a simple intervention to prevent revenue losses. Copyright © 2013 S. Karger AG, Basel.

  7. Strategies to improve homing of mesenchymal stem cells for greater efficacy in stem cell therapy.

    Science.gov (United States)

    Naderi-Meshkin, Hojjat; Bahrami, Ahmad Reza; Bidkhori, Hamid Reza; Mirahmadi, Mahdi; Ahmadiankia, Naghmeh

    2015-01-01

    Stem/progenitor cell-based therapeutic approach in clinical practice has been an elusive dream in medical sciences, and improvement of stem cell homing is one of major challenges in cell therapy programs. Stem/progenitor cells have a homing response to injured tissues/organs, mediated by interactions of chemokine receptors expressed on the cells and chemokines secreted by the injured tissue. For improvement of directed homing of the cells, many techniques have been developed either to engineer stem/progenitor cells with higher amount of chemokine receptors (stem cell-based strategies) or to modulate the target tissues to release higher level of the corresponding chemokines (target tissue-based strategies). This review discusses both of these strategies involved in the improvement of stem cell homing focusing on mesenchymal stem cells as most frequent studied model in cellular therapies. © 2014 International Federation for Cell Biology.

  8. Is Greater Improvement in Early Self-Regulation Associated with Fewer Behavioral Problems Later in Childhood?

    Science.gov (United States)

    Sawyer, Alyssa C. P.; Miller-Lewis, Lauren R.; Searle, Amelia K.; Sawyer, Michael G.; Lynch, John W.

    2015-01-01

    The aim of this study was to determine whether the extent of improvement in self-regulation achieved between ages 4 and 6 years is associated with the level of behavioral problems later in childhood. Participants were 4-year-old children (n = 510) attending preschools in South Australia. Children's level of self-regulation was assessed using the…

  9. Improving the Grade Point Average of Our At-Risk Students: A Collaborative Group Action Research Approach.

    Science.gov (United States)

    Saurino, Dan R.; Hinson, Kenneth; Bouma, Amy

    This paper focuses on the use of a group action research approach to help student teachers develop strategies to improve the grade point average of at-risk students. Teaching interventions such as group work and group and individual tutoring were compared to teaching strategies already used in the field. Results indicated an improvement in the…

  10. Mitigation effectiveness for improving nesting success of greater sage-grouse influenced by energy development

    Science.gov (United States)

    Kirol, Christopher P.; Sutphin, Andrew L.; Bond, Laura S.; Fuller, Mark R.; Maechtle, Thomas L.

    2015-01-01

    Sagebrush Artemisia spp. habitats being developed for oil and gas reserves are inhabited by sagebrush obligate species — including the greater sage-grouse Centrocercus urophasianus (sage-grouse) that is currently being considered for protection under the U.S. Endangered Species Act. Numerous studies suggest increasing oil and gas development may exacerbate species extinction risks. Therefore, there is a great need for effective on-site mitigation to reduce impacts to co-occurring wildlife such as sage-grouse. Nesting success is a primary factor in avian productivity and declines in nesting success are also thought to be an important contributor to population declines in sage-grouse. From 2008 to 2011 we monitored 296 nests of radio-marked female sage-grouse in a natural gas (NG) field in the Powder River Basin, Wyoming, USA, and compared nest survival in mitigated and non-mitigated development areas and relatively unaltered areas to determine if specific mitigation practices were enhancing nest survival. Nest survival was highest in relatively unaltered habitats followed by mitigated, and then non-mitigated NG areas. Reservoirs used for holding NG discharge water had the greatest support as having a direct relationship to nest survival. Within a 5-km2 area surrounding a nest, the probability of nest failure increased by about 15% for every 1.5 km increase in reservoir water edge. Reducing reservoirs was a mitigation focus and sage-grouse nesting in mitigated areas were exposed to almost half of the amount of water edge compared to those in non-mitigated areas. Further, we found that an increase in sagebrush cover was positively related to nest survival. Consequently, mitigation efforts focused on reducing reservoir construction and reducing surface disturbance, especially when the surface disturbance results in sagebrush removal, are important to enhancing sage-grouse nesting success.

  11. Commercial Integrated Heat Pump with Thermal Storage --Demonstrate Greater than 50% Average Annual Energy Savings, Compared with Baseline Heat Pump and Water Heater (Go/No-Go) FY16 4th Quarter Milestone Report

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Bo [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Baxter, Van D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rice, C. Keith [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Abu-Heiba, Ahmad [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-03-01

    For this study, we authored a new air source integrated heat pump (AS-IHP) model in EnergyPlus, and conducted building energy simulations to demonstrate greater than 50% average energy savings, in comparison to a baseline heat pump with electric water heater, over 10 US cities, based on the EnergyPlus quick-service restaurant template building. We also assessed water heating energy saving potentials using AS-IHP versus gas heating, and pointed out climate zones where AS-IHPs are promising.

  12. ON IMPROVEMENT OF METHODOLOGY FOR CALCULATING THE INDICATOR «AVERAGE WAGE»

    Directory of Open Access Journals (Sweden)

    Oksana V. Kuchmaeva

    2015-01-01

    Full Text Available The article describes approaches to the calculation of the average wage indicator in Russia with the use of several sources of information. The proposed method is based on data collected by Rosstat and the Pension Fund of the Russian Federation. The proposed approach allows capturing data on the wages of almost all groups of employees. Results of experimental calculations using the developed technique are presented in the article.

  13. Improving Indonesian cinnamon (C. burmannii (Nees & T. Nees) Blume) value chains for Greater Farmers' Incomes

    Science.gov (United States)

    Menggala, S. R.; Damme, P. V.

    2018-03-01

    The genus Cinnamomum (Lauraceae) regroups species whose stem bark is harvested, conditioned, and traded as cinnamon on the international market. Over the centuries, the species have been domesticated so that at least six different ones are now grown in Southeast Asian countries. One of these is Cinnamomum burmannii, also known as Korintje cinnamon, which generates income for most smallholder farmers in Kerinci district, Jambi, Indonesia. Most cinnamon consumed in the world originates from Korintje cinnamon products. It is recognized for its unparalleled quality, with a sharp and sweet flavor and a slightly bitter edge. However, international market requirements for product certification and quality standards make it difficult for farmers to comply. Our research will address issues related to (the improvement of) productivity, sustainability, and value chains faced by cinnamon producers in Kerinci, to strengthen their product's value chains. Smallholder farmers are very vulnerable to climate change impacts, so empowering the value chains of agricultural products will increase farmers' resilience to climate change. The research will analyze the development of agricultural value chains and certification and standards in trade mechanisms to help farmers earn a better income and improve their future prospects.

  14. OrthoANI: An improved algorithm and software for calculating average nucleotide identity.

    Science.gov (United States)

    Lee, Imchang; Ouk Kim, Yeong; Park, Sang-Cheol; Chun, Jongsik

    2016-02-01

    Species demarcation in Bacteria and Archaea is mainly based on overall genome relatedness, which serves as a framework for modern microbiology. Current practice for obtaining these measures between two strains is shifting from experimentally determined similarity obtained by DNA-DNA hybridization (DDH) to genome-sequence-based similarity. Average nucleotide identity (ANI) is a simple algorithm that mimics DDH. Like DDH, ANI values between two genome sequences may be different from each other when reciprocal calculations are compared. We compared 63 690 pairs of genome sequences and found that the differences in reciprocal ANI values are significant, exceeding 1 % in some cases. To resolve this lack of symmetry, a new algorithm, named OrthoANI, was developed to accommodate the concept of orthology, in which both genome sequences are fragmented and only orthologous fragment pairs are taken into consideration for calculating nucleotide identities. OrthoANI is highly correlated with ANI (using BLASTn), and the former shows approximately 0.1 % higher values than the latter. In conclusion, OrthoANI provides a more robust and faster means of calculating average nucleotide identity for taxonomic purposes. The standalone software tools are freely available at http://www.ezbiocloud.net/sw/oat.
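    The orthology bookkeeping can be sketched as follows: both genomes are cut into fragments (OrthoANI uses 1020 bp), only reciprocal-best fragment pairs are kept, and their identities are averaged. The per-pair identity function below is a deliberately crude stand-in (OrthoANI computes it with BLASTn) so the sketch runs end to end:

    FRAG = 1020  # fragment length used by OrthoANI

    def fragments(genome, size=FRAG):
        """Cut a genome into consecutive non-overlapping fragments; drop the tail."""
        return [genome[i:i + size] for i in range(0, len(genome) - size + 1, size)]

    def identity(a, b):
        """Placeholder per-pair identity: positional match fraction. OrthoANI
        computes this with BLASTn; this stand-in just makes the sketch runnable."""
        return sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))

    def ortho_ani(genome_a, genome_b):
        """Average identity over reciprocal-best (orthologous) fragment pairs."""
        fa, fb = fragments(genome_a), fragments(genome_b)
        best_ab = {i: max(range(len(fb)), key=lambda j: identity(fa[i], fb[j]))
                   for i in range(len(fa))}
        best_ba = {j: max(range(len(fa)), key=lambda i: identity(fa[i], fb[j]))
                   for j in range(len(fb))}
        # Keep only pairs that choose each other (reciprocal best hits).
        pairs = [(i, j) for i, j in best_ab.items() if best_ba.get(j) == i]
        return sum(identity(fa[i], fb[j]) for i, j in pairs) / len(pairs)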

  15. Reductions in Average Lengths of Stays for Surgical Procedures Between the 2008 and 2014 United States National Inpatient Samples Were Not Associated With Greater Incidences of Use of Postacute Care Facilities.

    Science.gov (United States)

    Dexter, Franklin; Epstein, Richard H

    2018-03-01

    Diagnosis-related group (DRG) based reimbursement creates incentives for reduction in hospital length of stay (LOS). Such reductions might be accomplished by lesser incidences of discharges to home. However, we previously reported that, while controlling for DRG, each 1-day decrease in hospital median LOS was associated with lesser odds of transfer to a postacute care facility (P = .0008). The result, though, was limited to elective admissions, 15 common surgical DRGs, and the 2013 US National Readmission Database. We studied the same potential relationship between decreased LOS and postacute care using different methodology and over 2 different years. The observational study was performed using summary measures from the 2008 and 2014 US National Inpatient Sample, with 3 types of categories (strata): (1) Clinical Classifications Software's classes of procedures (CCS), (2) DRGs including a major operating room procedure during hospitalization, or (3) CCS limiting patients to those with US Medicare as the primary payer. Greater reductions in the mean LOS were associated with smaller percentages of patients with disposition to postacute care. Analyzed using 72 different CCSs, 174 DRGs, or 70 CCSs limited to Medicare patients, each pairwise reduction in the mean LOS by 1 day was associated with an estimated 2.6% ± 0.4%, 2.3% ± 0.3%, or 2.4% ± 0.3% (absolute) pairwise reduction in the mean incidence of use of postacute care, respectively. These 3 results obtained using bivariate weighted least squares linear regression were all P < .0001, as were the corresponding results obtained using unweighted linear regression or the Spearman rank correlation. In the United States, reductions in hospital LOS, averaged over many surgical procedures, are not accomplished through a greater incidence of use of postacute care.

  16. Greater Biopsy Core Number Is Associated With Improved Biochemical Control in Patients Treated With Permanent Prostate Brachytherapy

    International Nuclear Information System (INIS)

    Bittner, Nathan; Merrick, Gregory S.; Galbreath, Robert W.; Butler, Wayne M.; Adamovich, Edward; Wallner, Kent E.

    2010-01-01

    Purpose: Standard prostate biopsy schemes underestimate Gleason score in a significant percentage of cases. Extended biopsy improves diagnostic accuracy and provides more reliable prognostic information. In this study, we tested the hypothesis that greater biopsy core number should result in improved treatment outcome through better tailoring of therapy. Methods and Materials: From April 1995 to May 2006, 1,613 prostate cancer patients were treated with permanent brachytherapy. Patients were divided into five groups stratified by the number of prostate biopsy cores (≤6, 7-9, 10-12, 13-20, and >20 cores). Biochemical progression-free survival (bPFS), cause-specific survival (CSS), and overall survival (OS) were evaluated as a function of core number. Results: The median patient age was 66 years, and the median preimplant prostate-specific antigen was 6.5 ng/mL. The overall 10-year bPFS, CSS, and OS were 95.6%, 98.3%, and 78.6%, respectively. When bPFS was analyzed as a function of core number, the 10-year bPFS for patients with >20, 13-20, 10-12, 7-9 and ≤6 cores was 100%, 100%, 98.3%, 95.8%, and 93.0% (p < 0.001), respectively. When evaluated by treatment era (1995-2000 vs. 2001-2006), the number of biopsy cores remained a statistically significant predictor of bPFS. On multivariate analysis, the number of biopsy cores was predictive of bPFS but did not predict for CSS or OS. Conclusion: Greater biopsy core number was associated with a statistically significant improvement in bPFS. Comprehensive regional sampling of the prostate may enhance diagnostic accuracy compared to a standard biopsy scheme, resulting in better tailoring of therapy.

  17. Analysis and Design of Improved Weighted Average Current Control Strategy for LCL-Type Grid-Connected Inverters

    DEFF Research Database (Denmark)

    Han, Yang; Li, Zipeng; Yang, Ping

    2017-01-01

    The LCL grid-connected inverter has the ability to attenuate the high-frequency current harmonics. However, the inherent resonance of the LCL filter affects the system stability significantly. To damp the resonance effect, dual-loop current control can be used to stabilize the system. The grid current plus capacitor current feedback system is widely used for its better transient response and high robustness against grid impedance variations, while the weighted average current (WAC) feedback scheme is capable of providing a wider bandwidth at higher frequencies but shows poor stability...

  18. Brief communication: Using averaged soil moisture estimates to improve the performances of a regional-scale landslide early warning system

    Science.gov (United States)

    Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Fanti, Riccardo; Casagli, Nicola

    2018-03-01

    We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. Another approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.

  19. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  20. Instructions to "push as hard as you can" improve average chest compression depth in dispatcher-assisted cardiopulmonary resuscitation.

    Science.gov (United States)

    Mirza, Muzna; Brown, Todd B; Saini, Devashish; Pepper, Tracy L; Nandigam, Hari Krishna; Kaza, Niroop; Cofield, Stacey S

    2008-10-01

    Cardiopulmonary resuscitation (CPR) with adequate chest compression depth appears to improve first shock success in cardiac arrest. We evaluate the effect of simplification of chest compression instructions on compression depth in a dispatcher-assisted CPR protocol. Data from two randomized, double-blinded, controlled trials with identical methodology were combined to obtain 332 records for this analysis. Subjects were randomized to either the modified Medical Priority Dispatch System (MPDS) v11.2 protocol or a new simplified protocol. The main difference between the protocols was the instruction to "push as hard as you can" in the simplified protocol, compared to "push down firmly 2 in. (5 cm)" in MPDS. Data were recorded via a Laerdal ResusciAnne SkillReporter manikin. Primary outcome measures included: chest compression depth, proportion of compressions without error, with adequate depth, and with total release. Instructions to "push as hard as you can", compared to "push down firmly 2 in. (5 cm)", resulted in significantly improved chest compression depth (36.4 mm vs. 29.7 mm). Simplifying the CPR instructions by changing "push down firmly 2 in. (5 cm)" to "push as hard as you can" achieved this improvement in chest compression depth at no cost to total release or average chest compression rate.

  1. The value of model averaging and dynamical climate model predictions for improving statistical seasonal streamflow forecasts over Australia

    Science.gov (United States)

    Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.

    2013-10-01

    Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improvement in seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of the different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors. This is to take advantage of the direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provides little additional benefit.
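    The combination step at the heart of the first strategy can be sketched in a few lines. This is a generic Bayesian model averaging illustration with invented likelihood values, not the Bureau's operational implementation:

```python
import numpy as np

# Minimal Bayesian model averaging (BMA) sketch: combine forecasts from
# several candidate models using weights proportional to each model's
# cross-validation likelihood. The numbers below are made up for
# illustration; a real application would estimate them from hindcasts.

forecasts = np.array([120.0, 150.0, 135.0])      # seasonal streamflow, one value per model
cv_likelihoods = np.array([0.8, 0.3, 0.6])       # hypothetical CV likelihoods

weights = cv_likelihoods / cv_likelihoods.sum()  # posterior model weights (equal priors)
bma_forecast = np.dot(weights, forecasts)        # weighted average of model forecasts

print(f"weights = {weights.round(3)}, BMA forecast = {bma_forecast:.1f}")
```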

  2. Olympic weightlifting and plyometric training with children provides similar or greater performance improvements than traditional resistance training.

    Science.gov (United States)

    Chaouachi, Anis; Hammami, Raouf; Kaabi, Sofiene; Chamari, Karim; Drinkwater, Eric J; Behm, David G

    2014-06-01

    A number of organizations recommend that advanced resistance training (RT) techniques can be implemented with children. The objective of this study was to evaluate the effectiveness of Olympic-style weightlifting (OWL), plyometrics, and traditional RT programs with children. Sixty-three children (10-12 years) were randomly allocated to a 12-week control, OWL, plyometric, or traditional RT program. Pre- and post-training tests included body mass index (BMI), sum of skinfolds, countermovement jump (CMJ), horizontal jump, balance, 5- and 20-m sprint times, and isokinetic force and power at 60 and 300°·s−1. Magnitude-based inferences were used to analyze the likelihood of an effect having a standardized (Cohen's) effect size exceeding 0.20. All interventions were generally superior to the control group. Olympic weightlifting was >80% likely to provide substantially better improvements than plyometric training for CMJ, horizontal jump, and 5- and 20-m sprint times, and >75% likely to substantially exceed traditional RT for balance and isokinetic power at 300°·s−1. Plyometric training was >78% likely to elicit substantially better training adaptations than traditional RT for balance, isokinetic force at 60 and 300°·s−1, isokinetic power at 300°·s−1, and 5- and 20-m sprints. Traditional RT only exceeded plyometric training for BMI and isokinetic power at 60°·s−1. Hence, OWL and plyometrics can provide similar or greater performance adaptations for children. It is recommended that any of the 3 training modalities be implemented under professional supervision with proper training progressions to enhance training adaptations in children.

  3. Greater Melbourne.

    Science.gov (United States)

    Wulff, M; Burke, T; Newton, P

    1986-03-01

    With more than a quarter of its population born overseas, Melbourne, Australia, is rapidly changing from an all-white British outpost to a multicultural, multilingual community. Since the "white" Australian policy was abandoned after World War II, 3 million immigrants from 100 different countries have moved to Australia. Most of the immigrants come from New Zealand, Rhodesia, South Africa, Britain, Ireland, Greece, Turkey, Yugoslavia, Poland, and Indochina. Melbourne is Australia's 2nd largest city and houses 1 out of 5 Australians. Its 1984 population was 2,888,400. Melbourne's housing pattern consists of subdivisions; 75% of the population live in detached houses. Between 1954 and 1961 Melbourne grew at an annual rate of 3.5%; its growth rate between 1961 and 1971 still averaged 2.5%. In the 1970s the growth rate slowed to 1.4%. Metropolitan Melbourne has no central government but is divided into 56 councils and 8 regions. Both Australia's and Melbourne's fertility rates are high compared to the rest of the developed world, partly because of their younger age structure. 41% of Melbourne's population was under age 24 in 1981. Single-person households are growing faster than any other type. 71% of the housing is owner-occupied; in 1981 the median sized dwelling had 5.2 rooms. Public housing only accounts for 2.6% of all dwellings. Fewer students graduate from high school in Australia than in other developed countries, and fewer graduates pursue higher education. Melbourne's suburban sprawl promotes private car travel. In 1980 Melbourne contained more than 28,000 retail establishments and 4200 restaurants and hotels. Industry accounts for 30% of employment, and services account for another 30%. Its largest industries are motor vehicles, clothing, and footwear. Although unemployment reached 10% after the 1973 energy crisis, by 1985 it was down to 6%.

  4. Improved drought monitoring in the Greater Horn of Africa by combining meteorological and remote sensing based indicators

    DEFF Research Database (Denmark)

    Horion, Stéphanie Marie Anne F; Kurnik, Blaz; Barbosa, Paulo

    2010-01-01

    , and therefore to better trigger timely and appropriate actions on the field. In this study, meteorological and remote sensing based drought indicators were compared over the Greater Horn of Africa in order to better understand: (i) how they depict historical drought events; (ii) if they could be combined...... distribution. Two remote sensing based indicators were tested: the Normalized Difference Water Index (NDWI) derived from SPOT-VEGETATION and the Global Vegetation Index (VGI) derived from MERIS. The first index is sensitive to change in leaf water content of vegetation canopies while the second is a proxy...... of the amount and vigour of vegetation. For both indexes, anomalies were estimated using available satellite archives. Cross-correlations between remote sensing based anomalies and SPI were analysed for five land covers (forest, shrubland, grassland, sparse grassland, cropland and bare soil) over different......

  5. The Greater Phenotypic Homeostasis of the Allopolyploid Coffea arabica Improved the Transcriptional Homeostasis Over that of Both Diploid Parents.

    Science.gov (United States)

    Bertrand, Benoît; Bardil, Amélie; Baraille, Hélène; Dussert, Stéphane; Doulbeau, Sylvie; Dubois, Emeric; Severac, Dany; Dereeper, Alexis; Etienne, Hervé

    2015-10-01

    Polyploidy impacts the diversity of plant species, giving rise to novel phenotypes and leading to ecological diversification. In order to observe adaptive and evolutionary capacities of polyploids, we compared the growth, primary metabolism and transcriptomic expression level in the leaves of the newly formed allotetraploid Coffea arabica species compared with its two diploid parental species (Coffea eugenioides and Coffea canephora), exposed to four thermal regimes (TRs; 18-14, 23-19, 28-24 and 33-29°C). The growth rate of the allopolyploid C. arabica was similar to that of C. canephora under the hottest TR and that of C. eugenioides under the coldest TR. For metabolite contents measured at the hottest TR, the allopolyploid showed similar behavior to C. canephora, the parent which tolerates higher growth temperatures in the natural environment. However, at the coldest TR, the allopolyploid displayed higher sucrose, raffinose and ABA contents than those of its two parents and similar linolenic acid leaf composition and Chl content to those of C. eugenioides. At the gene expression level, few differences between the allopolyploid and its parents were observed for studied genes linked to photosynthesis, respiration and the circadian clock, whereas genes linked to redox activity showed a greater capacity of the allopolyploid for homeostasis. Finally, we found that the overall transcriptional response to TRs of the allopolyploid was more homeostatic compared with its parents. This better transcriptional homeostasis of the allopolyploid C. arabica afforded a greater phenotypic homeostasis when faced with environments that are unsuited to the diploid parental species. © The Author 2015. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  6. Simultaneous administration of glucose and hyperoxic gas achieves greater improvement in tumor oxygenation than hyperoxic gas alone

    International Nuclear Information System (INIS)

    Snyder, Stacey A.; Lanzen, Jennifer L.; Braun, Rod D.; Rosner, Gary; Secomb, Timothy W.; Biaglow, John; Brizel, David M.; Dewhirst, Mark W.

    2001-01-01

    Purpose: To test the feasibility of hyperglycemic reduction of oxygen consumption combined with oxygen breathing (O2), to improve tumor oxygenation. Methods and Materials: Fischer-344 rats bearing 1 cm R3230Ac flank tumors were anesthetized with Nembutal. Mean arterial pressure, heart rate, tumor blood flow ([TBF], laser Doppler flowmetry), pH, and pO2 were measured before, during, and after glucose (1 or 4 g/kg) and/or O2. Results: Mean arterial pressure and heart rate were unaffected by treatment. Glucose at 1 g/kg yielded a maximum blood glucose of 400 mg/dL, no change in TBF, reduced tumor pH (0.17 unit), and a 3 mm Hg pO2 rise. Glucose at 4 g/kg yielded a maximum blood glucose of 900 mg/dL, a pH drop of 0.6 unit, no pO2 change, and reduced TBF (31%). Oxygen tension increased by 5 mm Hg with O2. Glucose (1 g/kg) + O2 yielded the largest change in pO2 (27 mm Hg); this is highly significant relative to baseline or either treatment alone. The effect was positively correlated with baseline pO2, but 6 of 7 experiments with baseline pO2 ... to improve tumor oxygenation. However, some cell lines are not susceptible to the Crabtree effect, and the magnitude is dependent on baseline pO2. Additional or alternative manipulations may be necessary to achieve more uniform improvement in pO2.

  7. Greater endurance capacity and improved dyspnoea with acute oxygen supplementation in idiopathic pulmonary fibrosis patients without resting hypoxaemia.

    Science.gov (United States)

    Dowman, Leona M; McDonald, Christine F; Bozinovski, Steven; Vlahos, Ross; Gillies, Rebecca; Pouniotis, Dodie; Hill, Catherine J; Goh, Nicole S L; Holland, Anne E

    2017-07-01

    Supplemental oxygen is commonly prescribed in patients with idiopathic pulmonary fibrosis (IPF), although its benefits have not been proven. The aims of this study were to investigate the effect of oxygen on oxidative stress, cytokine production, skeletal muscle metabolism and physiological response to exercise in IPF. Eleven participants with IPF received either oxygen, at an FiO2 of 0.50, or compressed air for 1 h at rest and during a cycle endurance test at 85% of peak work rate. Blood samples collected at rest and during exercise were analysed for markers of oxidative stress, skeletal muscle metabolism and cytokines. The protocol was repeated a week later with the alternate intervention. Compared with air, oxygen did not adversely affect biomarker concentrations at rest and significantly improved endurance time (mean difference = 99 ± 81 s, P = 0.002), dyspnoea (-1 ± 1 U, P = 0.02), systolic blood pressure (BP; -11 ± 11 mm Hg, P = 0.006), nadir oxyhaemoglobin saturation (SpO2; 8 ± 6%, P = 0.001), SpO2 at 2-min (7 ± 6%, P = 0.003) and 5-min isotimes (5 ± 3, P < 0.001) and peak exercise xanthine concentrations (-42 ± 73 µmol/L, P = 0.03). Air significantly increased IL-10 (5 ± 5 pg/mL, P = 0.04) at 2-min isotime. Thiobarbituric acid-reactive substances (TBARs), IL-6, TNF-α, creatine kinase, lactate, heart rate and fatigue did not differ between the two interventions at any time point. In patients with IPF, breathing oxygen at FiO2 of 0.50 at rest seems safe. During exercise, oxygen improves exercise tolerance, alleviates exercise-induced hypoxaemia and reduces dyspnoea. A potential relationship between oxygen administration and improved skeletal muscle metabolism should be explored in future studies. © 2017 Asian Pacific Society of Respirology.

  8. A collaborative project to improve identification and management of patients with chronic kidney disease in a primary care setting in Greater Manchester.

    Science.gov (United States)

    Humphreys, John; Harvey, Gill; Coleiro, Michelle; Butler, Brook; Barclay, Anna; Gwozdziewicz, Maciek; O'Donoghue, Donal; Hegarty, Janet

    2012-08-01

    Research has demonstrated a knowledge and practice gap in the identification and management of chronic kidney disease (CKD). In 2009, published data showed that general practices in Greater Manchester had a low detection rate for CKD. A 12-month improvement collaborative, supported by an evidence-informed implementation framework and financial incentives. 19 general practices from four primary care trusts within Greater Manchester. Number of recorded patients with CKD on practice registers; percentage of patients on registers achieving nationally agreed blood pressure targets. The collaborative commenced in September 2009 and involved three joint learning sessions, interspersed with practice level rapid improvement cycles, and supported by an implementation team from the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care for Greater Manchester. At baseline, the 19 collaborative practices had 4185 patients on their CKD registers. At final data collection in September 2010, this figure had increased by 1324 to 5509. Blood pressure improved from 34% to 74% of patients on practice registers having a recorded blood pressure within recommended guidelines. Evidence-based improvement can be implemented in practice for chronic disease management. A collaborative approach has been successful in enabling teams to test and apply changes to identify patients and improve care. The model has proved to be more successful for some practices, suggesting a need to develop more context-sensitive approaches to implementation and actively manage the factors that influence the success of the collaborative.

  9. Greater adherence to a Mediterranean dietary pattern is associated with improved plasma lipid profile: the Aragon Health Workers Study cohort.

    Science.gov (United States)

    Peñalvo, José L; Oliva, Belén; Sotos-Prieto, Mercedes; Uzhova, Irina; Moreno-Franco, Belén; León-Latre, Montserrat; Ordovás, José María

    2015-04-01

    There is wide recognition of the importance of healthy eating in cardiovascular health promotion. The purpose of this study was to identify the main dietary patterns among a Spanish population, and to determine their relationship with plasma lipid profiles. A cross-sectional analysis was conducted of data from 1290 participants of the Aragon Workers Health Study cohort. Standardized protocols were used to collect clinical and biochemistry data. Diet was assessed through a food frequency questionnaire, quantifying habitual intake over the past 12 months. The main dietary patterns were identified by factor analysis. The association between adherence to dietary patterns and plasma lipid levels was assessed by linear and logistic regression. Two dietary patterns were identified: a Mediterranean dietary pattern, high in vegetables, fruits, fish, white meat, nuts, and olive oil, and a Western dietary pattern, high in red meat, fast food, dairy, and cereals. Compared with the participants in the lowest quintile of adherence to the Western dietary pattern, those in the highest quintile had 4.6 mg/dL lower high-density lipoprotein cholesterol levels, whereas those in the highest quintile of adherence to the Mediterranean dietary pattern had 3.3 mg/dL higher high-density lipoprotein cholesterol levels. Greater adherence to a Mediterranean dietary pattern is associated with an improved lipid profile compared with a Western dietary pattern, which was associated with lower odds of optimal high-density lipoprotein cholesterol levels in this population. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  10. Towards Improving the Efficiency of Bayesian Model Averaging Analysis for Flow in Porous Media via the Probabilistic Collocation Method

    Directory of Open Access Journals (Sweden)

    Liang Xue

    2018-04-01

    Full Text Available The characterization of flow in subsurface porous media is associated with high uncertainty. To better quantify the uncertainty of groundwater systems, it is necessary to consider the model uncertainty. Multi-model uncertainty analysis can be performed in the Bayesian model averaging (BMA) framework. However, BMA analysis via the Monte Carlo method is time consuming because it requires many forward model evaluations. A computationally efficient BMA analysis framework is proposed by using the probabilistic collocation method to construct a response surface model, where the log hydraulic conductivity field and hydraulic head are expanded into polynomials through Karhunen–Loeve and polynomial chaos methods. A synthetic test is designed to validate the proposed response surface analysis method. The results show that the posterior model weight and the key statistics in the BMA framework can be accurately estimated. The relative errors of mean and total variance in the BMA analysis results are just approximately 0.013% and 1.18%, but the proposed method can be 16 times more computationally efficient than the traditional BMA method.
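    The Karhunen–Loeve expansion mentioned above can be illustrated compactly. A minimal sketch for a log-conductivity field on a 1D grid, assuming an exponential covariance model (the study's 2D setting and the polynomial chaos construction are omitted):

```python
import numpy as np

# Sketch of a truncated Karhunen-Loeve (KL) expansion of a log-conductivity
# field on a 1D grid, assuming an exponential covariance model. Grid size,
# variance and correlation length are illustrative assumptions.

n, variance, corr_len = 100, 1.0, 0.2
x = np.linspace(0.0, 1.0, n)
C = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

eigvals, eigvecs = np.linalg.eigh(C)   # eigen-decomposition of the covariance
order = np.argsort(eigvals)[::-1]      # sort modes by decreasing energy
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 10                                 # truncation: keep the leading modes
xi = np.random.standard_normal(k)      # independent N(0, 1) KL coefficients
log_K = eigvecs[:, :k] @ (np.sqrt(eigvals[:k]) * xi)  # one field realization

print(f"{k} modes capture {eigvals[:k].sum() / eigvals.sum():.1%} of the variance")
```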

  11. Improved Multiscale Entropy Technique with Nearest-Neighbor Moving-Average Kernel for Nonlinear and Nonstationary Short-Time Biomedical Signal Analysis

    Directory of Open Access Journals (Sweden)

    S. P. Arunachalam

    2018-01-01

    Full Text Available Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention, which is often recorded as short time series data that challenges existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve the previously developed multiscale entropy (MSE) technique by incorporating a nearest-neighbor moving-average kernel, which can be used for analysis of nonlinear and non-stationary short time series physiological data. The approach was tested for robustness with respect to noise analysis using simulated sinusoidal and ECG waveforms. Feasibility of MSE to discriminate between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to that of SE with various noises, discriminated NSR and AF on single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can provide efficient complexity analysis of a variety of nonlinear and nonstationary short-time biomedical signals.
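    The coarse-graining idea behind MSE can be sketched as follows: compute sample entropy on moving-average-filtered, downsampled copies of the signal at several scales. This generic sketch uses a plain moving-average kernel and conventional parameters (m = 2, r = 0.15·SD), not the nearest-neighbor kernel developed in the study:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    """Sample entropy SampEn(m, r) of a 1D signal (naive O(N^2) version)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(mm):
        # All template vectors of length mm and their pairwise Chebyshev distances.
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        n = len(templates)
        return (np.sum(d <= r) - n) / 2          # exclude self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
    """MSE via moving-average coarse-graining (a generic stand-in for the
    paper's nearest-neighbor moving-average kernel)."""
    out = []
    for s in scales:
        kernel = np.ones(s) / s
        # Moving average then downsample == non-overlapping block means.
        smoothed = np.convolve(x, kernel, mode="valid")[::s]
        out.append(sample_entropy(smoothed))
    return out

rng = np.random.default_rng(0)
print([round(v, 2) for v in multiscale_entropy(rng.standard_normal(1000))])
```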

  12. Genetic variance and covariance and breed differences for feed intake and average daily gain to improve feed efficiency in growing cattle.

    Science.gov (United States)

    Retallick, K J; Bormann, J M; Weaber, R L; MacNeil, M D; Bradford, H L; Freetly, H C; Hales, K E; Moser, D W; Snelling, W M; Thallman, R M; Kuehn, L A

    2017-04-01

    Feed costs are a major economic expense in finishing and developing cattle; however, collection of feed intake data is costly. Examining relationships among measures of growth and intake, including breed differences, could facilitate selection for efficient cattle. Objectives of this study were to estimate genetic parameters for growth and intake traits and compare indices for feed efficiency to accelerate selection response. On-test ADFI and on-test ADG (TESTADG) and postweaning ADG (PWADG) records for 5,606 finishing steers and growing heifers were collected at the U.S. Meat Animal Research Center in Clay Center, NE. On-test ADFI and ADG data were recorded over testing periods that ranged from 62 to 148 d. Individual quadratic regressions were fitted for BW on time, and TESTADG was predicted from the resulting equations. We included PWADG in the model to improve estimates of growth and intake parameters; PWADG was derived by dividing gain from weaning weight to yearling weight by the number of days between the weights. Genetic parameters were estimated using multiple-trait REML animal models with TESTADG, ADFI, and PWADG for both sexes as dependent variables. Fixed contemporary groups were cohorts of calves simultaneously tested, and covariates included age on test, age of dam, direct and maternal heterosis, and breed composition. Genetic correlations (SE) between steer TESTADG and ADFI, PWADG and ADFI, and TESTADG and PWADG were 0.33 (0.10), 0.59 (0.06), and 0.50 (0.09), respectively, and corresponding estimates for heifers were 0.66 (0.073), 0.77 (0.05), and 0.88 (0.05), respectively. Indices combining EBV for ADFI with EBV for ADG were developed and evaluated. Greater improvement in feed efficiency can be expected using an unrestricted index versus a restricted index. Heterosis significantly affected each trait contributing to greater ADFI and TESTADG. Breed additive effects were estimated for ADFI, TESTADG, and the efficiency indices.
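    The TESTADG derivation described above reduces to fitting a quadratic of body weight on test day and taking the fitted gain over the test period. A sketch with invented weigh-day data:

```python
import numpy as np

# Predict on-test ADG from a quadratic regression of body weight (BW) on time,
# as described in the abstract. The weigh-day data below are invented.

days = np.array([0, 21, 42, 63, 84, 105], dtype=float)      # test days
bw = np.array([350, 380, 412, 440, 465, 488], dtype=float)  # body weight, kg

coeffs = np.polyfit(days, bw, deg=2)     # BW(t) = b2*t^2 + b1*t + b0
fitted = np.polyval(coeffs, days)

# Gain over the test divided by days on test gives the predicted TESTADG.
test_adg = (fitted[-1] - fitted[0]) / (days[-1] - days[0])
print(f"TESTADG = {test_adg:.2f} kg/day")
```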

  13. Light-duty vehicle fuel economy improvements, 1979--1998: A consumer purchase model of corporate average fuel economy, fuel price, and income effects

    Science.gov (United States)

    Chien, David Michael

    2000-10-01

    The Energy Policy and Conservation Act of 1975, which created fuel economy standards for automobiles and light trucks, was passed by Congress in response to the rapid rise in world oil prices as a result of the 1973 oil crisis. The standards were first implemented in 1978 for automobiles and 1979 for light trucks, and began with initial standards of 18 MPG for automobiles and 17.2 MPG for light trucks. The current fuel economy standards for 1998 have been held constant at 27.5 MPG for automobiles and 20.5 MPG for light trucks since 1990--1991. While actual new automobile fuel economy has almost doubled from 14 MPG in 1974 to 27.2 MPG in 1994, it is reasonable to ask if the CAFE standards are still needed. Each year Congress attempts to pass another increase in the Corporate Average Fuel Economy (CAFE) standard and fails. Many have called for the abolition of CAFE standards, citing the ineffectiveness of the standards in the past. In order to determine whether CAFE standards should be increased, held constant, or repealed, an evaluation of the effectiveness of the CAFE standards to date must be established. Because fuel prices were rising concurrently with the CAFE standards, many authors have attributed the rapid rise in new car fuel economy solely to fuel prices. The purpose of this dissertation is to re-examine the determinants of new car fuel economy via three effects: CAFE regulations, fuel price, and income effects. By measuring the marginal effects of the three fuel economy determinants upon consumers' and manufacturers' choices for fuel economy, an estimate was made of the influence of each upon new fuel economy. The conclusions of this dissertation present some clear signals to policymakers: CAFE standards have been very effective in increasing fuel economy from 1979 to 1998. Furthermore, they have been the main cause of fuel economy improvement, with income being a much smaller component. Furthermore, this dissertation has suggested that fuel prices have

  14. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  15. High Intensity Interval Training Leads to Greater Improvements in Acute Heart Rate Recovery and Anaerobic Power as High Volume Low Intensity Training

    Science.gov (United States)

    Stöggl, Thomas L.; Björklund, Glenn

    2017-01-01

    The purpose of the current study was to explore if training regimes utilizing diverse training intensity distributions result in different responses on neuromuscular status, anaerobic capacity/power and acute heart rate recovery (HRR) in well-trained endurance athletes. Methods: Thirty-six male (n = 33) and female (n = 3) runners, cyclists, triathletes and cross-country skiers [peak oxygen uptake (VO2peak): 61.9 ± 8.0 mL·kg−1·min−1] were randomly assigned to one of three groups (blocked high intensity interval training HIIT; polarized training POL; high volume low intensity oriented control group CG/HVLIT applying no HIIT). A maximal anaerobic running/cycling test (MART/MACT) was performed prior to and following a 9-week training period. Results: Only the HIIT group achieved improvements in peak power/velocity (+6.4%, P < 0.001) and peak lactate (P = 0.001) during the MART/MACT, while, unexpectedly, in none of the groups was performance at the established lactate concentrations (4, 6, 10 mmol·L−1) changed (P > 0.05). Acute HRR was improved in HIIT (11.2%, P = 0.002) and POL (7.9%, P = 0.023) with no change in the HVLIT oriented control group. Conclusion: Only a training regime that includes a significant amount of HIIT improves the neuromuscular status, anaerobic power and the acute HRR in well-trained endurance athletes. A training regime that followed more a low and moderate intensity oriented model (CG/HVLIT) had no effect on any performance or HRR outcomes. PMID:28824457

  16. High Intensity Interval Training Leads to Greater Improvements in Acute Heart Rate Recovery and Anaerobic Power as High Volume Low Intensity Training

    Directory of Open Access Journals (Sweden)

    Thomas L. Stöggl

    2017-08-01

    Full Text Available The purpose of the current study was to explore if training regimes utilizing diverse training intensity distributions result in different responses on neuromuscular status, anaerobic capacity/power and acute heart rate recovery (HRR) in well-trained endurance athletes. Methods: Thirty-six male (n = 33) and female (n = 3) runners, cyclists, triathletes and cross-country skiers [peak oxygen uptake (VO2peak): 61.9 ± 8.0 mL·kg−1·min−1] were randomly assigned to one of three groups (blocked high intensity interval training HIIT; polarized training POL; high volume low intensity oriented control group CG/HVLIT applying no HIIT). A maximal anaerobic running/cycling test (MART/MACT) was performed prior to and following a 9-week training period. Results: Only the HIIT group achieved improvements in peak power/velocity (+6.4%, P < 0.001) and peak lactate (P = 0.001) during the MART/MACT, while, unexpectedly, in none of the groups was performance at the established lactate concentrations (4, 6, 10 mmol·L−1) changed (P > 0.05). Acute HRR was improved in HIIT (11.2%, P = 0.002) and POL (7.9%, P = 0.023) with no change in the HVLIT oriented control group. Conclusion: Only a training regime that includes a significant amount of HIIT improves the neuromuscular status, anaerobic power and the acute HRR in well-trained endurance athletes. A training regime that followed more a low and moderate intensity oriented model (CG/HVLIT) had no effect on any performance or HRR outcomes.

  17. Quality characteristics of chunked and formed hams from pale, average and dark muscles were improved using an ammonium hydroxide curing solution.

    Science.gov (United States)

    Everts, A J; Wulf, D M; Everts, A K R; Nath, T M; Jennings, T D; Weaver, A D

    2010-10-01

    Cooking yield, cooked pH, purge loss, moisture, lipid oxidation, external and internal color, break strength and elongation distance were assessed for pale (PALE), average (AVG) and dark (DARK) inside hams injected with either a control cure solution (CON) or a BPI-processing technology cure solution (BPT). Following enhancement, muscles were chunked, vacuum tumbled, smoked and cooked to 66 °C. Cooked ham pH was 6.49 for DARK, 6.40 for AVG, and 6.30 for PALE, respectively. Copyright © Meat Science Association. Published by Elsevier Ltd. All rights reserved.

  18. Greater autonomy at work

    NARCIS (Netherlands)

    Houtman, I.L.D.

    2004-01-01

    In the past 10 years, workers in the Netherlands increasingly report more decision-making power in their work. This is important for an economy in recession and where workers face greater work demands. It makes work more interesting, creates a healthier work environment, and provides opportunities

  19. Hybrid support vector regression and autoregressive integrated moving average models improved by particle swarm optimization for property crime rates forecasting with economic indicators.

    Science.gov (United States)

    Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina

    2013-01-01

    Crimes forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crimes data, it is common that the data consists of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on values of its parameters, while ARIMA is not robust to be applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models.
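    The hybrid structure can be sketched as follows: ARIMA models the linear component of the series, and an SVR is trained on the ARIMA residuals to capture the nonlinearity. The series below is synthetic, and the fixed SVR parameters stand in for the PSO-tuned values described in the abstract:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

# Hybrid ARIMA + SVR sketch: ARIMA captures the linear structure, SVR models
# the nonlinearity left in the ARIMA residuals. The "crime rate" series is
# synthetic, and the fixed SVR parameters are placeholders for PSO tuning.

rng = np.random.default_rng(1)
t = np.arange(120, dtype=float)
series = 50 + 0.3 * t + 5 * np.sin(t / 6) + rng.normal(0, 1, t.size)

arima = ARIMA(series, order=(1, 1, 1)).fit()
residuals = series - arima.predict(start=0, end=len(series) - 1)

# SVR learns residual(t) from a window of lagged residuals.
lags = 4
X = np.column_stack([residuals[i:len(residuals) - lags + i] for i in range(lags)])
y = residuals[lags:]
svr = SVR(C=10.0, epsilon=0.1, gamma="scale").fit(X, y)

linear_part = arima.forecast(steps=1)[0]
nonlinear_part = svr.predict(residuals[-lags:].reshape(1, -1))[0]
print(f"hybrid one-step forecast: {linear_part + nonlinear_part:.2f}")
```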

  20. Hybrid Support Vector Regression and Autoregressive Integrated Moving Average Models Improved by Particle Swarm Optimization for Property Crime Rates Forecasting with Economic Indicators

    Directory of Open Access Journals (Sweden)

    Razana Alwee

    2013-01-01

    Full Text Available Crimes forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crimes data, it is common that the data consists of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on values of its parameters, while ARIMA is not robust to be applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models.

  1. Improvement of internal tumor volumes of non-small cell lung cancer patients for radiation treatment planning using interpolated average CT in PET/CT.

    Directory of Open Access Journals (Sweden)

    Yao-Ching Wang

    Full Text Available Respiratory motion causes uncertainties in tumor edges on either computed tomography (CT) or positron emission tomography (PET) images and causes misalignment when registering PET and CT images. This phenomenon may cause radiation oncologists to delineate tumor volume inaccurately in radiotherapy treatment planning. The purpose of this study was to analyze radiology applications using interpolated average CT (IACT) as attenuation correction (AC) to diminish the occurrence of this scenario. Thirteen non-small cell lung cancer patients were recruited for the present comparison study. Each patient had full-inspiration and full-expiration CT images and free-breathing PET images acquired by an integrated PET/CT scan. IACT for AC in PET(IACT) was used to reduce the PET/CT misalignment. The standardized uptake value (SUV) correction with a low radiation dose was applied, and its tumor volume delineation was compared to those from HCT/PET(HCT). The misalignment between PET(IACT) and IACT was reduced when compared to the difference between PET(HCT) and HCT. The range of tumor motion was from 4 to 17 mm in the patient cohort. For HCT and PET(HCT), correction was from 72% to 91%, while for IACT and PET(IACT), correction was from 73% to 93% (p < 0.0001). The maximum and minimum differences in SUVmax were 0.18% and 27.27% for PET(HCT) and PET(IACT), respectively. The largest percentage differences in the tumor volumes between HCT/PET and IACT/PET were observed in tumors located in the lowest lobe of the lung. Internal tumor volume defined by functional information using IACT/PET(IACT) fusion images for lung cancer would reduce the inaccuracy of tumor delineation in radiation therapy planning.

  2. Greater-confinement disposal

    International Nuclear Information System (INIS)

    Trevorrow, L.E.; Schubert, J.P.

    1989-01-01

    Greater-confinement disposal (GCD) is a general term for low-level waste (LLW) disposal technologies that employ natural and/or engineered barriers and provide a degree of confinement greater than that of shallow-land burial (SLB) but possibly less than that of a geologic repository. Thus GCD is associated with lower risk/hazard ratios than SLB. Although any number of disposal technologies might satisfy the definition of GCD, eight have been selected for consideration in this discussion. These technologies include: (1) earth-covered tumuli, (2) concrete structures, both above and below grade, (3) deep trenches, (4) augered shafts, (5) rock cavities, (6) abandoned mines, (7) high-integrity containers, and (8) hydrofracture. Each of these technologies employs several operations that are mature; however, some are at more advanced stages of development and demonstration than others. Each is defined and further described by information on design, advantages and disadvantages, special equipment requirements, and characteristic operations such as construction, waste emplacement, and closure.

  3. Breastfeeding in Mexico was stable, on average, but deteriorated among the poor, whereas complementary feeding improved: results from the 1999 to 2006 National Health and Nutrition Surveys.

    Science.gov (United States)

    González de Cossío, Teresita; Escobar-Zaragoza, Leticia; González-Castell, Dinorah; Reyes-Vázquez, Horacio; Rivera-Dommarco, Juan A

    2013-05-01

    We present: 1) indicators of infant and young child feeding practices (IYCFP) and median age of introduction of foods analyzed by geographic and socioeconomic variables for the 2006 national probabilistic Health Nutrition Survey (ENSANUT-2006); and 2) changes in IYCFP indicators between the 1999 national probabilistic Nutrition Survey and ENSANUT-2006, analyzed by the same variables. Participants were women 12-49 y and their <2-y-old children (2953 in 2006 and 3191 in 1999). Indicators were estimated with the status quo method. The median age of introduction of foods was calculated by the Kaplan-Meier method using recall data. The national median duration of breastfeeding was similar in both surveys, 9.7 mo in 1999 and 10.4 mo in 2006, but decreased in the vulnerable population. In 1999 indigenous women breastfed 20.8 mo but did so for only 13.0 mo in 2006. The national percentage of those exclusively breastfeeding <6 mo also remained stable: 20% in 1999 and 22.3% in 2006. Nevertheless, exclusively breastfeeding <6 mo changed within the indigenous population, from 46% in 1999 to 34.5% in 2006. Between surveys, most breastfeeding indicators had lower values in vulnerable populations than in those better-off. Complementary feeding, however, improved overall. Complementary feeding was inadequately timed: median age of introduction of plain water was 3 mo, formula and non-human milk was 5 mo, and cereals, legumes, and animal foods was 5 mo. Late introduction of animal foods occurred among vulnerable indigenous population when 50% consumed these products at 8 mo. Mexican IYCFP indicate that public policy must protect breastfeeding while promoting the timely introduction of complementary feeding.

  4. More features, greater connectivity.

    Science.gov (United States)

    Hunt, Sarah

    2015-09-01

    Changes in our political infrastructure, the continuing frailties of our economy, and a stark growth in population have greatly impacted upon the perceived stability of the NHS. Healthcare teams have had to adapt to these changes, and so too have the technologies upon which they rely to deliver first-class patient care. Here Sarah Hunt, marketing co-ordinator at Aid Call, assesses how the changing healthcare environment has affected one of its fundamental technologies - the nurse call system - argues the case for wireless versions of such systems in terms of what the company claims is greater adaptability to changing needs, and considers the ever-wider range of features and functions available from today's nurse call equipment, particularly via connectivity with both mobile devices and ancillaries ranging from enuresis sensors to staff attack alert 'badges'.

  5. Greater oil investment opportunities

    International Nuclear Information System (INIS)

    Arenas, Ismael Enrique

    1997-01-01

    Geologically speaking, Colombia is a very attractive country for the world oil community, and in line with this view new and important steps are being taken to reinforce the oil sector: expansion of the exploratory frontier by including a larger number of sedimentary areas, and the adoption of innovative contracting instruments. Colombia has to offer: greater economic incentives for the exploration of new areas to expand the exploratory frontier, and stimulation of exploration in areas with prospectivity for small fields. Companies may offer Ecopetrol a participation in production over and above royalties, without its participating in the investments and costs of these fields, and more favorable conditions for natural gas seeking projects in comparison with those governing the terms for oil.

  6. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
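    The difference between the barycenter approach and a geometrically sounder average is easy to demonstrate with quaternions. The sketch below contrasts the normalized arithmetic mean with the standard eigenvector method, which is sign-invariant and a common surrogate for the Riemannian mean; it is an illustration, not the correction derived in the article:

```python
import numpy as np

# Two ways to average unit quaternions (w, x, y, z) representing rotations.
# (1) Barycenter: normalize the arithmetic mean -- simple, but ignores the
#     manifold structure and the q / -q sign ambiguity.
# (2) Eigenvector method: dominant eigenvector of the sum of outer products,
#     which is sign-invariant and a common surrogate for the Riemannian mean.

def barycenter_mean(quats):
    m = np.mean(quats, axis=0)
    return m / np.linalg.norm(m)

def eigen_mean(quats):
    M = sum(np.outer(q, q) for q in quats)   # q and -q contribute identically
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]                     # eigenvector of largest eigenvalue

quats = np.array([
    [1.0, 0.0, 0.0, 0.0],           # identity rotation
    [-0.999, -0.035, 0.0, 0.0],     # ~4 deg about x, written with flipped sign
])
quats = quats / np.linalg.norm(quats, axis=1, keepdims=True)

print("barycenter: ", barycenter_mean(quats).round(3))  # wrecked by the sign flip
print("eigenvector:", eigen_mean(quats).round(3))       # sensible halfway rotation
```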

  7. Improving predictions of protein-protein interfaces by combining amino acid-specific classifiers based on structural and physicochemical descriptors with their weighted neighbor averages.

    Directory of Open Access Journals (Sweden)

    Fábio R de Moraes

    Full Text Available Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR) from free surface residues (FSR). We formulated a linear discriminative analysis (LDA) classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/) are suitable for such a task. Receiver operating characteristic (ROC) analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication) or when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study

  8. Improving predictions of protein-protein interfaces by combining amino acid-specific classifiers based on structural and physicochemical descriptors with their weighted neighbor averages.

    Science.gov (United States)

    de Moraes, Fábio R; Neshich, Izabella A P; Mazoni, Ivan; Yano, Inácio H; Pereira, José G C; Salim, José A; Jardine, José G; Neshich, Goran

    2014-01-01

    Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR) from free surface residues (FSR). We formulated a linear discriminative analysis (LDA) classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/) are suitable for such a task. Receiver operating characteristic (ROC) analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication) or, when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study is now

  9. Improving Predictions of Protein-Protein Interfaces by Combining Amino Acid-Specific Classifiers Based on Structural and Physicochemical Descriptors with Their Weighted Neighbor Averages

    Science.gov (United States)

    de Moraes, Fábio R.; Neshich, Izabella A. P.; Mazoni, Ivan; Yano, Inácio H.; Pereira, José G. C.; Salim, José A.; Jardine, José G.; Neshich, Goran

    2014-01-01

    Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR) from free surface residues (FSR). We formulated a linear discriminative analysis (LDA) classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/) are suitable for such a task. Receiver operating characteristic (ROC) analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication) or, when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study is now
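    The classification step itself is straightforward to reproduce in outline. A minimal sketch using scikit-learn's LinearDiscriminantAnalysis on synthetic stand-ins for the STING descriptors (the real descriptor set, the database access, and the amino acid-specific classifiers are beyond a short example):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# LDA sketch for separating interface-forming residues (IFR, label 1) from
# free surface residues (FSR, label 0). The three feature columns below are
# synthetic stand-ins for physicochemical/structural descriptors.

rng = np.random.default_rng(42)
n = 500
fsr = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(n, 3))
ifr = rng.normal(loc=[1.0, 0.8, 0.5], scale=1.0, size=(n, 3))  # shifted class means
X = np.vstack([fsr, ifr])
y = np.r_[np.zeros(n), np.ones(n)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
scores = lda.decision_function(X_te)          # scores for ROC analysis
print(f"ROC AUC on held-out residues: {roc_auc_score(y_te, scores):.3f}")
```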

  10. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  11. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  12. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  13. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...

  14. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
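    The objective itself is easy to illustrate numerically for a fixed, ultimately periodic play. The toy sketch below evaluates the long-run average of the accumulated energy for a given sequence of energy deltas; it is an illustration of the definition, not a solver for the games:

```python
from itertools import accumulate

# Average-energy of an ultimately periodic play: the long-run average of the
# accumulated energy. Approximated here by unrolling many cycle repetitions.

def average_energy(prefix, cycle, repetitions=1000):
    """Approximate the long-run average accumulated energy of a play given as
    a finite prefix followed by an infinitely repeated cycle of deltas."""
    deltas = prefix + cycle * repetitions
    acc = list(accumulate(deltas))   # accumulated energy after each step
    return sum(acc) / len(acc)

# A zero-drift cycle (+2, -1, -1): accumulated energy cycles through 2, 1, 0,
# so the average energy is (2 + 1 + 0) / 3 = 1.0.
print(average_energy([], [2, -1, -1]))
```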

  15. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  16. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.

  17. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case that there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetry of the nucleon matter, ranging from symmetry beyond the neutron-drip line until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which then was fitted to experimental masses. (orig.)

  18. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  19. Planning for greater confinement disposal

    International Nuclear Information System (INIS)

    Gilbert, T.L.; Luner, C.; Meshkov, N.K.; Trevorrow, L.E.; Yu, C.

    1985-01-01

    A report that provides guidance for planning for greater-confinement disposal (GCD) of low-level radioactive waste is being prepared. The report addresses procedures for selecting a GCD technology and provides information for implementing these procedures. The focus is on GCD; planning aspects common to GCD and shallow-land burial are covered by reference. Planning procedure topics covered include regulatory requirements, waste characterization, benefit-cost-risk assessment and pathway analysis methodologies, determination of need, waste-acceptance criteria, performance objectives, and comparative assessment of attributes that support these objectives. The major technologies covered include augered shafts, deep trenches, engineered structures, hydrofracture, improved waste forms, and high-integrity containers. Descriptive information is provided, and attributes that are relevant for risk assessment and operational requirements are given. 10 refs., 3 figs., 2 tabs

  20. A Small Decrease in Rubisco Content by Individual Suppression of RBCS Genes Leads to Improvement of Photosynthesis and Greater Biomass Production in Rice Under Conditions of Elevated CO2.

    Science.gov (United States)

    Kanno, Keiichi; Suzuki, Yuji; Makino, Amane

    2017-03-01

    Rubisco limits photosynthesis at low CO2 concentrations ([CO2]), but does not limit it at elevated [CO2]. This means that the amount of Rubisco is excessive for photosynthesis at elevated [CO2]. Therefore, we examined whether a small decrease in Rubisco content by individual suppression of the RBCS multigene family leads to increases in photosynthesis and biomass production at elevated [CO2] in rice (Oryza sativa L.). Our previous studies indicated that the individual suppression of RBCS decreased Rubisco content in rice by 10-25%. Three lines of BC2F2 progeny were selected from transgenic plants with individual suppression of OsRBCS2, 3 and 5. Rubisco content in the selected lines was 71-90% that of wild-type plants. These three transgenic lines showed lower rates of CO2 assimilation at low [CO2] (28 Pa) but higher rates of CO2 assimilation at elevated [CO2] (120 Pa). Similarly, the biomass production and relative growth rate (RGR) of the two lines were also smaller at low [CO2] but greater than that of wild-type plants at elevated [CO2]. This greater RGR was caused by the higher net assimilation rate (NAR). When the nitrogen use efficiency (NUE) for the NAR was estimated by dividing the NAR by whole-plant leaf N content, the NUE for NAR at elevated [CO2] was higher in these two lines. Thus, a small decrease in Rubisco content leads to improvements of photosynthesis and greater biomass production in rice under conditions of elevated CO2. © The Author 2017. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  1. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which significantly improves computational efficiency. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
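
    For orientation, a minimal sketch of the conventional TDA that the abstract improves upon is given below (plain Python/NumPy; the function name and arguments are ours, and FTDA's per-harmonic comb-filter adjustment and chirp Z-transform step are not reproduced):

      import numpy as np

      def time_domain_average(signal, period_samples):
          # Classical TDA: cut the record into segments of one nominal
          # period and average them. Harmonics of the rotation frequency
          # add coherently while noise averages out (comb-filter effect).
          # If the true period is not an integer number of samples, the
          # segment boundaries drift from cycle to cycle -- the period
          # cutting error (PCE) that FTDA is designed to avoid.
          n_periods = len(signal) // period_samples
          trimmed = signal[:n_periods * period_samples]
          return trimmed.reshape(n_periods, period_samples).mean(axis=0)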

  2. Does physiotherapy based on the Bobath concept, in conjunction with a task practice, achieve greater improvement in walking ability in people with stroke compared to physiotherapy focused on structured task practice alone?: a pilot randomized controlled trial.

    Science.gov (United States)

    Brock, Kim; Haase, Gerlinde; Rothacher, Gerhard; Cotton, Susan

    2011-10-01

    To compare the short-term effects of two physiotherapy approaches for improving the ability to walk in different environments following stroke: (i) interventions based on the Bobath concept, in conjunction with task practice, compared to (ii) structured task practice alone. Randomized controlled trial. Two rehabilitation centres. Participants: Twenty-six participants between four and 20 weeks post-stroke, able to walk with supervision indoors. Both groups received six one-hour physiotherapy sessions over a two-week period. One group received physiotherapy based on the Bobath concept, including one hour of structured task practice. The other group received six hours of structured task practice. The primary outcome was an adapted six-minute walk test, incorporating a step, ramp and uneven surface. Secondary measures were gait velocity and the Berg Balance Scale. Measures were assessed before and after the intervention period. Following the intervention, there was no significant difference in improvement between the two groups for the adapted six-minute walk test (89.9 (standard deviation (SD) 73.1) m Bobath versus 41 (SD 40.7) m task practice, P = 0.07). However, walking velocity showed significantly greater increases in the Bobath group (26.2 (SD 17.2) m/min versus 9.9 (SD 12.9) m/min, P = 0.01). No significant differences between groups were recorded for the Berg Balance Scale (P = 0.2). This pilot study indicates a short-term benefit of interventions based on the Bobath concept for improving walking velocity in people with stroke. A sample size of 32 participants per group is required for a definitive study.

  3. Butterfly valves: greater use in power plants

    International Nuclear Information System (INIS)

    McCoy, M.

    1975-01-01

    Improvements in butterfly valves, particularly in the areas of automatic control and leak tightness are described. The use of butterfly valves in nuclear power plants is discussed. These uses include service in component cooling, containment cooling, and containment isolation. The outlook for further improvements and greater uses is examined. (U.S.)

  4. Waste management in Greater Vancouver

    Energy Technology Data Exchange (ETDEWEB)

    Carrusca, K. [Greater Vancouver Regional District, Burnaby, BC (Canada); Richter, R. [Montenay Inc., Vancouver, BC (Canada)]|[Veolia Environmental Services, Vancouver, BC (Canada)

    2006-07-01

    An outline of the Greater Vancouver Regional District (GVRD) waste-to-energy program was presented. The GVRD has an annual budget for solid waste management of $90 million. Energy recovery revenues from solid waste currently exceed $10 million. Over 1,660,000 tonnes of GVRD waste is recycled, and another 280,000 tonnes is converted from waste to energy. The GVRD waste-to-energy facility combines state-of-the-art combustion and air pollution control, and has processed over 5 million tonnes of municipal solid waste since it opened in 1988. Its central location minimizes haul distance, and it was originally sited to utilize steam through sales to a paper recycling mill. The facility has won several awards, including the Solid Waste Association of North America award for best facility in 1990. The facility focuses on continual improvement, and has installed a carbon injection system; an ammonia injection system; a flyash stabilization system; and heat capacity upgrades in addition to conducting continuous waste composition studies. Continuous air emissions monitoring is also conducted at the plant, which produces a very small percentage of the total air emissions in metropolitan Vancouver. The GVRD is now seeking options for the management of a further 500,000 tonnes per year of solid waste, and has received 23 submissions from a range of waste-to-energy technologies, which are now being evaluated. It was concluded that waste-to-energy plants can be located in densely populated metropolitan areas and provide a local disposal solution as well as a source of renewable energy. Other GVRD waste reduction policies were also reviewed. refs., tabs., figs.

  5. Simultaneous bilateral isolated greater trochanter fracture

    Directory of Open Access Journals (Sweden)

    Maruti Kambali

    2013-01-01

    A 48-year-old woman sustained simultaneous isolated bilateral greater trochanteric fractures following a road traffic accident. The patient presented to us 1 month after the injury. She presented with complaints of pain in the left hip and inability to walk. Roentgenograms revealed displaced comminuted bilateral greater trochanter fractures. The fracture of the left greater trochanter was reduced and fixed internally using the tension band wiring technique. The greater trochanter fracture on the right side was asymptomatic and was managed conservatively. The patient regained full range of motion and use of her hips after a postoperative follow-up of 6 months. Isolated fractures of the greater trochanter are unusual injuries. Because of their relative rarity and the unsettled controversy regarding their etiology and pathogenesis, several methods of treatment have been advocated. Furthermore, reports of this particular type of injury are not plentiful and the average textbook coverage afforded to this entity is limited. In our study we discuss the mechanism of injury and the various treatment options available.

  6. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
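
    The stated identity can be written compactly. The notation below is our own reading of the abstract (an assumption, since the paper's symbols are not given here): x̄_w and x̄_v are the averages of x under weighting functions w and v, r = w/v, and moments are taken with respect to v.

      % LaTeX sketch of the identity described in the abstract
      \[
        \bar{x}_w - \bar{x}_v = \frac{\operatorname{Cov}_v(x, r)}{\bar{r}_v},
        \qquad r = \frac{w}{v},
      \]
      % which follows from \bar{x}_w = E_v[r x] / E_v[r], so that
      % \bar{x}_w - \bar{x}_v = (E_v[r x] - E_v[r]\,E_v[x]) / E_v[r].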

  7. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
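
    The linear-versus-logarithmic bias is easy to reproduce in a toy setting. The sketch below is a hypothetical illustration, not the paper's system simulator; the abundance level, variability and noise figures are invented for the example:

      import numpy as np

      rng = np.random.default_rng(0)

      # True abundances with large natural variability, observed with
      # multiplicative (log-normal) measurement noise.
      true = rng.lognormal(mean=np.log(100.0), sigma=0.5, size=100_000)
      retrieved_log = np.log(true) + rng.normal(0.0, 0.3, size=true.size)

      linear_avg = np.exp(retrieved_log).mean()  # average the abundances
      log_avg = np.exp(retrieved_log.mean())     # average the logarithms

      # linear_avg overshoots the true mean by roughly exp(0.3**2 / 2),
      # while log_avg undershoots it because exp(E[log x]) <= E[x]
      # (Jensen's inequality); both biases reach several percent here.
      print(true.mean(), linear_avg, log_avg)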

  8. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  9. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution, as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.

  10. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that accounts for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
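
    For reference, the averaging model itself is simple to state. The sketch below is our own minimal formulation (the parameter names are illustrative and are not R-Average's interface): the predicted response is a weighted average of the attribute scale values, optionally including an initial impression.

      import numpy as np

      def averaging_response(scale_values, weights, s0=0.0, w0=0.0):
          # Averaging model of Information Integration Theory:
          # R = (w0 * s0 + sum(w_i * s_i)) / (w0 + sum(w_i)).
          # Because each attribute's weight rescales the contribution of
          # all the others, the model can mimic interaction effects
          # without extra parameters.
          s = np.asarray(scale_values, dtype=float)
          w = np.asarray(weights, dtype=float)
          return (w0 * s0 + np.sum(w * s)) / (w0 + np.sum(w))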

  11. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  12. Operational technology for greater confinement disposal

    International Nuclear Information System (INIS)

    Dickman, P.T.; Vollmer, A.T.; Hunter, P.H.

    1984-12-01

    Procedures and methods for the design and operation of a greater confinement disposal facility using large-diameter boreholes are discussed. It is assumed that the facility would be located at an operating low-level waste disposal site and that only a small portion of the wastes received at the site would require greater confinement disposal. The document is organized into sections addressing: facility planning process; facility construction; waste loading and handling; radiological safety planning; operations procedures; and engineering cost studies. While primarily written for low-level waste management site operators and managers, a detailed economic assessment section is included that should assist planners in performing cost analyses. Economic assessments for both commercial and US government greater confinement disposal facilities are included. The estimated disposal costs range from $27 to $104 per cubic foot for a commercial facility and from $17 to $60 per cubic foot for a government facility. These costs are based on average site preparation, construction, and waste loading costs for both contact- and remote-handled wastes. 14 figures, 22 tables

  13. Does improvement in maternal attachment representations predict greater maternal sensitivity, child attachment security and lower rates of relapse to substance use? A second test of Mothering from the Inside Out treatment mechanisms.

    Science.gov (United States)

    Suchman, Nancy E; DeCoste, Cindy; Borelli, Jessica L; McMahon, Thomas J

    2018-02-01

    In this study, we replicated a rigorous test of the proposed mechanisms of change associated with Mothering from the Inside Out (MIO), an evidence-based parenting therapy that aims to enhance maternal reflective functioning and mental representations of caregiving in mothers enrolled in addiction treatment and caring for young children. First, using data from 84 mothers who enrolled in our second randomized controlled trial, we examined whether therapist fidelity to core MIO treatment components predicted improvement in maternal reflective functioning and mental representations of caregiving, even after taking fidelity to non-MIO components into account. Next, we examined whether improvement in directly targeted outcomes (e.g., maternal mentalizing and mental representations of caregiving) led to improvements in the indirectly targeted outcome of maternal caregiving sensitivity, even after controlling for other plausible competing mechanisms (e.g., improvement in maternal psychiatric distress and substance use). Third, we examined whether improvement in targeted parenting outcomes (e.g., maternal mentalizing, mental representations of caregiving and caregiving sensitivity) was associated with improvement in child attachment status, even after controlling for competing mechanisms (e.g., improvement in maternal psychiatric distress and substance use). Finally, we examined whether improvement in maternal mentalizing and caregiving representations was associated with a reduction in relapse to substance use. Support was found for the first three tests of mechanisms but not the fourth. Implications for future research and intervention development are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables
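
    For reference, the Porter-Thomas distribution mentioned above is the chi-squared distribution with one degree of freedom for the normalized reduced neutron widths; the truncation arises because weak levels fall below the detection threshold.

      % Porter-Thomas density for x = (reduced width) / (average reduced width)
      \[
        P(x)\,dx = \frac{1}{\sqrt{2\pi x}}\, e^{-x/2}\, dx,
        \qquad x = \Gamma_n^{0} / \langle \Gamma_n^{0} \rangle .
      \]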

  15. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  16. [Autoerotic fatalities in Greater Dusseldorf].

    Science.gov (United States)

    Hartung, Benno; Hellen, Florence; Borchard, Nora; Huckenbeck, Wolfgang

    2011-01-01

    Autoerotic fatalities in the Greater Dusseldorf area correspond to the relevant medicolegal literature. Our results included exclusively young to middle-aged, usually single men who were found dead in their city apartments. Clothing and devices used showed a great variety. Women's or fetish clothing and complex shackling or hanging devices were disproportionately frequent. In most cases, death occurred due to hanging or ligature strangulation. There was no increased incidence of underlying psychiatric disorders. In most of the deceased no or at least no remarkable alcohol intoxication was found. Occasionally, it may be difficult to reliably differentiate autoerotic accidents, accidents occurring in connection with practices of bondage & discipline, dominance & submission (BDSM) from natural death, suicide or homicide.

  17. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  18. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  19. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor in strong oceanic turbulence is also presented.

  20. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  1. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    Graph filters, direct analogues of classical filters but intended for signals defined on graphs, are one of the cornerstones of the field of signal processing on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
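
    A first-order version of such a recursion is easy to sketch. The code below is our reading of the general idea, not the paper's algorithm; the coefficients psi and phi, the graph, and the normalization are all illustrative:

      import numpy as np

      def arma1_graph_filter(L, x, psi=0.4, phi=1.0, iters=200):
          # Distributed ARMA(1) recursion: y <- psi * (L y) + phi * x.
          # Each step needs only neighbor values (one application of L).
          # For ||psi * L|| < 1 the iteration converges to the steady
          # state y = phi * inv(I - psi * L) x, a rational response in
          # the graph frequency domain.
          y = np.zeros_like(x)
          for _ in range(iters):
              y = psi * (L @ y) + phi * x
          return y

      # Example: Laplacian of a 5-node path graph, scaled for convergence.
      n = 5
      A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
      L = np.diag(A.sum(axis=1)) - A
      L = L / np.linalg.norm(L, 2)
      x = np.random.default_rng(1).normal(size=n)
      y = arma1_graph_filter(L, x)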

  2. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

  3. Estimating glomerular filtration rate (GFR) in children. The average between a cystatin C- and a creatinine-based equation improves estimation of GFR in both children and adults and enables diagnosing Shrunken Pore Syndrome.

    Science.gov (United States)

    Leion, Felicia; Hegbrant, Josefine; den Bakker, Emil; Jonsson, Magnus; Abrahamson, Magnus; Nyman, Ulf; Björk, Jonas; Lindström, Veronica; Larsson, Anders; Bökenkamp, Arend; Grubb, Anders

    2017-09-01

    Estimating glomerular filtration rate (GFR) in adults by using the average of values obtained by a cystatin C-based (eGFR(cystatin C)) and a creatinine-based (eGFR(creatinine)) equation shows at least the same diagnostic performance as GFR estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparison of eGFR(cystatin C) and eGFR(creatinine) plays a pivotal role in the diagnosis of Shrunken Pore Syndrome, where low eGFR(cystatin C) compared to eGFR(creatinine) has been associated with higher mortality in adults. The present study was undertaken to elucidate if this concept can also be applied in children. Using iohexol and inulin clearance as gold standard in 702 children, we studied the diagnostic performance of 10 creatinine-based, 5 cystatin C-based and 3 combined cystatin C-creatinine eGFR equations and compared them to the result of the average of 9 pairs of an eGFR(cystatin C) and an eGFR(creatinine) estimate. While creatinine-based GFR estimations are unsuitable in children unless calibrated in a pediatric or mixed pediatric-adult population, cystatin C-based estimations in general performed well in children. The average of a suitable creatinine-based and a cystatin C-based equation generally displayed a better diagnostic performance than estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparing eGFR(cystatin C) and eGFR(creatinine) may help identify pediatric patients with Shrunken Pore Syndrome.

  4. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (average B_z = 3.γ) than near midnight (average B_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed.

  5. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  6. How can the use of data within the immunisation programme be increased in order to improve data quality and ensure greater accountability in the health system? A protocol for implementation science study.

    Science.gov (United States)

    Tilahun, Binyam; Teklu, Alemayehu; Mancuso, Arielle; Abebaw, Zeleke; Dessie, Kassahun; Zegeye, Desalegn

    2018-05-03

    Immunisation remains one of the most important and cost-effective interventions to reduce vaccine-preventable child morbidity, disability and mortality. Health programmes like the Expanded Program of Immunization rely on complex decision-making and strong local level evidence is important to effectively and efficiently utilise limited resources. Lack of data use for decision-making at each level of the health system remains the main challenge in most developing countries. While there is much evidence on data quality and how to improve it, there is a lack of sufficient evidence on why the use of data for decision-making at each level of the health system is low. Herein, we describe a comprehensive implementation science study that will be conducted to identify organisational, technical and individual level factors affecting local data use at each level of the Ethiopian health system. We will apply a mixed methods approach using key informant interviews and document reviews. The qualitative data will be gathered through key informant interviews using a semi-structured guide with open- and closed-ended questions with four categories of respondents, namely decision-makers, data producers, data users and community representatives at the federal, regional, zonal, woreda and community levels of the health system. The document review will be conducted on selected reports and feedback documented at different levels of the health system. Data will be collected from July 2017 to March 2018. Descriptive statistics will be analysed for the quantitative study using SPSS version 20 software and thematic content analysis will be performed for the qualitative part using NVivo software. Appropriate and timely use of health and health-related information for decision-making is an essential element in the process of transforming the health sector. The findings of the study will inform stakeholders at different levels on the institutionalisation of evidence-based practice in

  7. Will immediate postoperative imbalance improve in patients with thoracolumbar/lumbar degenerative kyphoscoliosis? A comparison between Smith-Petersen osteotomy and pedicle subtraction osteotomy with an average 4 years of follow-up.

    Science.gov (United States)

    Bao, Hongda; He, Shouyu; Liu, Zhen; Zhu, Zezhang; Qiu, Yong; Zhu, Feng

    2015-03-01

    A retrospective radiographical study. To compare the compensatory behavior of coronal and sagittal alignment after pedicle subtraction osteotomy (PSO) and Smith-Petersen osteotomy (SPO) for degenerative kyphoscoliosis. There was a paucity of literature paying attention to postoperative imbalance after PSO or SPO and the natural evolution of this imbalance. A retrospective study was performed on 68 consecutive patients with degenerative kyphoscoliosis treated by lumbar PSO (25 patients) or SPO (43 patients) procedures at a single institution. Long-cassette standing radiographs were taken preoperatively, postoperatively, and at the last follow-up, and radiographical parameters were measured. The lower instrumented vertebral level and the level of osteotomy were compared between the patients with and without improvement. Negative sagittal vertical axis (SVA) was observed in the PSO group postoperatively, implying an overcorrection of SVA. This negative SVA improved spontaneously during follow-up. The lower instrumented vertebral level differed between the patients with and without persistent coronal imbalance (P = 0.027), whereas no difference in terms of the level of osteotomy was found (P > 0.05). The overcorrection of SVA is more often seen in the PSO group. Coronal imbalance is more likely to occur in the SPO group. The postoperative sagittal imbalance often improves spontaneously with time. A lower instrumented vertebra at S1 or with pelvic fixation should be regarded as a potential risk factor for persistent coronal imbalance in patients with SPO. Level of Evidence: 3.

  8. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  9. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
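
    The bias and its cure are easy to demonstrate numerically. The toy model below is our own illustration of the effect described above (the error-scaling law and all names are invented): each experiment reports an error proportional to its own measured value, so weighting by the reported variances favors low measurements.

      import numpy as np

      rng = np.random.default_rng(2)
      true, n, trials = 10.0, 20, 5000
      naive, corrected = [], []
      for _ in range(trials):
          c = rng.uniform(0.1, 0.3, size=n)  # per-experiment precision
          x = rng.normal(true, c * true)     # true errors scale with true value
          w = 1.0 / (c * x) ** 2             # errors assigned from own values
          naive.append(np.sum(w * x) / np.sum(w))
          avg = x.mean()                     # re-derive the errors from the
          for _ in range(5):                 # average itself and iterate
              w = 1.0 / (c * avg) ** 2
              avg = np.sum(w * x) / np.sum(w)
          corrected.append(avg)
      # The naive weighted mean comes out biased low; the iterated one
      # recovers the true value without giving up the precision weighting.
      print(np.mean(naive), np.mean(corrected))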

  10. The Easterlin Illusion: Economic growth does go with greater happiness

    NARCIS (Netherlands)

    R. Veenhoven (Ruut); F. Vergunst (Floris)

    2014-01-01

    The 'Easterlin Paradox' holds that economic growth in nations does not buy greater happiness for the average citizen. This thesis was advanced in the 1970s on the basis of the then available data on happiness in nations. Later data have disproved most of the empirical

  11. Greater Sudbury fuel efficient driving handbook

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-12-15

    Reducing the amount of fuel that people use for personal driving saves money, improves local air quality, and reduces personal contributions to climate change. This handbook was developed to be used as a tool for a fuel efficient driving pilot program in Greater Sudbury in 2009-2010. Specifically, the purpose of the handbook was to provide Greater Sudbury drivers with information on how to drive and maintain their personal vehicles in order to maximize fuel efficiency. The handbook also provides tips for purchasing fuel efficient vehicles. It outlines the benefits of fuel maximization, with particular reference to reducing contributions to climate change; reducing emissions of air pollutants; safe driving; and money savings. Some tips for efficient driving are to avoid aggressive driving; use cruise control; plan trips; and remove excess weight. Tips for efficient winter driving are to avoid idling to warm up the engine; use a block heater; remove snow and ice; use snow tires; and check tire pressure. The importance of car maintenance and tire pressure was emphasized. The handbook also explains how fuel consumption ratings are developed by vehicle manufacturers. refs., figs.

  12. CO2 reduction in the Danish transportation sector. Working paper 5: Technological improvement of energy efficiency. Average requirements to energy efficiency of the new vehicles. Subsidies to research and development

    International Nuclear Information System (INIS)

    1997-03-01

    Road traffic is expected to be responsible for 9/10 of the total CO2 emission from the transportation sector in 2005. Private cars in particular contribute more than half of the total CO2 emission. Cars are not produced in Denmark, so the energy efficiency of new models depends entirely on foreign manufacturers. Measurements of energy efficiency on test facilities usually show slightly better efficiency than on-the-road results. Efficiency estimates are based on test results. Within 10-15 years the whole car park will show essential efficiency improvement due to replacement with newer models. The shadow price of CO2 emission reduction is defined. (EG) Prepared for Trafikministeriet. 27 refs.

  13. Planning for greater-confinement disposal

    International Nuclear Information System (INIS)

    Gilbert, T.L.; Luner, C.; Meshkov, N.K.; Trevorrow, L.E.; Yu, C.

    1984-01-01

    This contribution is a progress report for preparation of a document that will summarize procedures and technical information needed to plan for and implement greater-confinement disposal (GCD) of low-level radioactive waste. Selection of a site and a facility design (Phase I), and construction, operation, and extended care (Phase II) will be covered in the document. This progress report is limited to Phase I. Phase I includes determination of the need for GCD, design alternatives, and selection of a site and facility design. Alternative designs considered are augered shafts, deep trenches, engineered structures, high-integrity containers, hydrofracture, and improved waste form. Design considerations and specifications, performance elements, cost elements, and comparative advantages and disadvantages of the different designs are covered. Procedures are discussed for establishing overall performance objectives and waste-acceptance criteria, and for comparative assessment of the performance and cost of the different alternatives. 16 references

  14. Greater confinement disposal of radioactive wastes

    International Nuclear Information System (INIS)

    Trevorrow, L.E.; Gilbert, T.L.; Luner, C.; Merry-Libby, P.A.; Meshkov, N.K.; Yu, C.

    1985-01-01

    Low-level radioactive waste (LLW) includes a broad spectrum of different radionuclide concentrations, half-lives, and hazards. Standard shallow-land burial practice can provide adequate protection of public health and safety for most LLW. A small volume fraction (approx. 1%) containing most of the activity inventory (approx. 90%) requires specific measures known as greater-confinement disposal (GCD). Different site characteristics and different waste characteristics - such as high radionuclide concentrations, long radionuclide half-lives, high radionuclide mobility, and physical or chemical characteristics that present exceptional hazards - lead to different GCD facility design requirements. Facility design alternatives considered for GCD include the augered shaft, deep trench, engineered structure, hydrofracture, improved waste form, and high-integrity container. Selection of an appropriate design must also consider the interplay between basic risk limits for protection of public health and safety, performance characteristics and objectives, costs, waste-acceptance criteria, waste characteristics, and site characteristics

  15. Planning for greater-confinement disposal

    International Nuclear Information System (INIS)

    Gilbert, T.L.; Luner, C.; Meshkov, N.K.; Trevorrow, L.E.; Yu, C.

    1984-01-01

    This contribution is a progress report for preparation of a document that will summarize procedures and technical information needed to plan for and implement greater-confinement disposal (GCD) of low-level radioactive waste. Selection of a site and a facility design (Phase I), and construction, operation, and extended care (Phase II) will be covered in the document. This progress report is limited to Phase I. Phase I includes determination of the need for GCD, design alternatives, and selection of a site and facility design. Alternative designs considered are augered shafts, deep trenches, engineered structures, high-integrity containers, hydrofracture, and improved waste form. Design considerations and specifications, performance elements, cost elements, and comparative advantages and disadvantages of the different designs are covered. Procedures are discussed for establishing overall performance objectives and waste-acceptance criteria, and for comparative assessment of the performance and cost of the different alternatives. 16 refs

  16. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  17. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization Shift Keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA excels the other modulation schemes and could achieve an ASE performance of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  18. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  19. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
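
    The basic primitive is the weighted harmonic average itself; the sketch below shows only that building block (the paper's recursive construction of GENs over a whole graph is not reproduced, and the function name is ours):

      def weighted_harmonic_average(values, weights):
          # Weighted harmonic mean: sum(w) / sum(w / v). Small values
          # dominate, so one strong (high-weight, low-cost) connection
          # pulls the average down far more than many weak ones -- the
          # behavior that lets a GEN-style measure capture asymmetry.
          return sum(weights) / sum(w / v for w, v in zip(weights, values))

      # Example: one close collaborator (value 1, weight 5) outweighs two
      # distant ones (value 10, weight 1 each): the result is about 1.35.
      print(weighted_harmonic_average([1, 10, 10], [5, 1, 1]))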

  20. Socio-economic considerations of cleaning Greater Vancouver's air

    International Nuclear Information System (INIS)

    2005-08-01

    The socio-economic implications of better air quality for the Greater Vancouver population and economy were discussed. The purpose of the study was to provide socio-economic information to staff and stakeholders of the Greater Vancouver Regional District (GVRD) who are participating in an Air Quality Management Plan (AQMP) development process and the Sustainable Region Initiative (SRI) process. The study incorporated the following methodologies: identification and review of Canadian, American, and European quantitative socio-economic, cost-benefit, cost-effectiveness, competitiveness and health analyses of changes in air quality and measures to improve air quality; interviews with industry representatives in Greater Vancouver on competitiveness impacts of air quality changes and ways to improve air quality; and a qualitative analysis and discussion of secondary quantitative information that identifies and evaluates socio-economic impacts arising from changes in Greater Vancouver air quality. The study concluded that the qualitative analysis of an improvement in Greater Vancouver air quality shows positive socio-economic outcomes for the region: high positive economic efficiency impacts are expected, along with good social quality-of-life impacts. 149 refs., 30 tabs., 6 appendices

  1. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
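
    For comparison, the standard pairwise gossip baseline that the authors improve upon can be sketched in a few lines (a minimal illustration; the geographic routing and resampling steps of the proposed algorithm are not implemented):

      import numpy as np

      rng = np.random.default_rng(3)
      n = 50
      x = rng.normal(size=n)       # node values; the goal is the global mean
      target = x.mean()

      for _ in range(20_000):
          i = rng.integers(n)                # a random node wakes up
          j = (i + rng.choice([-1, 1])) % n  # picks a random ring neighbor
          x[i] = x[j] = 0.5 * (x[i] + x[j])  # both adopt the pairwise average

      # Convergence on the ring is slow (mixing time of order n^2), which
      # is exactly the inefficiency geographic gossip is designed to beat.
      print(np.max(np.abs(x - target)))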

  2. Greater trochanteric pain syndrome diagnosis and treatment.

    Science.gov (United States)

    Mallow, Michael; Nazarian, Levon N

    2014-05-01

    Lateral hip pain, or greater trochanteric pain syndrome, is a commonly seen condition; in this article, the relevant anatomy, epidemiology, and evaluation strategies of greater trochanteric pain syndrome are reviewed. Specific attention is focused on imaging of this syndrome and treatment techniques, including ultrasound-guided interventions. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  4. Average radiation weighting factors for specific distributed neutron spectra

    International Nuclear Information System (INIS)

    Ninkovic, M.M.; Raicevic, J.J.

    1993-01-01

    Spectrum-averaged radiation weighting factors for 6 specific neutron fields in the environment of 3 categories of neutron sources (fission, spontaneous fission and (α,n)) are determined in this paper. The obtained values of these factors are 1.5 to 2 times greater than the corresponding quality factors used for the same purpose until a few years ago. This fact is very important to keep in mind when converting neutron fluence into neutron dose equivalent. (author)

  5. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; 
Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  6. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The biases induced by the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.

  7. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The biases induced by the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
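
    The averaging bias the two MERLIN records above describe can be reproduced in a few lines: applying a non-linear (logarithmic) retrieval to averaged signals is not the same as averaging the shot-wise retrievals. The Python sketch below uses a toy differential-absorption expression with made-up noise levels; it illustrates the Jensen-type gap, not MERLIN's actual processing chain.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50_000                                     # pulse pairs averaged over a segment
        e_on = 1.0 + 0.1 * rng.standard_normal(n)      # noisy on-line returns (toy numbers)
        e_off = 2.0 + 0.1 * rng.standard_normal(n)     # noisy off-line returns (toy numbers)

        # toy differential absorption optical depth: daod = 0.5 * ln(E_off / E_on)
        mean_of_logs = np.mean(0.5 * np.log(e_off / e_on))           # retrieve, then average
        log_of_means = 0.5 * np.log(np.mean(e_off) / np.mean(e_on))  # average, then retrieve

        print(mean_of_logs, log_of_means, mean_of_logs - log_of_means)
        # the difference between the two estimates is the averaging bias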

  8. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  9. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  10. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  11. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  12. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  13. Declining average daily census. Part 1: Implications and options.

    Science.gov (United States)

    Weil, T P

    1985-12-01

    A national trend toward declining average daily (inpatient) census (ADC) started in late 1982, even before the Medicare prospective payment system began. The decrease in total days will continue despite an increasing number of aged persons in the U.S. population. This decline could have been predicted from trends during 1978 to 1983, such as increasing available beds but decreasing occupancy, 100 percent increases in hospital expenses, and declining lengths of stay. Assuming that health care costs will remain a relatively fixed part of the gross national product and no major medical advances will occur in the next five years, certain implications and options exist for facilities experiencing a declining ADC. This article discusses several considerations: attempts to improve market share; reduction of full-time equivalent employees; impact of greater acuity of illness among remaining inpatients; implications of increasing the number of physicians on medical staffs; the option of a closed medical staff by clinical specialty; unbundling with not-for-profit and profit-making corporations; review of mergers, consolidations, and multihospital systems to decide when this option is most appropriate; sale of a not-for-profit hospital to an investor-owned chain, with the implications facing Catholic hospitals choosing this option; the impact and difficulty of developing meaningful alternative health care systems with the hospital's medical staff; special problems of teaching hospitals; the social issue of the hospital shifting from the community's health center to a cost center; and increased turnover of hospital CEOs. With these in mind, institutions can then focus on solutions that can sometimes be used in tandem to resolve this problem's impact. The second part of this article will discuss some of them.

  14. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment, Vol. 16 (revised 2010-07-01): Environmental Protection Agency (Continued), Air Programs (Continued), Acid Rain Nitrogen Oxides Emission Reduction Program, § 76.11 Emissions averaging. (a) General...

  15. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  16. MR Neurography of Greater Occipital Nerve Neuropathy: Initial Experience in Patients with Migraine.

    Science.gov (United States)

    Hwang, L; Dessouky, R; Xi, Y; Amirlak, B; Chhabra, A

    2017-11-01

    MR imaging of peripheral nerves (MR neurography) allows improved assessment of nerve anatomy and pathology. The objective of this study was to evaluate patients with unilateral occipital neuralgia using MR neurography and to assess the differences in greater occipital nerve signal and size between the symptomatic and asymptomatic sides. In this case-control evaluation using MR neurography, bilateral greater occipital nerve caliber, signal intensity, signal-to-noise ratios, and contrast-to-noise ratios were determined by 2 observers. Among 18 subjects with unilateral occipital migraines, the average greater occipital nerve diameter for the symptomatic side was significantly greater at 1.77 ± 0.4 mm than for the asymptomatic side at 1.29 ± 0.25 mm (P = .001). The difference in nerve signal intensity between the symptomatic and asymptomatic sides was statistically significant at 269.06 ± 170.93 and 222.44 ± 170.46, respectively (P = .043). The signal-to-noise ratios on the symptomatic side were higher at 15.79 ± 4.59 compared with the asymptomatic nerve at 14.02 ± 5.23 (P = .009). Contrast-to-noise ratios were significantly higher on the symptomatic side than on the asymptomatic side at 2.57 ± 4.89 and -1.26 ± 5.02, respectively (P = .004). Intraobserver performance was good to excellent (intraclass correlation coefficient, 0.68-0.93), and interobserver performance was fair to excellent (intraclass correlation coefficient, 0.54-0.81). MR neurography can be reliably used for the diagnosis of greater occipital nerve neuropathy in patients with unilateral occipital migraines, with a good correlation of imaging findings to the clinical presentation. © 2017 by American Journal of Neuroradiology.

  17. Greater trochanteric fracture with occult intertrochanteric extension.

    Science.gov (United States)

    Reiter, Michael; O'Brien, Seth D; Bui-Mansfield, Liem T; Alderete, Joseph

    2013-10-01

    Proximal femoral fractures are frequently encountered in the emergency department (ED). Prompt diagnosis is paramount as delay will exacerbate the already poor outcomes associated with these injuries. In cases where radiography is negative but clinical suspicion remains high, magnetic resonance imaging (MRI) is the study of choice as it has the capability to depict fractures which are occult on other imaging modalities. Awareness of a particular subset of proximal femoral fractures, namely greater trochanteric fractures, is vital for both radiologists and clinicians since it has been well documented that they invariably have an intertrochanteric component which may require surgical management. The detection of intertrochanteric or cervical extension of greater trochanteric fractures has been described utilizing MRI but is underestimated with both computed tomography (CT) and bone scan. Therefore, if MRI is unavailable or contraindicated, the diagnosis of an isolated greater trochanteric fracture should be met with caution. The importance of avoiding this potential pitfall is demonstrated in the following case of an elderly woman with hip pain and CT demonstrating an isolated greater trochanteric fracture who subsequently returned to the ED with a displaced intertrochanteric fracture.

  18. Greater Somalia, the never-ending dream?

    DEFF Research Database (Denmark)

    Zoppi, Marco

    2015-01-01

    This paper provides an historical analysis of the concept of Greater Somalia, the nationalist project that advocates the political union of all Somali-speaking people, including those inhabiting areas in current Djibouti, Ethiopia and Kenya. The Somali territorial unification project of “lost...

  19. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  20. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of one point per 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
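
    The quoted 36 dB maximum S/N improvement is consistent with coherently averaging N = 2¹² sweeps, since averaging improves the amplitude signal-to-noise ratio by √N and 20·log₁₀(√4096) ≈ 36.1 dB. Below is a minimal Python sketch of that arithmetic, together with a running-mean update that, like 'stable averaging', keeps the displayed average calibrated after every sweep; the waveform and noise are synthetic stand-ins for an NMR signal.

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0, 1, 256)              # 256 channels, as in the instrument
        signal = np.exp(-t / 0.3)               # idealized FID-like decay
        n_sweeps = 2 ** 12

        avg = np.zeros_like(signal)
        for k in range(1, n_sweeps + 1):
            sweep = signal + rng.standard_normal(t.size)  # unit-variance noise
            avg += (sweep - avg) / k            # running mean, valid after each sweep

        print(20 * np.log10(np.sqrt(n_sweeps)))  # expected S/N gain: ~36.1 dB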

  1. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
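
    The meaning of an isotropic (rotational) average can be checked numerically by brute force: sample random orientations and average the oriented property. For the simplest one-photon case, the orientational average of |d·e|² over random unit vectors e equals |d|²/3. The Python sketch below is only this Monte Carlo baseline, not the closed-form tensor averages derived in the paper; the dipole components are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)
        d = np.array([0.3, -1.2, 0.8])           # fixed molecular transition dipole (arbitrary)

        v = rng.standard_normal((200_000, 3))
        e = v / np.linalg.norm(v, axis=1, keepdims=True)   # isotropic random unit vectors

        mc_average = np.mean((e @ d) ** 2)
        print(mc_average, np.dot(d, d) / 3)      # agree to within Monte Carlo error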

  2. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  3. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic

  4. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  5. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  6. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  7. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  8. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    We present a new iterative method for the calculation of the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, and the arrival rate, average packet length or input rate of the traffic flows. We verify the model's outcome with examples and with simulation results obtained using the NS2 simulator.
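
    A minimal sketch of the iterative idea, under simplifying assumptions (one link, each flow characterized only by a weight and an input rate): a flow never receives more than it offers, and the capacity released by rate-limited flows is redistributed among the remaining flows in proportion to their weights. The function and numbers below are illustrative, not the paper's exact model.

        def wfq_average_bandwidth(link_speed, weights, input_rates):
            flows = set(range(len(weights)))
            share = [0.0] * len(weights)
            capacity = link_speed
            while flows:
                total_w = sum(weights[i] for i in flows)
                # tentative weighted share of the remaining capacity
                tentative = {i: capacity * weights[i] / total_w for i in flows}
                capped = {i for i in flows if input_rates[i] <= tentative[i]}
                if not capped:               # every remaining flow can use its share
                    for i in flows:
                        share[i] = tentative[i]
                    break
                for i in capped:             # rate-limited flows take their input rate
                    share[i] = input_rates[i]
                    capacity -= input_rates[i]
                flows -= capped
            return share

        print(wfq_average_bandwidth(100.0, [1, 2, 2], [10.0, 80.0, 50.0]))
        # [10.0, 45.0, 45.0]: flow 0 is input-rate limited; the remaining
        # 90 units are split between the two equally weighted flows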

  9. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed to computation of time-dependent statistical average gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  10. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  11. Utilization of wind energy in greater Hanover

    International Nuclear Information System (INIS)

    Sahling, U.

    1993-01-01

    Since the beginning of the Eighties, the association of communities of Greater Hanover has dealt intensively with energy and ecopolitical questions in the scope of regional planning. Renewable energy sources play a dominant role in this context. This brochure is the third contribution to the subject "Energy policy and environmental protection". It addresses experts as well as other interested parties. For all 8 contributions contained, separate entries have been recorded in this database. (BWI) [de]

  12. Small cities face greater impact from automation

    OpenAIRE

    Frank, Morgan R.; Sun, Lijun; Cebrian, Manuel; Youn, Hyejin; Rahwan, Iyad

    2017-01-01

    The city has proven to be the most successful form of human agglomeration and provides wide employment opportunities for its dwellers. As advances in robotics and artificial intelligence revive concerns about the impact of automation on jobs, a question looms: How will automation affect employment in cities? Here, we provide a comparative picture of the impact of automation across U.S. urban areas. Small cities will undertake greater adjustments, such as worker displacement and job content su...

  13. The Greater Sekhukhune-CAPABILITY outreach project.

    Science.gov (United States)

    Gregersen, Nerine; Lampret, Julie; Lane, Tony; Christianson, Arnold

    2013-07-01

    The Greater Sekhukhune-CAPABILITY Outreach Project was undertaken in a rural district in Limpopo, South Africa, as part of the European Union-funded CAPABILITY programme to investigate approaches for capacity building for the translation of genetic knowledge into care and prevention of congenital disorders. Based on previous experience of a clinical genetic outreach programme in Limpopo, it aimed to initiate a district clinical genetic service in Greater Sekhukhune to gain knowledge and experience to assist in the implementation and development of medical genetic services in South Africa. Implementing the service in Greater Sekhukhune was impeded by a developing staff shortage in the province and pressure on the health service from the existing HIV/AIDS and TB epidemics. This situation underscores the need for health needs assessment for developing services for the care and prevention of congenital disorders in middle- and low-income countries. However, these impediments stimulated the pioneering of innovative ways to offer medical genetic services in these circumstances, including tele-teaching of nurses and doctors, using cellular phones to enhance clinical care, and adapting and assessing the clinical utility of a laboratory test, QF-PCR, for use in the local circumstances.

  14. Greater happiness for a greater number: Is that possible in Austria?

    NARCIS (Netherlands)

    R. Veenhoven (Ruut)

    2011-01-01

    What is the final goal of public policy? Jeremy Bentham (1789) would say: greater happiness for a greater number. He thought of happiness as subjective enjoyment of life; in his words as “the sum of pleasures and pains”. In his time the happiness of the great number could not be measured

  15. Greater happiness for a greater number: Is that possible? If so how? (Arabic)

    NARCIS (Netherlands)

    R. Veenhoven (Ruut); E. Samuel (Emad)

    2012-01-01

    What is the final goal of public policy? Jeremy Bentham (1789) would say: greater happiness for a greater number. He thought of happiness as subjective enjoyment of life; in his words as “the sum of pleasures and pains”. In his time, the happiness of the great number could not be

  16. Greater happiness for a greater number: Is that possible in Germany?

    NARCIS (Netherlands)

    R. Veenhoven (Ruut)

    2009-01-01

    What is the final goal of public policy? Jeremy Bentham (1789) would say: greater happiness for a greater number. He thought of happiness as subjective enjoyment of life; in his words as “the sum of pleasures and pains”. In his time the happiness of the great number could not be measured

  17. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
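
    The baseline behind the first variant is easy to state in code: repeatedly pick a random pair of nodes and replace both of their values by the pair's mean. The sum is conserved, so every node contracts toward the global average. The Python sketch below shows this idealized pairwise scheme; it is the classical setting whose asynchronous version, as the abstract notes, may fail to converge to the desired average.

        import random

        random.seed(42)
        values = [10.0, 0.0, 4.0, 6.0, 5.0]      # initial node values, mean = 5.0
        for _ in range(2000):
            i, j = random.sample(range(len(values)), 2)
            values[i] = values[j] = (values[i] + values[j]) / 2

        print([round(v, 4) for v in values])     # all entries close to 5.0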

  18. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of details/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  19. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by ±2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
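
    The core computation can be sketched with a single-centre polar resampling: express each digitized outline as radii sampled at equiangular rays and average the radii across outlines. This is a simplification of the paper's two-ray-centre, two-arc construction; the elliptical "outlines" below are synthetic stand-ins for footprints.

        import numpy as np

        def radial_profile(xy, n_rays=90):
            centre = xy.mean(axis=0)
            d = xy - centre
            angles = np.arctan2(d[:, 1], d[:, 0])
            radii = np.hypot(d[:, 0], d[:, 1])
            order = np.argsort(angles)
            grid = np.linspace(-np.pi, np.pi, n_rays, endpoint=False)
            return np.interp(grid, angles[order], radii[order], period=2 * np.pi)

        t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
        foot1 = np.c_[10.0 * np.cos(t), 4.0 * np.sin(t)]
        foot2 = np.c_[10.4 * np.cos(t), 3.8 * np.sin(t)]

        mean_profile = (radial_profile(foot1) + radial_profile(foot2)) / 2
        print(mean_profile[:5])                  # averaged radii, one per ray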

  20. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
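
    The study's key construction is a trailing moving average: each year's literary misery is compared with the mean economic misery of the preceding window. A Python sketch with synthetic stand-in series (not the paper's data), using the 11-year window reported above:

        import numpy as np

        rng = np.random.default_rng(3)
        years = np.arange(1929, 2010)
        economic = 6 + 3 * rng.standard_normal(years.size)   # misery = inflation + unemployment
        window = 11

        trailing = np.convolve(economic, np.ones(window) / window, mode="valid")
        literary = 0.8 * trailing + 0.5 * rng.standard_normal(trailing.size)  # synthetic

        r = np.corrcoef(trailing, literary)[0, 1]
        print(f"correlation with {window}-year trailing average: {r:.2f}")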

  1. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  2. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  3. Search for greater stability in nuclear regulation

    International Nuclear Information System (INIS)

    Asselstine, J.K.

    1985-01-01

    The need for greater stability in nuclear regulation is discussed, along with two possible approaches for dealing with the problems of new and rapidly changing regulatory requirements. The first approach relies on the more traditional licensing reform initiatives that have been considered off and on for the past decade. The second approach considers a new regulatory philosophy aimed at the root causes of the proliferation of new safety requirements that have been imposed in recent years. For the past few years, the concepts of deregulation and regulatory reform have been in fashion in Washington, and the commercial nuclear power program has not remained unaffected. Many look to these concepts to provide greater stability in the regulatory program. The NRC, the nuclear industry and the administration have all been avidly pursuing regulatory reform initiatives, which take the form of both legislative and administrative proposals. Many of these proposals look to the future and, if adopted, would have little impact on currently operating nuclear power plants or plants now under construction

  4. Women at greater risk of HIV infection.

    Science.gov (United States)

    Mahathir, M

    1997-04-01

    Although many people believe that mainly men get infected with HIV/AIDS, women are actually getting infected at a faster rate than men, especially in developing countries, and suffer more from the adverse impact of AIDS. As of mid-1996, the Joint UN Program on AIDS estimated that more than 10 million of the 25 million adults infected with HIV since the beginning of the epidemic are women. The proportion of HIV-positive women is growing, with almost half of the 7500 new infections daily occurring among women. 90% of HIV-positive women live in a developing country. In Asia-Pacific, 1.4 million women have been infected with HIV out of an estimated total 3.08 million adults from the late 1970s until late 1994. Biologically, women are more vulnerable than men to infection because of the greater mucus area exposed to HIV during penile penetration. Women under age 17 years are at even greater risk because they have an underdeveloped cervix and low vaginal mucus production. Concurrent sexually transmitted diseases increase the risk of HIV transmission. Women's risk is also related to their exposure to gender inequalities in society. The social and economic pressures of poverty exacerbate women's risk. Prevention programs are discussed.

  5. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m^B̄ + Ω̄_R^B̄ + Ω̄_Λ^B̄ + Ω̄_Q^B̄ = 1, where Ω̄_m^B̄, Ω̄_R^B̄ and Ω̄_Λ^B̄ correspond to the standard Friedmannian parameters, while Ω̄_Q^B̄ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  6. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  7. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  8. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can ... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
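
    The element-wise trimming that TGA relies on can be illustrated without the Grassmannian machinery: drop a fraction of the largest and smallest values in each coordinate before averaging, so that gross outliers stop dominating the mean. A toy Python sketch (the data, corruption and trim fraction are arbitrary):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        data = rng.standard_normal((100, 5))
        data[:5] += 50                    # five grossly corrupted observations

        print(data.mean(axis=0).round(2))                   # pulled far off by outliers
        print(stats.trim_mean(data, 0.1, axis=0).round(2))  # trimmed average stays near 0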

  9. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
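
    A minimal sketch of the averaging rule itself: weight each model's prediction by its normalized evidence, so that accuracy and complexity (both folded into the evidence) determine the weights. The log-evidences and predictions below are made-up numbers.

        import numpy as np

        log_evidence = np.array([-10.2, -11.0, -14.7])   # one entry per candidate model
        predictions = np.array([0.70, 0.55, 0.20])       # each model's prediction

        w = np.exp(log_evidence - log_evidence.max())
        w /= w.sum()                                     # posterior model weights
        print(w.round(3), np.dot(w, predictions))        # model-averaged prediction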

  10. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)

  11. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with an average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department

  12. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  13. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  14. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  15. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies.

  16. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of physics, July 2007, pp. 31–47. In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no.

  17. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  18. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
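
    A typical rule of the kind studied in this literature: hold a long position when a short moving average of prices is above a long one, and a short position otherwise. A Python sketch on a synthetic random-walk price series (window lengths are arbitrary):

        import numpy as np

        rng = np.random.default_rng(7)
        prices = 100 + np.cumsum(rng.standard_normal(500))

        def moving_average(x, n):
            return np.convolve(x, np.ones(n) / n, mode="valid")

        short, long_ = 10, 50
        ma_short = moving_average(prices, short)[long_ - short:]  # align end points
        ma_long = moving_average(prices, long_)
        position = np.where(ma_short > ma_long, 1, -1)            # +1 long, -1 short
        print(position[:10])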

  19. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, namely the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  20. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    Title 7, Agriculture, Vol. 10 (revised 2010-01-01): Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements...), Mushroom Promotion, Research, and Consumer Information Order, Definitions, § 1209...

  1. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  2. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well-known EOQ model it can be verified that (under certain conditions) the AC approach gives

  3. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  4. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  5. Small cities face greater impact from automation.

    Science.gov (United States)

    Frank, Morgan R; Sun, Lijun; Cebrian, Manuel; Youn, Hyejin; Rahwan, Iyad

    2018-02-01

    The city has proved to be the most successful form of human agglomeration and provides wide employment opportunities for its dwellers. As advances in robotics and artificial intelligence revive concerns about the impact of automation on jobs, a question looms: how will automation affect employment in cities? Here, we provide a comparative picture of the impact of automation across US urban areas. Small cities will undertake greater adjustments, such as worker displacement and job content substitutions. We demonstrate that large cities exhibit increased occupational and skill specialization due to increased abundance of managerial and technical professions. These occupations are not easily automatable, and, thus, reduce the potential impact of automation in large cities. Our results pass several robustness checks including potential errors in the estimation of occupational automation and subsampling of occupations. Our study provides the first empirical law connecting two societal forces: urban agglomeration and automation's impact on employment. © 2018 The Authors.

  6. Small cities face greater impact from automation

    Science.gov (United States)

    Sun, Lijun; Cebrian, Manuel; Rahwan, Iyad

    2018-01-01

    The city has proved to be the most successful form of human agglomeration and provides wide employment opportunities for its dwellers. As advances in robotics and artificial intelligence revive concerns about the impact of automation on jobs, a question looms: how will automation affect employment in cities? Here, we provide a comparative picture of the impact of automation across US urban areas. Small cities will undertake greater adjustments, such as worker displacement and job content substitutions. We demonstrate that large cities exhibit increased occupational and skill specialization due to increased abundance of managerial and technical professions. These occupations are not easily automatable, and, thus, reduce the potential impact of automation in large cities. Our results pass several robustness checks including potential errors in the estimation of occupational automation and subsampling of occupations. Our study provides the first empirical law connecting two societal forces: urban agglomeration and automation's impact on employment. PMID:29436514

  7. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  8. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  9. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  10. Setting the Greater Mekong Subregion - Development Analysis ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    The funding will support the first stage of a two-stage research program in ...

  11. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).

  12. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
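
    The estimator the two records above analyze is simple to state: each new periodogram is blended into the running PSD estimate with a forgetting weight. A Python sketch for a unit-variance white-noise test process, whose flat PSD of 1 the estimate should approach; the weight alpha stands in for the averaging time constant.

        import numpy as np

        rng = np.random.default_rng(11)
        n_seg, seg_len, alpha = 200, 256, 0.05

        psd = np.zeros(seg_len // 2 + 1)
        for _ in range(n_seg):
            x = rng.standard_normal(seg_len)              # white-noise segment
            pxx = np.abs(np.fft.rfft(x)) ** 2 / seg_len   # raw periodogram
            psd = (1 - alpha) * psd + alpha * pxx         # exponential averaging

        print(psd.mean())                                 # close to 1.0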

  13. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average labour productivity in agriculture, forestry and fishing. The analysis will take into account data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average labour productivity by the factors affecting it is conducted by means of the u-substitution method.

  14. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  15. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  16. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  17. Evaluation of The Surface Ozone Concentrations In Greater Cairo Area With Emphasis On Helwan, Egypt

    International Nuclear Information System (INIS)

    Ramadan, A.; Kandil, A.T.; Abd Elmaged, S.M.; Mubarak, I.

    2011-01-01

    Various biogenic and anthropogenic sources emit huge quantities of surface ozone. The main purpose of this study is to evaluate the surface ozone levels present in the Helwan area in order to improve knowledge and understanding of tropospheric processes. Surface ozone has been measured at 2 sites at Helwan; these sites cover the most populated area in Helwan. Ozone concentration is continuously monitored by UV absorption photometry using an O3 41M UV Photometric Ozone Analyzer. The daily maximum values of the ozone concentration in the greater Cairo area approached, but did not exceed, the critical levels during the year 2008. Higher ozone concentrations at Helwan are mainly due to the transport of ozone from regions further to the north of greater Cairo and, to a lesser extent, to ozone generated locally by the photochemical smog process. The summer season has the largest diurnal variation, with the daily ozone maxima tending to occur in the late afternoon. The night-time concentration of ozone was significantly higher at Helwan because there are no fast-acting sinks destroying ozone; the average night-time concentration is maintained at 40 ppb at the site. No correlation between the diurnal total suspended particulate (TSP) matter and the diurnal cumulative ozone concentration was observed during the Khamasin period

  18. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
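
    The record gives no implementation details, so the following is one plausible reading of a migration-velocity-adaptive moving average: the smoothing window widens with migration time, since slower analytes produce broader, lower-frequency peaks. The base window and growth rate are assumed parameters, not values from the paper.

    ```python
    # Sketch (assumptions, not the published algorithm): a moving average
    # whose window grows with the migration time of each data point.
    import numpy as np

    def adaptive_moving_average(signal, fs, base_window_s=1.0, growth=0.05):
        """fs: sampling rate (Hz); base_window_s: window at t = 0 (s);
        growth: fractional window growth per second (assumed parameter)."""
        out = np.empty(len(signal), dtype=float)
        for i in range(len(signal)):
            t = i / fs  # migration time of this point
            half = max(1, int(0.5 * fs * base_window_s * (1.0 + growth * t)))
            lo, hi = max(0, i - half), min(len(signal), i + half + 1)
            out[i] = np.mean(signal[lo:hi])
        return out
    ```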

  19. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
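
    As a generic illustration of the combination step (the paper's actual posterior weighting is not reproduced in the record), model-averaged cluster memberships can be formed as a posterior-weighted mixture of each model's membership probabilities; all numbers below are hypothetical.

    ```python
    # Hypothetical Bayesian-model-averaged clustering of 3 individuals
    # into 2 phenotype clusters from two competing models.
    import numpy as np

    p_lca = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])  # latent class analysis
    p_gom = np.array([[0.7, 0.3], [0.1, 0.9], [0.5, 0.5]])  # grade of membership
    w = np.array([0.6, 0.4])             # assumed posterior model probabilities

    p_avg = w[0] * p_lca + w[1] * p_gom  # model-averaged membership probabilities
    phenotype = p_avg.argmax(axis=1)     # consensus phenotype for linkage analysis
    ```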

  20. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding of categorization practices in design through a case study of the virtual community Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer’s disregard of marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  1. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An aggregate measure of period mortality seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  2. Strontium isotopic geochemistry of intrusive rocks, Puerto Rico, Greater Antilles

    International Nuclear Information System (INIS)

    Jones, L.M.; Kesler, S.E.

    1980-01-01

    The strontium isotope geochemistry is given for three Puerto Rican intrusive rocks: the granodioritic Morovis and San Lorenzo plutons and the Rio Blanco stock of quartz dioritic composition. The average calculated initial ⁸⁷Sr/⁸⁶Sr ratios are 0.70370, 0.70355 and 0.70408, respectively. In addition, the San Lorenzo data establish a whole-rock isochron of 71 ± 2 m.y., which agrees with the previously reported K-Ar age of 73 m.y. Similarity of most of the intrusive rocks in the Greater Antilles with respect to their strontium isotopic geochemistry, regardless of their major element composition, indicates that intrusive magmas with a wide range of composition can be derived from a single source material. The most likely source material, in view of the available isotopic data, is the mantle wedge overlying the subduction zone. (orig.)

  3. Plastic Foam Withstands Greater Temperatures And Pressures

    Science.gov (United States)

    Cranston, John A.; Macarthur, Doug

    1993-01-01

    Improved plastic foam suitable for use in foam-core laminated composite parts and in tooling for making fiber/matrix-composite parts. Stronger at high temperatures, more thermally and dimensionally stable, machinable, resistant to chemical degradation, and less expensive. Compatible with variety of matrix resins. Made of polyisocyanurate blown with carbon dioxide and has density of 12 to 15 pounds per cubic foot. Does not contribute to depletion of ozone from atmosphere. Improved foam used in cores of composite panels in such diverse products as aircraft, automobiles, railroad cars, boats, and sporting equipment like surfboards, skis, and skateboards. Also used in thermally stable flotation devices in submersible vehicles. Machined into mandrels upon which filaments wound to make shells.

  4. Urban acid deposition in Greater Manchester

    Energy Technology Data Exchange (ETDEWEB)

    Lee, D.S.; Longhurst, J.W.S.; Gee, D.R.; Hare, S.E. (Manchester Polytechnic, Manchester (UK). Acid Rain Information Centre)

    1989-08-01

    Data are presented from a monitoring network of 18 bulk precipitation collectors and one wet-only collector in the urban area of Greater Manchester, in the north west of England. Weekly samples were analysed for all the major ions in precipitation along with gaseous nitrogen dioxide concentrations from diffusion tubes. Statistical analysis of the data shows significant spatial variation of non-marine sulphate, nitrate, ammonium, acidity and calcium concentrations, and nitrogen dioxide concentrations. Calcium is thought to be responsible for the buffering of acidity and is of local origin. Wet deposition, probably by below-cloud scavenging, is the likely removal process for calcium in the atmosphere. Nitrate and ammonium concentrations and depositions show close spatial, temporal and statistical association. Examination of high simultaneous episodes of nitrate and ammonium deposition shows that these depositions cannot be explained in terms of trajectories, and it is suggested that UK emissions of ammonia may be important. Statistical analysis of the relationships between nitrate and ammonium depositions, concentrations and precipitation amount suggests that ammonia from mesoscale sources reacts reversibly with nitric acid aerosol and is removed by below-cloud scavenging. High episodes of the deposition of non-marine sulphate are difficult to explain by trajectory analysis alone, perhaps suggesting local sources. In a comparison between wet deposition and bulk deposition, it was shown that only 15.2% of the non-marine sulphur was dry deposited to the bulk precipitation collector. 63 refs., 86 figs., 31 tabs.

  5. Application of NMR circuit for superconducting magnet using signal averaging

    International Nuclear Information System (INIS)

    Yamada, R.; Ishimoto, H.; Shea, M.F.; Schmidt, E.E.; Borer, K.

    1977-01-01

    An NMR circuit was used to measure the absolute field values of Fermilab Energy Doubler magnets up to 44 kG. A signal averaging method to improve the S/N ratio was implemented by means of a Tektronix Digital Processing Oscilloscope, followed by the development of an inexpensive microprocessor based system contained in a NIM module. Some of the data obtained from measuring two superconducting dipole magnets are presented

  6. Average Case Analysis of Java 7's Dual Pivot Quicksort

    OpenAIRE

    Wild, Sebastian; Nebel, Markus E.

    2013-01-01

    Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting method for Oracle's Java 7 runtime library. The decision for the change was based on empirical studies showing that on average, the new algorithm is faster than the formerly used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot approach, an idea that was considered not promising by several theoretical studies in the past. In this paper, we identify the reason for this unexpe...

  7. 40 CFR 63.1035 - Quality improvement program for pumps.

    Science.gov (United States)

    2010-07-01

    ... (e.g., piston, horizontal or vertical centrifugal, gear, bellows); pump manufacturer; seal type and manufacturer ... 40 CFR, Protection of Environment, Vol. 10 (revised 2010-07-01) ... § 63.1035 Quality improvement program for pumps. (a) Criteria. If, on a 6-month rolling average, at least the greater of either ...

  8. 40 CFR 63.176 - Quality improvement program for pumps.

    Science.gov (United States)

    2010-07-01

    ... seal type (e.g., piston, horizontal or vertical centrifugal, gear, bellows); pump manufacturer; seal type ... 40 CFR, Protection of Environment, Vol. 9 (revised 2010-07-01) ... § 63.176 Quality improvement program for pumps. (a) In Phase III, if, on a 6-month rolling average, the greater of either 10 ...

  9. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of exploiting hardware averaging to improve noise performance for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and the high channel count in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
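
    The 1/√N scaling quoted in the record is easy to check numerically. The toy below averages N simulated channels that see the same signal but independent amplifier noise; it models only the statistics, not the amplifier design.

    ```python
    # Averaging N noisy copies of the same signal reduces the noise
    # standard deviation by about 1/sqrt(N).
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, N = 100_000, 8
    signal = np.sin(np.linspace(0, 20 * np.pi, n_samples))    # common input
    channels = signal + rng.standard_normal((N, n_samples))   # independent noise

    noise_single = (channels[0] - signal).std()
    noise_avg = (channels.mean(axis=0) - signal).std()
    print(noise_single / noise_avg)  # close to sqrt(8) ≈ 2.83
    ```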

  10. Absorption spectrum of DNA for wavelengths greater than 300 nm

    International Nuclear Information System (INIS)

    Sutherland, J.C.; Griffin, K.P.

    1981-01-01

    Although DNA absorption at wavelengths greater than 300 nm is much weaker than that at shorter wavelengths, this absorption seems to be responsible for much of the biological damage caused by solar radiation of wavelengths less than 320 nm. Accurate measurement of the absorption spectrum of DNA above 300 nm is complicated by turbidity characteristic of concentrated solutions of DNA. We have measured the absorption spectra of DNA from calf thymus, Clostridium perfringens, Escherichia coli, Micrococcus luteus, salmon testis, and human placenta using procedures which separate optical density due to true absorption from that due to turbidity. Above 300 nm, the relative absorption of DNA increases as a function of guanine-cytosine content, presumably because the absorption of guanine is much greater than the absorption of adenine at these wavelengths. This result suggests that the photophysical processes which follow absorption of a long-wavelength photon may, on the average, differ from those induced by shorter-wavelength photons. It may also explain the lower quantum yield for the killing of cells by wavelengths above 300 nm compared to that by shorter wavelengths

  11. Black breast cancer survivors experience greater upper extremity disability.

    Science.gov (United States)

    Dean, Lorraine T; DeMichele, Angela; LeBlanc, Mously; Stephens-Shields, Alisa; Li, Susan Q; Colameco, Chris; Coursey, Morgan; Mao, Jun J

    2015-11-01

    Over one-third of breast cancer survivors experience upper extremity disability. Black women present with factors associated with greater upper extremity disability, including increased body mass index (BMI), more advanced disease stage at diagnosis, and varying treatment type compared with Whites. No prior research has evaluated the relationship between race and upper extremity disability using validated tools and controlling for these factors. Data were drawn from a survey study among 610 women with stage I-III hormone receptor positive breast cancer. The disabilities of the arm, shoulder and hand (QuickDASH) is an 11-item self-administered questionnaire that has been validated for breast cancer survivors to assess global upper extremity function over the past 7 days. Linear regression and mediation analysis estimated the relationships between race, BMI and QuickDASH score, adjusting for demographics and treatment types. Black women (n = 98) had 7.3 points higher average QuickDASH scores than White women (n = 512); mediation analysis indicated that higher BMI accounted for about 40% of this difference in disability. Even several years post-treatment, Black breast cancer survivors had greater upper extremity disability, which was partially mediated by higher BMIs. Close monitoring of high-BMI Black women may be an important step in reducing disparities in cancer survivorship. More research is needed on the relationship between race, BMI, and upper extremity disability.

  12. Promoting greater Federal energy productivity [Final report]

    Energy Technology Data Exchange (ETDEWEB)

    Hopkins, Mark; Dudich, Luther

    2003-03-05

    This document is a close-out report describing the work done under this DOE grant to improve Federal Energy Productivity. Over the four years covered in this document, the Alliance To Save Energy conducted liaison with the private sector through our Federal Energy Productivity Task Force. In this time, the Alliance held several successful workshops on the uses of metering in Federal facilities and other meetings. We also conducted significant research on energy efficiency, financing, facilitated studies of potential energy savings in energy intensive agencies, and undertook other tasks outlined in this report.

  13. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  14. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  15. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  16. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
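
    A toy sketch of the idea with assumed numbers (not SKA parameters): holding decorrelation roughly constant lets the averaging interval scale inversely with baseline length, so short baselines contribute far fewer samples after averaging.

    ```python
    # Baseline-dependent averaging: longer dump times on shorter baselines.
    import numpy as np

    rng = np.random.default_rng(2)
    b_max = 65_000.0                             # longest baseline (m), assumed
    baselines = rng.uniform(100.0, b_max, 1000)  # baseline lengths (m)
    t_min = 0.5                                  # dump time at b_max (s), assumed

    t_avg = t_min * b_max / baselines            # per-baseline averaging time
    orig_rate = 1.0 / t_min                      # samples/s/baseline before BDA
    bda_rate = (1.0 / t_avg).mean()              # average rate after BDA
    print(f"data volume reduction: {1 - bda_rate / orig_rate:.1%}")
    ```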

  17. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  18. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
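
    For reference, the TAMSD of a single trajectory follows directly from its definition; the sketch below applies it to a simulated Brownian path (time step and diffusion coefficient are arbitrary choices).

    ```python
    # TAMSD(lag) = (1/(T - lag)) * sum_t (x[t + lag] - x[t])^2, estimated here
    # on a 1D Brownian trajectory, for which TAMSD(lag) ≈ 2 * D * lag * dt.
    import numpy as np

    def tamsd(x, lag):
        disp = x[lag:] - x[:-lag]
        return np.mean(disp ** 2)

    rng = np.random.default_rng(3)
    dt, D, n = 0.01, 1.0, 10_000
    x = np.cumsum(np.sqrt(2 * D * dt) * rng.standard_normal(n))

    for lag in (1, 10, 100):
        print(lag * dt, tamsd(x, lag))  # ≈ 0.02, 0.2, 2.0
    ```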

  19. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  20. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  1. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  2. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated with a statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta-stability lines and mass excess, the surface symmetry energy was established. (M.C.K.)

  3. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  4. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  5. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  6. Mapping grassland productivity with 250-m eMODIS NDVI and SSURGO database over the Greater Platte River Basin, USA

    Science.gov (United States)

    Gu, Yingxin; Wylie, Bruce K.; Bliss, Norman B.

    2013-01-01

    This study assessed and described a relationship between satellite-derived growing season averaged Normalized Difference Vegetation Index (NDVI) and annual productivity for grasslands within the Greater Platte River Basin (GPRB) of the United States. We compared growing season averaged NDVI (GSN) with Soil Survey Geographic (SSURGO) database rangeland productivity and flux tower Gross Primary Productivity (GPP) for grassland areas. The GSN was calculated for each of nine years (2000–2008) using the 7-day composite 250-m eMODIS (expedited Moderate Resolution Imaging Spectroradiometer) NDVI data. Strong correlations exist between the nine-year mean GSN (MGSN) and SSURGO annual productivity for grasslands (R² = 0.74 for approximately 8000 pixels randomly selected from eight homogeneous regions within the GPRB; R² = 0.96 for the 14 cluster-averaged points). Results also reveal a strong correlation between GSN and flux tower growing season averaged GPP (R² = 0.71). Finally, we developed an empirical equation to estimate grassland productivity based on the MGSN. Spatially explicit estimates of grassland productivity over the GPRB were generated, which improved the regional consistency of SSURGO grassland productivity data and can help scientists and land managers to better understand the actual biophysical and ecological characteristics of grassland systems in the GPRB. This final estimated grassland production map can also be used as an input for biogeochemical, ecological, and climate change models.
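
    A schematic of the GSN computation and the final empirical step; the growing-season window and regression coefficients below are hypothetical, since the record does not reproduce the fitted equation.

    ```python
    # Per-pixel growing-season averaged NDVI (GSN) from weekly composites,
    # followed by a linear productivity estimate (coefficients made up).
    import numpy as np

    rng = np.random.default_rng(4)
    ndvi = rng.uniform(0.2, 0.8, size=(52, 100))  # 52 weekly composites, 100 pixels
    growing_season = slice(14, 44)                # assumed growing-season weeks

    gsn = ndvi[growing_season].mean(axis=0)       # GSN for one year
    # With nine years of data, the MGSN is the mean of the yearly GSN maps.
    a, b = 2000.0, -300.0                         # hypothetical coefficients
    productivity = a * gsn + b                    # estimated grassland productivity
    ```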

  7. Assessing the accuracy of weather radar to track intense rain cells in the Greater Lyon area, France

    Science.gov (United States)

    Renard, Florent; Chapon, Pierre-Marie; Comby, Jacques

    2012-01-01

    The Greater Lyon is a dense area located in the Rhône Valley in the south east of France. The conurbation counts 1.3 million inhabitants, and the rainfall hazard is a great concern. However, until now, studies on rainfall over the Greater Lyon have only been based on the network of rain gauges, despite the presence of a C-band radar located in the close vicinity. Consequently, the first aim of this study was to investigate the hydrological quality of this radar. This assessment, based on comparison of radar estimations and rain-gauge values, concludes that the radar data have been of good overall quality since 2006. Given this good accuracy, this study made a next step and investigated the characteristics of the intense rain cells that are responsible for the majority of floods in the Greater Lyon area. Improved knowledge of these rainfall cells is important to anticipate dangerous events and to improve the monitoring of the sewage system. This paper discusses the analysis of the ten most intense rainfall events in the 2001-2010 period. Spatial statistics pointed towards straight and linear movements of intense rainfall cells, independently of the ground surface conditions and the topography underneath. The speed of these cells was found to be nearly constant during a rainfall event but varies from one event to another, ranging on average from 25 to 66 km/h.

  8. A new probabilistic seismic hazard assessment for greater Tokyo

    Science.gov (United States)

    Stein, R.S.; Toda, S.; Parsons, T.; Grunewald, E.; Blong, R.; Sparks, S.; Shah, H.; Kennedy, J.

    2006-01-01

    Tokyo and its outlying cities are home to one-quarter of Japan's 127 million people. Highly destructive earthquakes struck the capital in 1703, 1855 and 1923, the last of which took 105 000 lives. Fuelled by greater Tokyo's rich seismological record, but challenged by its magnificent complexity, our joint Japanese-US group carried out a new study of the capital's earthquake hazards. We used the prehistoric record of great earthquakes preserved by uplifted marine terraces and tsunami deposits (17 M ≈ 8 shocks in the past 7000 years), a newly digitized dataset of historical shaking (10 000 observations in the past 400 years), the dense modern seismic network (300 000 earthquakes in the past 30 years), and Japan's GeoNet array (150 GPS vectors in the past 10 years) to reinterpret the tectonic structure, identify active faults and their slip rates and estimate their earthquake frequency. We propose that a dislodged fragment of the Pacific plate is jammed between the Pacific, Philippine Sea and Eurasian plates beneath the Kanto plain on which Tokyo sits. We suggest that the Kanto fragment controls much of Tokyo's seismic behaviour for large earthquakes, including the damaging 1855 M ≈ 7.3 Ansei-Edo shock. On the basis of the frequency of earthquakes beneath greater Tokyo, events with magnitude and location similar to the M ≈ 7.3 Ansei-Edo event have a ca 20% likelihood in an average 30 year period. In contrast, our renewal (time-dependent) probability for the great M ≈ 7.9 plate boundary shocks such as struck in 1923 and 1703 is 0.5% for the next 30 years, with a time-averaged 30 year probability of ca 10%. The resulting net likelihood for severe shaking (ca 0.9g peak ground acceleration (PGA)) in Tokyo, Kawasaki and Yokohama for the next 30 years is ca 30%. The long historical record in Kanto also affords a rare opportunity to calculate the probability of shaking in an alternative manner exclusively from intensity observations. This approach permits robust estimates

  9. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
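
    A generic sketch of trajectory averaging for a stochastic approximation recursion (Polyak-Ruppert style averaging of the iterates); the target, noise, and gain schedule are illustrative, not the SAMC setup.

    ```python
    # Stochastic approximation with averaged iterates: the running average
    # theta_bar is a lower-variance estimator than the final iterate.
    import numpy as np

    rng = np.random.default_rng(5)
    theta, total, theta_star, n_iter = 0.0, 0.0, 2.0, 10_000

    for k in range(1, n_iter + 1):
        grad = (theta - theta_star) + rng.standard_normal()  # noisy mean field
        theta -= grad / k**0.7          # gain exponent in (1/2, 1)
        total += theta

    theta_bar = total / n_iter          # trajectory-averaged estimator
    print(theta, theta_bar)
    ```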

  10. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  11. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
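
    As an illustration of the kind of calculation being approximated, the sketch below numerically averages an allowed beta spectrum with the Fermi function crudely set to 1; exact evaluations include the Fermi function and other corrections, which is precisely the labor the paper's method avoids.

    ```python
    # Spectrum-averaged beta kinetic energy for an allowed shape
    # N(T) ~ p * E * (Q - T)^2, with the Fermi function approximated by 1.
    import numpy as np

    def average_beta_energy(q_mev, me=0.511, n=10_000):
        T = np.linspace(1e-6, q_mev, n)      # kinetic energy grid (MeV)
        E = T / me + 1.0                     # total energy in units of m_e c^2
        p = np.sqrt(E**2 - 1.0)              # momentum in units of m_e c
        w = p * E * ((q_mev - T) / me) ** 2  # allowed spectrum shape, F ≈ 1
        return np.sum(T * w) / np.sum(w)     # weighted mean on a uniform grid

    print(average_beta_energy(1.0))  # roughly a third of the 1 MeV endpoint
    ```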

  12. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Consider an arbitrary nonnegative deterministic process (in a stochastic setting, a fixed realization, i.e., sample path, of the underlying stochastic process) {X(t), t ≥ 0} with state space S = (−∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
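
    Schematically, and with notation assumed rather than taken from the paper, the equality at issue is:

    ```latex
    \[
      \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T} f\bigl(X(t)\bigr)\,dt
      \;=\; \int_{S} f(x)\,dF(x),
      \qquad
      F(x) \;=\; \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}
      \mathbf{1}\{X(t)\le x\}\,dt,
    \]
    ```

    where F is the long-run frequency distribution of the process, and the paper's contribution is a necessary and sufficient condition for this equality to hold.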

  13. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solutions depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor is situated higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. The restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  14. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy to result in distinct and independently controlled (by the choice of phase offset) phase-energy correlations, or chirps, on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL, with higher-order terms managed.

  15. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  16. [Quetelet, the average man and medical knowledge].

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  17. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
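
    For orientation, the single-scatter kinematics underlying the averaging is the standard Compton relation, written here in the ERF; the primed symbols are added for this note to denote ERF photon energies and are not from the record.

    ```latex
    \[
      \alpha'_{s} \;=\; \frac{\alpha'}{1 + \alpha'\,(1 - \cos\theta)}
    \]
    ```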

  18. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in the fact that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
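
    A minimal sketch of the AGDI construction under the stated definition (input format assumed):

    ```python
    # AGDI: accumulate absolute differences between adjacent binary
    # silhouette frames, then normalize by the number of differences.
    import numpy as np

    def agdi(silhouettes):
        """silhouettes: array (n_frames, height, width) with values in {0, 1}."""
        frames = np.asarray(silhouettes, dtype=float)
        diffs = np.abs(frames[1:] - frames[:-1])  # frame-to-frame differences
        return diffs.mean(axis=0)                 # the AGDI feature image

    rng = np.random.default_rng(6)
    feature = agdi(rng.integers(0, 2, size=(30, 64, 44)))  # toy usage
    ```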

  19. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation.

  20. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  1. Greater weight loss and hormonal changes after 6 months diet with carbohydrates eaten mostly at dinner.

    Science.gov (United States)

    Sofer, Sigal; Eliraz, Abraham; Kaplan, Sara; Voet, Hillary; Fink, Gershon; Kima, Tzadok; Madar, Zecharia

    2011-10-01

    This study was designed to investigate the effect of a low-calorie diet with carbohydrates eaten mostly at dinner on anthropometric, hunger/satiety, biochemical, and inflammatory parameters. Hormonal secretions were also evaluated. Seventy-eight police officers (BMI >30) were randomly assigned to experimental (carbohydrates eaten mostly at dinner) or control weight loss diets for 6 months. On days 0, 7, 90, and 180, blood samples and hunger scores were collected every 4 h from 0800 to 2000 hours. Anthropometric measurements were collected throughout the study. Greater reductions in weight, abdominal circumference, and body fat mass were observed in the experimental diet in comparison to controls. Hunger scores were lower, and greater improvements in fasting glucose, average daily insulin concentrations, homeostasis model assessment for insulin resistance (HOMA(IR)), T-cholesterol, low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, C-reactive protein (CRP), tumor necrosis factor-α (TNF-α), and interleukin-6 (IL-6) levels were observed in comparison to controls. The experimental diet modified daily leptin and adiponectin concentrations compared to those observed at baseline and to a control diet. A simple dietary manipulation of carbohydrate distribution appears to have additional benefits when compared to a conventional weight loss diet in individuals suffering from obesity. It might also be beneficial for individuals suffering from insulin resistance and the metabolic syndrome. Further research is required to confirm and clarify the mechanisms by which this relatively simple diet approach enhances satiety, leads to better anthropometric outcomes, and achieves improved metabolic response, compared to a more conventional dietary approach.

  2. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.

  3. Practicing more retrieval routes leads to greater memory retention.

    Science.gov (United States)

    Zheng, Jun; Zhang, Wei; Li, Tongtong; Liu, Zhaomin; Luo, Liang

    2016-09-01

    A wealth of research has shown that retrieval practice plays a significant role in improving memory retention. The current study focused on one simple yet rarely examined question: would repeated retrieval using two different retrieval routes or using the same retrieval route twice lead to greater long-term memory retention? Participants elaborately learned 22 Japanese-Chinese translation word pairs using two different mediators. Half an hour after the initial study phase, the participants completed two retrieval sessions using either one mediator (Tm1Tm1) or two different mediators (Tm1Tm2). On the final test, which was performed 1 week after the retrieval practice phase, the participants received only the cue with a request to report the mediator (M1 or M2) followed by the target (Experiment 1), or only the mediator (M1 or M2) with a request to report the target (Experiment 2). The results of Experiment 1 indicated that the participants who practiced under the Tm1Tm2 condition exhibited greater target retention than those who practiced under the Tm1Tm1 condition. This difference in performance was due to the significant disadvantage in mediator retrieval and decoding of the unpracticed mediator under the Tm1Tm1 condition. Although mediators were provided to participants on the final test in Experiment 2, decoding of the unpracticed mediators remained less effective than decoding of the practiced mediators. We conclude that practicing multiple retrieval routes leads to greater memory retention than focusing on a single retrieval route. Thus, increasing retrieval variability during repeated retrieval practice indeed significantly improves long-term retention on a delayed test. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Zonally averaged chemical-dynamical model of the lower thermosphere

    International Nuclear Information System (INIS)

    Kasting, J.F.; Roble, R.G.

    1981-01-01

    A zonally averaged numerical model of the thermosphere is used to examine the coupling between neutral composition (including N₂, O₂ and O), temperature, and winds at solstice for solar minimum conditions. The meridional circulation forced by solar heating results in a summer-to-winter flow, with a winter enhancement in atomic oxygen density that is a factor of about 1.8 greater than in the summer hemisphere at 160 km. The O₂ and N₂ variations are associated with a latitudinal gradient in total number density, which is required to achieve pressure balance in the presence of large zonal jets. Latitudinal profiles of OI (5577 Å) green-line emission intensity are calculated by using both the Chapman and Barth mechanisms. The composition of the lower thermosphere is shown to be strongly influenced by circulation patterns initiated in the stratosphere and lower mesosphere, below the lower boundary used in the model.

  5. Average size of random polygons with fixed knot topology.

    Science.gov (United States)

    Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo

    2003-07-01

    We have evaluated by numerical simulation the average size R_K of random polygons of fixed knot topology K = ∅, 3_1, 3_1♯4_1, and we have confirmed the scaling law R_K^2 ≈ N^{2ν_K} for the number N of polygonal nodes over a wide range, N = 100-2200. The best fit gives 2ν_K ≈ 1.11-1.16, with good fitting curves over the whole range of N. The estimate of 2ν_K is consistent with the exponent of self-avoiding polygons. In a limited range of N (N ≳ 600), however, we have another fit with 2ν_K ≈ 1.01-1.07, which is close to the exponent of random polygons.

  6. High-average-power diode-pumped Yb:YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W Q-switched. High beam quality (M² = 2.4) Q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in a Q-switched mode, we have also demonstrated 532 W of average power with M² < 2.5 at 17% optical-to-optical conversion efficiency. These Q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 ns pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes; (2) compound laser rods with flanged nonabsorbing endcaps fabricated by diffusion bonding; and (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods

  7. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts; such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  8. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
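
    The central relation can be written compactly in standard thermodynamic-integration notation (a generic form consistent with the description above, not the paper's full generalized-coordinate expression):

        \frac{dA}{d\xi} = -\left\langle F_\xi \right\rangle_\xi, \qquad \Delta A = A(\xi_2) - A(\xi_1) = -\int_{\xi_1}^{\xi_2} \left\langle F_\xi \right\rangle_\xi \, d\xi

    where A(\xi) is the free energy along the selected coordinate \xi, F_\xi is the instantaneous force defined above, and \langle\cdot\rangle_\xi denotes the ensemble average conditioned on \xi.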

  9. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was launched, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  10. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    The concept of average LET (linear energy transfer) values determination, i.e. ordinary moments of LET in absorbed dose distribution vs. LET of ionizing radiation of any kind and any spectrum (even the unknown ones) has been presented. The method is based on measurement of ionization current with several values of voltage supplying an ionization chamber operating in conditions of columnar recombination of ions or ion recombination in clusters while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)
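
    The fitting step can be illustrated in a few lines of Python; the model form, parameter names, and data below are hypothetical placeholders (the paper's actual algebraic expression, whose coefficients are interpreted as LET moments, is not reproduced here):

        import numpy as np
        from scipy.optimize import curve_fit

        def current_model(V, i_sat, c1, c2):
            # Hypothetical saturation-type model: collection efficiency approaches
            # unity as the chamber voltage overcomes ion recombination.
            return i_sat * (1.0 - c1 / (V + c2))

        V = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # chamber voltages (V)
        i = np.array([0.82, 0.90, 0.95, 0.975, 0.99])      # measured currents (normalized)
        params, cov = curve_fit(current_model, V, i, p0=[1.0, 10.0, 10.0])
        print(params)  # fitted coefficients, read off as the LET-moment parameters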

  11. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space defined by specific quantum numbers. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A = 10-40. In chapter 2 an efficient method is developed for transforming fixed angular-momentum-projection traces into fixed angular-momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)
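
    In standard statistical-spectroscopy notation, the centroid and variance mentioned above are the first two moments of the Hamiltonian over a d-dimensional model space (a generic statement, not the thesis's propagation formulas):

        E_c = \frac{1}{d}\,\operatorname{Tr} H, \qquad \sigma^2 = \frac{1}{d}\,\operatorname{Tr} H^2 - E_c^2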

  12. On the average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1978-03-01

    Over 3000 hours of IMP-6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5 minute averages of B_Z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks than near midnight. The tail field projected in the solar magnetospheric equatorial plane deviates from the X axis, due to flaring and solar wind aberration, by an angle α = -0.9 y_SM - 1.7, where y_SM is in Earth radii and α is in degrees. After removing these effects, the Y component of the tail field is found to depend on interplanetary sector structure. During an away sector the B_Y component of the tail field is on average 0.5 γ greater than during a toward sector, a result that is true in both tail lobes and is independent of location across the tail

  13. Updated precision measurement of the average lifetime of B hadrons

    CERN Document Server

    Abreu, P; Adye, T; Agasi, E; Ajinenko, I; Aleksan, Roy; Alekseev, G D; Alemany, R; Allport, P P; Almehed, S; Amaldi, Ugo; Amato, S; Andreazza, A; Andrieux, M L; Antilogus, P; Apel, W D; Arnoud, Y; Åsman, B; Augustin, J E; Augustinus, A; Baillon, Paul; Bambade, P; Barate, R; Barbi, M S; Barbiellini, Guido; Bardin, Dimitri Yuri; Baroncelli, A; Bärring, O; Barrio, J A; Bartl, Walter; Bates, M J; Battaglia, Marco; Baubillier, M; Baudot, J; Becks, K H; Begalli, M; Beillière, P; Belokopytov, Yu A; Benvenuti, Alberto C; Berggren, M; Bertrand, D; Bianchi, F; Bigi, M; Bilenky, S M; Billoir, P; Bloch, D; Blume, M; Blyth, S; Bolognese, T; Bonesini, M; Bonivento, W; Booth, P S L; Borisov, G; Bosio, C; Bosworth, S; Botner, O; Boudinov, E; Bouquet, B; Bourdarios, C; Bowcock, T J V; Bozzo, M; Branchini, P; Brand, K D; Brenke, T; Brenner, R A; Bricman, C; Brillault, L; Brown, R C A; Brückman, P; Brunet, J M; Bugge, L; Buran, T; Burgsmüller, T; Buschmann, P; Buys, A; Cabrera, S; Caccia, M; Calvi, M; Camacho-Rozas, A J; Camporesi, T; Canale, V; Canepa, M; Cankocak, K; Cao, F; Carena, F; Carroll, L; Caso, Carlo; Castillo-Gimenez, M V; Cattai, A; Cavallo, F R; Cerrito, L; Chabaud, V; Charpentier, P; Chaussard, L; Chauveau, J; Checchia, P; Chelkov, G A; Chen, M; Chierici, R; Chliapnikov, P V; Chochula, P; Chorowicz, V; Chudoba, J; Cindro, V; Collins, P; Contreras, J L; Contri, R; Cortina, E; Cosme, G; Cossutti, F; Crawley, H B; Crennell, D J; Crosetti, G; Cuevas-Maestro, J; Czellar, S; Dahl-Jensen, Erik; Dahm, J; D'Almagne, B; Dam, M; Damgaard, G; Dauncey, P D; Davenport, Martyn; Da Silva, W; Defoix, C; Deghorain, A; Della Ricca, G; Delpierre, P A; Demaria, N; De Angelis, A; de Boer, Wim; De Brabandere, S; De Clercq, C; La Vaissière, C de; De Lotto, B; De Min, A; De Paula, L S; De Saint-Jean, C; Dijkstra, H; Di Ciaccio, Lucia; Djama, F; Dolbeau, J; Dönszelmann, M; Doroba, K; Dracos, M; Drees, J; Drees, K A; Dris, M; Dufour, Y; Edsall, D M; Ehret, R; Eigen, G; Ekelöf, T J C; Ekspong, Gösta; Elsing, M; Engel, J P; Ershaidat, N; Erzen, B; Espirito-Santo, M C; Falk, E; Fassouliotis, D; Feindt, Michael; Fenyuk, A; Ferrer, A; Filippas-Tassos, A; Firestone, A; Fischer, P A; Föth, H; Fokitis, E; Fontanelli, F; Formenti, F; Franek, B J; Frenkiel, P; Fries, D E C; Frodesen, A G; Frühwirth, R; Fulda-Quenzer, F; Fuster, J A; Galloni, A; Gamba, D; Gandelman, M; García, C; García, J; Gaspar, C; Gasparini, U; Gavillet, P; Gazis, E N; Gelé, D; Gerber, J P; Gibbs, M; Gokieli, R; Golob, B; Gopal, Gian P; Gorn, L; Górski, M; Guz, Yu; Gracco, Valerio; Graziani, E; Grosdidier, G; Grzelak, K; Gumenyuk, S A; Gunnarsson, P; Günther, M; Guy, J; Hahn, F; Hahn, S; Hajduk, Z; Hallgren, A; Hamacher, K; Hao, W; Harris, F J; Hedberg, V; Henriques, R P; Hernández, J J; Herquet, P; Herr, H; Hessing, T L; Higón, E; Hilke, Hans Jürgen; Hill, T S; Holmgren, S O; Holt, P J; Holthuizen, D J; Hoorelbeke, S; Houlden, M A; Hrubec, Josef; Huet, K; Hultqvist, K; Jackson, J N; Jacobsson, R; Jalocha, P; Janik, R; Jarlskog, C; Jarlskog, G; Jarry, P; Jean-Marie, B; Johansson, E K; Jönsson, L B; Jönsson, P E; Joram, Christian; Juillot, P; Kaiser, M; Kapusta, F; Karafasoulis, K; Karlsson, M; Karvelas, E; Katsanevas, S; Katsoufis, E C; Keränen, R; Khokhlov, Yu A; Khomenko, B A; Khovanskii, N N; King, B J; Kjaer, N J; Klein, H; Klovning, A; Kluit, P M; Köne, B; Kokkinias, P; Koratzinos, M; Korcyl, K; Kourkoumelis, C; Kuznetsov, O; Kramer, P H; Krammer, Manfred; Kreuter, C; Kronkvist, I J; Krumshtein, Z; Krupinski, W; Kubinec, P; Kucewicz, W; Kurvinen, K L; 
Lacasta, C; Laktineh, I; Lamblot, S; Lamsa, J; Lanceri, L; Lane, D W; Langefeld, P; Last, I; Laugier, J P; Lauhakangas, R; Leder, Gerhard; Ledroit, F; Lefébure, V; Legan, C K; Leitner, R; Lemoigne, Y; Lemonne, J; Lenzen, Georg; Lepeltier, V; Lesiak, T; Liko, D; Lindner, R; Lipniacka, A; Lippi, I; Lörstad, B; Loken, J G; López, J M; Loukas, D; Lutz, P; Lyons, L; MacNaughton, J N; Maehlum, G; Maio, A; Malychev, V; Mandl, F; Marco, J; Marco, R P; Maréchal, B; Margoni, M; Marin, J C; Mariotti, C; Markou, A; Maron, T; Martínez-Rivero, C; Martínez-Vidal, F; Martí i García, S; Masik, J; Matorras, F; Matteuzzi, C; Matthiae, Giorgio; Mazzucato, M; McCubbin, M L; McKay, R; McNulty, R; Medbo, J; Merk, M; Meroni, C; Meyer, S; Meyer, W T; Michelotto, M; Migliore, E; Mirabito, L; Mitaroff, Winfried A; Mjörnmark, U; Moa, T; Møller, R; Mönig, K; Monge, M R; Morettini, P; Müller, H; Mundim, L M; Murray, W J; Muryn, B; Myatt, Gerald; Naraghi, F; Navarria, Francesco Luigi; Navas, S; Nawrocki, K; Negri, P; Neumann, W; Nicolaidou, R; Nielsen, B S; Nieuwenhuizen, M; Nikolaenko, V; Niss, P; Nomerotski, A; Normand, Ainsley; Novák, M; Oberschulte-Beckmann, W; Obraztsov, V F; Olshevskii, A G; Onofre, A; Orava, Risto; Österberg, K; Ouraou, A; Paganini, P; Paganoni, M; Pagès, P; Palka, H; Papadopoulou, T D; Papageorgiou, K; Pape, L; Parkes, C; Parodi, F; Passeri, A; Pegoraro, M; Peralta, L; Pernegger, H; Pernicka, Manfred; Perrotta, A; Petridou, C; Petrolini, A; Petrovykh, M; Phillips, H T; Piana, G; Pierre, F; Pimenta, M; Pindo, M; Plaszczynski, S; Podobrin, O; Pol, M E; Polok, G; Poropat, P; Pozdnyakov, V; Prest, M; Privitera, P; Pukhaeva, N; Pullia, Antonio; Radojicic, D; Ragazzi, S; Rahmani, H; Ratoff, P N; Read, A L; Reale, M; Rebecchi, P; Redaelli, N G; Regler, Meinhard; Reid, D; Renton, P B; Resvanis, L K; Richard, F; Richardson, J; Rídky, J; Rinaudo, G; Ripp, I; Romero, A; Roncagliolo, I; Ronchese, P; Ronjin, V M; Roos, L; Rosenberg, E I; Rosso, E; Roudeau, Patrick; Rovelli, T; Rückstuhl, W; Ruhlmann-Kleider, V; Ruiz, A; Rybicki, K; Saarikko, H; Sacquin, Yu; Sadovskii, A; Sajot, G; Salt, J; Sánchez, J; Sannino, M; Schimmelpfennig, M; Schneider, H; Schwickerath, U; Schyns, M A E; Sciolla, G; Scuri, F; Seager, P; Sedykh, Yu; Segar, A M; Seitz, A; Sekulin, R L; Shellard, R C; Siccama, I; Siegrist, P; Simonetti, S; Simonetto, F; Sissakian, A N; Sitár, B; Skaali, T B; Smadja, G; Smirnov, N; Smirnova, O G; Smith, G R; Solovyanov, O; Sosnowski, R; Souza-Santos, D; Spassoff, Tz; Spiriti, E; Sponholz, P; Squarcia, S; Stanescu, C; Stapnes, Steinar; Stavitski, I; Stichelbaut, F; Stocchi, A; Strauss, J; Strub, R; Stugu, B; Szczekowski, M; Szeptycka, M; Tabarelli de Fatis, T; Tavernet, J P; Chikilev, O G; Tilquin, A; Timmermans, J; Tkatchev, L G; Todorov, T; Toet, D Z; Tomaradze, A G; Tomé, B; Tonazzo, A; Tortora, L; Tranströmer, G; Treille, D; Trischuk, W; Tristram, G; Trombini, A; Troncon, C; Tsirou, A L; Turluer, M L; Tyapkin, I A; Tyndel, M; Tzamarias, S; Überschär, B; Ullaland, O; Uvarov, V; Valenti, G; Vallazza, E; Van der Velde, C; van Apeldoorn, G W; van Dam, P; Van Doninck, W K; Van Eldik, J; Vassilopoulos, N; Vegni, G; Ventura, L; Venus, W A; Verbeure, F; Verlato, M; Vertogradov, L S; Vilanova, D; Vincent, P; Vitale, L; Vlasov, E; Vodopyanov, A S; Vrba, V; Wahlen, H; Walck, C; Weierstall, M; Weilhammer, Peter; Weiser, C; Wetherell, Alan M; Wicke, D; Wickens, J H; Wielers, M; Wilkinson, G R; Williams, W S C; Winter, M; Witek, M; Woschnagg, K; Yip, K; Yushchenko, O P; Zach, F; Zaitsev, A; Zalewska-Bak, A; 
Zalewski, Piotr; Zavrtanik, D; Zevgolatakos, E; Zimin, N I; Zito, M; Zontar, D; Zuberi, R; Zucchelli, G C; Zumerle, G; Belokopytov, Yu; Charpentier, Ph; Gavillet, Ph; Gouz, Yu; Jarlskog, Ch; Khokhlov, Yu; Papadopoulou, Th D

    1996-01-01

    The measurement of the average lifetime of B hadrons using inclusively reconstructed secondary vertices has been updated using both an improved processing of previous data and additional statistics from new data. This has reduced the statistical and systematic uncertainties and gives τ_B = 1.582 ± 0.011 (stat.) ± 0.027 (syst.) ps. Combining this result with the previous result based on charged particle impact parameter distributions yields τ_B = 1.575 ± 0.010 (stat.) ± 0.026 (syst.) ps.

  14. Land cover mapping of Greater Mesoamerica using MODIS data

    Science.gov (United States)

    Giri, Chandra; Jenkins, Clinton N.

    2005-01-01

    A new land cover database of Greater Mesoamerica has been prepared using moderate resolution imaging spectroradiometer (MODIS, 500 m resolution) satellite data. Daily surface reflectance MODIS data and a suite of ancillary data were used in preparing the database by employing a decision tree classification approach. The new land cover data are an improvement over traditional advanced very high resolution radiometer (AVHRR) based land cover data in terms of both spatial and thematic details. The dominant land cover type in Greater Mesoamerica is forest (39%), followed by shrubland (30%) and cropland (22%). Country analysis shows forest as the dominant land cover type in Belize (62%), Costa Rica (52%), Guatemala (53%), Honduras (56%), Nicaragua (53%), and Panama (48%); cropland as the dominant land cover type in El Salvador (60.5%); and shrubland as the dominant land cover type in Mexico (37%). A three-step approach was used to assess the quality of the classified land cover data: (i) qualitative assessment provided good insight into identifying and correcting gross errors; (ii) correlation analysis of MODIS- and Landsat-derived land cover data revealed strong positive association for forest (r² = 0.88), shrubland (r² = 0.75), and cropland (r² = 0.97) but weak positive association for grassland (r² = 0.26); and (iii) an error matrix generated using unseen training data provided an overall accuracy of 77.3% with a Kappa coefficient of 0.73608. Overall, MODIS 500 m data and the methodology used were found to be quite useful for broad-scale land cover mapping of Greater Mesoamerica.

  15. Greater-than-Class-C Low-Level Waste Data Base user's manual

    International Nuclear Information System (INIS)

    1992-07-01

    The Greater-than-Class-C Low-level Waste (GTCC LLW) Data Base characterizes GTCC LLW using low, base, and high cases for three different scenarios: unpackaged, packaged, and concentration averages. The GTCC LLW Data Base can be used to project future volumes and radionuclide activities. This manual provides instructions for users of the GTCC LLW Data Base

  16. Greater-confinement disposal of low-level radioactive wastes

    International Nuclear Information System (INIS)

    Trevorrow, L.E.; Gilbert, T.L.; Luner, C.; Merry-Libby, P.A.; Meshkov, N.K.; Yu, C.

    1985-01-01

    Low-level radioactive wastes include a broad spectrum of wastes that have different radionuclide concentrations, half-lives, and physical and chemical properties. Standard shallow-land burial practice can provide adequate protection of public health and safety for most low-level wastes, but a small volume fraction (about 1%) containing most of the activity inventory (approximately 90%) requires specific measures known as "greater-confinement disposal" (GCD). Different site characteristics and different waste characteristics - such as high radionuclide concentrations, long radionuclide half-lives, high radionuclide mobility, and physical or chemical characteristics that present exceptional hazards - lead to different GCD facility design requirements. Facility design alternatives considered for GCD include the augered shaft, deep trench, engineered structure, hydrofracture, improved waste form, and high-integrity container. Selection of an appropriate design must also consider the interplay between basic risk limits for protection of public health and safety, performance characteristics and objectives, costs, waste-acceptance criteria, waste characteristics, and site characteristics. This paper presents an overview of the factors that must be considered in planning the application of methods proposed for providing greater confinement of low-level wastes. 27 refs

  17. Vapour cloud explosion hazard greater with light feedstocks

    Energy Technology Data Exchange (ETDEWEB)

    Windebank, C.S.

    1980-03-03

    Because lighter chemical feedstocks such as propylene and butylenes are more reactive than LPGs, they pose a greater risk of vapor cloud explosion, particularly during their transport. According to C.S. Windebank (Insurance Tech. Bur.), percussive unconfined vapor cloud explosions (PUVCEs) do not usually occur below the ten-ton threshold for saturated hydrocarbons but can occur well below this threshold in the case of unsaturated hydrocarbons such as propylene and butylenes. Boiling liquid expanding vapor explosions (BLEVEs) are more likely to be "hot" (i.e., the original explosion is associated with fire) than "cold" in the case of unsaturated hydrocarbons. No PUVCE or BLEVE incident has been reported in the UK. In the US, 16 out of 20 incidents recorded between 1970 and 1975 were related to chemical feedstocks, including propylene and butylenes, and only 4 were LPG-related. The average loss was $20 million per explosion. Between 1968 and 1978, 8% of LPG pipeline spillages led to explosions.

  18. Greater Confinement Disposal Program at the Savannah River Plant

    International Nuclear Information System (INIS)

    Towler, O.A.; Cook, J.R.; Peterson, D.L.

    1983-01-01

    Plans for improved LLW disposal at the Savannah River Plant include Greater Confinement Disposal (GCD) for the higher-activity fractions of this waste. GCD practices will include waste segregation, packaging, emplacement below the root zone, and stabilizing the emplacement with cement. Statistical review of SRP burial records showed that about 95% of the radioactivity is associated with only 5% of the waste volume. Trigger values determined in this study were compared with actual burials in 1982 to determine what GCD facilities would be needed for a demonstration to begin in Fall 1983. Facilities selected include 8-foot-diameter × 30-foot-deep boreholes to contain reactor scrap, tritiated waste, and selected wastes from offsite

  19. Effect of force tightening on cable tension and displacement in greater trochanter reattachment.

    Science.gov (United States)

    Canet, Fanny; Duke, Kajsa; Bourgeois, Yan; Laflamme, G-Yves; Brailovski, Vladimir; Petit, Yvan

    2011-01-01

    The purpose of this study was to evaluate cable tension during installation, and during loading similar to walking, in a cable-grip-type greater trochanter (GT) reattachment system. A 4th-generation Sawbones composite femur with an osteotomised GT was reattached with four Cable-Ready® systems (Zimmer, Warsaw, IN). Cables were tightened at 3 different target installation forces (178, 356 and 534 N) and retightened once, as recommended by the manufacturer. Cable tension was continuously monitored using in-situ load cells. To simulate walking, a custom frame was used to apply a quasi-static load on the head of a femoral stem implant (2340 N) and abductor pull (667 N) on the GT. GT displacement (gap and sliding) relative to the femur was measured using a 3D camera system. During installation, a drop in cable tension was observed when tightening subsequent cables: an average 40 ± 12.2% and 11 ± 5.9% tension loss was measured in the first and second cables, respectively. Therefore, retightening the cables, as recommended by the manufacturer, is important. During simulated walking, the second cable additionally lost up to 12.2 ± 3.6% of tension. No difference was observed between the GT-femur gaps measured with cables tightened at different installation forces (p = 0.32). The GT sliding, however, was significantly greater (0.9 ± 0.3 mm) when the target installation force was set to only 178 N compared with 356 N (0.2 ± 0.1 mm; p < 0.05). The cable tightening force should therefore be as close as possible to that recommended by the manufacturer, because reducing it compromises the stability of the GT fragment, whereas increasing it does not improve this stability but could lead to cable breakage.

  20. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
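
    In its simplest form, a face average is the pixel-wise mean of several aligned photographs of one person. A minimal sketch assuming pre-aligned, equally sized images (a real enrolment pipeline would first perform landmark-based alignment):

        import numpy as np

        def face_average(images):
            """Pixel-wise mean of pre-aligned face images (H x W x 3 uint8 arrays)."""
            stack = np.stack([img.astype(np.float64) for img in images])
            return stack.mean(axis=0).astype(np.uint8)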

  1. [Clinical Results of Endoscopic Treatment of Greater Trochanteric Pain Syndrome].

    Science.gov (United States)

    Zeman, P; Rafi, M; Skala, P; Zeman, J; Matějka, J; Pavelka, T

    2017-01-01

    PURPOSE OF THE STUDY This retrospective study aims to present short-term clinical outcomes of endoscopic treatment of patients with greater trochanteric pain syndrome (GTPS). MATERIAL AND METHODS The evaluated study population was composed of a total of 19 patients (16 women, 3 men) with a mean age of 47 years (19-63 years). In twelve cases the right hip joint was affected; in the remaining seven cases it was the left side. The retrospective evaluation was carried out only in patients with greater trochanteric pain syndrome caused by independent chronic trochanteric bursitis, without the presence of a m. gluteus medius tear, not responding to at least 3 months of conservative treatment. In patients from the followed-up study population, endoscopic trochanteric bursectomy was performed alone or in combination with iliotibial band release. The clinical results were evaluated preoperatively and with a minimum follow-up period of 1 year after the surgery (mean 16 months). The Visual Analogue Scale (VAS) for assessment of pain and the WOMAC (Western Ontario and McMaster Universities) score were used. In both the evaluated criteria (VAS and WOMAC score), preoperative and postoperative results were compared. Moreover, duration of surgery and presence of postoperative complications were assessed. Statistical evaluation of clinical results was carried out by an independent statistician. In order to compare the parameters of WOMAC score and VAS pre- and post-operatively, the Mann-Whitney exact test was used. The statistical significance was set at 0.05. RESULTS The preoperative VAS score ranged 5-9 (mean 7.6) and the postoperative VAS ranged 0-5 (mean 2.3). The WOMAC score ranged 56.3-69.7 (mean 64.2) preoperatively and 79.8-98.3 (mean 89.7) postoperatively. When both the evaluated parameters of VAS and WOMAC score were compared over time, a statistically significant improvement was found (p < 0.05). CONCLUSIONS Endoscopic treatment of greater trochanteric pain syndrome yields statistically significant improvement of clinical results with a concurrently minimal incidence of complications.

  2. Behavioral correlates of heart rates of free-living Greater White-fronted Geese

    Science.gov (United States)

    Ely, Craig R.; Ward, D.H.; Bollinger, K.S.

    1999-01-01

    We simultaneously monitored the heart rate and behavior of nine free-living Greater White-fronted Geese (Anser albifrons) on their wintering grounds in northern California. Heart rates of wild geese were monitored via abdominally implanted radio transmitters with electrodes that received electrical impulses of the heart and emitted a radio signal with each ventricular contraction. Post-operative birds appeared to behave normally, readily rejoining flocks and flying up to 15 km daily from night-time roost sites to feed in surrounding agricultural fields. Heart rates varied significantly among individuals and among behaviors, and ranged from less than 100 beats per minute (BPM) during resting to over 400 BPM during flight. Heart rates varied from 80 to 140 BPM during non-strenuous activities such as walking, feeding, and maintenance activities, to about 180 BPM when birds became alert, and over 400 BPM when birds were startled, even if they did not take flight. Heart rates during post-flight recovery and during social postures were context-dependent and were highest in initial encounters among individuals. Instantaneous measures of physiological parameters, such as heart rate, are often better indicators of the degree of response to external stimuli than visual observations and can be used to improve estimates of energy expenditure based solely on activity data.

  3. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
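
    The leading term of such expansions follows from the second-moment identity for a centered Gaussian measure \mu_B with covariance operator B (a generic illustration; the paper's asymptotic equality additionally involves scaling of B and remainder estimates):

        \int_H \frac{1}{2}\big(f''(0)x, x\big)\, d\mu_B(x) = \frac{1}{2}\operatorname{Tr}\big(B\, f''(0)\big)

    The right-hand side already has the von Neumann form Tr(\rho A) once B is normalized to unit trace, which is the sense in which a classical Gaussian average projects onto a quantum average.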

  4. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., when the trap is placed on a central node and when the trap is uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
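
    For a concrete finite network, the ATT for a fixed trap can be computed numerically from the mean first-passage times of the unbiased random walk. A minimal sketch for an arbitrary connected graph (illustrative only; it is unrelated to the paper's closed-form derivation for dual dendrimers):

        import numpy as np

        def average_trapping_time(A, trap):
            """Mean first-passage time to `trap`, averaged over uniform non-trap
            starting nodes. A: symmetric adjacency matrix of a connected graph."""
            n = A.shape[0]
            P = A / A.sum(axis=1, keepdims=True)        # row-stochastic transition matrix
            keep = [i for i in range(n) if i != trap]
            Q = P[np.ix_(keep, keep)]                   # walk restricted to non-trap nodes
            m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))  # solves m = 1 + Q m
            return m.mean()

        # Example: star graph with the trap at the hub; every leaf is one step away.
        A = np.array([[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]], float)
        print(average_trapping_time(A, trap=0))         # prints 1.0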

  5. Higher Physiotherapy Frequency Is Associated with Shorter Length of Stay and Greater Functional Recovery in Hospitalized Frail Older Adults: A Retrospective Observational Study.

    Science.gov (United States)

    Hartley, P; Adamson, J; Cunningham, C; Embleton, G; Romero-Ortuno, R

    2016-01-01

    Extra physiotherapy has been associated with better outcomes in hospitalized patients, but this remains an under-researched area in geriatric medicine wards. We retrospectively studied the association between average physiotherapy frequency and outcomes in hospitalized geriatric patients. High-frequency physiotherapy (HFP) was defined as ≥0.5 contacts/day. Of 358 eligible patients, 131 (36.6%) received low-frequency, and 227 (63.4%) high-frequency physiotherapy. Functional improvement (discharge versus admission) in the modified Rankin scale was greater in the HFP group (1.1 versus 0.7 points, P < 0.05), and length of stay was shorter. Prospective studies are needed to establish the optimal physiotherapy frequency and intensity in geriatric wards.

  6. Greater commitment to the domestic violence training is required.

    Science.gov (United States)

    Leppäkoski, Tuija Helena; Flinck, Aune; Paavilainen, Eija

    2015-05-01

    Domestic violence (DV) is a major public health problem with high health and social costs. A solution to this multi-faceted problem requires that various help providers work together in an effective and optimal manner when dealing with the different parties of DV. The objective of our research and development project (2008-2013) was to improve the preparedness of social and healthcare professionals to manage DV. This article focuses on the evaluation of interprofessional education (IPE) designed to provide knowledge and skills for identifying and intervening in DV and to improve collaboration among social and health care professionals and other help providers at the local and regional level. The evaluation was carried out internally, with data collected from the participants both orally and in written form. The participants were satisfied with the content of the IPE programme itself and the teaching methods used. Participation in the training sessions could have been more active, however, and some of the people who had enrolled for the trainings could not attend all of them. IPE is a valuable way to develop skills for intervening in DV. However, greater commitment to the training is required not only from the participants and their superiors but also from trustees.

  7. Greater Vancouver's water supply receives ozone treatment

    Energy Technology Data Exchange (ETDEWEB)

    Crosby, J.; Singh, I.; Reil, D. D.; Neden, G.

    2000-10-01

    To improve the overall quality of the treated water delivered to the member municipalities of the Greater Vancouver Water District (GVWD), the GVWD implemented a phased drinking water quality improvement program. The phased treatment program is directed at attaining effective disinfection while minimizing the formation of chlorinated disinfection by-products. Accordingly, the current primary disinfection method of chlorination was reevaluated, and ozone primary disinfection without filtration was authorized. Ozonation provides increased protection against Giardia and Cryptosporidium and a decrease in the formation potential for disinfection by-products (DBPs). This paper describes the design of the ozonation facility at Coquitlam, construction of which began in 1998 and was completed during the summer of 2000. The facility houses the liquid oxygen supply, ozone generation, cooling water, ozone injection, and primary off-gas ozone destruct systems, and provides a home for various office, electrical maintenance and diesel generating functions. The second site, at Capilano, was expected to start construction in the fall of 2000 and be completed late in 2002. With its kilometre-long stainless-steel ozone contactor and sidestream injector tower, the Coquitlam Ozonation Facility is the first ozone pressure injection system of its kind in North America. 1 tab., 2 figs.

  8. Use of Processed Nerve Allografts to Repair Nerve Injuries Greater Than 25 mm in the Hand.

    Science.gov (United States)

    Rinker, Brian; Zoldos, Jozef; Weber, Renata V; Ko, Jason; Thayer, Wesley; Greenberg, Jeffrey; Leversedge, Fraser J; Safa, Bauback; Buncke, Gregory

    2017-06-01

    Processed nerve allografts (PNAs) have been demonstrated to have improved clinical results compared with hollow conduits for reconstruction of digital nerve gaps less than 25 mm; however, the use of PNAs for longer gaps warrants further clinical investigation. Long nerve gaps have traditionally been hard to study because of their low incidence. The advent of the RANGER registry, a large, institutional review board-approved, active database for PNA (Avance Nerve Graft; AxoGen, Inc, Alachua, FL), has allowed evaluation of lower-incidence subsets. The RANGER database was queried for digital nerve repairs of 25 mm or greater. Demographics, injury, treatment, and functional outcomes were recorded on standardized forms. Patients younger than 18 years and those lacking quantitative follow-up data were excluded. Recovery was graded according to the Medical Research Council Classification for sensory function, with meaningful recovery defined as the S3 level or greater. Fifty digital nerve injuries in 28 subjects were included. There were 22 male and 6 female subjects, and the mean age was 45 years. Three patients gave a previous history of diabetes, and there were 6 active smokers. The most commonly reported mechanisms of injury were saw injuries (n = 13), crushing injuries (n = 9), resection of neuroma (n = 9), amputations/avulsions (n = 8), sharp lacerations (n = 7), and blasts/gunshots (n = 4). The average gap length was 35 ± 8 mm (range, 25-50 mm). Recovery to the S3 or greater level was reported in 86% of repairs. Static 2-point discrimination (s2PD) and Semmes-Weinstein monofilament (SWF) testing were the most commonly completed assessments. Mean s2PD in the 24 repairs reporting 2PD data was 9 ± 4 mm. For the 38 repairs with SWF data, protective sensation was reported in 33 repairs, deep pressure in 2, and no recovery in 3. These data compared favorably with historical data for nerve autograft repairs, with reported levels of meaningful recovery of 60% to 88%. There were no reported adverse effects.

  9. Dietary breadth of grizzly bears in the Greater Yellowstone Ecosystem

    Science.gov (United States)

    Gunther, Kerry A.; Shoemaker, Rebecca; Frey, Kevin L.; Haroldson, Mark A.; Cain, Steven L.; van Manen, Frank T.; Fortin, Jennifer K.

    2014-01-01

    Grizzly bears (Ursus arctos) in the Greater Yellowstone Ecosystem (GYE) are opportunistic omnivores that eat a great diversity of plant and animal species. Changes in climate may affect regional vegetation, hydrology, insects, and fire regimes, likely influencing the abundance, range, and elevational distribution of the plants and animals consumed by GYE grizzly bears. Determining the dietary breadth of grizzly bears is important to document future changes in food resources and how those changes may affect the nutritional ecology of grizzlies. However, no synthesis exists of all foods consumed by grizzly bears in the GYE. We conducted a review of available literature and compiled a list of species consumed by grizzly bears in the GYE. We documented >266 species within 200 genera from 4 kingdoms, including 175 plant, 37 invertebrate, 34 mammal, 7 fungi, 7 bird, 4 fish, 1 amphibian, and 1 algae species as well as 1 soil type consumed by grizzly bears. The average energy values of the ungulates (6.8 kcal/g), trout (Oncorhynchus spp., 6.1 kcal/g), and small mammals (4.5 kcal/g) eaten by grizzlies were higher than those of the plants (3.0 kcal/g) and invertebrates (2.7 kcal/g) they consumed. The most frequently detected diet items were graminoids, ants (Formicidae), whitebark pine seeds (Pinus albicaulis), clover (Trifolium spp.), and dandelion (Taraxacum spp.). The most consistently used foods on a temporal basis were graminoids, ants, whitebark pine seeds, clover, elk (Cervus elaphus), thistle (Cirsium spp.), and horsetail (Equisetum spp.). Historically, garbage was a significant diet item for grizzlies until refuse dumps were closed. Use of forbs increased after garbage was no longer readily available. The list of foods we compiled will help managers of grizzly bears and their habitat document future changes in grizzly bear food habits and how bears respond to changing food resources.

  10. Reserves in western basins: Part 1, Greater Green River basin

    Energy Technology Data Exchange (ETDEWEB)

    1993-10-01

    This study characterizes an extremely large gas resource located in low-permeability, overpressured sandstone reservoirs below 8,000 feet drill depth in the Greater Green River basin, Wyoming. The total in-place resource is estimated at 1,968 Tcf. Via application of geologic, engineering and economic criteria, the portion of this resource potentially recoverable as reserves is estimated. The volumes estimated include probable, possible and potential categories and total 33 Tcf as a mean estimate of recoverable gas for all plays considered in the basin. Five plays (formations) were included in this study, and each was separately analyzed in terms of its overpressured, tight gas resource, established productive characteristics and future reserves potential based on a constant $2/Mcf wellhead gas price scenario. A scheme has been developed to break the overall resource estimate down into components that can be considered as differing technical and economic challenges that must be overcome in order to exploit such resources: in other words, to convert those resources to economically recoverable reserves. The total recoverable reserves estimate of 33 Tcf does not include the existing production from overpressured tight reservoirs in the basin, which has an estimated ultimate recovery of approximately 1.6 Tcf, or a per-well average recovery of 2.3 Bcf. Because considerable pay thicknesses can be present, wells can be economic despite limited drainage areas. It is typical for significant bypassed gas to be present at inter-well locations because drainage areas are commonly less than regulatory well-spacing requirements.

  11. Review of the different methods to derive average spacing from resolved resonance parameters sets

    International Nuclear Information System (INIS)

    Fort, E.; Derrien, H.; Lafond, D.

    1979-12-01

    The average spacing of resonances is an important parameter for statistical model calculations, especially for non-fissile nuclei. The different methods of deriving this average value from sets of resolved resonance parameters have been reviewed and analyzed in order to identify their respective weaknesses and propose recommendations. Possible improvements are suggested

  12. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, predictions of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might also be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions; it will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2], the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input, which addresses the problem caused by longer consecutive periods of zero power production in the input data.
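
    The core of the method is the mixture form of the BMA predictive density (standard BMA notation; the component PDFs g_k are whatever distribution family is fitted to each ensemble member):

        p(y \mid f_1, \dots, f_K) = \sum_{k=1}^{K} w_k\, g_k(y \mid f_k), \qquad \sum_{k=1}^{K} w_k = 1,\; w_k \ge 0

    with the weights w_k estimated, typically by maximum likelihood over a training period, and interpretable as the posterior probability that member k is the best forecast.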

  13. A Single Image Dehazing Method Using Average Saturation Prior

    Directory of Open Access Journals (Sweden)

    Zhenfei Gu

    2017-01-01

    Outdoor images captured in bad weather are prone to poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation: the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to haze density similarity. Then, in order to improve the accuracy of atmospheric light estimation, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
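
    For reference, the classical homogeneous atmospheric scattering model that the paper generalizes is (standard notation; the proposed improvement relaxes the assumption of a constant scattering coefficient \beta):

        I(x) = J(x)\,t(x) + A\,\big(1 - t(x)\big), \qquad t(x) = e^{-\beta d(x)}

    where I is the observed hazy image, J the scene radiance (the haze-free image), A the global atmospheric light, t the transmission, and d the scene depth.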

  14. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    Title 20 - Employees' Benefits; Part 404 - Federal Old-Age, Survivors and Disability Insurance (1950- ); Computing Primary Insurance Amounts; Average-Monthly-Wage Method of Computing Primary Insurance Amounts; § 404.221 Computing your average monthly wage. (a) General. Under the average...

  15. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs

  16. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    Directory of Open Access Journals (Sweden)

    LiMin Wang

    2014-01-01

    Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminately chosen superparent attributes bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of semi-naive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
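
    For context, AODE's standard prediction rule averages over one-dependence models, each with a different superparent attribute x_i (the generic AODE formula; the paper's contribution is selecting which superparents and children enter this average):

        P(y, \mathbf{x}) \approx \frac{1}{|S|} \sum_{i \in S} P(y, x_i) \prod_{j \neq i} P(x_j \mid y, x_i)

    where S is the set of admissible superparent attributes.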

  17. Lower inhibitory control interacts with greater pain catastrophizing to predict greater pain intensity in women with migraine and overweight/obesity.

    Science.gov (United States)

    Galioto, Rachel; O'Leary, Kevin C; Thomas, J Graham; Demos, Kathryn; Lipton, Richard B; Gunstad, John; Pavlović, Jelena M; Roth, Julie; Rathier, Lucille; Bond, Dale S

    2017-12-01

    Pain catastrophizing (PC) is associated with more severe and disabling migraine attacks. However, factors that moderate this relationship are unknown. Failure of inhibitory control (IC), or the ability to suppress automatic or inappropriate responses, may be one such factor, given previous research showing a relationship between higher PC and lower IC in non-migraine samples, and research showing reduced IC in migraine. Therefore, we examined whether lower IC interacts with increased PC to predict greater migraine severity as measured by pain intensity, attack frequency, and duration. Women (n = 105) aged 18-50 years (M = 38.0 ± 1.2) with overweight/obesity and migraine who were seeking behavioral treatment for weight loss and migraine reduction completed a 28-day smartphone-based headache diary assessing migraine headache severity. Participants then completed a modified computerized Stroop task as a measure of IC and self-report measures of PC (Pain Catastrophizing Scale [PCS]), anxiety, and depression. Linear regression was used to examine independent and joint associations of PC and IC with indices of migraine severity after controlling for age, body mass index (BMI), depression, and anxiety. Participants on average had a BMI of 35.1 ± 6.5 kg/m² and reported 5.3 ± 2.6 migraine attacks (8.3 ± 4.4 migraine days) over 28 days that produced moderate pain intensity (5.9 ± 1.4 out of 10) with a duration of 20.0 ± 14.2 h. After adjusting for covariates, higher PCS total (β = .241, SE = .14, p = .03) and magnification subscale (β = .311, SE = .51, p < .05) scores were associated with greater pain intensity, and these associations were strongest in participants with lower IC, such that lower IC interacted with greater PC to predict more painful migraine attacks. Future studies are needed to determine whether interventions to improve IC could lead to less painful migraine attacks via improvements in PC.

  18. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    Matsukawa, Isamu

    2008-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)
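
    In two-part-tariff notation, a user transmitting quantity q pays (one common formalization of the structure described above, not the paper's full model):

        T(q) = F + p\,q, \qquad F \ge 0

    where F is the fixed access fee and p the variable congestion price; average revenue regulation caps the monopolist's expected revenue per unit transmitted.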

  19. An application of commercial data averaging techniques in pulsed photothermal experiments

    International Nuclear Information System (INIS)

    Grozescu, I.V.; Moksin, M.M.; Wahab, Z.A.; Yunus, W.M.M.

    1997-01-01

    We present an application of a data averaging technique commonly implemented in many commercial digital oscilloscopes and waveform digitizers. The technique was used for transient data averaging in pulsed photothermal radiometry experiments. Photothermal signals are accompanied by a significant amount of noise, which affects the precision of the measurements. The effect of the noise level on the photothermal signal parameters, in our particular case the fitted decay time, is shown. The results of the analysis can be used in choosing the most effective averaging technique and estimating the averaging parameter values. This helps to reduce the data acquisition time while improving the signal-to-noise ratio
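
    The benefit of transient averaging rests on the textbook result that, for stationary uncorrelated noise, coherently averaging N repetitions improves the signal-to-noise ratio as

        \mathrm{SNR}_N = \sqrt{N}\,\mathrm{SNR}_1

    so quadrupling the number of averaged transients doubles the SNR, trading acquisition time for precision.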

  20. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results showed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
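
    The first two models share the Poisson-lognormal form (the standard specification; the covariates x_i here stand for the geometric and traffic variables named above):

        y_i \sim \mathrm{Poisson}(\lambda_i), \qquad \log \lambda_i = \beta^{\top} x_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2)

    where the lognormal error term captures overdispersion beyond the Poisson variance.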

  1. Blood transfusion sampling and a greater role for error recovery.

    Science.gov (United States)

    Oldham, Jane

    Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.

  2. Strengthened glass for high average power laser applications

    International Nuclear Information System (INIS)

    Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.

    1987-01-01

    Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capabilities for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass' surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications.

  3. Accelerated Distributed Dual Averaging Over Evolving Networks of Growing Connectivity

    Science.gov (United States)

    Liu, Sijia; Chen, Pin-Yu; Hero, Alfred O.

    2018-04-01

    We consider the problem of accelerating distributed optimization in multi-agent networks by sequentially adding edges. Specifically, we extend the distributed dual averaging (DDA) subgradient algorithm to evolving networks of growing connectivity and analyze the corresponding improvement in convergence rate. It is known that the convergence rate of DDA is influenced by the algebraic connectivity of the underlying network, where better connectivity leads to faster convergence. However, the impact of network topology design on the convergence rate of DDA has not been fully understood. In this paper, we begin by designing network topologies via edge selection and scheduling. For edge selection, we determine the best set of candidate edges that achieves the optimal tradeoff between the growth of network connectivity and the usage of network resources. The dynamics of network evolution are then determined by edge scheduling. Further, we provide a tractable approach to analyze the improvement in the convergence rate of DDA induced by the growth of network connectivity. Our analysis reveals the connection between network topology design and the convergence rate of DDA, and provides quantitative evaluation of DDA acceleration for distributed optimization that is absent in the existing analysis. Lastly, numerical experiments show that DDA can be significantly accelerated using a sequence of well-designed networks, and our theoretical predictions are well matched to its empirical convergence behavior.
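
    A minimal sketch of the DDA subgradient iteration on a fixed network may help make the connectivity dependence concrete. This is not the authors' code: the quadratic local costs, the ring topology, and the step-size rule are all illustrative choices.

        import numpy as np

        # Sketch of distributed dual averaging (DDA) on a fixed network. Each
        # agent i minimizes the average of local costs f_i(x) = (x - a_i)^2 / 2;
        # the optimum is the mean of the a_i. P is a doubly stochastic mixing
        # matrix; denser P (better algebraic connectivity) mixes the dual
        # variables faster and speeds up convergence.
        rng = np.random.default_rng(2)
        n = 8
        a = rng.normal(size=n)                   # local data, one value per agent

        # Ring network with self-loops (a simple doubly stochastic choice).
        P = np.zeros((n, n))
        for i in range(n):
            P[i, i] = 0.5
            P[i, (i - 1) % n] = 0.25
            P[i, (i + 1) % n] = 0.25

        z = np.zeros(n)                          # dual (accumulated gradient) states
        x = np.zeros(n)                          # primal iterates
        for t in range(1, 2001):
            g = x - a                            # local subgradients
            z = P @ z + g                        # mix dual states, add gradients
            x = -z / np.sqrt(t)                  # prox step with psi(x) = x^2 / 2
        print("consensus estimate:", x.mean(), " true optimum:", a.mean())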

  4. Decreasing food fussiness in children with obesity leads to greater weight loss in family-based treatment.

    Science.gov (United States)

    Hayes, Jacqueline F; Altman, Myra; Kolko, Rachel P; Balantekin, Katherine N; Holland, Jodi Cahill; Stein, Richard I; Saelens, Brian E; Welch, R Robinson; Perri, Michael G; Schechtman, Kenneth B; Epstein, Leonard H; Wilfley, Denise E

    2016-10-01

    Food fussiness (FF), or the frequent rejection of both familiar and unfamiliar foods, is common among children and, given its link to poor diet quality, may contribute to the onset and/or maintenance of childhood obesity. This study examined child FF in association with anthropometric variables and diet in children with overweight/obesity participating in family-based behavioral weight loss treatment (FBT). Change in FF was assessed in relation to FBT outcome, including whether change in diet quality mediated the relation between change in FF and change in child weight. Child (N = 170; age = 9.41 ± 1.23) height and weight were measured, and parents completed FF questionnaires and three 24-h recalls of child diet at baseline and post-treatment. Healthy Eating Index-2005 scores were calculated. At baseline, child FF was related to lower vegetable intake. Average child FF decreased from start to end of FBT. Greater decreases in FF were associated with greater reductions in child body mass index and improved overall diet quality. Overall, diet quality change through FBT mediated the relation between child FF change and child body mass index change. Children with high FF can benefit from FBT, and addressing FF may be important in childhood obesity treatment to maximize weight outcomes. © 2016 The Obesity Society.

  5. Technical concept for a greater-confinement-disposal test facility

    International Nuclear Information System (INIS)

    Hunter, P.H.

    1982-01-01

    Greater confinement disposal (GCD) has been defined by the National Low-Level Waste Program as the disposal of low-level waste in such a manner as to provide greater containment of radiation, reduce the potential for migration or dispersion of radionuclides, and provide greater protection from inadvertent human and biological intrusions in order to protect the public health and safety. This paper discusses: the need for GCD; the definition of GCD; advantages and disadvantages of GCD; relative dose impacts of GCD versus shallow land disposal; types of waste compatible with GCD; objectives of the GCD borehole demonstration test; engineering and technical issues; and factors affecting performance of the greater confinement disposal facility.

  6. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  7. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  8. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  9. Assessing Human Impacts on the Greater Akaki River, Ethiopia ...

    African Journals Online (AJOL)

    We assessed the impacts of human activities on the Greater Akaki River using physicochemical parameters and macroinvertebrate metrics. Physicochemical samples and macroinvertebrates were collected bimonthly from eight sites established on the Greater Akaki River from February 2006 to April 2006. Eleven metrics ...

  10. Comparative Education in Greater China: Contexts, Characteristics, Contrasts and Contributions.

    Science.gov (United States)

    Bray, Mark; Qin, Gui

    2001-01-01

    The evolution of comparative education in Greater China (mainland China, Taiwan, Hong Kong, and Macau) has been influenced by size, culture, political ideologies, standard of living, and colonialism. Similarities and differences in conceptions of comparative education are identified among the four components and between Greater China and other…

  11. Greater temperature sensitivity of plant phenology at colder sites

    DEFF Research Database (Denmark)

    Prevey, Janet; Vellend, Mark; Ruger, Nadja

    2017-01-01

    Warmer temperatures are accelerating the phenology of organisms around the world. Temperature sensitivity of phenology might be greater in colder, higher latitude sites than in warmer regions, in part because small changes in temperature constitute greater relative changes in thermal balance...

  12. Breeding of Greater and Lesser Flamingos at Sua Pan, Botswana ...

    African Journals Online (AJOL)

    to fledging was unknown owing to the rapid drying of the pan in late March 1999. No Greater Flamingo breeding was seen that season. Exceptional flooding during 1999–2000 produced highly favourable breeding conditions, with numbers of Greater and Lesser Flamingos breeding estimated to be 23 869 and 64 287 pairs, ...

  13. Surgical anatomy of greater occipital nerve and its relation to ...

    African Journals Online (AJOL)

    Introduction: The knowledge of the anatomy of greater occipital nerve and its relation to occipital artery is important for the surgeon. Blockage or surgical release of greater occipital nerve is clinically effective in reducing or eliminating chronic migraine symptoms. Aim: The aim of this research was to study the anatomy of ...

  14. Surgical anatomy of greater occipital nerve and its relation to ...

    African Journals Online (AJOL)

    Nancy Mohamed El Sekily

    2014-08-19

    Aug 19, 2014 ... Abstract Introduction: The knowledge of the anatomy of greater occipital nerve and its relation to occipital artery is important for the surgeon. Blockage or surgical release of greater occipital nerve is clinically effective in reducing or eliminating chronic migraine symptoms. Aim: The aim of this research was to ...

  15. INDUSTRIAL LAND DEVELOPMENT AND MANUFACTURING DECONCENTRATION IN GREATER JAKARTA

    NARCIS (Netherlands)

    Hudalah, Delik; Viantari, Dimitra; Firman, Tommy; Woltjer, Johan

    2013-01-01

    Industrial land development has become a key feature of urbanization in Greater Jakarta, one of the largest metropolitan areas in Southeast Asia. Following Suharto's market-oriented policy measures in the late 1980s, private developers have dominated the land development projects in Greater Jakarta.

  16. Strategies for Talent Management: Greater Philadelphia Companies in Action

    Science.gov (United States)

    Council for Adult and Experiential Learning (NJ1), 2008

    2008-01-01

    Human capital is one of the critical issues that impacts the Greater Philadelphia region's ability to grow and prosper. The CEO Council for Growth (CEO Council) is committed to ensuring a steady and talented supply of quality workers for this region. "Strategies for Talent Management: Greater Philadelphia Companies in Action" provides…

  17. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  18. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.)

  19. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  20. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  1. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  2. Inter-comparison of interpolated background nitrogen dioxide concentrations across Greater Manchester, UK

    Science.gov (United States)

    Lindley, S. J.; Walsh, T.

    There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with its distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at unsampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, could be recognised in any further usage of the data and also in the assessment of the extent of an exceedance of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques, and the differences in their characteristics result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper provides an examination of uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and annual average concentrations in 2001. Patterns of concentrations demonstrate considerable differences in the estimated spatial pattern of maxima, reflecting the combined effects of chemical processes, topography and meteorology. In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced, but these uncertainties vary widely from area to area.
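
    As one concrete example of the interpolation techniques being compared, here is a hedged sketch of inverse distance weighting (IDW); the monitoring-site coordinates, NO2 values and power parameter are invented. Varying the power parameter (or switching to kriging or splines) changes the estimated surface, which is exactly the kind of uncertainty the paper examines.

        import numpy as np

        # Inverse-distance-weighted (IDW) interpolation, one of the standard
        # point-to-surface techniques available in GIS packages.
        def idw(xy_sites, values, xy_targets, power=2.0):
            d = np.linalg.norm(xy_targets[:, None, :] - xy_sites[None, :, :], axis=2)
            d = np.maximum(d, 1e-12)             # avoid division by zero at sites
            w = 1.0 / d ** power
            return (w * values).sum(axis=1) / w.sum(axis=1)

        sites = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        no2 = np.array([38.0, 25.0, 31.0, 22.0])   # annual mean NO2, ug/m3 (invented)
        grid = np.array([[5.0, 5.0], [1.0, 9.0]])  # unsampled points to estimate
        print(idw(sites, no2, grid))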

  3. Predictability of Seasonal Rainfall over the Greater Horn of Africa

    Science.gov (United States)

    Ngaina, J. N.

    2016-12-01

    The El Nino-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, which included the Coefficient of determination (R2), Akaike's Information Criterion (AIC), Bayesian Information Criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified based on composite analysis, correlations and contingency tables. A test for field-significance considering the properties of finiteness and interdependence of the spatial grid was applied to avoid correlations by chance. The study identified FIA as the optimal model selection criterion; the complex model selection criteria (FIA followed by BIC) performed better than the simpler approaches (R2 and AIC). Notably, operational seasonal rainfall predictions over the GHA make use of simple model selection procedures, e.g. R2. Rainfall is modestly predictable based on ENSO during OND and MAM seasons. El Nino typically leads to wetter conditions during OND and drier conditions during MAM. The correlations of ENSO indices with rainfall are statistically significant for OND and MAM seasons. Analysis based on contingency tables shows higher predictability of OND rainfall, with the use of ENSO indices derived from the Pacific and Indian Ocean sea surfaces showing significant improvement during the OND season. The predictability based on ENSO for OND rainfall is robust on a decadal scale compared to MAM. An ENSO-based scheme based on an optimal model selection criterion can thus provide skillful rainfall predictions over GHA. This study concludes that the
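
    The simpler selection criteria mentioned above can be computed in a few lines for a Gaussian linear model. This sketch uses synthetic predictors and rainfall; FIA is omitted because it requires more of the model family's geometry than fits here.

        import numpy as np

        # Compare candidate regression models with R^2, AIC and BIC under a
        # Gaussian likelihood. Predictors and rainfall are synthetic.
        rng = np.random.default_rng(3)
        n = 60
        enso = rng.normal(size=n)                    # e.g., a Nino-3.4-like index
        iod = rng.normal(size=n)                     # e.g., an Indian Ocean index
        rain = 1.5 * enso + rng.normal(scale=1.0, size=n)

        def gaussian_ic(y, X):
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            k = X.shape[1] + 1                       # coefficients + variance
            sigma2 = np.mean(resid ** 2)             # MLE of the error variance
            loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            aic = 2 * k - 2 * loglik
            bic = k * np.log(n) - 2 * loglik
            r2 = 1 - resid.var() / y.var()
            return r2, aic, bic

        ones = np.ones((n, 1))
        for name, X in [("ENSO", np.column_stack([ones, enso])),
                        ("ENSO+IOD", np.column_stack([ones, enso, iod]))]:
            print(name, "R2/AIC/BIC = %.3f / %.1f / %.1f" % gaussian_ic(rain, X))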

  4. Deviance and resistance: Malaria elimination in the greater Mekong subregion.

    Science.gov (United States)

    Lyttleton, Chris

    2016-02-01

    Malaria elimination rather than control is increasingly globally endorsed, requiring new approaches wherein success is not measured by timely treatment of presenting cases but by eradicating all presence of infection. This shift has gained urgency as resistance to artemisinin-combination therapies spreads in the Greater Mekong Sub-region (GMS), posing a threat to global health security. In the GMS, endemic malaria persists in forested border areas and elimination will require calibrated approaches to remove remaining pockets of residual infection. A new public health strategy called 'positive deviance' is being used to improve health promotion and community outreach in some of these zones. However, outbreaks sparked by alternative understandings of appropriate behaviour expose the unpredictable nature of 'border malaria' and the difficulties eradication faces. Using a recent spike in infections allegedly linked to the luxury timber trade in Thai borderlands, this article suggests that opportunities for market engagement can cause people to see 'deviance' as a means to material advancement in ways that increase disease vulnerability. A malaria outbreak in Ubon Ratchathani was investigated during a two-week field visit in November 2014 as part of a longer project researching border malaria in Thai provinces. Qualitative data were collected in four villages in Ubon's three most-affected districts. Discussions with villagers focused primarily on changing livelihoods, experience with malaria, and rosewood cutting. Informants included ten men and two women who had recently overnighted in the nearby forest. Data from health officials and villagers are used to frame Ubon's rise in malaria transmission within moral and behavioural responses to expanding commodity supply-chains. The article argues that elimination strategies in the GMS must contend with volatile outbreaks among border populations wherein 'infectiousness' and 'resistance' are not simply pathogen characteristics but also

  5. Effect of temporal averaging of meteorological data on predictions of groundwater recharge

    Directory of Open Access Journals (Sweden)

    Batalha Marcia S.

    2018-06-01

    Full Text Available Accurate estimates of infiltration and groundwater recharge are critical for many hydrologic, agricultural and environmental applications. Anticipated climate change in many regions of the world, especially in tropical areas, is expected to increase the frequency of high-intensity, short-duration precipitation events, which in turn will affect the groundwater recharge rate. Estimates of recharge are often obtained using monthly or even annually averaged meteorological time series data. In this study we employed the HYDRUS-1D software package to assess the sensitivity of groundwater recharge calculations to using meteorological time series of different temporal resolutions (i.e., hourly, daily, weekly, monthly and yearly averaged precipitation and potential evaporation rates). Calculations were applied to three sites in Brazil having different climatological conditions: a tropical savanna (the Cerrado), a humid subtropical area (the temperate southern part of Brazil), and a very wet tropical area (Amazonia). To simplify our current analysis, we did not consider any land use effects by ignoring root water uptake. Temporal averaging of meteorological data was found to lead to significant bias in predictions of groundwater recharge, with much greater estimated recharge rates in case of very uneven temporal rainfall distributions during the year involving distinct wet and dry seasons. For example, at the Cerrado site, using daily averaged data produced recharge rates of up to 9 times greater than using yearly averaged data. In all cases, an increase in the time of averaging of meteorological data led to lower estimates of groundwater recharge, especially at sites having coarse-textured soils. Our results show that temporal averaging limits the ability of simulations to predict deep penetration of moisture in response to precipitation, so that water remains in the upper part of the vadose zone subject to upward flow and evaporation.
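
    The effect of temporal averaging is easy to reproduce on synthetic data. In this hedged sketch (invented rainfall statistics, no HYDRUS-1D involved), resampling an intermittent hourly rainfall series to daily and monthly means preserves the total depth but flattens the short, intense bursts that drive deep infiltration.

        import numpy as np
        import pandas as pd

        # A spiky synthetic hourly rainfall series, resampled to coarser means.
        rng = np.random.default_rng(4)
        idx = pd.date_range("2016-01-01", periods=24 * 365, freq="h")
        rain = np.where(rng.random(idx.size) < 0.02,     # 2% of hours are wet
                        rng.gamma(2.0, 8.0, idx.size),   # intense bursts (mm/h)
                        0.0)
        hourly = pd.Series(rain, index=idx)

        daily = hourly.resample("D").mean()              # daily mean rate
        monthly = hourly.resample("MS").mean()           # monthly mean rate
        print("max hourly rate :", round(hourly.max(), 1))
        print("max daily rate  :", round(daily.max(), 1))
        print("max monthly rate:", round(monthly.max(), 1))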

  6. Fractures of the greater trochanter following total hip replacement.

    Science.gov (United States)

    Brun, Ole-Christian L; Maansson, Lukas

    2013-01-01

    We studied the incidence of greater trochanteric fractures at our department following THR. In all, we examined 911 patients retrospectively and found the occurrence of a greater trochanteric fracture to be 3%. Patients with fractures had significantly poorer outcomes on the Oxford Hip Score, Pain VAS, Satisfaction VAS and EQ-5D compared to THR without fractures. Greater trochanteric fracture is one of the most common complications following THR. It has previously been thought to have little impact on the overall outcome, but our study suggests otherwise.

  7. Perceptual learning in Williams syndrome: looking beyond averages.

    Directory of Open Access Journals (Sweden)

    Patricia Gervan

    Full Text Available Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.

  8. Potential of high-average-power solid state lasers

    International Nuclear Information System (INIS)

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-01-01

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels

  9. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
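
    The diagonal-averaging step can be sketched directly in NumPy. This is an illustrative reconstruction, not the authors' implementation: it averages each subdiagonal of the sample covariance to enforce the Toeplitz structure expected for a spatially stationary uniform line array, and omits the maximum-entropy extrapolation and subspace beamforming stages.

        import numpy as np

        # Average the sample covariance along its subdiagonals to obtain a
        # Toeplitz (and Hermitian) constrained estimate.
        def toeplitz_average(R):
            n = R.shape[0]
            out = np.zeros_like(R)
            for k in range(n):                   # k-th superdiagonal (lag k)
                m = np.mean(np.diagonal(R, offset=k))
                out += m * np.eye(n, k=k)
                if k > 0:                        # Hermitian symmetry for lag -k
                    out += np.conj(m) * np.eye(n, k=-k)
            return out

        rng = np.random.default_rng(5)
        snapshots = rng.standard_normal((16, 8))  # 16-element array, 8 snapshots
        R_sample = snapshots @ snapshots.conj().T / snapshots.shape[1]
        R_toep = toeplitz_average(R_sample)
        print(np.round(np.diagonal(R_toep), 2))   # constant main diagonal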

  10. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
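
    The two standard hourly-value types compared in the paper are easy to emulate. In this hedged sketch the 1-min "continuous" variation is synthetic (a slow daily curve plus fast noise): spot samples preserve the amplitude range but alias the fast variation, while 1-h boxcar averages suppress it at the cost of some amplitude distortion.

        import numpy as np

        # One synthetic day of 1-min data as a proxy for continuous variation.
        rng = np.random.default_rng(6)
        minutes = np.arange(24 * 60)
        field = 20 * np.sin(2 * np.pi * minutes / (24 * 60)) \
                + rng.normal(scale=3.0, size=minutes.size)

        one_min = field.reshape(24, 60)          # 24 hours x 60 one-minute values
        spot = one_min[:, 0]                     # instantaneous value on the hour
        boxcar = one_min.mean(axis=1)            # simple 1-h average

        # Spot values keep the full amplitude range but alias fast variation;
        # boxcar averages smooth it out at the cost of amplitude distortion.
        print("spot   std:", spot.std().round(2))
        print("boxcar std:", boxcar.std().round(2))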

  11. System for evaluation of the true average input-pulse rate

    International Nuclear Information System (INIS)

    Eichenlaub, D.P.; Garrett, P.

    1977-01-01

    A digital radiation monitoring system is described that uses current digital circuits and a microprocessor to rapidly process pulse data coming from remote radiation controllers. The system analyses the pulse rates to determine whether a new datum is statistically the same as those previously received, and hence determines the best possible averaging time for itself. As long as the true average pulse rate stays constant, the time over which the average is established can increase until the statistical error is under the desired level, i.e. 1%. When the digital processing of the pulse data indicates a change in the true average pulse rate, the averaging time can be reduced so as to improve the response time of the system at the desired statistical error.
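
    The adaptive-averaging logic can be sketched as follows; the dwell time, restart threshold and rates are invented, and only the Poisson counting statistics (relative error 1/sqrt(N)) come from first principles.

        import numpy as np

        # Extend the averaging window while the rate is statistically constant
        # (relative error shrinks toward the 1% target) and restart the average
        # when a new reading is inconsistent with the running mean.
        rng = np.random.default_rng(7)

        def monitor(rates_cps, dwell_s=1.0, target_rel_err=0.01, n_sigma=4.0):
            total, t = 0.0, 0.0
            for r in rates_cps:
                counts = rng.poisson(r * dwell_s)        # pulses in this interval
                if t > 0:
                    expected = (total / t) * dwell_s     # running-mean prediction
                    if abs(counts - expected) > n_sigma * np.sqrt(max(expected, 1.0)):
                        total, t = 0.0, 0.0              # rate changed: restart
                total += counts
                t += dwell_s
                if total > 0 and 1.0 / np.sqrt(total) <= target_rel_err:
                    print(f"rate = {total / t:7.1f} cps after {t:5.0f} s")
                    total, t = 0.0, 0.0                  # report, start new average

        # Constant 200 cps for 100 s, then a step change to 1000 cps.
        monitor(np.concatenate([np.full(100, 200.0), np.full(100, 1000.0)]))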

  12. Prey selection by a reintroduced lion population in the Greater ...

    African Journals Online (AJOL)

    Prey selection by a reintroduced lion population in the Greater Makalali Conservancy, South Africa. Dave Druce, Heleen Genis, Jonathan Braak, Sophie Greatwood, Audrey Delsink, Ross Kettles, Luke Hunter, Rob Slotow ...

  13. LiveDiverse: Case study area, Greater Kruger South Africa

    CSIR Research Space (South Africa)

    Nortje, Karen

    2011-01-01

    Full Text Available [Presentation slides] Livelihoods and Biodiversity in Developing Countries. Case study area: Greater Kruger, South Africa, January 2011. Themes covered: hardship, livelihoods, nature and biodiversity, beliefs and cultural practice.

  14. Exploration of the Energy Efficiency of the Greater London Authority ...

    African Journals Online (AJOL)

    GLA Building/City Hall. The Greater London Authority building was acclaimed as being energy efficient, with claims of a 75% reduction in its annual energy consumption compared to a high specification ...

  15. Molecular insights into the biology of Greater Sage-Grouse

    Science.gov (United States)

    Oyler-McCance, Sara J.; Quinn, Thomas W.

    2011-01-01

    Recent research on Greater Sage-Grouse (Centrocercus urophasianus) genetics has revealed some important findings. First, multiple paternity in broods is more prevalent than previously thought, and leks do not comprise kin groups. Second, the Greater Sage-Grouse is genetically distinct from the congeneric Gunnison sage-grouse (C. minimus). Third, the Lyon-Mono population in the Mono Basin, spanning the border between Nevada and California, has unique genetic characteristics. Fourth, the previous delineation of western (C. u. phaios) and eastern Greater Sage-Grouse (C. u. urophasianus) is not supported genetically. Fifth, two isolated populations in Washington show indications that genetic diversity has been lost due to population declines and isolation. This chapter examines the use of molecular genetics to understand the biology of Greater Sage-Grouse for the conservation and management of this species and put it into the context of avian ecology based on selected molecular studies.

  16. Greater saphenous vein anomaly and aneurysm with subsequent pulmonary embolism

    OpenAIRE

    Ma, Truong; Kornbau, Craig

    2017-01-01

    Abstract Venous aneurysms often present as painful masses. They can present either in the deep or superficial venous system. Deep venous system aneurysms have a greater risk of thromboembolism. Though rare, there have been case reports of superficial aneurysms and thrombus causing significant morbidity such as pulmonary embolism. We present a case of an anomalous greater saphenous vein connection with an aneurysm and thrombus resulting in a pulmonary embolism. This is the only reported case o...

  17. GREATER OMENTUM: MORPHOFUNCTIONAL CHARACTERISTICS AND CLINICAL SIGNIFICANCE IN PEDIATRICS

    Directory of Open Access Journals (Sweden)

    A.V. Nekrutov

    2007-01-01

    Full Text Available The review analyzes the structural organization and pathophysiological age specificities of the greater omentum, which determine its uniqueness and functional diversity in a child's organism. The article discusses the protective functions of the organ, its role in the development of postoperative complications in children, and its usage in children's reconstructive plastic surgery. Key words: greater omentum, omentitis, postoperative complications, children.

  18. The Effects of Average Revenue Regulation on Electricity Transmission Investment and Pricing

    OpenAIRE

    Isamu Matsukawa

    2005-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occur...

  19. Sonography of greater trochanteric pain syndrome and the rarity of primary bursitis.

    Science.gov (United States)

    Long, Suzanne S; Surrey, David E; Nazarian, Levon N

    2013-11-01

    Greater trochanteric pain syndrome is a common condition with clinical features of pain and tenderness at the lateral aspect of the hip. Diagnosing the origin of greater trochanteric pain is important because the treatment varies depending on the cause. We hypothesized that sonographic evaluation of sources for greater trochanteric pain syndrome would show that bursitis was not the most commonly encountered abnormality. We performed a retrospective review of musculoskeletal sonographic examinations performed at our institution over a 6-year period for greater trochanteric pain syndrome; completed a tabulation of the sonographic findings; and assessed the prevalence of trochanteric bursitis, gluteal tendon abnormalities, iliotibial band abnormalities, or a combination of findings. Prevalence of abnormal findings, associations of bursitis, gluteal tendinosis, gluteal tendon tears, and iliotibial band abnormalities were calculated. The final study population consisted of 877 unique patients (602 women, 275 men; average age, 54 years; age range, 15-87 years). Of the 877 patients with greater trochanteric pain, 700 (79.8%) did not have bursitis on ultrasound. A minority of patients (177, 20.2%) had trochanteric bursitis. Of the 877 patients with greater trochanteric pain, 438 (49.9%) had gluteal tendinosis, four (0.5%) had gluteal tendon tears, and 250 (28.5%) had a thickened iliotibial band. The cause of greater trochanteric pain syndrome is usually some combination of pathology involving the gluteus medius and gluteus minimus tendons as well as the iliotibial band. Bursitis is present in only the minority of patients. These findings have implications for treatment of this common condition.

  20. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control ... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.

  1. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ... Global Journal of Mathematics and Statistics, Vol. 1. ... Business and Economic Research Center.

  2. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.

  3. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
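
    Time synchronous averaging itself reduces to a reshape-and-mean once the signal is resampled to a fixed number of samples per revolution; the interpolation step that the report investigates is what produces that resampling. Here is a hedged sketch with an exactly periodic synthetic gear-mesh signal, so no interpolation is needed; all signal parameters are invented.

        import numpy as np

        # Time synchronous averaging (TSA): average one-revolution segments so
        # shaft-synchronous gear-mesh components survive while asynchronous
        # noise cancels as 1/sqrt(n_revs).
        rng = np.random.default_rng(8)
        samples_per_rev, n_revs = 256, 200
        phase = 2 * np.pi * np.arange(samples_per_rev) / samples_per_rev
        gear_mesh = np.sin(32 * phase)                   # 32-tooth mesh tone

        signal = np.tile(gear_mesh, n_revs) \
                 + rng.normal(scale=2.0, size=samples_per_rev * n_revs)
        tsa = signal.reshape(n_revs, samples_per_rev).mean(axis=0)

        print("raw noise floor :", np.std(signal - np.tile(gear_mesh, n_revs)).round(2))
        print("TSA noise floor :", np.std(tsa - gear_mesh).round(2))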

  4. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  5. DOD Financial Management: Greater Visibility Needed to Better Assess Audit Readiness for Property, Plant, and Equipment

    Science.gov (United States)

    2016-05-01

    with U.S. generally accepted accounting principles and establish and maintain effective internal control over financial reporting and compliance with... [Report front matter: Government Accountability Office, Highlights of GAO-16-383, a report to congressional committees, May 2016, "DOD Financial Management: Greater Visibility..."; abbreviations: FIAR, Financial Improvement and Audit Readiness; IUS, internal-use software; NDAA, National Defense Authorization Act.]

  6. The outcome of endoscopy for recalcitrant greater trochanteric pain syndrome.

    Science.gov (United States)

    Drummond, James; Fary, Camdon; Tran, Phong

    2016-11-01

    Greater trochanteric pain syndrome (GTPS), previously referred to as trochanteric bursitis, is a debilitating condition characterised by chronic lateral hip pain. The syndrome is thought to relate to gluteal tendinopathy, with most cases responding to non-operative treatment. A number of open and endoscopic surgical techniques targeting the iliotibial band, trochanteric bursa and gluteal tendons have, however, been described for severe recalcitrant cases. We report the outcomes of one such endoscopic approach here. We retrospectively reviewed 49 patients (57 operations) who had undergone endoscopic longitudinal vertical iliotibial band release and trochanteric bursectomy. Inclusion criteria included diagnosed GTPS with a minimum of six months of non-operative treatment. Exclusion criteria included concomitant intra- or extra-articular hip pathology and previous hip surgery including total hip arthroplasty. Outcomes were assessed using the Visual Analogue Scale, Oxford Hip Score and International Hip Outcome Tool (iHOT-33). The series included 42 females and 7 males with a mean age of 65.0 years (26.7-88.6). Mean follow-up time was 20.7 months (5.3-41.2). Eight patients had full thickness gluteal tendon tears, of which 7 were repaired. Adjuvant PRP was injected intraoperatively in 38 of 57 operations (67.2 %). At follow-up, overall mean Visual Analogue Scale values had decreased from 7.8 to 2.8 (p < 0.001), Oxford Hip Scores had increased from 20.4 to 37.3 (p < 0.001) and iHOT-33 scores had increased from 23.8 to 70.2 (p < 0.001). Of the 57 operations performed, patients reported feeling very satisfied with the surgical outcome in 28 operations (49.1 %), satisfied in 17 operations (29.8 %) and less than satisfied in 12 operations (21.1 %). While the majority of patients with GTPS will improve with non-operative management, endoscopic iliotibial band release, trochanteric bursectomy and gluteal tendon repair is a safe and effective treatment for severe

  7. Malaria in the Greater Mekong Subregion: Heterogeneity and Complexity

    Science.gov (United States)

    Cui, Liwang; Yan, Guiyun; Sattabongkot, Jetsumon; Cao, Yaming; Chen, Bin; Chen, Xiaoguang; Fan, Qi; Fang, Qiang; Jongwutiwes, Somchai; Parker, Daniel; Sirichaisinthop, Jeeraphat; Kyaw, Myat Phone; Su, Xin-zhuan; Yang, Henglin; Yang, Zhaoqing; Wang, Baomin; Xu, Jianwei; Zheng, Bin; Zhong, Daibin; Zhou, Guofa

    2011-01-01

    The Greater Mekong Subregion (GMS), comprising the six countries Cambodia, China's Yunnan Province, Lao PDR, Myanmar (Burma), Thailand and Vietnam, is one of the most threatening foci of malaria. Since the initiation of the WHO's Mekong Malaria Program a decade ago, the malaria situation in the GMS has greatly improved, as reflected in the continuous decline in annual malaria incidence and deaths. However, as many nations are moving towards malaria elimination, the GMS nations still face great challenges. Malaria epidemiology in this region exhibits enormous geographical heterogeneity, with Myanmar and Cambodia remaining high-burden countries. Within each country, malaria distribution is also patchy, exemplified by ‘border malaria’ and ‘forest malaria’ with high transmission occurring along international borders and in forests or forest fringes, respectively. ‘Border malaria’ is extremely difficult to monitor, and frequent malaria introductions by migratory human populations constitute a major threat to neighboring, malaria-eliminating countries. Therefore, coordination between neighboring countries is essential for malaria elimination from the entire region. In addition to these operational difficulties, malaria control in the GMS also encounters several technological challenges. Contemporary malaria control measures rely heavily on effective chemotherapy and insecticide control of vector mosquitoes. However, the spread of multidrug resistance and potential emergence of artemisinin resistance in Plasmodium falciparum make resistance management a high priority in the GMS. This situation is further worsened by the circulation of counterfeit and substandard artemisinin-related drugs. In most endemic areas of the GMS, P. falciparum and P. vivax coexist, and in recent malaria control history, P. vivax has demonstrated remarkable resilience to control measures. Deployment of the only registered drug (primaquine) for the radical cure of vivax malaria is

  8. High-resolution quantification of atmospheric CO2 mixing ratios in the Greater Toronto Area, Canada

    Science.gov (United States)

    Pugliese, Stephanie C.; Murphy, Jennifer G.; Vogel, Felix R.; Moran, Michael D.; Zhang, Junhua; Zheng, Qiong; Stroud, Craig A.; Ren, Shuzhan; Worthy, Douglas; Broquet, Gregoire

    2018-03-01

    Many stakeholders are seeking methods to reduce carbon dioxide (CO2) emissions in urban areas, but reliable, high-resolution inventories are required to guide these efforts. We present the development of a high-resolution CO2 inventory available for the Greater Toronto Area and surrounding region in Southern Ontario, Canada (area of ∼2.8 × 10⁵ km², 26 % of the province of Ontario). The new SOCE (Southern Ontario CO2 Emissions) inventory is available at the 2.5 × 2.5 km spatial and hourly temporal resolution and characterizes emissions from seven sectors: area, residential natural-gas combustion, commercial natural-gas combustion, point, marine, on-road, and off-road. To assess the accuracy of the SOCE inventory, we developed an observation-model framework using the GEM-MACH chemistry-transport model run on a high-resolution grid with 2.5 km grid spacing coupled to the Fossil Fuel Data Assimilation System (FFDAS) v2 inventories for anthropogenic CO2 emissions and the European Centre for Medium-Range Weather Forecasts (ECMWF) land carbon model C-TESSEL for biogenic fluxes. A run using FFDAS for the Southern Ontario region was compared to a run in which its emissions were replaced by the SOCE inventory. Simulated CO2 mixing ratios were compared against in situ measurements made at four sites in Southern Ontario - Downsview, Hanlan's Point, Egbert and Turkey Point - in 3 winter months, January-March 2016. Model simulations had better agreement with measurements when using the SOCE inventory emissions versus other inventories, quantified using a variety of statistics such as correlation coefficient, root-mean-square error, and mean bias. Furthermore, when run with the SOCE inventory, the model had improved ability to capture the typical diurnal pattern of CO2 mixing ratios, particularly at the Downsview, Hanlan's Point, and Egbert sites. In addition to improved model-measurement agreement, the SOCE inventory offers a sectoral breakdown of emissions

  9. High-resolution quantification of atmospheric CO2 mixing ratios in the Greater Toronto Area, Canada

    Directory of Open Access Journals (Sweden)

    S. C. Pugliese

    2018-03-01

    Full Text Available Many stakeholders are seeking methods to reduce carbon dioxide (CO2) emissions in urban areas, but reliable, high-resolution inventories are required to guide these efforts. We present the development of a high-resolution CO2 inventory available for the Greater Toronto Area and surrounding region in Southern Ontario, Canada (area of ∼2.8 × 10⁵ km², 26 % of the province of Ontario). The new SOCE (Southern Ontario CO2 Emissions) inventory is available at the 2.5 × 2.5 km spatial and hourly temporal resolution and characterizes emissions from seven sectors: area, residential natural-gas combustion, commercial natural-gas combustion, point, marine, on-road, and off-road. To assess the accuracy of the SOCE inventory, we developed an observation–model framework using the GEM-MACH chemistry–transport model run on a high-resolution grid with 2.5 km grid spacing coupled to the Fossil Fuel Data Assimilation System (FFDAS) v2 inventories for anthropogenic CO2 emissions and the European Centre for Medium-Range Weather Forecasts (ECMWF) land carbon model C-TESSEL for biogenic fluxes. A run using FFDAS for the Southern Ontario region was compared to a run in which its emissions were replaced by the SOCE inventory. Simulated CO2 mixing ratios were compared against in situ measurements made at four sites in Southern Ontario – Downsview, Hanlan's Point, Egbert and Turkey Point – in 3 winter months, January–March 2016. Model simulations had better agreement with measurements when using the SOCE inventory emissions versus other inventories, quantified using a variety of statistics such as correlation coefficient, root-mean-square error, and mean bias. Furthermore, when run with the SOCE inventory, the model had improved ability to capture the typical diurnal pattern of CO2 mixing ratios, particularly at the Downsview, Hanlan's Point, and Egbert sites. In addition to improved model–measurement agreement, the SOCE inventory offers a

  10. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups (European and Japanese) and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face averaging techniques, there was no warping or filling in of spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
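
    The depth-averaging method reduces to a pointwise mean once the scans share a common (x, y) grid. Here is a hedged sketch with synthetic depth maps; the Gaussian "face" shape and the noise level are invented.

        import numpy as np

        # Average corresponding z (depth) coordinates across registered scans:
        # no warping or interpolation between faces, just a pointwise mean.
        rng = np.random.default_rng(9)
        n_faces, h, w = 14, 64, 64               # 14 faces was found sufficient
        base = np.fromfunction(lambda i, j: np.exp(-((i - 32) ** 2
                                                     + (j - 32) ** 2) / 400.0),
                               (h, w))
        # Each "scan": the shared facial shape plus individual variation.
        scans = base + rng.normal(scale=0.05, size=(n_faces, h, w))

        average_face = scans.mean(axis=0)        # average the depth maps
        print(average_face.shape, float(np.abs(average_face - base).mean()))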

  11. 76 FR 18476 - Improving Communications Services for Native Nations by Promoting Greater Utilization of Spectrum...

    Science.gov (United States)

    2011-04-04

    ... forms of information technology. In addition, pursuant to the Small Business Paperwork Relief Act of... eco-system for devices and equipment where spectrum has already been licensed, so that new licensees...

  12. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
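
    The note's idea can be sketched as follows (our illustration, not the authors' code): each classical average is the fitted intercept of a regression on a constant, applied to a suitable transform of the data. The numbers are illustrative.

        import numpy as np

        y = np.array([2.0, 4.0, 8.0])
        X = np.ones((y.size, 1))                       # intercept-only design

        def ols_intercept(z):
            return float(np.linalg.lstsq(X, z, rcond=None)[0][0])

        arithmetic = ols_intercept(y)                  # plain OLS on y
        geometric = np.exp(ols_intercept(np.log(y)))   # OLS on log(y), then exp
        harmonic = 1.0 / ols_intercept(1.0 / y)        # OLS on 1/y, then invert

        w = np.array([1.0, 2.0, 3.0])                  # weighted mean = WLS intercept
        weighted = float((w * y).sum() / w.sum())

        print(arithmetic, geometric, harmonic, weighted)  # 4.667, 4.0, 3.429, 5.667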

  13. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  14. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...

  15. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...

  16. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  17. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  18. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...

  19. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  20. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and afterwards the average is performed. This scheme is more economical, in terms of time and algebraic calculations, than the usual procedure of Bogolyubov's method. (Author)

  1. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees
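
    For reference, the quoted ratio is easy to evaluate (a one-line check in Python; the constant 620160 is taken from the abstract above):

```python
import math

# Minimum average depth for sorting 8 pairwise different elements (from the abstract)
print(620160 / math.factorial(8))   # 8! = 40320, so 620160/8! = 15.3809...
```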

  2. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  3. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  4. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantially...

  5. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment

    Science.gov (United States)

    Baurle, Robert A.; Edwards, Jack R.

    2010-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomenon under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure

  6. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by a straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. These asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
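
    The detrending moving average (DMA) analysis mentioned here can be sketched in a few lines of Python for the first-order case: the scaling of the DMA variance, sigma^2(n) ~ n^(2H), yields the Hurst exponent from a log-log fit. This is an illustrative reimplementation under simplifying assumptions (trailing window, Brownian test signal), not the authors' code:

```python
import numpy as np

def dma_variance(y, n):
    """First-order DMA: mean squared deviation of the series from its
    trailing simple moving average of window n."""
    ma = np.convolve(y, np.ones(n) / n, mode="valid")  # ma[i] = mean(y[i:i+n])
    resid = y[n - 1:] - ma                             # align y with window ends
    return np.mean(resid ** 2)

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(100_000))   # Brownian test signal, H = 0.5

windows = np.array([10, 20, 40, 80, 160, 320])
sigma2 = np.array([dma_variance(y, n) for n in windows])

# sigma^2(n) ~ n^(2H): the log-log slope estimates 2H
H = np.polyfit(np.log(windows), np.log(sigma2), 1)[0] / 2.0
print(f"estimated Hurst exponent: {H:.2f}")   # should be close to 0.5
```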

  7. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L 1 -norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases

  8. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...

  9. Technical concept for a Greater Confinement Disposal test facility

    International Nuclear Information System (INIS)

    Hunter, P.H.

    1982-01-01

    For the past two years, Ford, Bacon and Davis has been performing technical services for the Department of Energy at the Nevada Test Site, specifically the development of defense low-level waste management concepts for greater confinement disposal, with particular application to arid sites. The investigations have included the development of the Criteria for Greater Confinement Disposal, NVO-234, which was published in May of 1981, and the draft of the technical concept for Greater Confinement Disposal, with the latest draft published in November 1981. The final draft of the technical concept and design specifications is expected to be published imminently. The document is a prerequisite to the actual construction and implementation of the demonstration facility this fiscal year. The GCD Criteria Document, NVO-234, is considered to contain information complementary to, and compatible with, that being developed for the reserved section 10 CFR 61.51b of the NRC's proposed licensing rule for low-level waste disposal facilities

  10. Expatriate job performance in Greater China: Does age matter?

    DEFF Research Database (Denmark)

    Selmer, Jan; Lauring, Jakob; Feng, Yunxia

    to expatriates in Chinese societies. It is possible that older business expatriates will receive more respect and be treated with more deference in a Chinese cultural context than their apparently younger colleagues. This may have a positive impact on expatriates’ job performance. To empirically test this presumption, business expatriates in Greater China were targeted by a survey. Controlling for the potential bias of a number of background variables, results indicate that contextual/managerial performance, including general managerial functions applied to the subsidiary in Greater China, had a positive...

  11. Abstinence movement in Greater Poland in 1840–1902

    OpenAIRE

    Izabela Krasińska

    2013-01-01

    The article presents the origins and development of the idea of abstinence in Greater Poland in the 19th century. The start date for the research is 1840, which is considered to be a breakthrough year in the history of an organized abstinence (temperance) movement in Greater Poland. It was due to the Association for the Suppression of the Use of Vodka (Towarzystwo ku Przytłumieniu Używania Wódki) in the Grand Duchy of Posen that was then established in Kórnik. It was a secular organization that came int...

  12. Affordability Assessment to Implement Light Rail Transit (LRT for Greater Yogyakarta

    Directory of Open Access Journals (Sweden)

    Anjang Nugroho

    2015-06-01

    Full Text Available The high population density and the increasing number of visitors in Yogyakarta aggravate the traffic congestion problem. The BRT (Bus Rapid Transit) service, Trans Jogja, has not yet managed to solve this problem. Introducing Light Rail Transit (LRT) has been considered as one of the solutions to restrain congestion in Greater Yogyakarta. As a first indication of whether the LRT can be built in Greater Yogyakarta, the transportation affordability index was used to understand whether the LRT tariff would be affordable. That tariff was calculated based on government policy for determining railway tariffs. The potential passenger demand and the LRT route were analyzed as preliminary steps to obtain the LRT tariff. Potential passengers were forecasted with a gravity model, and the proposed LRT route was chosen using Multi Criteria Decision Analysis (MCDA). The existing transportation affordability index was calculated for comparison, using the percentage of monthly household income spent on transportation. The results showed that the LRT for Greater Yogyakarta would be the most affordable transport mode compared to the Trans Jogja bus and the motorcycle. The affordability index of Tram Jogja for people with an average income was 10.66%, while for people with a bottom-quartile income it was 13.56%. Keywords: Greater Yogyakarta, LRT, affordability.

  13. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
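
    A hedged Python sketch of the entropy lower bound described above (the function name and example distribution are hypothetical):

```python
import math

def entropy_lower_bound(probs, k=2):
    """Lower bound on the minimum average decision-tree depth: the entropy of
    the outcome distribution, divided by log2(k) for a k-valued information
    system (for binary attributes, k = 2, the bound is the entropy itself)."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(k)

# Hypothetical diagnostic problem: 8 equally likely outcomes, binary attributes
print(entropy_lower_bound([1 / 8] * 8))   # 3.0 -> at least 3 tests on average
```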

  14. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are intended not to derive a set of specific criteria but to demonstrate the need to discriminate among the various processes in studies of plume dispersion
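
    The Turner power-law formula referred to above is commonly written C(t) = C(t_ref) * (t_ref/t)^p for averaging time t, with an empirical exponent p often quoted around 0.17-0.2. A minimal sketch, with the exponent treated as an assumed value:

```python
def rescale_concentration(c_ref, t_ref, t, p=0.17):
    """Turner-style power law: rescale a concentration averaged over t_ref
    minutes to an averaging time of t minutes; p is an empirical exponent
    (assumed value, typically quoted around 0.17-0.2)."""
    return c_ref * (t_ref / t) ** p

# Hypothetical 15-min average of 100 ug/m3 rescaled to a 1-h averaging time
print(rescale_concentration(100.0, 15.0, 60.0))   # ~79 ug/m3
```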

  15. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...

  16. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  17. Similarity-based distortion of visual short-term memory is due to perceptual averaging.

    Science.gov (United States)

    Dubé, Chad; Zhou, Feng; Kahana, Michael J; Sekuler, Robert

    2014-03-01

    A task-irrelevant stimulus can distort recall from visual short-term memory (VSTM). Specifically, reproduction of a task-relevant memory item is biased in the direction of the irrelevant memory item (Huang & Sekuler, 2010a). The present study addresses the hypothesis that such effects reflect the influence of neural averaging under conditions of uncertainty about the contents of VSTM (Alvarez, 2011; Ball & Sekuler, 1980). We manipulated subjects' attention to relevant and irrelevant study items whose similarity relationships were held constant, while varying how similar the study items were to a subsequent recognition probe. On each trial, subjects were shown one or two Gabor patches, followed by the probe; their task was to indicate whether the probe matched one of the study items. A brief cue told subjects which Gabor, first or second, would serve as that trial's target item. Critically, this cue appeared either before, between, or after the study items. A distributional analysis of the resulting mnemometric functions showed an inflation in probability density in the region spanning the spatial frequency of the average of the two memory items. This effect, due to an elevation in false alarms to probes matching the perceptual average, was diminished when cues were presented before both study items. These results suggest that (a) perceptual averages are computed obligatorily and (b) perceptual averages are relied upon to a greater extent when item representations are weakened. Implications of these results for theories of VSTM are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Adjustment of Business Expatriates in Greater China: A Strategic Perspective

    DEFF Research Database (Denmark)

    Selmer, Jan

    2006-01-01

    Research has found that due to similarities, firms which have gained business experience elsewhere in Greater China may exhibit relatively better performance in mainland China. Hence, the experience of business expatriates could be of strategic importance for the expansion path of their firms...

  19. College Students with ADHD at Greater Risk for Sleep Disorders

    Science.gov (United States)

    Gaultney, Jane F.

    2014-01-01

    The pediatric literature indicates that children with ADHD are at greater risk for sleep problems, daytime sleepiness, and some sleep disorders than children with no diagnosed disability. It has not been determined whether this pattern holds true among emerging adults, and whether comorbid sleep disorders with ADHD predict GPA. The present study…

  20. Ecology of greater sage-grouse in the Dakotas

    Science.gov (United States)

    Christopher C. Swanson

    2009-01-01

    Greater sage-grouse (Centrocercus urophasianus) populations and the sagebrush (Artemisia spp.) communities that they rely on have dramatically declined from historic levels. Moreover, information regarding sage-grouse annual life-history requirements at the eastern-most extension of sagebrush steppe communities is lacking....

  1. Job-Sharing at the Greater Victoria Public Library.

    Science.gov (United States)

    Miller, Don

    1978-01-01

    Describes the problems associated with the management of part-time library employees and some solutions afforded by a job sharing arrangement in use at the Greater Victoria Public Library. This is a voluntary work arrangement, changing formerly full-time positions into multiple part-time positions. (JVP)

  2. Radiographic features of tuberculous osteitis in greater trochanter and ischium

    International Nuclear Information System (INIS)

    Hahm, So Hee; Lee, Ye Ri; Kim, Dong Jin; Sung, Ki Jun; Lim, Jong Nam

    1996-01-01

    To evaluate, if possible, the radiographic features of tuberculous osteitis in the greater trochanter and ischium, and to determine the cause of the lesions. We retrospectively reviewed the plain radiographic findings of 14 patients with histologically proven tuberculous osteitis involving the greater trochanter and ischium. In each case, the following were analyzed: morphology of bone destruction, including cortical erosion; periosteal reaction; presence or absence of calcific shadows in adjacent soft tissue. On the basis of an analysis of radiographic features and correlation of the anatomy with adjacent structures, we attempted to determine causes. Of the 14 cases evaluated, 12 showed various degrees of extrinsic erosion of the outer cortical bone of the greater trochanter and ischium; in two cases, bone destruction was so severe that the radiographic features of advanced perforated osteomyelitis were simulated. In addition to the findings of bone destruction, in these twelve cases sequestra or calcific shadows were seen in adjacent soft tissue. Tuberculous osteitis in the greater trochanter and ischium showed the characteristic findings of chronic extrinsic erosion. On the basis of these findings we suggest that these lesions result from an extrinsic pathophysiologic cause such as adjacent bursitis

  3. Radiographic features of tuberculous osteitis in greater trochanter and ischium

    Energy Technology Data Exchange (ETDEWEB)

    Hahm, So Hee; Lee, Ye Ri [Hanil Hospital Affiliated to KEPCO, Seoul (Korea, Republic of); Kim, Dong Jin; Sung, Ki Jun [Yonsei Univ. Wonju College of Medicine, Wonju (Korea, Republic of); Lim, Jong Nam [Konkuk Univ. College of Medicine, Seoul (Korea, Republic of)

    1996-11-01

    To evaluate, if possible, the radiographic features of tuberculous osteitis in the greater trochanter and ischium, and to determine the cause of the lesions. We retrospectively reviewed the plain radiographic findings of 14 patients with histologically proven tuberculous osteitis involving the greater trochanter and ischium. In each case, the following were analyzed: morphology of bone destruction, including cortical erosion; periosteal reaction; presence or absence of calcific shadows in adjacent soft tissue. On the basis of an analysis of radiographic features and correlation of the anatomy with adjacent structures, we attempted to determine causes. Of the 14 cases evaluated, 12 showed various degrees of extrinsic erosion of the outer cortical bone of the greater trochanter and ischium; in two cases, bone destruction was so severe that the radiographic features of advanced perforated osteomyelitis were simulated. In addition to the findings of bone destruction, in these twelve cases sequestra or calcific shadows were seen in adjacent soft tissue. Tuberculous osteitis in the greater trochanter and ischium showed the characteristic findings of chronic extrinsic erosion. On the basis of these findings we suggest that these lesions result from an extrinsic pathophysiologic cause such as adjacent bursitis.

  4. Greater Confinement Disposal trench and borehole operations status

    International Nuclear Information System (INIS)

    Harley, J.P. Jr.; Wilhite, E.L.; Jaegge, W.J.

    1987-01-01

    Greater Confinement Disposal (GCD) facilities have been constructed within the operating burial ground at the Savannah River Plant (SRP) to dispose of the higher activity fraction of SRP low-level waste. GCD practices of waste segregation, packaging, emplacement below the root zone, and waste stabilization are being used in the demonstration. 2 refs., 2 figs., 2 tabs

  5. The Mesozoic-Cenozoic tectonic evolution of the Greater Caucasus

    NARCIS (Netherlands)

    Saintot, A.N.; Brunet, M.F.; Yakovlev, F.; Sébrier, M.; Stephenson, R.A.; Ershov, A.V.; Chalot-Prat, F.; McCann, T.

    2006-01-01

    The Greater Caucasus (GC) fold-and-thrust belt lies on the southern deformed edge of the Scythian Platform (SP) and results from the Cenozoic structural inversion of a deep marine Mesozoic basin in response to the northward displacement of the Transcaucasus (lying south of the GC subsequent to the

  6. Introduction. China and the Challenges in Greater Middle East

    DEFF Research Database (Denmark)

    Sørensen, Camilla T. N.; Andersen, Lars Erslev; Jiang, Yang

    2016-01-01

    This collection of short papers is an outcome of an international conference entitled China and the Challenges in Greater Middle East, organized by the Danish Institute for International Studies and Copenhagen University on 10 November 2015. The conference sought answers to the following questions...

  7. On the Occurrence of Standardized Regression Coefficients Greater than One.

    Science.gov (United States)

    Deegan, John, Jr.

    1978-01-01

    It is demonstrated here that standardized regression coefficients greater than one can legitimately occur. Furthermore, the relationship between the occurrence of such coefficients and the extent of multicollinearity present among the set of predictor variables in an equation is examined. Comments on the interpretation of these coefficients are…

  8. The Educational Afterlife of Greater Britain, 1903-1914

    Science.gov (United States)

    Gardner, Philip

    2012-01-01

    Following its late nineteenth-century emergence as an important element within federalist thinking across the British Empire, the idea of Greater Britain lost much of its political force in the years following the Boer War. The concept however continued to retain considerable residual currency in other fields of Imperial debate, including those…

  9. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling heterogeneity in the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are reserved.

  10. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization

  11. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  12. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  13. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
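
    The general single-interferogram idea (extracting phase from one fringe pattern via the analytic signal) can be sketched with SciPy's Hilbert transform. This is a generic 1-D illustration on a synthetic fringe profile, not the authors' full Bessel-fringe procedure:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic 1-D fringe profile: carrier fringes with a slowly varying phase term
x = np.linspace(0.0, 1.0, 2048)
fringe = np.cos(40 * np.pi * x + 2.0 * np.sin(2 * np.pi * x))

analytic = hilbert(fringe - fringe.mean())  # analytic signal via the Hilbert transform
phase = np.unwrap(np.angle(analytic))       # continuous phase across the pattern
print(phase[:5])
```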

  14. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. As the energy of the final hadron state increases, the leading contribution to the average multiplicity comes from a parton subprocess, due to the production of massive quark and gluon jets and their further fragmentation, while the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e- annihilation at high energies tends to unity

  15. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). We apply our new method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
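
    WLS-ICE itself is the authors' method; as a generic illustration of least-squares fitting with correlated errors, a generalized-least-squares fit can be sketched as follows (the data, covariance matrix and names are hypothetical):

```python
import numpy as np

def gls_fit(X, y, C):
    """Generalized least squares: minimize (y - Xb)^T C^{-1} (y - Xb),
    so that correlations between errors (off-diagonal C) are accounted for."""
    Ci = np.linalg.inv(C)
    return np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)

# Hypothetical: fit msd(t) = 2*D*t to correlated time-averaged MSD values
t = np.array([1.0, 2.0, 3.0, 4.0])
msd = np.array([2.1, 3.9, 6.2, 7.8])
C = 0.1 * (0.5 ** np.abs(np.subtract.outer(t, t)))  # assumed error covariance
D = gls_fit(t[:, None], msd, C)[0] / 2.0
print(f"estimated diffusion constant: {D:.2f}")
```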

  16. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated from the averaged statistics

  17. An Experimental Study Related to Planning Abilities of Gifted and Average Students

    Directory of Open Access Journals (Sweden)

    Marilena Z. Leana-Taşcılar

    2016-02-01

    Full Text Available Gifted students differ from their average peers in psychological, social, emotional, and cognitive development. One of these differences in the cognitive domain is related to executive functions. One of the most important executive functions is the ability to plan and organize. The aim of this study was to compare the planning abilities of gifted students with those of their average peers, and to test the effectiveness of a training program on the planning abilities of gifted and average students. First, students’ intelligence and planning abilities were measured, and the students were then assigned to either an experimental or a control group. The groups were matched by intelligence and planning ability (experimental: 13 gifted and 8 average; control: 14 gifted and 8 average). In total, 182 students (79 gifted and 103 average) participated in the study. Then, a training program was implemented in the experimental group to find out whether it improved students’ planning ability. Results showed that boys had better planning abilities than girls did, and gifted students had better planning abilities than their average peers did. Significant results were obtained in favor of the experimental group in the posttest scores

  18. Human-experienced temperature changes exceed global average climate changes for all income groups

    Science.gov (United States)

    Hsiang, S. M.; Parshall, L.

    2009-12-01

    Global climate change alters local climates everywhere. Many climate change impacts, such as those affecting health, agriculture and labor productivity, depend on these local climatic changes, not global mean change. Traditional, spatially averaged climate change estimates are strongly influenced by the response of icecaps and oceans, providing limited information on human-experienced climatic changes. If used improperly by decision-makers, these estimates distort estimated costs of climate change. We overlay the IPCC’s 20 GCM simulations on the global population distribution to estimate local climatic changes experienced by the world population in the 21st century. The A1B scenario leads to a well-known rise in global average surface temperature of +2.0°C between the periods 2011-2030 and 2080-2099. Projected on the global population distribution in 2000, the median human will experience an annual average rise of +2.3°C (4.1°F) and the average human will experience a rise of +2.4°C (4.3°F). Less than 1% of the population will experience changes smaller than +1.0°C (1.8°F), while 25% and 10% of the population will experience changes greater than +2.9°C (5.2°F) and +3.5°C (6.2°F) respectively. 67% of the world population experiences temperature changes greater than the area-weighted average change of +2.0°C (3.6°F). Using two approaches to characterize the spatial distribution of income, we show that the wealthiest, middle and poorest thirds of the global population experience similar changes, with no group dominating the global average. Calculations for precipitation indicate that there is little change in average precipitation, but redistributions of precipitation occur in all income groups. These results suggest that economists and policy-makers using spatially averaged estimates of climate change to approximate local changes will systematically and significantly underestimate the impacts of climate change on the 21st century population.

  19. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.

  20. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  1. High Average Power Fiber Laser for Satellite Communications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  2. A time averaged background compensator for Geiger-Mueller counters

    International Nuclear Information System (INIS)

    Bhattacharya, R.C.; Ghosh, P.K.

    1983-01-01

    The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel providing time averaged compensation. The method suits portable instruments. (orig.)

  3. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
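
    The central observable here, the time-averaged MSD of a single trajectory, is straightforward to compute. A minimal sketch on a synthetic log-price series (geometric Brownian motion has a Brownian log-price), with all parameters hypothetical:

```python
import numpy as np

def time_averaged_msd(x, lag):
    """Time-averaged mean squared displacement of one trajectory at a given lag."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

rng = np.random.default_rng(1)
log_price = np.cumsum(0.01 * rng.standard_normal(10_000))  # synthetic GBM log-price

for lag in (1, 10, 100, 1000):
    print(lag, time_averaged_msd(log_price, lag))   # grows roughly linearly in lag
```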

  4. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  5. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  6. The average-shadowing property and topological ergodicity for flows

    International Nuclear Information System (INIS)

    Gu Rongbao; Guo Wenjing

    2005-01-01

    In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic

  7. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out

  8. Annual average equivalent dose of workers from the health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    The personnel monitoring data from 1985 to 1991 for personnel working in the health area were studied, providing a general overview of changes in the annual average equivalent dose. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses across the same sectors in different hospitals. (C.G.C.)

  9. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying the dependence between random variables are used, with dependence measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
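
    The ARL of an EWMA chart is typically estimated exactly as described: simulate many run lengths and average them. A minimal sketch for independent standard-normal observations (the paper uses exponential data and copula-induced dependence, which this sketch does not reproduce); the smoothing constant and limit width are assumed values:

```python
import numpy as np

def ewma_run_length(lam=0.1, width=2.7, n_max=100_000, rng=None):
    """Simulate one in-control run length of a two-sided EWMA chart for
    i.i.d. standard-normal observations (assumed smoothing lam, limit width)."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = np.sqrt(lam / (2.0 - lam))   # asymptotic std. dev. of the EWMA statistic
    z = 0.0
    for t in range(1, n_max + 1):
        z = lam * rng.standard_normal() + (1.0 - lam) * z
        if abs(z) > width * sigma:       # signal: statistic outside control limits
            return t
    return n_max

runs = [ewma_run_length(rng=np.random.default_rng(seed)) for seed in range(1000)]
print(f"estimated in-control ARL: {np.mean(runs):.0f}")
```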

  10. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    Wetterich, C.

    1991-08-01

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  11. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  12. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is concerned in the research of complex networks and is related to Wiener sum which is a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain their asymptotic formula for average distances. To provide the formula, we develop some technique named finite patterns of integral of geodesic distance on self-similar measure for the Sierpinski tetrahedron.
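
    Average geodesic distance of a finite graph, the quantity studied here, can be computed by breadth-first search from every node. A self-contained sketch, verified on the complete graph K4 (the level-0 tetrahedron skeleton, where the average distance is exactly 1):

```python
from collections import deque

def average_distance(adj):
    """Average shortest-path distance over ordered node pairs of an
    unweighted connected graph given as an adjacency dict."""
    nodes = list(adj)
    total = 0
    for s in nodes:
        dist = {s: 0}
        queue = deque([s])
        while queue:                       # breadth-first search from s
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())        # sum of distances from s to all nodes
    n = len(nodes)
    return total / (n * (n - 1))           # n*(n-1) ordered pairs

# Complete graph K4 = level-0 tetrahedron skeleton: average distance is 1
K4 = {i: [j for j in range(4) if j != i] for i in range(4)}
print(average_distance(K4))   # 1.0
```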

  13. Bayesian model averaging using particle filtering and Gaussian mixture modeling : Theory, concepts, and simulation experiments

    NARCIS (Netherlands)

    Rings, J.; Vrugt, J.A.; Schoups, G.; Huisman, J.A.; Vereecken, H.

    2012-01-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive
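
    The BMA predictive distribution is a weighted mixture of the per-model predictive distributions. A minimal sketch with two hypothetical Gaussian component forecasts:

```python
import numpy as np

def bma_pdf(y, means, sds, weights):
    """BMA predictive density: a mixture of per-model Gaussian densities
    weighted by (hypothetical) posterior model probabilities."""
    dens = [w * np.exp(-0.5 * ((y - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
            for w, m, s in zip(weights, means, sds)]
    return sum(dens)

# Two hypothetical model forecasts combined with weights 0.6 and 0.4
print(bma_pdf(0.5, means=[0.0, 1.0], sds=[1.0, 0.5], weights=[0.6, 0.4]))
```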

  14. Assembly Test of Elastic Averaging Technique to Improve Mechanical Alignment for Accelerating Structure Assemblies in CLIC

    CERN Document Server

    Huopana, J

    2010-01-01

    The CLIC (Compact LInear Collider) is being studied at CERN as a potential multi-TeV e+e- collider [1]. The manufacturing and assembly tolerances of the required RF components are important for the final efficiency and for the operation of CLIC. The proper function of an accelerating structure is very sensitive to errors in the shape and location of the accelerating cavity. This causes considerable issues in the field of mechanical design and manufacturing. Currently the accelerating structures follow a disk design. Alternatively, it is possible to create the accelerating assembly from quadrants, which favours mass manufacturing. The functional shape inside the accelerating structure remains the same, and a single assembly uses fewer parts. The alignment of these quadrants was previously made kinematic by using steel pins or spheres to align the pieces together. This proved to be a tedious and time-consuming method of assembly. To limit the number of different error sources, a meth...

  15. Labour intensity of guidelines may have a greater effect on adherence than GPs' workload

    Directory of Open Access Journals (Sweden)

    Westert Gert P

    2009-11-01

    Full Text Available Abstract Background Physicians' heavy workload is often thought to jeopardise the quality of care and to be a barrier to improving quality. The relationship between these has, however, rarely been investigated. In this study quality of care is defined as care 'in accordance with professional guidelines'. We investigated whether GPs with a higher workload adhere less to guidelines than those with a lower workload, and whether guideline recommendations that require a greater time investment are adhered to less than those that can save time. Methods Data were used from the Second Dutch National Survey of General Practice (DNSGP-2). This nationwide study was carried out between April 2000 and January 2002. A multilevel logistic-regression analysis was conducted of 170,677 decisions made by GPs, referring to 41 Guideline Adherence Indicators (GAIs), which were derived from 32 different guidelines. Data were used from 130 GPs, working in 83 practices with 98,577 patients. GP characteristics as well as guideline characteristics were used as independent variables. Measures include workload (number of contacts), hours spent on continuing medical education, satisfaction with available time, practice characteristics and patient characteristics. The outcome measure is an indicator score, which is 1 when a decision is in accordance with professional guidelines and 0 when it deviates from them. Results On average, 66% of the decisions GPs made were in accordance with guidelines. No relationship was found between the objective workload of GPs and their adherence to guidelines. Subjective workload (measured on a five-point scale) was negatively related to guideline adherence (OR = 0.95). After controlling for all other variables, the variation between GPs in adherence to guideline recommendations showed a range of less than 10%. 84% of the variation in guideline adherence was located at the GAI level, which means that the differences in

  16. Higher motivation - greater control? The effect of arousal on judgement.

    Science.gov (United States)

    Riemer, Hila; Viswanathan, Madhu

    2013-01-01

    This research examines control over the effect of arousal, a dimension of affect, on judgement. Past research shows that high processing motivation enhances control over the effects of affect on judgement. Isolating and studying arousal as opposed to valence, the other dimension of affect, and its effect on judgement, we identify boundary conditions for past findings. Drawing from the literature on processes by which arousal influences judgement, we demonstrate that the role of motivation is contingent upon the type of judgement task (i.e., memory- versus stimulus-based judgement). In stimulus-based judgement, individuals exert greater control over the effect of arousal on judgement under low compared to high motivation. In contrast, in memory-based judgement individuals exert greater control over the effect of arousal under high compared to low motivation. Theoretical implications and avenues for future research are discussed.

  17. Patient expectations predict greater pain relief with joint arthroplasty.

    Science.gov (United States)

    Gandhi, Rajiv; Davey, John Roderick; Mahomed, Nizar

    2009-08-01

    We examined the relationship between patient expectations of total joint arthroplasty and functional outcomes. We surveyed 1799 patients undergoing primary hip or knee arthroplasty for demographic data and Western Ontario McMaster University Osteoarthritis Index scores at baseline, 3 months, and 1 year of follow-up. Patient expectations were determined with 3 survey questions. The patients with the greatest expectations of surgery were younger, male, and had a lower body mass index. Linear regression modeling showed that a greater expectation of pain relief with surgery independently predicted greater reported pain relief at 1 year of follow-up, adjusted for all relevant covariates (P < 0.05). Patient expectation of pain relief after joint arthroplasty is an important predictor of outcomes at 1 year.

  18. Theory and analysis of accuracy for the method of characteristics direction probabilities with boundary averaging

    International Nuclear Information System (INIS)

    Liu, Zhouyu; Collins, Benjamin; Kochunas, Brendan; Downar, Thomas; Xu, Yunlin; Wu, Hongchun

    2015-01-01

    Highlights: • The CDP combines the benefits of the CPM’s efficiency and the MOC’s flexibility. • Boundary averaging reduces the computational effort while losing only minor accuracy. • An analysis model is used to justify the choice of the optimal averaging strategy. • Numerical results show the performance and accuracy. - Abstract: The method of characteristic direction probabilities (CDP) combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC) for the solution of the integral form of the Boltzmann Transport Equation. By coupling only the fine regions traversed by the characteristic rays in a particular direction, the computational effort required to calculate the probability matrices and to solve the matrix system is considerably reduced compared to the CPM. Furthermore, boundary averaging is performed to reduce the storage and computation, while the capability of dealing with complicated geometries is preserved since the same ray-tracing information is used as in MOC. An analysis model for the outgoing angular flux is used to analyze a variety of outgoing angular flux averaging methods for the boundary and to justify the choice of the optimal averaging strategy. The boundary-averaged CDP method was then implemented in the Michigan PArallel Characteristic based Transport (MPACT) code to perform 2-D and 3-D transport calculations. Numerical results are given for different cases to show the effect of averaging on the outgoing angular flux, region scalar flux and the eigenvalue. Comparison of the results with the case with no averaging demonstrates that an angular dependent averaging strategy is possible for the CDP to improve its computational performance without compromising the achievable accuracy

  19. Torsion of the greater omentum: A rare preoperative diagnosis

    International Nuclear Information System (INIS)

    Tandon, Ankit Anil; Lim, Kian Soon

    2010-01-01

    Torsion of the greater omentum is a rare acute abdominal condition that is seldom diagnosed preoperatively. We report the characteristic computed tomography (CT) scan findings and the clinical implications of this unusual diagnosis in a 41-year-old man, who also had longstanding right inguinal hernia. Awareness of omental torsion as a differential diagnosis in the acute abdomen setting is necessary for correct patient management

  20. Ecological specialization and morphological diversification in Greater Antillean boas.

    Science.gov (United States)

    Reynolds, R Graham; Collar, David C; Pasachnik, Stesha A; Niemiller, Matthew L; Puente-Rolón, Alberto R; Revell, Liam J

    2016-08-01

    Colonization of islands can dramatically influence the evolutionary trajectories of organisms, with both deterministic and stochastic processes driving adaptation and diversification. Some island colonists evolve extremely large or small body sizes, presumably in response to unique ecological circumstances present on islands. One example of this phenomenon, the Greater Antillean boas, includes both small (<90 cm) and large (4 m) species occurring on the Greater Antilles and Bahamas, with some islands supporting pairs or trios of body-size divergent species. These boas have been shown to comprise a monophyletic radiation arising from a Miocene dispersal event to the Greater Antilles, though it is not known whether co-occurrence of small and large species is a result of dispersal or in situ evolution. Here, we provide the first comprehensive species phylogeny for this clade combined with morphometric and ecological data to show that small body size evolved repeatedly on separate islands in association with specialization in substrate use. Our results further suggest that microhabitat specialization is linked to increased rates of head shape diversification among specialists. Our findings show that ecological specialization following island colonization promotes morphological diversity through deterministic body size evolution and cranial morphological diversification that is contingent on island- and species-specific factors. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  1. Moderate Baseline Vagal Tone Predicts Greater Prosociality in Children

    Science.gov (United States)

    Miller, Jonas G.; Kahle, Sarah; Hastings, Paul D.

    2016-01-01

    Vagal tone is widely believed to be an important physiological aspect of emotion regulation and associated positive behaviors. However, there is inconsistent evidence for relations between children’s baseline vagal tone and their helpful or prosocial responses to others (Hastings & Miller, 2014). Recent work in adults suggests a quadratic association (inverted U-shape curve) between baseline vagal tone and prosociality (Kogan et al., 2014). The present research examined whether this nonlinear association was evident in children. We found consistent evidence for a quadratic relation between vagal tone and prosociality across 3 samples of children using 6 different measures. Compared to low and high vagal tone, moderate vagal tone in early childhood concurrently predicted greater self-reported prosociality (Study 1), observed empathic concern in response to the distress of others and greater generosity toward less fortunate peers (Study 2), and longitudinally predicted greater self-, mother-, and teacher-reported prosociality 5.5 years later in middle childhood (Study 3). Taken together, our findings suggest that moderate vagal tone at rest represents a physiological preparedness or tendency to engage in different forms of prosociality across different contexts. Early moderate vagal tone may reflect an optimal balance of regulation and arousal that helps prepare children to sympathize, comfort, and share with others. PMID:27819463

  2. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
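
    The pixel-wise Beer-Lambert inversion described above can be sketched as follows; the attenuation coefficient is an assumed placeholder value, and the beam-hardening and geometric corrections that the study applied are omitted:

```python
import numpy as np

def water_thickness(I_wet, I_dry, mu_w=3.5):
    """Invert Beer-Lambert's law pixel by pixel: I_wet = I_dry * exp(-mu_w * t_w),
    where t_w is the water thickness along the beam (cm) and mu_w (1/cm) is an
    assumed effective attenuation coefficient (corrections neglected)."""
    return -np.log(I_wet / I_dry) / mu_w

# Hypothetical 4x4-pixel transmission images of a dry and a partially wet column
rng = np.random.default_rng(0)
I_dry = np.full((4, 4), 1000.0)
I_wet = I_dry * np.exp(-3.5 * rng.uniform(0.0, 0.3, (4, 4)))
print(water_thickness(I_wet, I_dry))   # recovers the water-thickness map (cm)
```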

  3. Data requirements of GREAT-ER: Modelling and validation using LAS in four UK catchments

    International Nuclear Information System (INIS)

    Price, Oliver R.; Munday, Dawn K.; Whelan, Mick J.; Holt, Martin S.; Fox, Katharine K.; Morris, Gerard; Young, Andrew R.

    2009-01-01

    Higher-tier environmental risk assessments on 'down-the-drain' chemicals in river networks can be conducted using models such as GREAT-ER (Geography-referenced Regional Exposure Assessment Tool for European Rivers). It is important these models are evaluated and their sensitivities to input variables understood. This study had two primary objectives: evaluate GREAT-ER model performance, comparing simulated modelled predictions for LAS (linear alkylbenzene sulphonate) with measured concentrations, for four rivers in the UK, and investigate model sensitivity to input variables. We demonstrate that the GREAT-ER model is very sensitive to variability in river discharges. However it is insensitive to the form of distributions used to describe chemical usage and removal rate in sewage treatment plants (STPs). It is concluded that more effort should be directed towards improving empirical estimates of effluent load and reducing uncertainty associated with usage and removal rates in STPs. Simulations could be improved by incorporating the effect of river depth on dissipation rates. - Validation of GREAT-ER.

  4. Data requirements of GREAT-ER: Modelling and validation using LAS in four UK catchments

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Safety and Environmental Assurance Centre, Unilever, Colworth Science Park, Sharnbrook, Bedfordshire MK44 1LQ (United Kingdom); Munday, Dawn K. [Safety and Environmental Assurance Centre, Unilever, Colworth Science Park, Sharnbrook, Bedfordshire MK44 1LQ (United Kingdom); Whelan, Mick J. [Department of Natural Resources, School of Applied Sciences, Cranfield University, College Road, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Holt, Martin S. [ECETOC, Ave van Nieuwenhuyse 4, Box 6, B-1160 Brussels (Belgium); Fox, Katharine K. [85 Park Road West, Birkenhead, Merseyside CH43 8SQ (United Kingdom); Morris, Gerard [Environment Agency, Phoenix House, Global Avenue, Leeds LS11 8PG (United Kingdom); Young, Andrew R. [Wallingford HydroSolutions Ltd, Maclean building, Crowmarsh Gifford, Wallingford, Oxon OX10 8BB (United Kingdom)

    2009-10-15

    Higher-tier environmental risk assessments on 'down-the-drain' chemicals in river networks can be conducted using models such as GREAT-ER (Geography-referenced Regional Exposure Assessment Tool for European Rivers). It is important these models are evaluated and their sensitivities to input variables understood. This study had two primary objectives: evaluate GREAT-ER model performance, comparing simulated modelled predictions for LAS (linear alkylbenzene sulphonate) with measured concentrations, for four rivers in the UK, and investigate model sensitivity to input variables. We demonstrate that the GREAT-ER model is very sensitive to variability in river discharges. However it is insensitive to the form of distributions used to describe chemical usage and removal rate in sewage treatment plants (STPs). It is concluded that more effort should be directed towards improving empirical estimates of effluent load and reducing uncertainty associated with usage and removal rates in STPs. Simulations could be improved by incorporating the effect of river depth on dissipation rates. - Validation of GREAT-ER.

  5. Abstinence movement in Greater Poland in 1840–1902

    Directory of Open Access Journals (Sweden)

    Izabela Krasińska

    2013-12-01

    Full Text Available The article presents the origins and development of the idea of abstinence in Greater Poland in the 19th century. The start date for the research is 1840, which is considered a breakthrough year in the history of the organized abstinence (temperance) movement in Greater Poland, owing to the founding in Kórnik of the Association for the Suppression of the Use of Vodka (Towarzystwo ku Przytłumieniu Używania Wódki) in the Grand Duchy of Posen. It was a secular organization that came into being on the initiative of Dr. De La Roch, a German surgeon of French origin. As early as 1844, however, the idea of abstinence attracted the interest of Catholic clergymen of Greater Poland, including such high-ranking clergy as Rev. Leon Michał Przyłuski, Archbishop of Gniezno, and Rev. Jan Kanty Dąbrowski, Archbishop of Posen, and later on Archbishops Rev. Mieczysław Halka Ledóchowski and Rev. Florian Oksza Stablewski. They were fascinated by the activities of Rev. Jan Nepomucen Fick, Parish Priest of Piekary Śląskie, and several other priests on whose initiative many church brotherhoods of so-called holy continence were set up in Upper Silesia as early as the first half of 1844. It was due to Bishop Dąbrowski that 100,000 people took vows of abstinence in 1844–1845, becoming members of brotherhoods of abstinence. In turn, on the initiative of Archbishop Przyłuski, Jesuit missionaries (Rev. Karol Bołoz Antoniewicz, Rev. Teofil Baczyński and Rev. Kamil Praszałowicz) arrived in Greater Poland from Galicia in 1852 to promote the idea of abstinence. Starting from 1848, they were helping Silesian clergymen to spread abstinence. Clergymen of Greater Poland were also active in secular abstinence associations. They became involved in the workings of the Association for the Promotion of Abstinence that was set up by Zygmunt Celichowski in Kórnik in 1887, and especially in the Jutrzenka Abstinence Association

  6. Call to action: Better care, better health, and greater value in college health.

    Science.gov (United States)

    Ciotoli, Carlo; Smith, Allison J; Keeling, Richard P

    2018-03-05

    It is time for action by leaders across higher education to strengthen quality improvement (QI) in college health, in pursuit of better care, better health, and increased value - goals closely linked to students' learning and success. The size and importance of the college student population; the connections between wellbeing, and therefore QI, and student success; the need for improved standards and greater accountability; and the positive contributions of QI to employee satisfaction and professionalism all warrant a widespread commitment to building greater capacity and capability for QI in college health. This report aims to inspire, motivate, and challenge college health professionals and their colleagues, campus leaders, and national entities to take both immediate and sustainable steps to bring QI to the forefront of college health practice - and, by doing so, to elevate care, health, and value of college health as a key pathway to advancing student success.

  7. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
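
    The dose chain described above is: pixel values -> per-pixel glandular rate -> mean glandular rate -> average glandular dose. A minimal sketch, assuming a hypothetical calibration callable pixel_to_glandular_rate (standing in for the paper's neural-network conversion curve) and a hypothetical dose-coefficient callable dgn (normalized glandular dose per unit incident air kerma as a function of glandular rate):

    import numpy as np

    def average_glandular_dose(image, air_kerma_mgy, pixel_to_glandular_rate, dgn):
        """Estimate the individual average glandular dose from one mammogram."""
        rates = pixel_to_glandular_rate(image)   # per-pixel glandular rate (%)
        mean_rate = float(rates.mean())          # individual glandular rate
        return air_kerma_mgy * dgn(mean_rate), mean_rate

    # toy stand-ins for the two calibration callables:
    # agd, rate = average_glandular_dose(img, 8.0,
    #                                    lambda px: 100.0 * px / px.max(),
    #                                    lambda r: 0.20 - 0.001 * r)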

  8. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and 0.092 standard error of estimate. The data were also analyzed for a seasonal dependence and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed which has 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs
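
    Models of this family are simple regressions of daily diffuse radiation on daily global radiation, reported with a coefficient of determination and a standard error of estimate. A minimal sketch with simulated stand-in data (the paper's measured values are not reproduced here):

    import numpy as np

    rng = np.random.default_rng(1)
    H_global = rng.uniform(5.0, 30.0, 730)                    # two years, MJ/m2/day
    H_diffuse = 0.35 * H_global + rng.normal(0.0, 1.2, 730)   # toy linear relation

    slope, intercept = np.polyfit(H_global, H_diffuse, 1)
    resid = H_diffuse - (slope * H_global + intercept)
    r2 = 1.0 - resid.var() / H_diffuse.var()
    see = np.sqrt((resid ** 2).sum() / (resid.size - 2))  # standard error of estimate
    print(f"R^2 = {r2:.2f}, SEE = {see:.3f}")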

  9. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    Science.gov (United States)

    Godfrey, Devon J.; Page McAdams, H.; Dobbins, James T.

    2013-01-01

    averaged to sufficiently remove partial-pixel artifacts. MITSa7 does appear to subtly reduce the contrast of high-frequency “edge” information, but the removal of partial-pixel artifacts makes the appearance of low-contrast, fine-detail anatomy even more conspicuous in MITSa7 slices. MITSa7 also appears to render simulated subtle 5 mm pulmonary nodules with greater visibility than MITS alone, in both the open lung and regions overlying the mediastinum. Finally, the MITSa7 technique reduces stochastic image variance, though the in-plane stochastic SNR (for very thin objects which do not span multiple MITS planes) is only improved at spatial frequencies between 0.05 and 0.20 cycles/mm. Conclusions: The MITSa7 method is an improvement over traditional single-plane MITS for thoracic imaging and the pulmonary nodule detection task, and thus the authors plan to use the MITSa7 approach for all future MITS research at the authors’ institution. PMID:23387755
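
    Mechanically, MITSa7 is a 7-plane moving average along the slice axis of the MITS reconstruction. A minimal sketch, assuming a hypothetical 3-D array mits of reconstructed planes indexed (plane, row, column); edge planes are replicated so every output slice averages exactly n inputs:

    import numpy as np

    def average_adjacent_planes(mits, n=7):
        """Moving average of n adjacent planes along axis 0 (MITSa-style)."""
        pad = n // 2
        padded = np.pad(mits, ((pad, pad), (0, 0), (0, 0)), mode="edge")
        kernel = np.ones(n) / n
        return np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="valid"), 0, padded)

    # mits_a7 = average_adjacent_planes(mits, n=7)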

  10. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Godfrey, Devon J. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Page McAdams, H. [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Dobbins, James T. III [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Department of Biomedical Engineering, Department of Physics, and Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27705 (United States)

    2013-02-15

    planes must be averaged to sufficiently remove partial-pixel artifacts. MITSa7 does appear to subtly reduce the contrast of high-frequency 'edge' information, but the removal of partial-pixel artifacts makes the appearance of low-contrast, fine-detail anatomy even more conspicuous in MITSa7 slices. MITSa7 also appears to render simulated subtle 5 mm pulmonary nodules with greater visibility than MITS alone, in both the open lung and regions overlying the mediastinum. Finally, the MITSa7 technique reduces stochastic image variance, though the in-plane stochastic SNR (for very thin objects which do not span multiple MITS planes) is only improved at spatial frequencies between 0.05 and 0.20 cycles/mm. Conclusions: The MITSa7 method is an improvement over traditional single-plane MITS for thoracic imaging and the pulmonary nodule detection task, and thus the authors plan to use the MITSa7 approach for all future MITS research at the authors' institution.

  11. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis.

    Science.gov (United States)

    Godfrey, Devon J; McAdams, H Page; Dobbins, James T

    2013-02-01

    remove partial-pixel artifacts. MITSa7 does appear to subtly reduce the contrast of high-frequency "edge" information, but the removal of partial-pixel artifacts makes the appearance of low-contrast, fine-detail anatomy even more conspicuous in MITSa7 slices. MITSa7 also appears to render simulated subtle 5 mm pulmonary nodules with greater visibility than MITS alone, in both the open lung and regions overlying the mediastinum. Finally, the MITSa7 technique reduces stochastic image variance, though the in-plane stochastic SNR (for very thin objects which do not span multiple MITS planes) is only improved at spatial frequencies between 0.05 and 0.20 cycles∕mm. The MITSa7 method is an improvement over traditional single-plane MITS for thoracic imaging and the pulmonary nodule detection task, and thus the authors plan to use the MITSa7 approach for all future MITS research at the authors' institution.

  12. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    International Nuclear Information System (INIS)

    Godfrey, Devon J.; Page McAdams, H.; Dobbins, James T. III

    2013-01-01

    averaged to sufficiently remove partial-pixel artifacts. MITSa7 does appear to subtly reduce the contrast of high-frequency “edge” information, but the removal of partial-pixel artifacts makes the appearance of low-contrast, fine-detail anatomy even more conspicuous in MITSa7 slices. MITSa7 also appears to render simulated subtle 5 mm pulmonary nodules with greater visibility than MITS alone, in both the open lung and regions overlying the mediastinum. Finally, the MITSa7 technique reduces stochastic image variance, though the in-plane stochastic SNR (for very thin objects which do not span multiple MITS planes) is only improved at spatial frequencies between 0.05 and 0.20 cycles/mm. Conclusions: The MITSa7 method is an improvement over traditional single-plane MITS for thoracic imaging and the pulmonary nodule detection task, and thus the authors plan to use the MITSa7 approach for all future MITS research at the authors’ institution.

  13. Some implications of batch average burnup calculations on predicted spent fuel compositions

    International Nuclear Information System (INIS)

    Alexander, C.W.; Croff, A.G.

    1984-01-01

    The accuracy of using batch-averaged burnups to determine spent fuel characteristics (such as isotopic composition, activity, etc.) was examined for a typical pressurized-water reactor (PWR) fuel discharge batch by comparing characteristics computed by (a) performing a single depletion calculation using the average burnup of the spent fuel and (b) performing separate depletion calculations based on the relative amounts of spent fuel in each of twelve burnup ranges and summing the results. The computations were done using ORIGEN 2. Procedure (b) showed a significant shift toward a greater quantity of the heavier transuranics, which derive from multiple neutron captures, and a corresponding decrease in the amounts of lower transuranics. Those characteristics which derive primarily from fission products, such as total radioactivity and total thermal power, are essentially identical for the two procedures. Those characteristics that derive primarily from the heavier transuranics, such as spontaneous fission neutrons, are underestimated by procedure (a)
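
    The direction of the bias is a convexity effect: inventories built up by multiple neutron captures grow faster than linearly with burnup, so evaluating them at the batch-average burnup (procedure (a)) underestimates the burnup-distribution-weighted sum (procedure (b)), by Jensen's inequality. A toy illustration with a hypothetical nuclide whose inventory scales as the 4th power of burnup (a stand-in for a multi-capture product, not an ORIGEN2 calculation):

    import numpy as np

    burnups = np.linspace(20.0, 50.0, 12)    # twelve burnup ranges (toy values)
    fractions = np.full(12, 1.0 / 12.0)      # fuel fraction in each range

    def inventory(bu):
        """Convex stand-in for a nuclide formed by multiple captures."""
        return (bu / 30.0) ** 4

    batch_avg = inventory(np.sum(fractions * burnups))   # procedure (a)
    binwise = np.sum(fractions * inventory(burnups))     # procedure (b)
    print(batch_avg, binwise)   # binwise > batch_avg for any convex inventory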

  14. Fiscal consequences of greater openness: from tax avoidance and tax arbitrage to revenue growth

    OpenAIRE

    Jouko Ylä-Liedenpohja

    2008-01-01

    Revenue from corporation tax and taxes on capital income, net of revenue loss from deductibility of interest, as a percentage of the GDP has tripled in Finland over the past two decades. This is argued to result from greater openness of the economy as well as from simultaneous tax reforms towards neutrality of capital income taxation by combining tax-base broadening with tax-rate reductions. They implied improved efficiency of real investments, elimination of tax avoidance in entrepreneurial ...

  15. The post-orgasmic prolactin increase following intercourse is greater than following masturbation and suggests greater satiety.

    Science.gov (United States)

    Brody, Stuart; Krüger, Tillmann H C

    2006-03-01

    Research indicates that prolactin increases following orgasm are involved in a feedback loop that serves to decrease arousal through inhibitory central dopaminergic and probably peripheral processes. The magnitude of post-orgasmic prolactin increase is thus a neurohormonal index of sexual satiety. Using data from three studies of men and women engaging in masturbation or penile-vaginal intercourse to orgasm in the laboratory, we report that for both sexes (adjusted for prolactin changes in a non-sexual control condition), the magnitude of prolactin increase following intercourse is 400% greater than that following masturbation. The results are interpreted as an indication of intercourse being more physiologically satisfying than masturbation, and discussed in light of prior research reporting greater physiological and psychological benefits associated with coitus than with any other sexual activities.

  16. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

    Full Text Available To maintain safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Unhealthy sensors can adversely influence the estimation result of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect any anomaly of sensor readings among the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA is used to weigh redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, so-called trend consistency (TC), to take into account the preservation of any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second approach proposes replacing the error-bound/accuracy-based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third approach proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify a faulty sensor/s due to a long and continuous missing data range, and (3) identify a healthy sensor. Keywords: Nuclear Reactors
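
    The weighting logic can be sketched compactly. A minimal example, assuming hypothetical simultaneous readings x with error bounds a, where the consistency factor C is simplified to a count of sensors lying inside each sensor's band and the Euclidean-distance weight Wd replaces the accuracy-based Wa, as in the second modified approach:

    import numpy as np

    def psa_average(x, a):
        """Parity-space-style weighted average of redundant sensor readings."""
        x = np.asarray(x, float)
        a = np.asarray(a, float)
        diff = np.abs(x[:, None] - x[None, :])            # pairwise distances
        # consistency factor C: how many sensors fall inside each sensor's band
        C = (diff <= a[:, None]).sum(axis=1).astype(float)
        # Euclidean-distance weight Wd: closer to the others => larger weight
        mean_dist = diff.sum(axis=1) / (x.size - 1)
        Wd = 1.0 / (mean_dist + 1e-12)
        w = C * Wd
        return float(np.sum(w * x) / np.sum(w))

    print(psa_average([10.0, 10.1, 10.05, 11.5], [0.2, 0.2, 0.2, 0.2]))
    # the drifted 11.5 reading receives a low consistency weight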

  17. Average cross sections for the 252Cf neutron spectrum

    International Nuclear Information System (INIS)

    Dezso, Z.; Csikai, J.

    1977-01-01

    A number of average cross sections have been measured for 252Cf neutrons in (n,γ), (n,p), (n,2n) and (n,α) reactions by the activation method, and for fission by fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb. The data as a function of target neutron number increase up to about N=60, with minima near closed shells. The values lie between 0.3 mb and 113 mb. These cross sections decrease significantly with increasing threshold energy. The values are below 20 mb. The data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Zsup(4/3)/A are shown. The results obtained are summarized in tables

  18. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)
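
    The synchronization criterion can be probed numerically. A toy sketch, assuming two hypothetical switching topologies visited with equal dwell times, where the time average of the smallest nonzero Laplacian eigenvalue (algebraic connectivity) is compared against a bound on the node dynamics; the threshold test here is illustrative, not the paper's exact condition:

    import numpy as np

    def laplacian(adj):
        adj = np.asarray(adj, float)
        return np.diag(adj.sum(axis=1)) - adj

    # two topologies on 4 nodes (a path and a ring), equal dwell times
    L_path = laplacian([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
    L_ring = laplacian([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])

    lam2_avg = np.mean([np.sort(np.linalg.eigvalsh(L))[1] for L in (L_path, L_ring)])

    coupling, dynamics_bound = 1.0, 0.5      # toy coupling strength and bound
    print("synchronizes (toy test):", coupling * lam2_avg > dynamics_bound)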

  19. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  20. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Science.gov (United States)

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  1. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
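
    Azimuthally averaging a reference projection collapses it to a 1-D radial profile, making the reference rotation-invariant by construction. A minimal sketch of the averaging step and of correlating a centered candidate patch against the profile; this is a simplified stand-in for the rotational correlation coefficient discussed above:

    import numpy as np

    def azimuthal_average(img, nbins=32):
        """Average a square image over angle, returning a radial profile."""
        n = img.shape[0]
        y, x = np.indices(img.shape) - (n - 1) / 2.0
        r = np.hypot(x, y)
        idx = np.digitize(r.ravel(), np.linspace(0, r.max(), nbins + 1)) - 1
        idx = np.clip(idx, 0, nbins - 1)
        sums = np.bincount(idx, weights=img.ravel(), minlength=nbins)
        counts = np.maximum(np.bincount(idx, minlength=nbins), 1)
        return sums / counts

    def radial_correlation(patch, ref_profile):
        """Correlation of a patch's radial profile with an averaged reference."""
        prof = azimuthal_average(patch, ref_profile.size)
        return np.corrcoef(prof, ref_profile)[0, 1]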

  2. Measurement of average radon gas concentration at workplaces

    International Nuclear Information System (INIS)

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

    In this paper, results of measurements of average radon gas concentration at workplaces (schools, kindergartens, and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailing pond). Consequently, at workplaces where considerable seasonal changes in radon concentration are expected, measurements should last 12 months. If that is not possible, the chosen six-month period should contain summer and winter months as well. The average radon concentration during working hours can differ considerably from the average over the whole time in cases of frequent opening of doors and windows or use of artificial ventilation. (authors)

  3. A Martian PFS average spectrum: Comparison with ISO SWS

    Science.gov (United States)

    Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.

    2005-08-01

    The evaluation of the planetary Fourier spectrometer (PFS) performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393]. First, the average conditions of the Martian atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO2 are compared from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm⁻¹ for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshoot on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm⁻¹ or better. A large number of narrow features, yet to be identified, are discovered.

  4. Size and emotion averaging: costs of dividing attention after all.

    Science.gov (United States)

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  5. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
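
    Maxwell constraint counting, the mean-field baseline that the VPG refines, is a one-line formula for body-bar networks: each rigid body contributes 6 degrees of freedom, 6 are removed for global rigid-body motion, and each independent bar removes one. A minimal sketch with hypothetical counts:

    def maxwell_count(n_bodies, n_bars):
        """Maxwell lower bound on internal DOF of a body-bar network."""
        return 6 * n_bodies - 6 - n_bars

    dof = maxwell_count(n_bodies=100, n_bars=550)
    print(dof, "over-constrained" if dof < 0 else "under-constrained (floppy)")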

  6. Exactly averaged equations for flow and transport in random media

    International Nuclear Information System (INIS)

    Shvidler, Mark; Karasaki, Kenzi

    2001-01-01

    It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood, for example the convergence behavior and accuracy of truncated perturbation series. Furthermore, the calculation of the high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist exact, general and sufficiently universal forms of averaged equations? If the answer is positive, there arises the problem of constructing and analyzing these equations. There exist many publications related to these problems, oriented to different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method of finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of conductivity, porosity and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases. 1. Steady-state flow with sources in porous media with random conductivity. 2. Transient flow with sources in compressible media with random conductivity and porosity. 3. Non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversal isotropic, orthotropic), and we analyze the hypothesized structure of the non-local equations in the general case of stochastically homogeneous fields. (author)

  7. Increase in average foveal thickness after internal limiting membrane peeling

    Directory of Open Access Journals (Sweden)

    Kumagai K

    2017-04-01

    Full Text Available Kazuyuki Kumagai,1 Mariko Furukawa,1 Tetsuyuki Suetsugu,1 Nobuchika Ogino2 1Department of Ophthalmology, Kami-iida Daiichi General Hospital, 2Department of Ophthalmology, Nishigaki Eye Clinic, Aichi, Japan Purpose: To report the findings in three cases in which the average foveal thickness was increased after a thin epiretinal membrane (ERM) was removed by vitrectomy with internal limiting membrane (ILM) peeling. Methods: The foveal contour was normal preoperatively in all eyes. All cases underwent successful phacovitrectomy with ILM peeling for a thin ERM. The optical coherence tomography (OCT) images were examined before and after the surgery. The changes in the average foveal (1 mm) thickness and the foveal areas within 500 µm from the foveal center were measured. The postoperative changes in the inner and outer retinal areas determined from the cross-sectional OCT images were analyzed. Results: The average foveal thickness and the inner and outer foveal areas increased significantly after the surgery in each of the three cases. The percentage increase in the average foveal thickness relative to the baseline thickness was 26% in Case 1, 29% in Case 2, and 31% in Case 3. The percentage increase in the foveal inner retinal area was 71% in Case 1, 113% in Case 2, and 110% in Case 3, and the percentage increase in foveal outer retinal area was 8% in Case 1, 13% in Case 2, and 18% in Case 3. Conclusion: The increase in the average foveal thickness and the inner and outer foveal areas suggests that a centripetal movement of the inner and outer retinal layers toward the foveal center probably occurred due to the ILM peeling. Keywords: internal limiting membrane, optical coherence tomography, average foveal thickness, epiretinal membrane, vitrectomy

  8. The Chicken Soup Effect: The Role of Recreation and Intramural Participation in Boosting Freshman Grade Point Average

    Science.gov (United States)

    Gibbison, Godfrey A.; Henry, Tracyann L.; Perkins-Brown, Jayne

    2011-01-01

    Freshman grade point average, in particular first semester grade point average, is an important predictor of survival and eventual student success in college. As many institutions of higher learning are searching for ways to improve student success, one would hope that policies geared towards the success of freshmen have long term benefits…

  9. Positivity of the spherically averaged atomic one-electron density

    DEFF Research Database (Denmark)

    Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas

    2008-01-01

    We investigate the positivity of the spherically averaged atomic one-electron density ρ̃(r). For a density which stems from a physical ground state we prove that ρ̃(r) > 0 for r ≥ 0. This article may be reproduced in its entirety for non-commercial purposes.

  10. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  11. MAIN STAGES OF THE SCIENTIFIC AND PRODUCTION DEVELOPMENT OF THE MIDDLE URALS TERRITORY

    Directory of Open Access Journals (Sweden)

    V.S. Bochko

    2006-09-01

    Full Text Available The formation of the Middle Urals as an industrial territory, on the basis of its scientific study and production development, is considered in the article. It is shown that the resources of the Urals and the particularities of the vital activity of its population were studied by Russian and foreign scientists in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to a systematic organizational-economic study of the productive forces, society and nature of the Middle Urals. More attention is now directed to the new problems of the region and to the need for their scientific solution.

  12. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  13. Non-self-averaging nucleation rate due to quenched disorder

    International Nuclear Information System (INIS)

    Sear, Richard P

    2012-01-01

    We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)

  14. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well-known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
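
    A moving-average Gaussian field of this kind can be simulated by convolving white noise with the kernel. A minimal sketch, assuming a hypothetical isotropic power-type kernel k(r) = (1 + r^2)^(-p) as a stand-in for the one-parameter kernel proposed in the paper:

    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(2)
    n, p = 256, 1.5
    y, x = np.indices((n, n)) - n // 2
    kernel = (1.0 + x**2 + y**2) ** (-p)        # hypothetical power-type kernel
    kernel /= np.sqrt((kernel**2).sum())        # unit marginal variance

    # stationary Gaussian moving-average field: kernel-smoothed white noise
    field = fftconvolve(rng.standard_normal((n, n)), kernel, mode="same")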

  15. Sexual predators, energy development, and conservation in greater Yellowstone.

    Science.gov (United States)

    Berger, Joel; Beckmann, Jon P

    2010-06-01

    In the United States, as elsewhere, a growing debate pits national energy policy and homeland security against biological conservation. In rural communities the extraction of fossil fuels is often encouraged because of the employment opportunities it offers, although the concomitant itinerant workforce is often associated with increased wildlife poaching. We explored possible positive and negative factors associated with energy extraction in the Greater Yellowstone Ecosystem (GYE), an area known for its national parks, intact biological diversity, and some of the New World's longest terrestrial migrations. Specifically, we asked whether counties with different economies - recreation (ski), agrarian (ranching or farming), and energy extractive (petroleum) - differed in healthcare (gauged by the abundance of hospital beds) and in the frequency of sexual predators. The absolute and relative frequency of registered sex offenders grew approximately two to three times faster in areas reliant on energy extraction. Healthcare among counties did not differ. The strong conflation of community dishevel, as reflected by in-migrant sexual predators, and ecological decay in Greater Yellowstone is consistent with patterns seen in similar systems from Ecuador to northern Canada, where social and environmental disarray exist around energy boomtowns. In our case, that groups (albeit with different aims) mobilized campaigns to help maintain the quality of rural livelihoods by protecting open space is a positive sign that conservation can matter, especially in the face of rampant and poorly executed energy extraction projects. Our findings further suggest that the public and industry need stronger regulatory action to instill greater vigilance when and where social factors and land conversion impact biological systems.

  16. Taino and African maternal heritage in the Greater Antilles.

    Science.gov (United States)

    Bukhari, Areej; Luis, Javier Rodriguez; Alfonso-Sanchez, Miguel A; Garcia-Bertrand, Ralph; Herrera, Rene J

    2017-12-30

    Notwithstanding the general interest and the geopolitical importance of the island countries in the Greater Antilles, little is known about the specific ancestral Native American and African populations that settled them. In an effort to alleviate this lacuna of information on the genetic constituents of the Greater Antilles, we comprehensively compared the mtDNA compositions of Cuba, Dominican Republic, Haiti, Jamaica and Puerto Rico. To accomplish this, the mtDNA HVRI and HVRII regions, as well as coding diagnostic sites, were assessed in the Haitian general population and compared to data from reference populations. The Taino maternal DNA is prominent in the ex-Spanish colonies (61.3%-22.0%) while it is basically non-existent in the ex-French and ex-English colonies of Haiti (0.0%) and Jamaica (0.5%), respectively. The most abundant Native American mtDNA haplogroups in the Greater Antilles are A2, B2 and C1. The African mtDNA component is almost fixed in Haiti (98.2%) and Jamaica (98.5%), and the frequencies of specific African haplogroups vary considerably among the five island nations. The strong persistence of Taino mtDNA in the ex-Spanish colonies (and especially in Puerto Rico), and its absence in the French and English ex-colonies, is likely the result of different social norms regarding mixed marriages with Taino women during the early years after the first contact with Europeans. In addition, this article reports on the results of an integrative approach based on mtDNA analysis and demographic data that tests the hypothesis of a southward shift in raiding zones along the African west coast during the period encompassing the Transatlantic Slave Trade. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Environmental characteristics of shallow bottoms used by Greater Flamingo Phoenicopterus roseus in a northern Adriatic lagoon

    Directory of Open Access Journals (Sweden)

    Scarton Francesco

    2017-12-01

    Full Text Available Since the beginning of this century, Greater Flamingo Phoenicopterus roseus flocks have been observed regularly when feeding in the large extensions of shallow bottoms in the Lagoon of Venice (NE Italy), the largest lagoon along the Mediterranean. Nowadays thousands of flamingos are present throughout the year. Between 2013 and 2017 I collected data on the environmental features of the shallow bottoms used by feeding flocks, along with measurements of flight initiation distance (FID) of Greater Flamingo in response to the approach of boats and pedestrians. Shallow bottoms were shown to be used when covered with approximately 10 to 60 cm of water. All the feeding sites were in open landscapes, with low occurrence of saltmarshes in a radius of 500 m. The bottoms were barely covered with seagrasses (<4% of the surface around the survey points) and were mostly silty. Feeding flocks were on average 1.2 km far from the nearest road or dyke, while the mean distance from channels that could be used by boats was about 420 m. The mean FID caused by boats or pedestrians was 241 ± 117 m (N = 31, ±1 SD), without significant differences between those for the two disturbance sources. The use of shallow bottoms by the Greater Flamingo appears governed primarily by the tidal cycle, but boat disturbance probably modifies this effect. According to FID values, a set-back distance of 465 m is suggested to reduce the disturbance caused by boats and pedestrians to the flamingo feeding flocks.

  18. Remotely Sensed Estimation of Net Primary Productivity (NPP) and Its Spatial and Temporal Variations in the Greater Khingan Mountain Region, China

    Directory of Open Access Journals (Sweden)

    Qiang Zhu

    2017-07-01

    Full Text Available We improved the CASA model based on differences in the types of land use, the values of the maximum light use efficiency, and the calculation methods of solar radiation. Then, the parameters of the model were examined and recombined into 16 cases. We estimated the net primary productivity (NPP) using the NDVI3g dataset, meteorological data, and vegetation classification data from the Greater Khingan Mountain region, China. We assessed the accuracy and temporal-spatial distribution characteristics of NPP in the Greater Khingan Mountain region from 1982 to 2013. Based on a comparison of the results of the 16 cases, we found that different values of maximum light use efficiency affect the estimation more than differences in the fraction of photosynthetically active radiation (FPAR). However, the FPARmax and the constant Tε2 values did not show marked effects. Different schemes were used to assess different model combinations. Models using a combination of parameters established by scholars from China and the United States produced different results and had large errors. These ideas are meaningful references for the estimation of NPP in other regions. The results reveal that the annual average NPP in the Greater Khingan Mountain region was 760 g C/m²·a in 1982-2013 and that the inter-annual fluctuations were not dramatic. The NPP estimation results of the 16 cases exhibit an increasing trend. In terms of the spatial distribution of the changes, the model indicated that the values in 75% of this area seldom or never increased. Prominent growth occurred in the areas of Taipingling, Genhe, and the Oroqen Autonomous Banner. Notably, NPP decreased in the southeastern region of the Greater Khingan Mountains, the Hulunbuir Pasture Land, and Holingol.
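
    At its core, a CASA-type estimate multiplies absorbed photosynthetically active radiation by a light use efficiency. A minimal sketch of one monthly, pixel-wise step, assuming hypothetical gridded inputs ndvi, sol (total solar radiation, MJ/m2/month) and a combined temperature-and-water stress scalar; the 0.389 g C/MJ maximum light use efficiency is the commonly used global CASA value, whereas the study varies it by vegetation type:

    import numpy as np

    def casa_npp(ndvi, sol, stress, eps_max=0.389, ndvi_min=0.05, ndvi_max=0.95):
        """One monthly CASA-style NPP step (g C/m^2/month), toy parameters."""
        fpar = np.clip((ndvi - ndvi_min) / (ndvi_max - ndvi_min), 0.0, 0.95)
        apar = 0.5 * sol * fpar    # PAR is roughly half of total solar radiation
        return apar * eps_max * stress

    # npp_annual = sum of casa_npp(...) over the 12 monthly grids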

  19. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the most promising theoretical frames for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
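
    The two averages at issue can be written out explicitly. In LaTeX, for a generic field f and local solid concentration c over N realizations, a sketch of the phasic average and the mass-weighted (Favre-type) average:

    \[
      \langle f \rangle = \frac{1}{N}\sum_{k=1}^{N} f_k ,
      \qquad
      \tilde{f} = \frac{\langle c\, f \rangle}{\langle c \rangle}
                = \frac{\sum_{k=1}^{N} c_k f_k}{\sum_{k=1}^{N} c_k} .
    \]

    For a single realization, or whenever c_k is identical across realizations (the infinite-molecule gas limit), the two definitions coincide, which is exactly the distinction drawn above.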

  20. 40 CFR 63.1332 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... control technology or standard had been applied instead of the pollution prevention measure. (d) The... technology with an approved nominal efficiency greater than 98 percent or a pollution prevention measure... Section 63.1332 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  1. What Does Average Really Mean? Making Sense of Statistics

    Science.gov (United States)

    DeAngelis, Karen J.; Ayers, Steven

    2009-01-01

    The recent shift toward greater accountability has put many educational leaders in a position where they are expected to collect and use increasing amounts of data to inform their decision making. Yet, because many programs that prepare administrators, including school business officials, do not require a statistics course or a course that is more…

  2. Who Are Most, Average, or High-Functioning Adults?

    Science.gov (United States)

    Gregg, Noel; Coleman, Chris; Lindstrom, Jennifer; Lee, Christopher

    2007-01-01

    The growing number of high-functioning adults seeking accommodations from testing agencies and postsecondary institutions presents an urgent need to ensure reliable and valid diagnostic decision making. The potential for this population to make significant contributions to society will be greater if we provide the learning and testing…

  3. Greater efficiency in attentional processing related to mindfulness meditation.

    NARCIS (Netherlands)

    Hurk, P.A.M. van den; Giommi, F.; Gielen, S.C.A.M.; Speckens, A.E.M.; Barendregt, H.P.

    2010-01-01

    In this study, attentional processing in relation to mindfulness meditation was investigated. Since recent studies have suggested that mindfulness meditation may induce improvements in attentional processing, we have tested 20 expert mindfulness meditators in the attention network test. Their

  4. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  5. High Average Power UV Free Electron Laser Experiments At JLAB

    International Nuclear Information System (INIS)

    Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn

    2012-01-01

    Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  6. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
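
    The concentration statement is easy to see numerically. A minimal sketch, sampling from the induced measure via partial tracing of random bipartite pure states (square ancilla) and computing the relative entropy of coherence C_r(rho) = S(diag(rho)) - S(rho); the sample size and dimension are arbitrary choices:

    import numpy as np

    def random_density_matrix(d, rng):
        """Induced-measure mixed state: normalized Wishart from a Ginibre matrix."""
        g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
        rho = g @ g.conj().T
        return rho / np.trace(rho).real

    def entropy(p):
        p = p[p > 1e-12]
        return float(-(p * np.log2(p)).sum())

    def rel_entropy_coherence(rho):
        return entropy(np.diag(rho).real) - entropy(np.linalg.eigvalsh(rho))

    rng = np.random.default_rng(3)
    c = [rel_entropy_coherence(random_density_matrix(16, rng)) for _ in range(200)]
    print(np.mean(c), np.std(c))    # small spread: concentration of measure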

  7. Establishment of Average Body Measurement and the Development ...

    African Journals Online (AJOL)

    body measurement for height and back neck to waist for ages 2, 3, 4 and 5 years. The ... average measurements of the different parts of the body must be established. ..... and OAU Charter on Rights of the child: Lagos: Nigeria Country office.

  8. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  9. Determination of the average lifetime of bottom hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W [Technische Hochschule Aachen (Germany, F.R.). Lehrstuhl fuer Experimentalphysik 1A und 1. Physikalisches Inst.; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e⁺e⁻ annihilation to be τ_B = 1.83 × 10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes.

  10. Determination of the average lifetime of bottom hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e⁺e⁻ annihilation to be τ_B = 1.83 × 10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes. (orig./HSI).

  11. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    Auto-Regressive Integrated Moving Average (ARIMA) models, often referred to as Box-Jenkins models, are regression methods for analyzing sequences of dependent observations when large amounts of data are available. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
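
    A rough illustration of the three Box-Jenkins stages in Python with statsmodels follows; the synthetic random-walk series and the (1, 1, 1) order are stand-in assumptions, not the study's GPA data or fitted model:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        y = 3.0 + np.cumsum(rng.standard_normal(120))   # synthetic dependent series

        # Identification: in practice guided by ACF/PACF plots; here we posit (1, 1, 1)
        model = ARIMA(y, order=(1, 1, 1))

        # Estimation
        fit = model.fit()
        print(fit.summary())

        # Diagnosis: residuals should resemble white noise
        print("residual mean:", fit.resid.mean())

        # Forecast the next four observations
        print(fit.forecast(steps=4))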

  12. Crystallographic extraction and averaging of data from small image areas

    NARCIS (Netherlands)

    Perkins, GA; Downing, KH; Glaeser, RM

    The accuracy of structure-factor phases determined from electron microscope images depends mainly on the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that

  13. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
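
    The core idea is easy to demonstrate numerically: averaging N time-locked sweeps leaves the repeated signal intact while the residual noise falls roughly as 1/sqrt(N). In the sketch below, the damped sinusoid and the unit-variance Gaussian noise are arbitrary choices, not the paper's experimental signals:

        import numpy as np

        t = np.linspace(0, 1, 500)
        signal = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # a damped "evoked response"

        rng = np.random.default_rng(42)
        for n_trials in (1, 16, 256):
            trials = signal + rng.standard_normal((n_trials, t.size))
            average = trials.mean(axis=0)
            noise_rms = np.std(average - signal)
            # RMS noise drops from ~1.0 toward ~1/16 at 256 trials
            print(f"{n_trials:4d} trials -> residual noise RMS {noise_rms:.3f}")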

  14. Environmental stresses can alleviate the average deleterious effect of mutations

    Directory of Open Access Journals (Sweden)

    Leibler Stanislas

    2003-05-01

    Full Text Available Abstract Background Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions Our results show a qualitative difference between various environmental stresses ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.

  15. The background effective average action approach to quantum gravity

    DEFF Research Database (Denmark)

    D’Odorico, G.; Codello, A.; Pagani, C.

    2016-01-01

    of a UV attractive non-Gaussian fixed point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action.

  16. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long-horocycle-average asymptotics to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth

  17. Moving average rules as a source of market instability

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets
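
    For readers unfamiliar with such rules, the following toy example applies a simple price-versus-moving-average rule to a synthetic price path; the window length, drift, and volatility are illustrative assumptions, not the paper's market model:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(7)
        log_returns = 0.0005 + 0.02 * rng.standard_normal(500)
        price = pd.Series(100 * np.exp(np.cumsum(log_returns)))

        m = 50                                    # moving-average window
        ma = price.rolling(m).mean()
        # Long when price is above its MA; act on yesterday's signal to avoid lookahead
        position = (price > ma).astype(int).shift(1).fillna(0)

        returns = price.pct_change().fillna(0)
        ma_rule_growth = float((1 + position * returns).prod())
        buy_and_hold_growth = float(price.iloc[-1] / price.iloc[0])
        print(f"buy-and-hold x{buy_and_hold_growth:.2f}, MA rule x{ma_rule_growth:.2f}")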

  18. arXiv Averaged Energy Conditions and Bouncing Universes

    CERN Document Server

    Giovannini, Massimo

    2017-11-16

    The dynamics of bouncing universes is characterized by violating certain coordinate-invariant restrictions on the total energy-momentum tensor, customarily referred to as energy conditions. Although there could be epochs in which the null energy condition is locally violated, it may perhaps be enforced in an averaged sense. Explicit examples of this possibility are investigated in different frameworks.

  19. 26 CFR 1.1301-1 - Averaging of farm income.

    Science.gov (United States)

    2010-04-01

    ... January 1, 2003, rental income based on a share of a tenant's production determined under an unwritten... the Collection of Income Tax at Source on Wages (Federal income tax withholding), or the amount of net... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Averaging of farm income. 1.1301-1 Section 1...

  20. Implications of Methodist clergies' average lifespan and missional ...

    African Journals Online (AJOL)

    2015-06-09

    Jun 9, 2015 ... The author of Genesis 5 paid meticulous attention to the lifespan of several people ... of Southern Africa (MCSA), and to argue that memories of the ... average ages at death were added up and the sum was divided by 12 (which represents the 12 ..... not explicit in how the departed Methodist ministers were.

  1. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new data-mining problem: finding the subset of the input space with the relatively highest output, where a minimal size for this subset is given. This can be useful where the usual data-mining methods fail because of asymmetry in the error distribution. The paper provides a novel algorithm for this problem and compares it with clustering of above-average individuals.

  2. Average Distance Travelled To School by Primary and Secondary ...

    African Journals Online (AJOL)

    This study investigated the average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and its effect on attendance. These are among the top ten most densely populated and educationally advantaged States in Nigeria. Research evidence reports high dropout rates in ...

  3. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Full Text Available Before 1998, the definition of wages in Poland did not include the value of social security contributions. The changed definition produces a higher level of reported wages but was expected not to influence take-home pay. Nevertheless, after a short period, the trend of average wages returned to its previous line. This effect is explained in terms of money illusion.

  4. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model, in which we only consider the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides requires an accurate average energy value for LMN electrons. In this report, we illustrate the procedure with two examples, ¹²⁵I and ¹⁰⁹Cd. (Author) 4 refs

  5. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
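
    The paper's bound itself is more involved, but the gap it highlights can be felt with the standard textbook relation between average gate fidelity and process (entanglement) fidelity, F_avg = (d·F_pro + 1)/(d + 1). The helper below is a generic conversion under that relation, not the authors' error-rate bound:

        def process_infidelity(f_avg: float, d: int) -> float:
            # Invert F_avg = (d * F_pro + 1) / (d + 1) and return 1 - F_pro
            f_pro = ((d + 1) * f_avg - 1) / d
            return 1 - f_pro

        # Reported numbers from the abstract: 99.9% (single qubit), 99% (two qubits)
        print(process_infidelity(0.999, d=2))   # 0.0015
        print(process_infidelity(0.990, d=4))   # 0.0125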

  6. 75 FR 78157 - Farmer and Fisherman Income Averaging

    Science.gov (United States)

    2010-12-15

    ... to the averaging of farm and fishing income in computing income tax liability. The regulations...: PART 1--INCOME TAXES 0 Paragraph 1. The authority citation for part 1 continues to read in part as... section 1 tax would be increased if one-third of elected farm income were allocated to each year. The...

  7. Domain-averaged Fermi-hole Analysis for Solids

    Czech Academy of Sciences Publication Activity Database

    Baranov, A.; Ponec, Robert; Kohout, M.

    2012-01-01

    Vol. 137, No. 21 (2012), p. 214109 ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords: bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 3.164, year: 2012

  8. Characteristics of phase-averaged equations for modulated wave groups

    NARCIS (Netherlands)

    Klopman, G.; Petit, H.A.H.; Battjes, J.A.

    2000-01-01

    The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).

  9. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  10. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. The average productivity, heat-exchange area, and filling time are calculated for tanks of various volumes with smooth inner walls, depending on their height-to-radius (H/R) ratio, as well as the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6×10⁻² m³ as the central hole diameter of the ribs is varied. It is shown that increasing the H/R ratio of tanks with smooth inner walls up to the limiting values significantly increases average tank productivity and reduces filling time. Increasing the H/R ratio of a 1.0 m³ tank to the limiting values (in comparison with the standard tank having H/R equal to 3.49) raises tank productivity by 23.5% and the heat-exchange area by 20%. Besides, we demonstrate that maximum average productivity and minimum filling time are reached for the 6×10⁻² m³ tank when the central hole diameter of the horizontal ribs is 6.4×10⁻² m.

  11. An averaged polarizable potential for multiscale modeling in phospholipid membranes

    DEFF Research Database (Denmark)

    Witzke, Sarah; List, Nanna Holmgaard; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    A set of average atom-centered charges and polarizabilities has been developed for three types of phospholipids for use in polarizable embedding calculations. The lipids investigated are 1,2-dimyristoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, and 1-palmitoyl...

  12. Understanding coastal morphodynamic patterns from depth-averaged sediment concentration

    NARCIS (Netherlands)

    Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.

    This review highlights the important role of the depth-averaged sediment concentration (DASC) in understanding the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand

  13. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
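
    One concrete averaging scheme of the kind compared in such studies is smooth AIC weighting. The sketch below is a generic illustration on simulated data, not the paper's simulation design: it fits nested linear regressions and combines their fitted values with Akaike weights.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100
        x = rng.standard_normal((n, 3))
        y = 1.0 + 2.0 * x[:, 0] + rng.standard_normal(n)   # only the first regressor matters

        def fit_ols(X, y):
            X1 = np.column_stack([np.ones(len(y)), X])     # add intercept
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            resid = y - X1 @ beta
            k = X1.shape[1] + 1                            # parameters + error variance
            aic = len(y) * np.log(resid @ resid / len(y)) + 2 * k
            return beta, aic, X1

        models = [fit_ols(x[:, :j], y) for j in (1, 2, 3)]  # nested candidates
        aics = np.array([aic for _, aic, _ in models])
        weights = np.exp(-0.5 * (aics - aics.min()))
        weights /= weights.sum()                            # Akaike weights
        print("weights:", weights.round(3))

        # Model-averaged fitted value for the first observation
        fitted = sum(w * (X1[0] @ beta) for w, (beta, _, X1) in zip(weights, models))
        print("averaged fitted value:", round(float(fitted), 3))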

  14. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    Doczi, R.; Csikai, J.; Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.

    2004-01-01

    A method previously used for determining the average neutron flux within bulky samples has been applied to measure the hydrogen content of different samples. An analytical function is given describing the correlation between the activity of Dy foils and the hydrogen concentration. Results obtained by the activation and thermal neutron reflection methods are compared.

  15. Grade Point Average: What's Wrong and What's the Alternative?

    Science.gov (United States)

    Soh, Kay Cheng

    2011-01-01

    Grade point average (GPA) has been around for more than two centuries. However, it has created a great deal of confusion, frustration, and anxiety for GPA producers and users alike, especially when used across nations for different purposes. This paper looks into the reasons for this state of affairs from the perspective of educational measurement. It…

  16. The Effect of Honors Courses on Grade Point Averages

    Science.gov (United States)

    Spisak, Art L.; Squires, Suzanne Carter

    2016-01-01

    High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…

  17. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...

  18. Average and local structure of selected metal deuterides

    Energy Technology Data Exchange (ETDEWEB)

    Soerby, Magnus H.

    2005-07-01

    The main topic of this thesis is improved understanding of site preference and mutual interactions of deuterium (D) atoms in selected metallic metal deuterides. The work was partly motivated by reports of abnormally short D-D distances in RENiInD1.33 compounds (RE = rare-earth element; D-D ≈ 1.6 Å), which show that the so-called Switendick criterion, which demands a D-D separation of at least 2 Å, is not a universal rule. The work is experimental and heavily based on scattering measurements using x-rays (laboratory and synchrotron) and neutrons. In order to enhance data quality, deuterium is almost exclusively used instead of natural hydrogen in sample preparations. The data analyses are in some cases taken beyond 'conventional' analysis of the Bragg scattering, as the diffuse scattering contains important information on D-D distances in disordered deuterides (Papers 3 and 4). A considerable part of this work is devoted to determination of the crystal structure of saturated Zr2Ni deuteride, Zr2NiD4.8. The structure remained unsolved when only a few months remained of the scholarship. The route to the correct structure was found at the last moment. In Chapter II this winding road towards the structure determination is described; an interesting exercise in how to cope with triclinic superstructures of metal hydrides. The solution emerged by combining data from synchrotron radiation powder x-ray diffraction (SR-PXD), powder neutron diffraction (PND) and electron diffraction (ED). The triclinic crystal structure, described in space group P1, is fully ordered with composition Zr4Ni2D9 (Zr2NiD4.5). The unit cell is doubled as compared to lower Zr2Ni deuterides due to a deuterium superstructure: a_super = a, b_super = b − c, c_super = b + c. The deviation from higher symmetry is very small. The metal lattice is pseudo-I-centred tetragonal and the deuterium lattice is pseudo-C-centred monoclinic. The deuterium site preference in Zr2Ni

  19. Average and local structure of selected metal deuterides

    International Nuclear Information System (INIS)

    Soerby, Magnus H.

    2004-01-01

    The main topic of this thesis is improved understanding of site preference and mutual interactions of deuterium (D) atoms in selected metallic metal deuterides. The work was partly motivated by reports of abnormally short D-D distances in RENiInD1.33 compounds (RE = rare-earth element; D-D ≈ 1.6 Å), which show that the so-called Switendick criterion, which demands a D-D separation of at least 2 Å, is not a universal rule. The work is experimental and heavily based on scattering measurements using x-rays (laboratory and synchrotron) and neutrons. In order to enhance data quality, deuterium is almost exclusively used instead of natural hydrogen in sample preparations. The data analyses are in some cases taken beyond 'conventional' analysis of the Bragg scattering, as the diffuse scattering contains important information on D-D distances in disordered deuterides (Papers 3 and 4). A considerable part of this work is devoted to determination of the crystal structure of saturated Zr2Ni deuteride, Zr2NiD4.8. The structure remained unsolved when only a few months remained of the scholarship. The route to the correct structure was found at the last moment. In Chapter II this winding road towards the structure determination is described; an interesting exercise in how to cope with triclinic superstructures of metal hydrides. The solution emerged by combining data from synchrotron radiation powder x-ray diffraction (SR-PXD), powder neutron diffraction (PND) and electron diffraction (ED). The triclinic crystal structure, described in space group P1, is fully ordered with composition Zr4Ni2D9 (Zr2NiD4.5). The unit cell is doubled as compared to lower Zr2Ni deuterides due to a deuterium superstructure: a_super = a, b_super = b − c, c_super = b + c. The deviation from higher symmetry is very small. The metal lattice is pseudo-I-centred tetragonal and the deuterium lattice is pseudo-C-centred monoclinic. The deuterium site preference in Zr2Ni deuterides at 1 bar D2 and

  20. [Three-dimensional gait analysis of patients with osteonecrosis of femoral head before and after treatments with vascularized greater trochanter bone flap].

    Science.gov (United States)

    Cui, Daping; Zhao, Dewei

    2011-03-01

    To provide an objective basis for evaluating the operative results of vascularized greater trochanter bone flap in treating osteonecrosis of the femoral head (ONFH) by three-dimensional gait analysis. Between March 2006 and March 2007, 35 patients with ONFH were treated with vascularized greater trochanter bone flap, and gait analysis was performed using a three-dimensional gait analysis system before operation and at 1 and 2 years after operation. There were 23 males and 12 females, aged 21-52 years (mean, 35.2 years), including 8 cases of steroid-induced, 7 cases of traumatic, 6 cases of alcoholic, and 14 cases of idiopathic ONFH. The left side was involved in 15 cases, and the right side in 20 cases. According to the Association Research Circulation Osseous (ARCO) classification, all patients were diagnosed as having femoral head necrosis at stage III. The preoperative Harris hip functional score (HHS) was 56.2 +/- 5.6. The disease duration was 1.5-18.6 years (mean, 5.2 years). All incisions healed by first intention, without the early postoperative complications of deep vein thrombosis and incision infection. Thirty-five patients were followed up for 2-3 years (mean, 2.5 years). At 2 years after operation, the HHS score was 85.8 +/- 4.1, showing significant difference when compared with the preoperative score (t = 23.200, P = 0.000). Before operation, patients showed a hip-muscle gait, short gait, and pain-reducing gait, and these pathological gaits significantly improved at 1 year after operation. At 1 and 2 years after operation, step frequency, pace, step length and hip flexion, hip extension, knee flexion, and ankle flexion were significantly improved (P < 0.05); a new wave appeared at the swing phase, whereas the preoperative pattern had shown three normal phase waves. These results suggest that three-dimensional gait analysis before and after vascularized greater trochanter bone flap for ONFH can precisely evaluate variations in hip dynamics.

  1. Forecasting of Average Monthly River Flows in Colombia

    Science.gov (United States)

    Mesa, O. J.; Poveda, G.

    2006-05-01

    The last two decades have witnessed a marked increase in our knowledge of the causes of interannual hydroclimatic variability and our ability to make predictions. Colombia, located near the seat of the ENSO phenomenon, has been shown to experience negative (positive) anomalies in precipitation in concert with El Niño (La Niña). Besides the Pacific Ocean, Colombia has climatic influences from the Atlantic Ocean and the Caribbean Sea, through the tropical forest of the Amazon basin and the savannas of the Orinoco River, on top of the orographic and hydro-climatic effects introduced by the Andes. As in various other countries of the region, hydroelectric power contributes a large proportion (75%) of total electricity generation in Colombia. Also, most agriculture is rain-fed, and domestic water supply relies mainly on surface waters from creeks and rivers. Besides, various vector-borne tropical diseases intensify in response to changes in rain and temperature. Therefore, there is a direct connection between climatic fluctuations and national and regional economies. This talk presents different forecasts of average monthly streamflows for the inflow into the largest reservoir used for hydropower generation in Colombia, and illustrates the potential economic savings of such forecasts. For planning the reservoir operation, the most appropriate time scale for this application is the annual to interannual. Fortunately, this corresponds to the scale at which our understanding of hydroclimatic variability has improved significantly. Among the different possibilities we have explored: traditional statistical ARIMA models, multiple linear regression, natural and constructed analogue models, the linear inverse model, neural network models, the non-parametric regression splines (MARS) model, regime-dependent Markovian models and one we termed PREBEO, which is based on spectral band decomposition using wavelets. Most of the methods make

  2. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article "An average salary: approaches to the index determination" is devoted to studying various methods of calculating this index, both those used by the official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze existing approaches to calculating the average salary of employees of enterprises and organizations, and to make certain additions to help clarify this index. The information base of the research comprises laws and regulations of the Government of the Russian Federation, statistical and analytical materials of the Federal State Statistics Service of Russia for the section "Socio-economic indexes: living standards of the population", and scientific papers describing different approaches to the average salary calculation. Data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. The following methods were used in the research: analytical, statistical, computational-mathematical and graphical. The main result of the research is an option for supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by introducing a correction factor. Its essence consists in forming specific material indexes for different categories of employees in enterprises or organizations, mainly those engaged in internal secondary jobs. The need to introduce this correction factor arises from the current reality of working conditions in a wide range of organizations, where an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, it is frequently difficult to assess the average salary at an enterprise objectively, because it is built up from multiple rates per staff member. In other words, the average salary of

  3. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  4. Construction of average adult Japanese voxel phantoms for dose assessment

    International Nuclear Information System (INIS)

    Sato, Kaoru; Takahashi, Fumiaki; Satoh, Daiki; Endo, Akira

    2011-12-01

    The International Commission on Radiological Protection (ICRP) adopted adult reference voxel phantoms based on the physiological and anatomical reference data of Caucasians in October 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In the future, the dose coefficients for internal dose and the dose conversion coefficients for external dose calculated using the adult reference voxel phantoms will be widely used in the radiation protection field. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasians. In addition, there are cases in which anatomical characteristics such as the body size, organ masses and posture of subjects influence the organ doses in dose assessments for medical treatment and radiation accidents. Therefore, human phantoms with the average anatomical characteristics of Japanese adults are needed. The authors constructed averaged adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms in the following three aspects: (1) the heights and weights were made to agree with the Japanese averages; (2) the masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) the organs and tissues newly added for evaluation of the effective dose in ICRP Publication 103 were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the averaged adult Japanese male and female voxel phantoms developed as reference phantoms of adult Japanese. (author)

  5. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-averaged Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to the reduced computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: multi-model ensemble, ensemble analysis, ERF, regional climate modeling
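
    Schematically, the ERF step is just a grid-point-wise average of the driving fields before a single RCM run. In this sketch the array shapes and the variable being averaged are invented placeholders, not the study's actual configuration:

        import numpy as np

        n_gcms, n_time, n_lat, n_lon = 6, 8, 90, 180
        # Stand-in for, e.g., sea-level pressure boundary fields from six GCMs
        ibc_fields = np.random.default_rng(0).standard_normal((n_gcms, n_time, n_lat, n_lon))

        erf_ibc = ibc_fields.mean(axis=0)   # one ensemble-averaged IBC set
        assert erf_ibc.shape == (n_time, n_lat, n_lon)
        # erf_ibc would then be written out and passed to a single RCM simulation,
        # instead of running the RCM once per GCM and averaging the outputs.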

  6. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak (Ū_P), the average (Ū), the effective (U_eff) or the maximum peak (U_P) tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average (Ū) or the average peak (Ū_P) voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average-peak (k_PPV,kVp) and average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
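
    The conversion itself is a simple multiplication once the factors are known. The sketch below assumes a kV-meter reading of the average peak voltage; the calibration coefficient and conversion factor are hypothetical placeholder values, not the paper's regression results:

        def ppv_from_reading(reading_kv: float, calib_coeff: float, k_ppv: float) -> float:
            # PPV = (kV-meter reading) x (calibration coefficient) x (conversion factor)
            return reading_kv * calib_coeff * k_ppv

        # Hypothetical example: 80 kV reading, calibration coefficient 1.002,
        # conversion factor 0.985 for the given tube voltage and ripple
        print(ppv_from_reading(80.0, 1.002, 0.985))   # ~79.0 kV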

  7. Gas, Oil, and Water Production from Jonah, Pinedale, Greater Wamsutter, and Stagecoach Draw Fields in the Greater Green River Basin, Wyoming

    Science.gov (United States)

    Nelson, Philip H.; Ewald, Shauna M.; Santus, Stephen L.; Trainor, Patrick K.

    2010-01-01

    Gas, oil, and water production data were compiled from selected wells in four gas fields in rocks of Late Cretaceous age in southwestern Wyoming. This study is one of a series of reports examining fluid production from tight-gas reservoirs, which are characterized by low permeability, low porosity, and the presence of clay minerals in pore space. Production from each well is represented by two samples spaced five years apart, the first sample typically taken two years after commencement of production. For each producing interval, summary diagrams of oil versus gas and water versus gas production show fluid production rates, the change in rates during five years, the water-gas and oil-gas ratios, and the fluid type. These diagrams permit well-to-well and field-to-field comparisons. Fields producing water at low rates (water dissolved in gas in the reservoir) can be distinguished from fields producing water at moderate or high rates, and the water-gas ratios are quantified. The ranges of first-sample gas rates in Pinedale field and Jonah field are quite similar, and the average gas production rate for the second sample, taken five years later, is about one-half that of the first sample for both fields. Water rates are generally substantially higher in Pinedale than in Jonah, and water-gas ratios are roughly a factor of ten greater in Pinedale than in Jonah. Gas and water production rates from each field are fairly well grouped, indicating that Pinedale and Jonah fields are fairly cohesive gas-water systems. Pinedale field appears to be remarkably uniform in its flow behavior with time. Jonah field, which is internally faulted, exhibits a small spread in first-sample production rates. In the Greater Wamsutter field, gas production from the upper part of the Almond Formation is greater than from the main part of the Almond. Some wells in the main and the combined (upper and main parts) Almond show increases in water production with time, whereas increases

  8. Improving Photosynthesis

    Science.gov (United States)

    Evans, John R.

    2013-01-01

    Photosynthesis is the basis of plant growth, and improving photosynthesis can contribute toward greater food security in the coming decades as world population increases. Multiple targets have been identified that could be manipulated to increase crop photosynthesis. The most important target is Rubisco because it catalyses both carboxylation and oxygenation reactions and the majority of responses of photosynthesis to light, CO2, and temperature are reflected in its kinetic properties. Oxygenase activity can be reduced either by concentrating CO2 around Rubisco or by modifying the kinetic properties of Rubisco. The C4 photosynthetic pathway is a CO2-concentrating mechanism that generally enables C4 plants to achieve greater efficiency in their use of light, nitrogen, and water than C3 plants. To capitalize on these advantages, attempts have been made to engineer the C4 pathway into C3 rice (Oryza sativa). A simpler approach is to transfer bicarbonate transporters from cyanobacteria into chloroplasts and prevent CO2 leakage. Recent technological breakthroughs now allow higher plant Rubisco to be engineered and assembled successfully in planta. Novel amino acid sequences can be introduced that have been impossible to reach via normal evolution, potentially enlarging the range of kinetic properties and breaking free from the constraints associated with covariation that have been observed between certain kinetic parameters. Capturing the promise of improved photosynthesis in greater yield potential will require continued efforts to improve carbon allocation within the plant as well as to maintain grain quality and resistance to disease and lodging. PMID:23812345

  9. Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns

    DEFF Research Database (Denmark)

    Gonçalves, Sílvia; Hounyo, Ulrich; Meddahi, Nour

    The main contribution of this paper is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias......-overlapping nature of the pre-averaged returns implies that these are asymptotically independent, but possibly heteroskedastic. This motivates the application of the wild bootstrap in this context. We provide a proof of the first order asymptotic validity of this method for percentile and percentile-t intervals. Our...... Monte Carlo simulations show that the wild bootstrap can improve the finite sample properties of the existing first order asymptotic theory provided we choose the external random variable appropriately. We use empirical work to illustrate its use in practice....
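
    To make the scheme concrete, here is a toy wild bootstrap on non-overlapping pre-averaged returns. It uses simplified uniform pre-averaging weights and a Gaussian external variable, so it is a schematic of the approach rather than the exact Podolskij-Vetter statistic or the paper's recommended external variable:

        import numpy as np

        rng = np.random.default_rng(11)
        # Noisy high-frequency returns: efficient increments plus microstructure noise
        r = 0.01 * rng.standard_normal(1000) + 0.002 * rng.standard_normal(1000)

        k = 10                                     # pre-averaging window length
        blocks = r[: (len(r) // k) * k].reshape(-1, k)
        pre_avg = blocks.mean(axis=1)              # non-overlapping pre-averaged returns

        stat = float(np.sum(pre_avg ** 2))         # volatility-like statistic

        # Wild bootstrap: rescale each pre-averaged return by an i.i.d. external draw;
        # the choice of external variable matters for finite-sample accuracy
        B = 999
        eta = rng.standard_normal((B, pre_avg.size))
        boot = np.sum((eta * pre_avg) ** 2, axis=1)
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"statistic {stat:.6f}, bootstrap 95% band [{lo:.6f}, {hi:.6f}]")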

  10. Slimness is associated with greater intercourse and lesser masturbation frequency.

    Science.gov (United States)

    Brody, Stuart

    2004-01-01

    I examined the relationship of recalled and diary recorded frequency of penile-vaginal intercourse (FSI), noncoital partnered sexual activity, and masturbation to measured waist and hip circumference in 120 healthy adults aged 19-38. Slimmer waist (in men and in the sexes combined) and slimmer hips (in men and women) were associated with greater FSI. Slimmer waist and hips were associated with rated importance of intercourse for men. Noncoital partnered sexual activity had a less consistent association with slimness. Slimmer waist and hips were associated with less masturbation (in men and in the sexes combined). I discuss the results in terms of differences between different sexual behaviors, attractiveness, emotional relatedness, physical sensitivity, sexual dysfunction, sociobiology, psychopharmacological aspects of excess fat and carbohydrate consumption, and implications for sex therapy.

  11. ADR characteristics and corporate governance in the Greater China region

    Directory of Open Access Journals (Sweden)

    Lee-Hsien Pan

    2012-04-01

    Full Text Available We examine the relationship between firm valuation and governance mechanisms, firm characteristics, and institutional factors of the American Depository Receipts (ADRs domiciled in the Greater China region. We find that China ADRs have the highest market-to-book value ratio followed by Hong Kong and Taiwan ADRs. It appears that Chinese firms with the poorest external governance environment stand to benefit the most from cross listing under the ADR programs. Listing in the U.S. that requires more stringent regulations and disclosure rules may strengthen the firms’ governance practices and thereby enhance their firm value. Among the internal governance mechanisms, institutional ownership and insider ownership are important for firm value.

  12. Greater confinement disposal program at the Savannah River Plant

    International Nuclear Information System (INIS)

    Cook, J.R.; Towler, O.A.; Peterson, D.L.; Johnson, G.M.; Helton, B.D.

    1984-01-01

    The first facility to demonstrate Greater Confinement Disposal (GCD) in a humid environment in the United States has been built and is operating at the Savannah River Plant. GCD practices of waste segregation, packaging, emplacement below the root zone, and waste stabilization are being used in the demonstration. Activity concentrations to select wastes for GCD are based on a study of SRP burial records, and are equal to or less than those for Class B waste in 10CFR61. The first disposal units to be constructed are 9-foot diameter, thirty-foot deep boreholes which will be used to dispose of wastes from production reactors, tritiated wastes, and selected wastes from off-site. In 1984 an engineered GCD trench will be constructed for disposal of boxed wastes and large bulky items. 2 figures, 1 table

  13. Evil genius? How dishonesty can lead to greater creativity.

    Science.gov (United States)

    Gino, Francesca; Wiltermuth, Scott S

    2014-04-01

    We propose that dishonest and creative behavior have something in common: They both involve breaking rules. Because of this shared feature, creativity may lead to dishonesty (as shown in prior work), and dishonesty may lead to creativity (the hypothesis we tested in this research). In five experiments, participants had the opportunity to behave dishonestly by overreporting their performance on various tasks. They then completed one or more tasks designed to measure creativity. Those who cheated were subsequently more creative than noncheaters, even when we accounted for individual differences in their creative ability (Experiment 1). Using random assignment, we confirmed that acting dishonestly leads to greater creativity in subsequent tasks (Experiments 2 and 3). The link between dishonesty and creativity is explained by a heightened feeling of being unconstrained by rules, as indicated by both mediation (Experiment 4) and moderation (Experiment 5).

  14. Use of renewable energy in the greater metropolitan area

    International Nuclear Information System (INIS)

    Arias Garcia, Rocio; Castro Gomez, Gustavo; Fallas Cordero, Kenneth; Grant Chaves, Samuel; Mendez Parrales, Tony; Parajeles Fernandez, Ivan

    2012-01-01

    A study is conducted on different renewable energies within the Greater Metropolitan Area, selecting those most suitable for the area and for implementation as distributed generation. A research methodology is used that aims to gather the necessary information to make proposals for the selected types of energy. The geography of the Greater Metropolitan Area is studied along with the different existing renewable energies; distributed generation; remote energy metering, which is one of the elements of the smart grid concept in the electricity sector; the legislation of Costa Rica regarding the generation of renewable energy; and environmental impact. An economic feasibility analysis is presented for each of the proposals, estimating current rates of the leading distributors to a future value, and concluding with the viability of the projects for their possible execution. (author) [es

  15. The hydrogen village in the Greater Toronto Area (GTA)

    International Nuclear Information System (INIS)

    Kimmel, T.B.; Smith, R.

    2004-01-01

    'Full text:' A Hydrogen Village (H2V) is a public/private partnership with an objective to accelerate the commercialization of hydrogen and fuel cell technology in Canada and firmly position Canada as the international leader in this sector. The first Hydrogen Village is planned for the Greater Toronto Area (GTA) and will make use of existing hydrogen and fuel cell deployments to assist in its creation. This five year GTA Hydrogen Village program is planned to begin operations in 2004. The Hydrogen Village will demonstrate and deploy various hydrogen production and delivery techniques as well as fuel cells for stationary, transportation (mobile) and portable applications. This paper will provide an overview of the Hydrogen Village and identify the missions, objectives, members and progress within the H2V. (author)

  16. Age and Expatriate Job Performance in Greater China

    DEFF Research Database (Denmark)

    Selmer, Jan; Lauring, Jakob; Feng, Yunxia

    2009-01-01

    a positive impact on expatriates' job performance. Therefore, the purpose of this paper is to examine the association between the age of business expatriates and their work performance in a Chinese cultural setting. Design/methodology/approach - Controlling for the potential bias of a number of background......, companies should not discriminate against older candidates in expatriate selection for Greater China. Furthermore, older expatriates destined for a Chinese cultural context could be trained in how to exploit their age advantage. Originality/value - In contrast to previous studies, this investigation attempts...... to match a certain personal characteristic of expatriates with a specific host culture. The results have implications for and contribute to the literature on expatriate selection as well as to the body of research on cross-cultural training....

  17. The Greater Caucasus Glacier Inventory (Russia, Georgia and Azerbaijan)

    Science.gov (United States)

    Tielidze, Levan G.; Wheate, Roger D.

    2018-01-01

    There have been numerous studies of glaciers in the Greater Caucasus, but none that have generated a modern glacier database across the whole mountain range. Here, we present an updated and expanded glacier inventory at three time periods (1960, 1986, 2014) covering the entire Greater Caucasus. Large-scale topographic maps and satellite imagery (Corona, Landsat 5, Landsat 8 and ASTER) were used to conduct a remote-sensing survey of glacier change, and the 30 m resolution Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM; 17 November 2011) was used to determine the aspect, slope and height distribution of glaciers. Glacier margins were mapped manually and reveal that in 1960 the mountains contained 2349 glaciers with a total glacier surface area of 1674.9 ± 70.4 km². By 1986, glacier surface area had decreased to 1482.1 ± 64.4 km² (2209 glaciers), and by 2014 to 1193.2 ± 54.0 km² (2020 glaciers). This represents a 28.8 ± 4.4% (481 ± 21.2 km²) or 0.53% yr⁻¹ reduction in total glacier surface area between 1960 and 2014 and an increase in the rate of area loss since 1986 (0.69% yr⁻¹) compared to 1960-1986 (0.44% yr⁻¹). Glacier mean size decreased from 0.70 km² in 1960 to 0.66 km² in 1986 and to 0.57 km² in 2014. This new glacier inventory has been submitted to the Global Land Ice Measurements from Space (GLIMS) database and can be used as a basis data set for future studies.
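
    A quick arithmetic check of the headline figures, using only the three total areas quoted above (rates are computed relative to the start of each sub-period, which reproduces the quoted numbers to rounding):

        a1960, a1986, a2014 = 1674.9, 1482.1, 1193.2   # total areas in km², from the abstract

        loss = a1960 - a2014
        print(loss, 100 * loss / a1960)                       # ≈481.7 km², ≈28.8 %
        print(100 * (a1960 - a1986) / a1960 / (1986 - 1960))  # ≈0.44 % per yr, 1960-1986
        print(100 * (a1986 - a2014) / a1986 / (2014 - 1986))  # ≈0.70 % per yr, 1986-2014
        print(100 * loss / a1960 / (2014 - 1960))             # ≈0.53 % per yr overall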

  18. Myiasis in Dogs in the Greater Accra Region of Ghana.

    Science.gov (United States)

    Johnson, Sherry A M; Gakuya, Daniel W; Mbuthia, Paul G; Mande, John D; Afakye, Kofi; Maingi, Ndichu

    2016-01-01

    Myiasis is the infestation of tissues of live vertebrate animals and humans with dipterous larvae. In sub-Saharan Africa, Cordylobia anthropophaga and Cordylobia rodhaini are known to be responsible for cutaneous myiasis in animals and humans. Human cases of myiasis, purportedly acquired in Ghana but diagnosed in other countries, have been reported; however, published data on its occurrence in animals in Ghana is unavailable. This study assessed the prevalence of canine myiasis among owned dogs in the Greater Accra region (GAR) of Ghana. A cross-sectional study was conducted in the Greater Accra region of Ghana, selected for being the region with the highest estimated population density of owned dogs. Physical examination and demographic characteristics of the study dogs were assessed. Management of the dogs was assessed through a questionnaire administered to the dog owners. A total of 392 owned dogs were sampled. Twenty-nine (7.4%) had cutaneous myiasis caused by C. rodhaini. In addition, one (0.2%) of the dogs had intestinal myiasis, with Dermatobia hominis as the offending larvae. Among the breeds of dogs with myiasis, the mongrel was most affected, with 24 (82.8%) out of the 29 cases. The mongrels, the majority of which (24; 82.8%) were males, were left to roam freely in the community. Results from this study demonstrate that C. rodhaini and D. hominis are important causes of myiasis in owned dogs in the GAR of Ghana. Dogs could play a role in the spread of myiasis to humans, with its attendant public health implications.

  19. Economic and geographic factors affecting the development of Greater Baku

    Directory of Open Access Journals (Sweden)

    Vusat AFANDIYEV

    2014-12-01

    Full Text Available Globally, the factors responsible for the ongoing development of urbanization are the high speed of population growth and the mass migration of humans to cities and large urban areas. In most countries, this process has resulted in the emergence of 'pseudo-urbanization', which is difficult to regulate. The purpose of the research is to determine the development priorities in the territory of Greater Baku, the capital city of the Republic of Azerbaijan; to define the problems that arise in this connection; and to develop ways of eliminating these problems. The reason for taking Baku as a research area is connected with several factors. Firstly, studies on Baku have long been conducted based on the Soviet geographical and urban-planning school and its methods. In this regard, it is necessary to carry out research in this field based on the principles adopted in most countries. Secondly, since 1992, an intensive accumulation of population in the territory of the capital city and the surrounding areas has been observed because of socio-economic problems. As a result, the process of pseudo-urbanization has intensified, entailing a densely populated area. Thirdly, low-rise buildings still occupy large areas within the territory of Baku, and they are not associated with the functional structure of the city. This situation creates many challenges, particularly in terms of density growth and effective use of the city's territory. Finally, numerous new buildings have been constructed in the residential areas of Baku in recent years, and this may entail serious problems in water supply, energy provision, and utilities. The study was carried out with reference to previous works of researchers, statistical data, and the results of the population censuses conducted in 1959-2009. The practical significance of the scientific work is that positive and negative factors affecting the further development of Greater Baku

  20. Greater learnability is not sufficient to produce cultural universals.

    Science.gov (United States)

    Rafferty, Anna N; Griffiths, Thomas L; Ettlinger, Marc

    2013-10-01

    Looking across human societies reveals regularities in the languages that people speak and the concepts that they use. One explanation that has been proposed for these "cultural universals" is differences in the ease with which people learn particular languages and concepts. A difference in learnability means that languages and concepts possessing a particular property are more likely to be accurately transmitted from one generation of learners to the next. Intuitively, this difference could allow languages and concepts that are more learnable to become more prevalent after multiple generations of cultural transmission. If this is the case, the prevalence of languages and concepts with particular properties can be explained simply by demonstrating empirically that they are more learnable. We evaluate this argument using mathematical analysis and behavioral experiments. Specifically, we provide two counter-examples that show how greater learnability need not result in a property becoming prevalent. First, more learnable languages and concepts can nonetheless be less likely to be produced spontaneously as a result of transmission failures. We simulated cultural transmission in the laboratory to show that this can occur for memory of distinctive items: these items are more likely to be remembered, but not generated spontaneously once they have been forgotten. Second, when there are many languages or concepts that lack the more learnable property, sheer numbers can swamp the benefit produced by greater learnability. We demonstrate this using a second series of experiments involving artificial language learning. Both of these counter-examples show that simply finding a learnability bias experimentally is not sufficient to explain why a particular property is prevalent in the languages or concepts used in human societies: explanations for cultural universals based on cultural transmission need to consider the full set of hypotheses a learner could entertain and all of