WorldWideScience

Sample records for group average decrease

  1. Decreased group velocity in compositionally graded films.

    Science.gov (United States)

    Gao, Lei

    2006-03-01

A theoretical formalism is presented that describes the group velocity of electromagnetic signals in compositionally graded films. The theory first applies the effective-medium or Maxwell-Garnett approximation to obtain the equivalent dielectric function of each z slice. The effective dielectric tensor of the graded film is then determined directly, and the group velocities for ordinary and extraordinary waves in the film are derived. The group velocity is found to depend sensitively on the graded profile. For a power-law graded profile f(x) = ax^m, increasing m decreases the extraordinary group velocity, and this decrease becomes more pronounced as the incident angle increases. The group velocity in compositionally graded films can therefore be effectively reduced by suitable adjustment of the total volume fraction, the graded profile, and the incident angle. As a result, compositionally graded films may serve as candidate materials for realizing small group velocities.
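As a rough illustration of the slice-wise effective-medium step described above, the classical Maxwell-Garnett mixing rule can be evaluated along a power-law volume-fraction profile. This sketch is not the paper's formalism; the permittivity values and profile parameters are invented for illustration.

```python
def maxwell_garnett(eps_h, eps_i, f):
    """Classical Maxwell-Garnett effective permittivity for a volume
    fraction f of spherical inclusions (eps_i) in a host medium (eps_h)."""
    num = eps_i + 2 * eps_h + 2 * f * (eps_i - eps_h)
    den = eps_i + 2 * eps_h - f * (eps_i - eps_h)
    return eps_h * num / den

def graded_profile(z, a, m):
    """Power-law volume-fraction profile f(z) = a * z**m, z in [0, 1]."""
    return a * z ** m

# Equivalent permittivity of each z-slice for hypothetical parameters
eps_host, eps_incl = 1.0, 4.0  # illustrative values, not from the paper
eps_slices = [maxwell_garnett(eps_host, eps_incl,
                              graded_profile(z / 10, a=0.5, m=2))
              for z in range(11)]
```

Increasing m concentrates the high-volume-fraction (hence high-permittivity) slices toward one face of the film, which is the kind of profile dependence the abstract refers to.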

  2. Experimental Warming Decreases the Average Size and Nucleic Acid Content of Marine Bacterial Communities.

    Science.gov (United States)

    Huete-Stauffer, Tamara M; Arandia-Gorostidi, Nestor; Alonso-Sáez, Laura; Morán, Xosé Anxelu G

    2016-01-01

Organism size reduction with increasing temperature has been suggested as a universal response to global warming. Since genome size is usually correlated with cell size, reduction of genome size in unicells could be a parallel outcome of warming at ecological and evolutionary time scales. In this study, the short-term response of cell size and nucleic acid content of coastal marine prokaryotic communities to temperature was studied over a full annual cycle at a NE Atlantic temperate site. We used flow cytometry and experimental warming incubations, spanning a 6°C range, to analyze the hypothesized reduction with temperature in the size of the widespread flow cytometric bacterial groups of high and low nucleic acid content (HNA and LNA bacteria, respectively). Our results showed decreases in size in response to experimental warming, which were more marked in the 0.8 μm pre-filtered treatment than in the whole-community treatment, thus excluding a role of protistan grazers in our findings. Interestingly, a significant effect of temperature in reducing the average nucleic acid content (NAC) of prokaryotic cells in the communities was also observed. The decreases in cell size and nucleic acid content with temperature were correlated, showing a common mean decrease of 0.4% per °C. The usually larger HNA bacteria consistently showed a greater reduction in cell size and NAC than their LNA counterparts, especially during the spring phytoplankton bloom period associated with maximum bacterial growth rates in response to nutrient availability. Our results show that the already smallest planktonic microbes, yet with key roles in global biogeochemical cycling, are likely undergoing important structural shrinkage in response to rising temperatures.
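The reported mean decrease of 0.4% per °C can be turned into a back-of-the-envelope projection. The compounding below is a simple extrapolation of that single number, not an analysis from the study.

```python
RATE_PER_DEG_C = 0.004  # reported common mean decrease: 0.4% per °C

def relative_size(warming_deg_c, rate=RATE_PER_DEG_C):
    """Fraction of the original cell size left after compounding the
    per-degree decrease over a given warming span."""
    return (1 - rate) ** warming_deg_c

# Over the 6 °C experimental range this is roughly a 2.4% reduction
shrinkage = 1 - relative_size(6)
print(f"{shrinkage:.2%}")
```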

  3. Experimental warming decreases the average size and nucleic acid content of marine bacterial communities

    Directory of Open Access Journals (Sweden)

    Tamara Megan Huete-Stauffer

    2016-05-01

Full Text Available Organism size reduction with increasing temperature has been suggested as a universal response to global warming. Since genome size is usually correlated with cell size, reduction of genome size in unicells could be a parallel outcome of warming at ecological and evolutionary time scales. In this study, the short-term response of cell size and nucleic acid content of coastal marine prokaryotic communities to temperature was studied over a full annual cycle at a NE Atlantic temperate site. We used flow cytometry and experimental warming incubations, spanning a 6°C range, to analyze the hypothesized reduction with temperature in the size of the widespread flow cytometric bacterial groups of high and low nucleic acid content (HNA and LNA bacteria, respectively). Our results showed decreases in size in response to experimental warming, which were more marked in the 0.8 μm pre-filtered treatment than in the whole-community treatment, thus excluding a role of protistan grazers in our findings. Interestingly, a significant effect of temperature in reducing the average nucleic acid content of prokaryotic cells in the communities was also observed. The decreases in cell size and nucleic acid content with temperature were correlated, showing a common mean decrease of 0.4% per °C. The usually larger HNA bacteria consistently showed a greater reduction in cell size and nucleic acid content than their LNA counterparts, especially during the spring phytoplankton bloom period associated with maximum bacterial growth rates in response to nutrient availability. Our results show that the already smallest planktonic microbes, yet with key roles in global biogeochemical cycling, are likely undergoing important structural shrinkage in response to rising temperatures.

  4. Experimental Warming Decreases the Average Size and Nucleic Acid Content of Marine Bacterial Communities

    KAUST Repository

    Huete-Stauffer, Tamara M.

    2016-05-23

Organism size reduction with increasing temperature has been suggested as a universal response to global warming. Since genome size is usually correlated with cell size, reduction of genome size in unicells could be a parallel outcome of warming at ecological and evolutionary time scales. In this study, the short-term response of cell size and nucleic acid content of coastal marine prokaryotic communities to temperature was studied over a full annual cycle at a NE Atlantic temperate site. We used flow cytometry and experimental warming incubations, spanning a 6°C range, to analyze the hypothesized reduction with temperature in the size of the widespread flow cytometric bacterial groups of high and low nucleic acid content (HNA and LNA bacteria, respectively). Our results showed decreases in size in response to experimental warming, which were more marked in the 0.8 μm pre-filtered treatment than in the whole-community treatment, thus excluding a role of protistan grazers in our findings. Interestingly, a significant effect of temperature in reducing the average nucleic acid content (NAC) of prokaryotic cells in the communities was also observed. The decreases in cell size and nucleic acid content with temperature were correlated, showing a common mean decrease of 0.4% per °C. The usually larger HNA bacteria consistently showed a greater reduction in cell size and NAC than their LNA counterparts, especially during the spring phytoplankton bloom period associated with maximum bacterial growth rates in response to nutrient availability. Our results show that the already smallest planktonic microbes, yet with key roles in global biogeochemical cycling, are likely undergoing important structural shrinkage in response to rising temperatures.

  5. Decrease in dynamic viscosity and average molecular weight of alginate from Laminaria digitata during alkaline extraction

    OpenAIRE

    Vauchel, Peggy; Arhaliass, Abdellah; Legrand, Jack; Kaas, Raymond; Baron, Regis

    2008-01-01

    Alginates are natural polysaccharides that are extracted from brown seaweeds and widely used for their rheological properties. The central step in the extraction protocol used in the alginate industry is the alkaline extraction, which requires several hours. In this study, a significant decrease in alginate dynamic viscosity was observed after 2 h of alkaline treatment. Intrinsic viscosity and average molecular weight of alginates from alkaline extractions 1-4 h in duration were determined, i...

  6. Decreases in average bacterial community rRNA operon copy number during succession.

    Science.gov (United States)

    Nemergut, Diana R; Knelman, Joseph E; Ferrenberg, Scott; Bilinski, Teresa; Melbourne, Brett; Jiang, Lin; Violle, Cyrille; Darcy, John L; Prest, Tiffany; Schmidt, Steven K; Townsend, Alan R

    2016-05-01

Trait-based studies can help clarify the mechanisms driving patterns of microbial community assembly and coexistence. Here, we use a trait-based approach to explore the importance of rRNA operon copy number in microbial succession, building on prior evidence that organisms with higher copy numbers respond more rapidly to nutrient inputs. We set flasks of heterotrophic media into the environment and examined bacterial community assembly at seven time points. Communities were arrayed along a geographic gradient to introduce stochasticity via dispersal processes and were analyzed using 16S rRNA gene pyrosequencing, and rRNA operon copy number was modeled using ancestral trait reconstruction. We found that taxonomic composition was similar between communities at the beginning of the experiment and then diverged through time; likewise, phylogenetic clustering within communities decreased over time. The average rRNA operon copy number decreased over the experiment, and variance in rRNA operon copy number was lowest both early and late in succession. We then analyzed bacterial community data from other soil and sediment primary and secondary successional sequences from three markedly different ecosystem types. Our results demonstrate that decreases in average copy number are a consistent feature of communities across various drivers of ecological succession. Importantly, our work supports the scaling of the copy number trait over multiple levels of biological organization, ranging from cells to populations and communities, with implications for both microbial ecology and evolution.
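The community-level quantity tracked in this kind of study is essentially an abundance-weighted trait mean. A minimal sketch, with invented taxa and copy numbers, is:

```python
def community_mean_copy_number(abundances, copy_numbers):
    """Abundance-weighted average rRNA operon copy number of a community."""
    total = sum(abundances)
    return sum(a * c for a, c in zip(abundances, copy_numbers)) / total

# Hypothetical three-taxon community with operon copy numbers 7, 5 and 2
early = community_mean_copy_number([0.6, 0.3, 0.1], [7, 5, 2])  # fast responders dominate
late = community_mean_copy_number([0.1, 0.3, 0.6], [7, 5, 2])   # slow growers dominate
assert early > late  # the average trait value declines over succession
```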

  7. Characteristics of phase-averaged equations for modulated wave groups

    NARCIS (Netherlands)

    Klopman, G.; Petit, H.A.H.; Battjes, J.A.

    2000-01-01

The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short-wave motion and using intra-wave modelling for the long waves; see e.g. Roelvink (1993). …

  8. Average Exceptional Lie Group Hierarchy and High Energy Physics

    CERN Document Server

    El Naschie, M S

    2008-01-01

Starting from an invariant total dimension for an exceptional Lie symmetry group hierarchy, we derive all the essential characteristics and coupling constants of the fundamental interactions of physics. It is shown in a most simplistic fashion that all physical fields are various transfinite scaling transformations and topological deformations of each other. An extended standard model, on the other hand, turns out to be a compact subgroup H of a version of the exceptional Lie group E7, namely E7(−5), with dim H = 69. Thus particle physics and electromagnetism, as well as gravity and the bulk, are all representable via modular spaces akin to the famous compactified version of F. Klein's modular curve.

  9. Renormalization group and scaling within the microcanonical fermionic average approach

    CERN Document Server

    Azcoiti, V; Di Carlo, G; Galante, A; Grillo, A F; Azcoiti, V; Laliena, V; Di Carlo, G; Galante, A; Grillo, A F

    1994-01-01

The MFA approach for simulations with dynamical fermions in lattice gauge theories makes it possible, in principle, to explore the parameter space of the theory (e.g. the \beta, m plane for the study of the chiral condensate in QED) without computing the fermionic determinant at each point. We exploit this possibility to extract both the renormalization group trajectories ("constant physics lines") and the scaling function, and we test the method in the Schwinger model. We discuss the applicability of this method to realistic theories.

  10. Designing Group Examinations to Decrease Social Loafing and Increase Learning

    Science.gov (United States)

    Revere, Lee; Elden, Max; Bartsch, Robert

    2008-01-01

    This study examines a method to decrease social loafing in a group examination. Students who met in teams during the semester took an exam in groups. Rules for the exam, based on the Jeopardy game show, facilitated both group and individual accountability. Feedback from students indicated that compared to a class that did not have group exams,…

  11. A group's physical attractiveness is greater than the average attractiveness of its members : The group attractiveness effect

    NARCIS (Netherlands)

    van Osch, Y.M.J.; Blanken, Irene; Meijs, Maartje H. J.; van Wolferen, Job

    2015-01-01

We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of physical attractiveness are more positive than the average ratings of the group members.

  13. A group's physical attractiveness is greater than the average attractiveness of its members: the group attractiveness effect.

    Science.gov (United States)

    van Osch, Yvette; Blanken, Irene; Meijs, Maartje H J; van Wolferen, Job

    2015-04-01

    We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of physical attractiveness are more positive than the average ratings of the group members. A meta-analysis on 33 comparisons reveals that the effect is medium to large (Cohen's d = 0.60) and moderated by group size. We explored two explanations for the GA-effect: (a) selective attention to attractive group members, and (b) the Gestalt principle of similarity. The results of our studies are in favor of the selective attention account: People selectively attend to the most attractive members of a group and their attractiveness has a greater influence on the evaluation of the group. © 2015 by the Society for Personality and Social Psychology, Inc.

  14. Group Enrollment and Open Gym Format Decreases Cardiac Rehabilitation Wait Times.

    Science.gov (United States)

    Bachmann, Justin M; Klint, Zachary W; Jagoda, Allison M; McNatt, Jeremy K; Abney, Lesa R; Huang, Shi; Liddle, David G; Frontera, Walter R; Freiberg, Matthew S

    2017-09-01

    Wait times for the first cardiac rehabilitation (CR) session are inversely related to CR participation rates. We hypothesized that changing from individually scheduled appointments to a group enrollment and open gym format, in which patients were enrolled during group intake sessions and could arrive for subsequent CR sessions any time during open gym periods, would decrease wait times. A total of 603 patients enrolled in CR at Vanderbilt University Medical Center from July 2012 to December 2014 were included in the study. We evaluated the effect of changing to a group enrollment and open gym format after adjusting for referral diagnosis, insurance status, seasonality, and other factors. We compared outcomes, including exercise capacity and quality of life, between the 2 groups. Patients in the group enrollment and open gym format had significantly lower average wait times than those receiving individual appointments (14.9 vs 19.5 days, P < .001). After multivariable adjustment, the new CR delivery model was associated with a 22% (3.7 days) decrease in average wait times (95% CI, 1.9-5.6, P < .001). Patients completing CR had equally beneficial changes in 6-minute walk distance and Patient Health Questionnaire scores between the 2 groups, although there was no significant difference in participation rates or the number of sessions attended. Implementation of a group enrollment and open gym format was associated with a significant decrease in wait times for first CR sessions. This CR delivery model may be an option for programs seeking to decrease wait times.
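The reported figures can be sanity-checked with trivial arithmetic; note that the unadjusted difference (4.6 days, about 24%) is slightly larger than the 3.7-day (22%) multivariable-adjusted estimate quoted in the abstract, as expected after adjustment.

```python
# Wait times reported in the abstract (days)
individual_appointments = 19.5
group_open_gym = 14.9

raw_drop = individual_appointments - group_open_gym
raw_pct = raw_drop / individual_appointments * 100
print(f"unadjusted: {raw_drop:.1f} days ({raw_pct:.1f}%)")
```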

  15. How many carboxyl groups does an average molecule of humic-like substances contain?

    Directory of Open Access Journals (Sweden)

    I. Salma

    2008-05-01

Full Text Available The carboxyl groups of atmospheric humic-like substances (HULIS) are of special interest because they influence the solubility in water, affect the water activity and surface tension of droplets in the air, and allow formation of chelates with biologically active elements. Experimentally determined abundances of the carboxyl group within HULIS by functional group analysis are consistent with our knowledge of the average molecular mass of HULIS if the number of dissociable carboxyl groups is assumed to be rather small. The best agreement between the average molecular mass derived from the existing abundance data and the average molecular mass published earlier occurs when approximately one dissociable carboxyl group is assumed. This implies that HULIS cannot be regarded as a polycarboxylic acid. The average molecular mass of HULIS derived from our electrochemical measurements with the assumption of one dissociable carboxyl group per molecule ranges from 248 to 305 Da. It was concluded that HULIS are a moderately strong/weak acid with a dissociation constant of about pK = 3.4, which fits well into the interval represented by fulvic and humic acids. The mean number of dissociable carboxyl groups in HULIS molecules was refined to be between 1.1 and 1.4.

  16. How many carboxyl groups does an average molecule of humic-like substances contain?

    Directory of Open Access Journals (Sweden)

    I. Salma

    2008-10-01

Full Text Available The carboxyl groups of atmospheric humic-like substances (HULIS) are of special interest because they influence the solubility in water, affect the water activity and surface tension of droplets in the air, and allow formation of chelates with biologically active elements. Experimentally determined abundances of the carboxyl group within HULIS by functional group analysis are consistent with our knowledge of the average molecular mass of HULIS if the number of dissociable carboxyl groups is assumed to be rather small. The best agreement between the average molecular mass derived from the existing abundance data and the average molecular mass published earlier occurs when approximately one dissociable carboxyl group is assumed. This implies that HULIS cannot be regarded as a polycarboxylic acid in diluted solutions. The average molecular mass of HULIS derived from our electrochemical measurements with the assumption of one dissociable carboxyl group, or equivalently one dissociable sulphate ester, per molecule ranges from 250 to 310 Da. It was concluded that HULIS are a moderately strong/weak acid with a dissociation constant of about pK = 3.4, which fits well into the interval represented by fulvic and humic acids. The mean number of dissociable hydrogens (i.e. of carboxyl groups and sulphate esters jointly) in HULIS molecules was refined to be between 1.1 and 1.4 in acidic solutions.
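The abstract's count of dissociable groups per molecule follows from multiplying the measured abundance (mol of groups per gram of HULIS) by the average molecular mass (g/mol). The abundance value below is hypothetical, chosen only to show that masses in the reported 250-310 Da range keep the count near one:

```python
def groups_per_molecule(abundance_mol_per_g, molar_mass_g_per_mol):
    """Dissociable groups per average molecule = abundance * molar mass."""
    return abundance_mol_per_g * molar_mass_g_per_mol

HYPOTHETICAL_ABUNDANCE = 4.5e-3  # mol of dissociable groups per gram (assumed)
for molar_mass in (250, 310):    # reported range of average molecular mass, Da
    print(molar_mass, round(groups_per_molecule(HYPOTHETICAL_ABUNDANCE, molar_mass), 2))
```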

  17. Using Group Theory to Obtain Eigenvalues of Nonsymmetric Systems by Symmetry Averaging

    Directory of Open Access Journals (Sweden)

    Marion L. Ellzey

    2009-08-01

Full Text Available If the Hamiltonian in the time-independent Schrödinger equation, HΨ = EΨ, is invariant under a group of symmetry transformations, the theory of group representations can help obtain the eigenvalues and eigenvectors of H. A finite group that is not a symmetry group of H is nevertheless a symmetry group of an operator Hsym projected from H by the process of symmetry averaging. In this case H = Hsym + HR, where HR is the nonsymmetric remainder. Depending on the nature of the remainder, the solutions for the full operator may be obtained by perturbation theory. It is shown here that when H is represented as a matrix [H] over a basis symmetry-adapted to the group, the reduced matrix elements of [Hsym] are simple averages of certain elements of [H], providing a substantial enhancement in computational efficiency. A series of examples is given for the smallest molecular graphs. The first is a two-vertex graph corresponding to a heteronuclear diatomic molecule. The symmetrized component then corresponds to a homonuclear system. A three-vertex system is symmetry-averaged in the first case to Cs and in the second case to the nonabelian C3v. These examples illustrate key aspects of the symmetry-averaging process.

  18. A novel car following model considering average speed of preceding vehicles group

    Science.gov (United States)

    Sun, Dihua; Kang, Yirong; Yang, Shuhong

    2015-10-01

In this paper, a new car-following model is presented by considering the average speed effect of the preceding vehicles group in a cyber-physical systems (CPS) environment. The effect of this new consideration on the stability of traffic flow is examined through linear stability analysis. A modified Korteweg-de Vries (mKdV) equation is derived via nonlinear analysis to describe the propagating behavior of the traffic density wave near the critical point. Good agreement between the simulation and the analytical results shows that the average speed of the preceding vehicles group stabilizes traffic, and thus can efficiently suppress the emergence of traffic jams.

  19. Bombay blood group: Is prevalence decreasing with urbanization and the decreasing rate of consanguineous marriage.

    Science.gov (United States)

    Mallick, Sujata; Kotasthane, Dhananjay S; Chowdhury, Puskar S; Sarkar, Sonali

    2015-01-01

The Bombay blood group, although rare, is more prevalent in the Western and Southern states of India and is believed to be associated with consanguineous marriage. The aims were to estimate the prevalence of the Bombay blood group (Oh) in the urban population of Puducherry; to find the effect of urbanization on consanguineous marriage and to establish whether consanguinity plays a part in the prevalence of the Oh group; and to compare Oh group prevalence with that of other neighboring states, where the population is not predominantly urban. This is a descriptive study in a tertiary care hospital in Puducherry, over a period of 6 years. All blood samples showing the 'O' group were tested with anti-H lectin. Specialized tests, such as the adsorption-elution technique and an inhibition assay for determination of secretor status, were performed on Oh-positive cases. Any history of consanguineous marriage was recorded. All variables were categorical, and percentages and proportions were calculated manually. Analysis of the results of 35,497 study subjects showed that the most common group was the 'O' group, constituting 14,164 (39.90%) of subjects. Only three 'Oh', that is, Bombay phenotype, samples (0.008%) were detected. Consanguinity was observed in two cases (66.66%). This study shows the prevalence of the Bombay blood group in the urban population of Puducherry to be high (0.008%) and associated with consanguineous marriage (66.66%). Thus, consanguinity is still an important risk factor, even in an urban population in Southern India.

  20. Bombay blood group: Is prevalence decreasing with urbanization and the decreasing rate of consanguineous marriage

    Directory of Open Access Journals (Sweden)

    Sujata Mallick

    2015-01-01

Full Text Available Context: The Bombay blood group, although rare, is more prevalent in the Western and Southern states of India and is believed to be associated with consanguineous marriage. Aims: To estimate the prevalence of the Bombay blood group (Oh) in the urban population of Puducherry; to find the effect of urbanization on consanguineous marriage and to establish whether consanguinity plays a part in the prevalence of the Oh group; to compare Oh group prevalence with that of other neighboring states, where the population is not predominantly urban. Settings and Design: This is a descriptive study in a tertiary care hospital in Puducherry, over a period of 6 years. Materials and Methods: All blood samples showing the 'O' group were tested with anti-H lectin. Specialized tests such as the adsorption-elution technique and an inhibition assay for determination of secretor status were performed on Oh-positive cases. Any history of consanguineous marriage was recorded. Statistical Analysis Used: All variables were categorical, and percentages and proportions were calculated manually. Results: Analysis of the results of 35,497 study subjects showed that the most common group was the 'O' group, constituting 14,164 (39.90%) of subjects. Only three 'Oh', that is, Bombay phenotype, samples (0.008%) were detected. Consanguinity was observed in two cases (66.66%). Conclusions: This study shows the prevalence of the Bombay blood group in the urban population of Puducherry to be high (0.008%) and associated with consanguineous marriage (66.66%). Thus, consanguinity is still an important risk factor, even in an urban population in Southern India.

  21. Decreasing the Inattentive Behavior of Jordanian Children: A Group Experiment

    Science.gov (United States)

    Zaghlawan, Hasan Y.; Ostrosky, Michaelene M.; Al-Khateeb, Jamal M.

    2007-01-01

    The present study investigated the efficacy of using response cost paired with Differential Reinforcement of Incompatible Behavior (DRI) to manage the inattentive behavior of 30 students attending third and fourth grade in Jordan. A pretest-posttest control group design was employed to evaluate the efficacy of response cost and DRI. Results showed…

  22. Interval Generalized Ordered Weighted Utility Multiple Averaging Operators and Their Applications to Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Yunna Wu

    2017-07-01

Full Text Available We propose a new class of aggregation operators based on utility functions and apply them to a group decision-making problem. First, based on an optimal deviation model, a new operator called the interval generalized ordered weighted utility multiple averaging (IGOWUMA) operator is proposed; it incorporates the risk attitude of decision-makers (DMs) in the aggregation process. Some desirable properties of the IGOWUMA operator are studied afterward. Subsequently, under the hyperbolic absolute risk aversion (HARA) utility function, another new operator, named the interval generalized ordered weighted hyperbolic absolute risk aversion utility multiple averaging (IGOWUMA-HARA) operator, is also defined. We then discuss its families and find that it includes a wide range of aggregation operators. To determine the weights of the IGOWUMA-HARA operator, a preemptive nonlinear objective programming model is constructed, which can determine a uniform weighting vector to guarantee a uniform standard of comparison between the alternatives and measure their fair competition under the condition of valid comparison between the various alternatives. Moreover, a new approach for group decision-making is developed based on the IGOWUMA-HARA operator. Finally, a comparative analysis is carried out to illustrate the superiority of the proposed method, and the result implies that our operator is superior to the existing operators.
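The operators above extend Yager's classical ordered weighted averaging (OWA), in which weights attach to rank positions rather than to particular arguments. A minimal real-valued OWA sketch follows; the interval and utility-function machinery of the paper is omitted.

```python
def owa(values, weights):
    """Ordered weighted average: weights are applied to the values sorted
    in descending order, so they reward rank position, not identity."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

weights = [0.5, 0.3, 0.2]       # emphasis on the largest input
print(owa([3, 9, 6], weights))  # order of the inputs does not matter
```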

  23. Biosphere Dose Conversion Factors for Reasonably Maximally Exposed Individual and Average Member of Critical Group

    Energy Technology Data Exchange (ETDEWEB)

    K. Montague

    2000-02-23

The purpose of this calculation is to develop additional Biosphere Dose Conversion Factors (BDCFs) for a reasonably maximally exposed individual (RMEI) for the periods 10,000 years and 1,000,000 years after repository closure. In addition, BDCFs for the average member of a critical group are calculated for those additional radionuclides postulated to reach the environment during the period after 10,000 years and up to 1,000,000 years. After the permanent closure of the repository, the engineered systems within the repository will eventually lose their ability to contain the radionuclide inventory, and the radionuclides will migrate through the geosphere, eventually entering the local water table and moving toward inhabited areas. The primary release scenario is a groundwater well used for drinking water supply and irrigation; this calculation takes these postulated releases and follows them through various pathways until they result in a dose to either a member of the critical group or a reasonably maximally exposed individual. The pathways considered in this calculation include inhalation, ingestion, and direct exposure.

  24. The development of early numeracy skills in kindergarten in low-, average- and high-performance groups

    NARCIS (Netherlands)

    Aunio, P.; Heiskari, P.; van Luit, J.E.H.; Vuorio, J.-M.

    2015-01-01

    In this study, we investigated how early numeracy skills develop in kindergarten-age children. The participants were 235 Finnish children (111 girls and 124 boys). At the time of the first measurement, the average age of the children was 6 years. The measurements were conducted three times during 1

  25. Capillary-wave models and the effective-average-action scheme of functional renormalization group.

    Science.gov (United States)

    Jakubczyk, P

    2011-08-01

    We reexamine the functional renormalization-group theory of wetting transitions. As a starting point of the analysis we apply an exact equation describing renormalization group flow of the generating functional for irreducible vertex functions. We show how the standard nonlinear renormalization group theory of wetting transitions can be recovered by a very simple truncation of the exact flow equation. The derivation makes all the involved approximations transparent and demonstrates the applicability of the approach in any spatial dimension d≥2. Exploiting the nonuniqueness of the renormalization-group cutoff scheme, we find, however, that the capillary parameter ω is a scheme-dependent quantity below d=3. For d=3 the parameter ω is perfectly robust against scheme variation.

  26. Loop expansion of the average effective action in the functional renormalization group approach

    Science.gov (United States)

    Lavrov, Peter M.; Merzlikin, Boris S.

    2015-10-01

    We formulate a perturbation expansion for the effective action in a new approach to the functional renormalization group method based on the concept of composite fields for regulator functions being their most essential ingredients. We demonstrate explicitly the principal difference between the properties of effective actions in these two approaches existing already on the one-loop level in a simple gauge model.

  27. Loop expansion of average effective action in functional renormalization group approach

    CERN Document Server

    Lavrov, Peter M

    2015-01-01

We formulate a perturbation expansion for the effective action in a new approach to the functional renormalization group (FRG) method based on the concept of composite fields for regulator functions, which are its most essential ingredients. We demonstrate explicitly the principal difference between the properties of effective actions in these two approaches, existing already at the one-loop level in a simple gauge model.

  28. Average bond energies between boron and elements of the fourth, fifth, sixth, and seventh groups of the periodic table

    Science.gov (United States)

    Altshuller, Aubrey P

    1955-01-01

    The average bond energies D(gm)(B-Z) for boron-containing molecules have been calculated by the Pauling geometric-mean equation. These calculated bond energies are compared with the average bond energies D(exp)(B-Z) obtained from experimental data. The higher values of D(exp)(B-Z) in comparison with D(gm)(B-Z) when Z is an element in the fifth, sixth, or seventh periodic group may be attributed to resonance stabilization or double-bond character.
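The Pauling geometric-mean estimate the abstract refers to can be sketched in a few lines; the bond-energy inputs below are illustrative placeholders, not the paper's data:

```python
import math

def geometric_mean_bond_energy(d_aa, d_zz):
    """Pauling geometric-mean estimate D_gm(A-Z) = sqrt(D(A-A) * D(Z-Z)),
    with both homonuclear bond energies in the same units (e.g. kcal/mol)."""
    return math.sqrt(d_aa * d_zz)

# Hypothetical inputs: D(B-B) = 79 kcal/mol, D(Cl-Cl) = 58 kcal/mol.
d_gm = geometric_mean_bond_energy(79.0, 58.0)
print(round(d_gm, 1))
```

The gap D(exp)(B-Z) - D(gm)(B-Z) is then the quantity the paper attributes to resonance stabilization or double-bond character.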

  9. Bisulfite-based epityping on pooled genomic DNA provides an accurate estimate of average group DNA methylation

    Directory of Open Access Journals (Sweden)

    Docherty Sophia J

    2009-03-01

Background: DNA methylation plays a vital role in normal cellular function, with aberrant methylation signatures implicated in a growing number of human pathologies and complex human traits. Methods based on the modification of genomic DNA with sodium bisulfite are considered the 'gold standard' for DNA methylation profiling; however, they require relatively large amounts of DNA and may be prohibitively expensive when used on the large sample sizes necessary to detect small effects. We propose that a high-throughput DNA pooling approach will facilitate the use of emerging methylomic profiling techniques in large samples. Results: Compared with data generated from 89 individual samples, our analysis of 205 CpG sites spanning nine independent regions of the genome demonstrates that DNA pools can be used to provide an accurate and reliable quantitative estimate of average group DNA methylation. Comparison of data generated from the pooled DNA samples with results averaged across the individual samples comprising each pool revealed highly significant correlations for individual CpG sites across all nine regions, with an average overall correlation across all regions and pools of 0.95 (95% bootstrapped confidence interval: 0.94 to 0.96). Conclusion: In this study we demonstrate the validity of using pooled DNA samples to accurately assess group DNA methylation averages. Such an approach can be readily applied to the assessment of disease phenotypes, reducing the time, cost and amount of starting DNA required for large-scale epigenetic analyses.
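The pool-versus-individual-average comparison reduces to a Pearson correlation with a percentile-bootstrap confidence interval. A minimal pure-Python sketch on simulated methylation values (the site count and noise level are invented for illustration):

```python
import math
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def bootstrap_ci(xs, ys, n_boot=1000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the correlation, resampling CpG sites."""
    rng = random.Random(seed)
    n = len(xs)
    rs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        rs.append(pearson([xs[i] for i in idx], [ys[i] for i in idx]))
    rs.sort()
    return rs[int(n_boot * alpha / 2)], rs[int(n_boot * (1 - alpha / 2)) - 1]

# Simulated example: 50 CpG sites; the pooled estimate tracks the
# individual-sample average with a small amount of measurement noise.
rng = random.Random(0)
individual_avg = [rng.uniform(0.0, 1.0) for _ in range(50)]
pool_estimate = [m + rng.gauss(0.0, 0.05) for m in individual_avg]

r = pearson(pool_estimate, individual_avg)
lo, hi = bootstrap_ci(pool_estimate, individual_avg)
print(f"r = {r:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```

With a small noise term the correlation comes out high, mirroring the 0.95 average reported across regions and pools.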

  10. The birth of an infant decreases group spacing in a zoo-housed lowland gorilla group (Gorilla gorilla gorilla).

    Science.gov (United States)

    Kurtycz, Laura M; Shender, Marisa A; Ross, Stephen R

    2014-01-01

Changes in group composition can alter the behavior of social animals such as gorillas. Although gorilla births are presumed to affect group spacing patterns, there are relatively few data on how these events affect gorilla group cohesion. We investigated how members of a western lowland gorilla group (n = 6) at Lincoln Park Zoo (Chicago, IL, USA) spaced themselves prior to and after the birth of an infant, to investigate changes in group cohesion. Gorillas were housed in an indoor-outdoor enclosure in which access to the outdoors was permitted when temperatures exceeded 5°C. We recorded spatial locations of each group member using 30-min group scans on tablet computers with an electronic map interface, as well as noting their access to outdoor areas. Data from the 4 months following the birth were compared to a control period corresponding to early pregnancy. We measured distances between all possible group dyads for each scan and subsequently calculated a mean distance between all group members. An ANOVA revealed that access to the outdoors had no effect on group spacing (F(1,56) = 0.066, P = 0.799). However, the presence of an infant resulted in a significant reduction in inter-individual distance (F(1,56) = 23.988, P < 0.001), decreasing inter-individual spacing by 12.5%. This information helps characterize the behavioral impact of a new birth on captive gorilla social structure and could potentially inform future management of breeding gorilla groups.
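The cohesion measure described (the mean distance over all possible dyads in each scan) is straightforward to compute; the names and coordinates below are hypothetical:

```python
import itertools
import math
import statistics

def mean_dyad_distance(positions):
    """Mean Euclidean distance over all possible dyads in one group scan.
    `positions` maps each individual to its (x, y) location on the enclosure map."""
    pairs = itertools.combinations(positions.values(), 2)
    return statistics.fmean(math.dist(p, q) for p, q in pairs)

# Hypothetical scan of a three-member group:
scan = {"A": (0.0, 0.0), "B": (3.0, 4.0), "C": (6.0, 8.0)}
print(round(mean_dyad_distance(scan), 2))
```

Averaging this quantity over all scans in the pre-birth and post-birth periods yields the per-condition spacing values compared in the ANOVA.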

  11. The Effectiveness of Cognitive Group Therapy on Decreasing Depression among High School Students

    Directory of Open Access Journals (Sweden)

Ali Mohammad Nazariy

    2011-01-01

Introduction: Depending on its etiology, many methods have been established for the treatment of depression among adolescents; cognitive therapy is one of them. The purpose of the present research was to investigate the effect of cognitive group therapy on decreasing depression among high school students. Methods: From the male students of a boarding high school in the Tarom district of Gazvin province, a sample of 16 students was randomly selected and assigned to experimental and control groups. The measurement tool was the Beck Depression Inventory. The experimental group participated in 8 sessions of cognitive therapy, while the control group did not receive any treatment. The mean scores of the two groups were compared through an independent t-test. Results: The results of the study showed significant differences between the mean pre-test and post-test scores of the experimental and control groups: cognitive group therapy reduced the mean depression score in the experimental group (-2.1 vs. -0.25). Conclusion: The findings of the study indicate that cognitive group therapy can reduce depression among students. These findings can be used for therapeutic planning within the cognitive paradigm to reduce or prevent depression among students.

  12. Comparison of cognition abilities between groups of children with specific learning disability having average, bright normal and superior nonverbal intelligence

    Directory of Open Access Journals (Sweden)

    Karande Sunil

    2005-03-01

Background: Specific learning disabilities (SpLD), viz. dyslexia, dysgraphia and dyscalculia, are an important cause of academic underachievement. Aims: To assess whether cognition abilities vary in children with SpLD having different grades of nonverbal intelligence. Setting: Government-recognized clinic in a medical college. Design: Cross-sectional study. Subjects and Methods: Ninety-five children with SpLD (aged 9-14 years) were assessed. An academic achievement of two years below the actual grade placement on educational assessment with a curriculum-based test was considered diagnostic of SpLD. On the basis of their nonverbal Intelligence Quotient (IQ) scores obtained on the Wechsler Intelligence Scale for Children test, the study children were divided into three groups: (i) average-nonverbal intelligence group (IQ 90-109), (ii) bright normal-nonverbal intelligence group (IQ 110-119), and (iii) superior-nonverbal intelligence group (IQ 120-129). A battery of 13 Cognition Function Tests (CFTs) devised by Jnana Prabodhini's Institute of Psychology, Pune, based on Guilford's Structure of Intellect model, was administered individually to each child in the four areas of information, viz. figural, symbolic, semantic and behavioral. Statistical Analysis: The mean CFT scores obtained in the four areas of information were calculated for each of the three groups and compared using a one-way analysis of variance test. A P value < 0.05 was considered statistically significant. Results: There were no statistically significant differences between the mean CFT scores in any of the four areas of information. Conclusions: Cognition abilities are similar in children with SpLD having average, bright-normal and superior nonverbal intelligence.
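The group comparison above rests on a one-way ANOVA; the F statistic it reports can be computed directly from the group scores. A pure-Python sketch with made-up score lists:

```python
import statistics

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.fmean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.fmean(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three hypothetical CFT score groups (average, bright-normal, superior):
print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # F = 3.0
```

Comparing the resulting F against the F(k-1, n-k) critical value at P = 0.05 gives the significance decision used in the study.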

  13. When larger brains do not have more neurons: increased numbers of cells are compensated by decreased average cell size across mouse individuals

    Science.gov (United States)

    Herculano-Houzel, Suzana; Messeder, Débora J.; Fonseca-Azevedo, Karina; Pantoja, Nilma A.

    2015-01-01

    There is a strong trend toward increased brain size in mammalian evolution, with larger brains composed of more and larger neurons than smaller brains across species within each mammalian order. Does the evolution of increased numbers of brain neurons, and thus larger brain size, occur simply through the selection of individuals with more and larger neurons, and thus larger brains, within a population? That is, do individuals with larger brains also have more, and larger, neurons than individuals with smaller brains, such that allometric relationships across species are simply an extension of intraspecific scaling? Here we show that this is not the case across adult male mice of a similar age. Rather, increased numbers of neurons across individuals are accompanied by increased numbers of other cells and smaller average cell size of both types, in a trade-off that explains how increased brain mass does not necessarily ensue. Fundamental regulatory mechanisms thus must exist that tie numbers of neurons to numbers of other cells and to average cell size within individual brains. Finally, our results indicate that changes in brain size in evolution are not an extension of individual variation in numbers of neurons, but rather occur through step changes that must simultaneously increase numbers of neurons and cause cell size to increase, rather than decrease. PMID:26082686

  14. When larger brains do not have more neurons: increased numbers of cells are compensated by decreased average cell size across mouse individuals.

    Science.gov (United States)

    Herculano-Houzel, Suzana; Messeder, Débora J; Fonseca-Azevedo, Karina; Pantoja, Nilma A

    2015-01-01

    There is a strong trend toward increased brain size in mammalian evolution, with larger brains composed of more and larger neurons than smaller brains across species within each mammalian order. Does the evolution of increased numbers of brain neurons, and thus larger brain size, occur simply through the selection of individuals with more and larger neurons, and thus larger brains, within a population? That is, do individuals with larger brains also have more, and larger, neurons than individuals with smaller brains, such that allometric relationships across species are simply an extension of intraspecific scaling? Here we show that this is not the case across adult male mice of a similar age. Rather, increased numbers of neurons across individuals are accompanied by increased numbers of other cells and smaller average cell size of both types, in a trade-off that explains how increased brain mass does not necessarily ensue. Fundamental regulatory mechanisms thus must exist that tie numbers of neurons to numbers of other cells and to average cell size within individual brains. Finally, our results indicate that changes in brain size in evolution are not an extension of individual variation in numbers of neurons, but rather occur through step changes that must simultaneously increase numbers of neurons and cause cell size to increase, rather than decrease.

  15. When larger brains do not have more neurons: Increased numbers of cells are compensated by decreased average cell size across mouse individuals

    Directory of Open Access Journals (Sweden)

Suzana Herculano-Houzel

    2015-06-01

There is a strong trend toward increased brain size in mammalian evolution, with larger brains composed of more and larger neurons than smaller brains across species within each mammalian order. Does the evolution of increased numbers of brain neurons, and thus larger brain size, occur simply through the selection of individuals with more and larger neurons, and thus larger brains, within a population? That is, do individuals with larger brains also have more, and larger, neurons than individuals with smaller brains, such that allometric relationships across species are simply an extension of intraspecific scaling? Here we show that this is not the case across adult male mice of a similar age. Rather, increased numbers of neurons across individuals are accompanied by increased numbers of other cells and smaller average cell size of both types, in a trade-off that explains how increased brain mass does not necessarily ensue. Fundamental regulatory mechanisms thus must exist that tie numbers of neurons to numbers of other cells and to average cell size within individual brains. Finally, our results indicate that changes in brain size in evolution are not an extension of individual variation in numbers of neurons, but rather occur through step changes that must simultaneously increase numbers of neurons and cause cell size to increase, rather than decrease.

  16. [The effect of group-based psychodrama therapy on decreasing the level of aggression in adolescents].

    Science.gov (United States)

    Karataş, Zeynep; Gökçakan, Dan Zafer

    2009-01-01

This study aimed to examine the effect of group-based psychodrama therapy on the level of aggression in adolescents. The study included 23 students from Nezihe Yalvac Anatolian Vocational High School of Hotel Management and Tourism who had high aggression scores. Eleven of the participants (6 female, 5 male) constituted the experimental group and 12 (6 male, 6 female) were in the control group. The 34-item Aggression Scale was used to measure the level of aggression. We utilized a mixed design with experimental and control groups and pre-test, post-test and follow-up measurements. The experimental group participated in group-based psychodrama therapy once a week for 90 minutes, for 14 weeks in total. The Aggression Scale was administered to the experimental and control groups before and after treatment; it was additionally administered to the experimental group 16 weeks after treatment. Data were analyzed using ANCOVA and dependent-samples t tests. Our analysis shows that group-based psychodrama had an effect on the experimental group in terms of total aggression, anger, hostility, and indirect aggression scores (F=65.109, F=20.175, F=18.593, F=40.987, respectively, P<.001). There was no effect of the group-based treatment on verbal or physical aggression scores. Follow-up indicated that the effect of the therapy was still measurable 16 weeks after its cessation. Results of the present study indicate that group-based psychodrama therapy decreased the level of aggression in the experimental group. Current findings are discussed with reference to the literature. Recommendations for further research and for psychiatric counselors are provided.

  17. Average exceptional Lie and Coxeter group hierarchies with special reference to the standard model of high energy particle physics

    Energy Technology Data Exchange (ETDEWEB)

    El Naschie, M.S. [King Abdullah Al Saud Institute of Nano and Advanced Technologies, Riyadh (Saudi Arabia)], E-mail: Chaossf@aol.com

    2008-08-15

The notion of the order of a symmetry group may be extended to that of an average, non-integer order. Building on this extension, it can be shown that the five classical exceptional Lie symmetry groups could be extended to a hierarchy, the total sum of which is four times ᾱ0 = 137 + k0 of the electromagnetic field. Subsequently it can be shown that all known and conjectured physical fields may be derived by E-infinity transfinite scaling transformation. Consequently the E8E8 exceptional Lie symmetry group manifold, the SL(2,7)c holographic modular curve boundary Γ(7), Einstein-Kaluza gravity R^(n=4) and R^(n=5), as well as the electromagnetic field, are all topological transformations of each other. It is largely a matter of mathematical taste to choose E8 or the electromagnetic field associated with ᾱ0 as derived or as fundamental. However, since E8 has been extensively studied by the founding fathers of group theory and has recently been mapped completely, it seems beneficial to discuss at least high-energy physics starting from the largest of the exceptional groups.

  18. Approximate Solution of the Point Reactor Kinetic Equations of Average One-Group of Delayed Neutrons for Step Reactivity Insertion

    Directory of Open Access Journals (Sweden)

    S. Yamoah

    2012-04-01

The understanding of the time-dependent behaviour of the neutron population in a nuclear reactor in response to either a planned or unplanned change in the reactor conditions is of great importance to the safe and reliable operation of the reactor. In this study two analytical methods are presented to solve the point kinetic equations with an average one group of delayed neutrons. These methods, which are both approximate solutions of the point reactor kinetic equations, are compared with a numerical solution using Euler's first-order method. To obtain an accurate solution for the Euler method, a relatively small time step was chosen for the numerical solution. These methods are applied to different types of reactivity to check the validity of the analytical method by comparing the analytical results with the numerical results. From the results, it is observed that the analytical solution agrees well with the numerical solution.
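The one-group point kinetics equations, dn/dt = ((ρ − β)/Λ)n + λC and dC/dt = (β/Λ)n − λC, can be integrated with Euler's first-order method as the abstract describes. A sketch for a step reactivity insertion; the kinetic parameters are typical textbook values, not the paper's:

```python
def point_kinetics_euler(rho, beta=0.0065, lam=0.08, big_lambda=1e-4,
                         t_end=1.0, dt=1e-5):
    """Forward-Euler integration of the one-group point kinetics equations
    for a step reactivity insertion `rho`, starting from equilibrium (n = 1)."""
    n = 1.0
    c = beta / (lam * big_lambda)  # precursor equilibrium for n = 1
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / big_lambda) * n + lam * c
        dc = (beta / big_lambda) * n - lam * c
        n, c = n + dn * dt, c + dc * dt
    return n

# A positive sub-prompt-critical step (rho < beta) gives a prompt jump
# followed by slow growth; a negative step gives a prompt drop and slow decay.
print(point_kinetics_euler(+0.003))
print(point_kinetics_euler(-0.003))
```

The small time step (dt = 1e-5 s) keeps the explicit Euler scheme stable against the fast prompt-neutron mode, which is the accuracy concern the abstract mentions.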

  19. Generic features of the dynamics of complex open quantum systems: statistical approach based on averages over the unitary group.

    Science.gov (United States)

    Gessner, Manuel; Breuer, Heinz-Peter

    2013-04-01

    We obtain exact analytic expressions for a class of functions expressed as integrals over the Haar measure of the unitary group in d dimensions. Based on these general mathematical results, we investigate generic dynamical properties of complex open quantum systems, employing arguments from ensemble theory. We further generalize these results to arbitrary eigenvalue distributions, allowing a detailed comparison of typical regular and chaotic systems with the help of concepts from random matrix theory. To illustrate the physical relevance and the general applicability of our results we present a series of examples related to the fields of open quantum systems and nonequilibrium quantum thermodynamics. These include the effect of initial correlations, the average quantum dynamical maps, the generic dynamics of system-environment pure state entanglement and, finally, the equilibration of generic open and closed quantum systems.
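Averages over the Haar measure of the unitary group can be checked numerically: orthonormalizing the columns of a complex Ginibre matrix by Gram-Schmidt (equivalent to a QR decomposition with positive-diagonal R) yields a Haar-distributed unitary, and for example the average of |U_11|^2 should converge to 1/d. A small pure-Python sketch of this standard sampling procedure (not the paper's analytic method):

```python
import random

def haar_unitary(d, rng):
    """Sample a Haar-random d x d unitary: Gram-Schmidt the columns of
    a complex Ginibre matrix (QR with positive-diagonal R)."""
    z = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(d)]
         for _ in range(d)]
    cols = []
    for j in range(d):
        v = [z[i][j] for i in range(d)]
        for u in cols:  # project out previously orthonormalized columns
            proj = sum(ui.conjugate() * vi for ui, vi in zip(u, v))
            v = [vi - proj * ui for vi, ui in zip(v, u)]
        norm = sum(abs(vi) ** 2 for vi in v) ** 0.5
        cols.append([vi / norm for vi in v])
    return [[cols[j][i] for j in range(d)] for i in range(d)]

# Monte Carlo check of the exact Haar average E[|U_11|^2] = 1/d for d = 3:
rng = random.Random(42)
d, n_samples = 3, 2000
est = sum(abs(haar_unitary(d, rng)[0][0]) ** 2
          for _ in range(n_samples)) / n_samples
print(round(est, 3))  # close to 1/3
```

Exact analytic expressions of the kind derived in the paper replace such Monte Carlo estimates with closed-form integrals over the Haar measure.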

  20. Distraction sneakers decrease the expected level of aggression within groups: a game-theoretic model.

    Science.gov (United States)

    Dubois, Frédérique; Giraldeau, Luc-Alain; Hamilton, Ian M; Grant, James W A; Lefebvre, Louis

    2004-08-01

    Hawk-dove games have been extensively used to predict the conditions under which group-living animals should defend their resources against potential usurpers. Typically, game-theoretic models on aggression consider that resource defense may entail energetic and injury costs. However, intruders may also take advantage of owners who are busy fighting to sneak access to unguarded resources, imposing thereby an additional cost on the use of the escalated hawk strategy. In this article we modify the two-strategy hawk-dove game into a three-strategy hawk-dove-sneaker game that incorporates a distraction-sneaking tactic, allowing us to explore its consequences on the expected level of aggression within groups. Our model predicts a lower proportion of hawks and hence lower frequencies of aggressive interactions within groups than do previous two-strategy hawk-dove games. The extent to which distraction sneakers decrease the frequency of aggression within groups, however, depends on whether they search only for opportunities to join resources uncovered by other group members or for both unchallenged resources and opportunities to usurp.
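The two-strategy hawk-dove baseline that the article modifies has a mixed equilibrium at a hawk frequency of p* = V/C, and the paper's claim is that adding a sneaker strategy pushes the hawk share below this. A replicator-dynamics sketch of the two-strategy baseline only (payoff values are illustrative; the three-strategy hawk-dove-sneaker game would add a row and column to the payoff structure):

```python
def replicator_hawk_dove(v=2.0, c=4.0, p=0.9, steps=5000, dt=0.01):
    """Replicator dynamics for the hawk-dove game with resource value v and
    fight cost c (c > v); the hawk share converges to the mixed ESS p* = v/c."""
    for _ in range(steps):
        w_hawk = p * (v - c) / 2 + (1 - p) * v  # expected payoff vs hawk/dove
        w_dove = (1 - p) * v / 2
        w_mean = p * w_hawk + (1 - p) * w_dove
        p += dt * p * (w_hawk - w_mean)
    return p

print(round(replicator_hawk_dove(), 3))  # converges to v/c = 0.5
```

Against this baseline, the model's prediction is a smaller equilibrium hawk share once distraction sneakers impose an extra cost on escalated fighting.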

  1. Teleradiology based CT colonography to screen a population group of a remote island; at average risk for colorectal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Lefere, Philippe, E-mail: radiologie@skynet.be [VCTC, Virtual Colonoscopy Teaching Centre, Akkerstraat 32c, B-8830 Hooglede (Belgium); Silva, Celso, E-mail: caras@uma.pt [Human Anatomy of Medical Course, University of Madeira, Praça do Município, 9000-082 Funchal (Portugal); Gryspeerdt, Stefaan, E-mail: stefaan@sgryspeerdt.be [VCTC, Virtual Colonoscopy Teaching Centre, Akkerstraat 32c, B-8830 Hooglede (Belgium); Rodrigues, António, E-mail: nucleo@nid.pt [Nucleo Imagem Diagnostica, Rua 5 De Outubro, 9000-216 Funchal (Portugal); Vasconcelos, Rita, E-mail: rita@uma.pt [Department of Engineering and Mathematics, University of Madeira, Praça do Município, 9000-082 Funchal (Portugal); Teixeira, Ricardo, E-mail: j.teixeira1947@gmail.com [Department of Gastroenterology, Central Hospital of Funchal, Avenida Luís de Camões, 9004513 Funchal (Portugal); Gouveia, Francisco Henriques de, E-mail: fhgouveia@netmadeira.com [LANA, Pathology Centre, Rua João Gago, 10, 9000-071 Funchal (Portugal)

    2013-06-15

Purpose: To prospectively assess the performance of teleradiology-based CT colonography to screen a population group of an island, at average risk for colorectal cancer. Materials and methods: A cohort of 514 patients living in Madeira, Portugal, was enrolled in the study. Institutional review board approval was obtained and all patients signed an informed consent. All patients underwent both CT colonography and optical colonoscopy. CT colonography was interpreted by an experienced radiologist at a remote centre using teleradiology. Per-patient sensitivity, specificity, positive (PPV) and negative (NPV) predictive values with 95% confidence intervals (95% CI) were calculated for colorectal adenomas and advanced neoplasia ≥6 mm. Results: 510 patients were included in the study. CT colonography obtained a per-patient sensitivity, specificity, PPV and NPV for adenomas ≥6 mm of 98.11% (88.6–99.9% 95% CI), 90.97% (87.8–93.4% 95% CI), 56.52% (45.8–66.7% 95% CI), and 99.75% (98.4–99.9% 95% CI), respectively. For advanced neoplasia ≥6 mm, per-patient sensitivity, specificity, PPV and NPV were 100% (86.7–100% 95% CI), 87.07% (83.6–89.9% 95% CI), 34.78% (25.3–45.5% 95% CI) and 100% (98.8–100% 95% CI), respectively. Conclusion: In this prospective trial, teleradiology-based CT colonography was accurate for screening a patient cohort of a remote island at average risk for colorectal cancer.
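The per-patient accuracy measures reported above come straight from a 2x2 confusion matrix. A sketch with counts that are purely illustrative, chosen to approximately reproduce the adenoma ≥6 mm figures (the abstract does not give the raw counts):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: 52 true positives, 40 false positives,
# 1 false negative, 417 true negatives.
m = screening_metrics(tp=52, fp=40, fn=1, tn=417)
print({k: f"{100 * v:.2f}%" for k, v in m.items()})
```

With these counts the sensitivity is 52/53 ≈ 98.11% and the PPV is 52/92 ≈ 56.52%, matching the reported adenoma values.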

  2. Enhanced Phospholipase A2 Group 3 Expression by Oxidative Stress Decreases the Insulin-Degrading Enzyme.

    Directory of Open Access Journals (Sweden)

    Daishi Yui

Oxidative stress has a ubiquitous role in neurodegenerative diseases, and oxidative damage in specific regions of the brain is associated with selective neurodegeneration. We previously reported that Alzheimer disease (AD) model mice showed decreased insulin-degrading enzyme (IDE) levels in the cerebrum and accelerated phenotypic features of AD when crossbred with alpha-tocopherol transfer protein knockout (Ttpa-/-) mice. To further investigate the role of chronic oxidative stress in AD pathophysiology, we performed DNA microarray analysis using young and aged wild-type mice and aged Ttpa-/- mice. Among the genes whose expression changed dramatically was Phospholipase A2 group 3 (Pla2g3); Pla2g3 was identified because of its expression profile of cerebrum-specific up-regulation by chronic oxidative stress in silico and in aged Ttpa-/- mice. Immunohistochemical studies also demonstrated that human astrocytic Pla2g3 expression was significantly increased in human AD brains compared with control brains. Moreover, transfection of HEK293 cells with human Pla2g3 decreased endogenous IDE expression in a dose-dependent manner. Our findings show a key role of Pla2g3 in the reduction of IDE, and suggest that a cerebrum-specific increase of Pla2g3 is involved in the initiation and/or progression of AD.

  3. Enhanced Phospholipase A2 Group 3 Expression by Oxidative Stress Decreases the Insulin-Degrading Enzyme.

    Science.gov (United States)

    Yui, Daishi; Nishida, Yoichiro; Nishina, Tomoko; Mogushi, Kaoru; Tajiri, Mio; Ishibashi, Satoru; Ajioka, Itsuki; Ishikawa, Kinya; Mizusawa, Hidehiro; Murayama, Shigeo; Yokota, Takanori

    2015-01-01

Oxidative stress has a ubiquitous role in neurodegenerative diseases, and oxidative damage in specific regions of the brain is associated with selective neurodegeneration. We previously reported that Alzheimer disease (AD) model mice showed decreased insulin-degrading enzyme (IDE) levels in the cerebrum and accelerated phenotypic features of AD when crossbred with alpha-tocopherol transfer protein knockout (Ttpa-/-) mice. To further investigate the role of chronic oxidative stress in AD pathophysiology, we performed DNA microarray analysis using young and aged wild-type mice and aged Ttpa-/- mice. Among the genes whose expression changed dramatically was Phospholipase A2 group 3 (Pla2g3); Pla2g3 was identified because of its expression profile of cerebrum-specific up-regulation by chronic oxidative stress in silico and in aged Ttpa-/- mice. Immunohistochemical studies also demonstrated that human astrocytic Pla2g3 expression was significantly increased in human AD brains compared with control brains. Moreover, transfection of HEK293 cells with human Pla2g3 decreased endogenous IDE expression in a dose-dependent manner. Our findings show a key role of Pla2g3 in the reduction of IDE, and suggest that a cerebrum-specific increase of Pla2g3 is involved in the initiation and/or progression of AD.

  4. Investigate Methods to Decrease Compilation Time-AX-Program Code Group Computer Science R& D Project

    Energy Technology Data Exchange (ETDEWEB)

    Cottom, T

    2003-06-11

Large simulation codes can take on the order of hours to compile from scratch. In Kull, which uses generic programming techniques, a significant portion of the time is spent generating and compiling template instantiations. I would like to investigate methods that would decrease the overall compilation time for large codes. These would be methods which could then be applied, hopefully, as standard practice to any large code. Success is measured by the overall decrease in wall clock time a developer spends waiting for an executable. Analyzing the make system of a slow-to-build project can benefit all developers on the project. Taking the time to analyze the number of processors used over the life of the build and restructuring the system to maximize the parallelization can significantly reduce build times. Distributing the build across multiple machines with the same configuration can increase the number of available processors for building and can help evenly balance the load. Becoming familiar with compiler options can have its benefits as well. The time improvements of the sum can be significant. Initial compilation time for Kull on OSF1 was ~3 hours. Final time on OSF1 after completion is 16 minutes. Initial compilation time for Kull on AIX was ~2 hours. Final time on AIX after completion is 25 minutes. Developers now spend 3 hours less waiting for a Kull executable on OSF1, and 2 hours less on AIX platforms. In the eyes of many Kull code developers, the project was a huge success.

  5. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  6. A Study of Effectiveness of Rational, Emotive, Behavior Therapy (REBT with Group Method on Decrease of Stress among Diabetic Patients

    Directory of Open Access Journals (Sweden)

    Kianoush Zahrakar

    2012-11-01

Introduction: The purpose of the present research was to study the effectiveness of Rational Emotive Behavior Therapy (REBT) with a group method in decreasing the stress of diabetic patients. Methods: The population of the research consisted of all diabetic patients who are members of the diabetic patients' association of Karaj city. The sample consisted of 30 diabetic patients (experimental group 15 persons and control group 15 persons) selected through random sampling. The research design was experimental (pre-test/post-test with control group). The group intervention was performed on the experimental group for 10 sessions. The research instrument was a stress-signs test. Five hypotheses were formulated about the effectiveness of REBT in decreasing stress signs (physical, emotional, behavioral, cognitive and total stress). Analysis of covariance was used for analyzing the data. Results: The findings indicated that REBT had a significant effect on decreasing every dimension of stress signs. Conclusion: The results of the present study demonstrate the effectiveness of REBT in decreasing the stress of diabetic patients. Given the increase of stress in diabetic patients and the effectiveness of mental intervention, special attention should be given to psychological treatment in this group of patients.

  7. IL-4 Deficiency Decreases Mortality but Increases Severity of Arthritis in Experimental Group B Streptococcus Infection

    Directory of Open Access Journals (Sweden)

    Luciana Tissi

    2009-01-01

IL-4 is an anti-inflammatory cytokine that inhibits the onset and severity of disease in different experimental arthritis models. Group B streptococci (GBS) have been recognized as an ever-growing cause of serious invasive infections in nonpregnant adults. Septic arthritis is a clinical manifestation of GBS infection. To investigate the role of IL-4 in experimental GBS infection, IL-4-deficient or -competent mice were inoculated with 1×10^7 GBS/mouse. Mortality, appearance of arthritis, GBS growth in the organs, and local and systemic cytokine and chemokine production were examined. IL-4-/- mice showed lower mortality rates but increased severity of arthritis, and exhibited a lower microbial load in blood, kidneys, and joints than wild-type mice. Increased local levels of IL-1β, IL-6, TNF-α, MIP-1α, and MIP-2 accompanied the more severe arthritis in IL-4-/- mice. Our results suggest a detrimental role of IL-4 in GBS sepsis, whereas it plays a beneficial role in GBS-induced arthritis.

  8. Calculation of average molecular parameters, functional groups, and a surrogate molecule for heavy fuel oils using 1H and 13C NMR spectroscopy

    KAUST Repository

    Abdul Jameel, Abdul Gani

    2016-04-22

Heavy fuel oil (HFO) is primarily used as fuel in marine engines and in boilers to generate electricity. Nuclear Magnetic Resonance (NMR) is a powerful analytical tool for structure elucidation, and in this study 1H NMR and 13C NMR spectroscopy were used for the structural characterization of 2 HFO samples. The NMR data were combined with elemental analysis and average molecular weight to quantify average molecular parameters (AMPs), such as the number of paraffinic carbons, naphthenic carbons, aromatic hydrogens, olefinic hydrogens, etc. in the HFO samples. Recent formulae published in the literature were used for calculating various derived AMPs, like the aromaticity factor (f_a), C/H ratio, average paraffinic chain length (n̄), naphthenic ring number (R_N), aromatic ring number (R_A), total ring number (R_T), aromatic condensation index (φ) and aromatic condensation degree (Ω). These derived AMPs help in understanding the overall structure of the fuel. A total of 19 functional groups were defined to represent the HFO samples, and their respective concentrations were calculated by formulating balance equations that equate the concentration of the functional groups with the concentration of the AMPs. Heteroatoms like sulfur, nitrogen, and oxygen were also included in the functional groups. Surrogate molecules were finally constructed to represent the average structure of the molecules present in the HFO samples. This surrogate molecule can be used for property estimation of the HFO samples and can also serve as a surrogate to represent the molecular structure for use in kinetic studies.
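Two of the simplest derived AMPs can be written down directly: the aromaticity factor is the aromatic fraction of total carbon, and the average paraffinic chain length is paraffinic carbons per chain. A sketch with invented counts (the paper's exact formulae may include further correction terms):

```python
def aromaticity_factor(c_aromatic, c_total):
    """f_a: fraction of all carbon atoms that are aromatic."""
    return c_aromatic / c_total

def average_chain_length(c_paraffinic, n_chains):
    """n-bar: average paraffinic chain length as paraffinic carbons per chain
    (the simplest reading; the literature formula may differ in detail)."""
    return c_paraffinic / n_chains

# Hypothetical molecule-averaged counts for an HFO-like sample:
print(round(aromaticity_factor(30.0, 58.0), 3))   # f_a ~ 0.517
print(round(average_chain_length(20.0, 4.0), 1))  # n-bar = 5.0
```

Such per-sample AMP values are what the balance equations then map onto concentrations of the 19 defined functional groups.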

  9. Prevalence of colorectal polyps in a group of subjects at average-risk of colorectal cancer undergoing colonoscopic screening in Tehran, Iran between 2008 and 2013.

    Science.gov (United States)

    Sohrabi, Masoudreza; Zamani, Farhad; Ajdarkosh, Hossien; Rakhshani, Naser; Ameli, Mitra; Mohamadnejad, Mehdi; Kabir, Ali; Hemmasi, Gholamreza; Khonsari, Mahmoudreza; Motamed, Nima

    2014-01-01

    Colorectal cancer (CRC) is one of the prime causes of mortality around the globe, with a significantly rising incidence in the Middle East region in recent decades. Since detection of CRC in the early stages is an important issue, and since to date there are no comprehensive epidemiologic studies depicting the Middle East region with special attention to the average-risk group, further investigation is of significant necessity in this regard. Our aim was to investigate the prevalence of preneoplastic and neoplastic lesions of the colon in an average-risk population. A total of 1,208 eligible asymptomatic, average-risk adults older than 40 years of age, referred to Firuzgar Hospital in the years 2008-2012, were enrolled. They underwent screening colonoscopy, and all polypoid lesions were removed and examined by an expert gastrointestinal pathologist. The lesions were classified by size, location, number and pathologic findings. The size of lesions was measured objectively by endoscopists. The mean age of participants was 56.5±9.59 years and 51.6% were male. The overall polyp detection rate was 199/1208 (16.5%); 26 subjects had non-neoplastic polyps, including hyperplastic lesions, and 173/1208 (14.3%) had neoplastic polyps, of which 26 (2.15%) were advanced neoplasms. The prevalence of colorectal neoplasia was more common among the 50-59 age group. Advanced adenoma was more frequent among the 60-69 age group. The majority of adenomas were detected in the distal colon, but a quarter of advanced adenomas were found in the proximal colon; advanced age and male gender were associated with the presence of adenoma. It seems that CRC screening in average-risk populations might be recommended in countries such as Iran. However, sigmoidoscopy alone would miss many colorectal adenomas. Furthermore, the 50-59 age group could be considered as an appropriate target population for this purpose in Iran.
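The detection rates quoted above follow directly from the raw counts; a quick check:

```python
# Re-deriving the rates reported in the abstract from the raw counts
# (n = 1208 screened; 199 with any polyp, 173 neoplastic, 26 advanced).
n = 1208
polyp_rate = 199 / n       # any polyp
neoplastic_rate = 173 / n  # neoplastic polyps
advanced_rate = 26 / n     # advanced neoplasms

print(f"{polyp_rate:.1%} {neoplastic_rate:.1%} {advanced_rate:.2%}")
# → 16.5% 14.3% 2.15%
```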

  10. Mortality decrease according to socioeconomic groups during the economic crisis in Spain: a cohort study of 36 million people.

    Science.gov (United States)

    Regidor, Enrique; Vallejo, Fernando; Granados, José A Tapia; Viciana-Fernández, Francisco J; de la Fuente, Luis; Barrio, Gregorio

    2016-11-26

    8) for the low group, 2·4% (2·0 to 2·7) for the medium group and 2·5% (1·9 to 3·0) for the high group in 2008-11. The low socioeconomic group showed the largest effect size for both wealth indicators. In Spain, probably due to a decrease in exposure to risk factors, all-cause mortality decreased more during the economic crisis than before it, especially in the low socioeconomic groups.

  11. Average correlation clustering algorithm (ACCA) for grouping of co-regulated genes with similar pattern of variation in their expression values.

    Science.gov (United States)

    Bhattacharya, Anindya; De, Rajat K

    2010-08-01

    Distance-based clustering algorithms can group genes that show similar expression values under multiple experimental conditions, but they are unable to identify groups of genes that have a similar pattern of variation in their expression values. Previously we developed the divisive correlation clustering algorithm (DCCA), based on the concept of correlation clustering, to tackle this situation; however, that algorithm may also fail in certain cases. To overcome these situations, we propose a new clustering algorithm, the average correlation clustering algorithm (ACCA), which is able to produce better clustering solutions than several existing methods. ACCA is able to find groups of genes having more common transcription factors and a similar pattern of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to execution time. Like DCCA, ACCA uses the concept of correlation clustering introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods, including DCCA, to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be more significantly relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over the others in determining a group of genes having more common transcription factors and a similar pattern of variation in their expression profiles. Availability of the software: The software has been developed using the C and Visual Basic languages, and can be executed on Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat. Then it needs to be installed. 
Two word files (included in the zip file) need to
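The core assignment rule described above — each gene goes to the cluster with which its average correlation is highest — can be sketched as follows. The toy expression matrix and seed clusters are invented for illustration; this is not the authors' C/Visual Basic implementation:

```python
# Minimal sketch of the ACCA-style reassignment step: a gene is assigned
# to the cluster maximizing its average Pearson correlation with the
# cluster's members. Data below are invented toy profiles.
import numpy as np

def avg_corr(gene: int, cluster_rows: list, X: np.ndarray) -> float:
    r = np.corrcoef(X)  # gene-by-gene Pearson correlation matrix
    return float(np.mean([r[gene, j] for j in cluster_rows if j != gene]))

X = np.array([
    [1.0, 2.0, 3.0, 4.0],   # gene 0: rising
    [2.0, 4.1, 5.9, 8.0],   # gene 1: rising (correlated with gene 0)
    [4.0, 3.0, 2.0, 1.0],   # gene 2: falling
    [8.1, 6.0, 3.9, 2.0],   # gene 3: falling (correlated with gene 2)
])
clusters = [[0, 1], [2, 3]]

# Reassignment step for gene 1: it should stay with the rising cluster.
scores = [avg_corr(1, c, X) for c in clusters]
best = int(np.argmax(scores))
print(best)   # → 0
```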

  12. The Effectiveness of Cognitive-Existential Group Therapy on Increasing Hope and Decreasing Depression in Women-Treated With Haemodialysis.

    Science.gov (United States)

    Bahmani, Bahman; Motamed Najjar, Maryam; Sayyah, Mansour; Shafi-Abadi, Abdollah; Haddad Kashani, Hamed

    2015-11-17

    Hopefulness is one of the most significant predictors of adaptation in hemodialysis patients, and plays a vital role in the recovery process. In contrast to hopefulness, depression is a frequent psychological reaction to hemodialysis treatment, with many negative consequences. The current research was designed to examine the effect of cognitive-existential treatment on the level of hopefulness and depression in hemodialysis patients. This quasi-experimental research included 22 female patients suffering from chronic kidney failure undergoing hemodialysis treatment for at least 3 months. The patients were randomly assigned into two groups of experimental and control conditions. The experimental group received a combination treatment including some elements of "existentialism" philosophy and a "cognitive" approach designed for the Iranian population. The treatment protocol lasted for 12 sessions of 90 minutes, twice per week, prior to the patient's entry to the dialysis session. Miller's hope scale and the BDI-II-21 were employed to collect the data. Statistical analysis was performed on the data using analysis of covariance with SPSS 16 software. The result of the analysis indicated that there was a significant improvement in hopefulness level and a decrease in depression of the patients in the experimental condition (P…). Cognitive-existential treatment resulted in increased hopefulness and decreased depression in the hemodialysis patients suffering from chronic kidney failure.

  13. The Effectiveness of Cognitive-Existential Group Therapy on Increasing Hope and Decreasing Depression in Women-Treated with Haemodialysis

    Science.gov (United States)

    Bahmani, Bahman; Najjar, Maryam Motamed; Sayyah, Mansour; Shafi-Abadi, Abdollah; Kashani, Hamed Haddad

    2016-01-01

    Introduction: Hopefulness is one of the most significant predictors of adaptation in hemodialysis patients, and plays a vital role in the recovery process. In contrast to hopefulness, depression is a frequent psychological reaction to hemodialysis treatment, with many negative consequences. The current research was designed to examine the effect of cognitive-existential treatment on the level of hopefulness and depression in hemodialysis patients. Materials & Methods: This quasi-experimental research included 22 female patients suffering from chronic kidney failure undergoing hemodialysis treatment for at least 3 months. The patients were randomly assigned into two groups of experimental and control conditions. The experimental group received a combination treatment including some elements of “existentialism” philosophy and a “cognitive” approach designed for the Iranian population. The treatment protocol lasted for 12 sessions of 90 minutes, twice per week, prior to the patient's entry to the dialysis session. Miller's hope scale and the BDI-II-21 were employed to collect the data. Statistical analysis was performed on the data using analysis of covariance with SPSS 16 software. Results: The result of the analysis indicated that there was a significant improvement in hopefulness level and a decrease in depression of the patients in the experimental condition (P…). Cognitive-existential treatment resulted in increased hopefulness and decreased depression in the hemodialysis patients suffering from chronic kidney failure. PMID:26755466

  14. What's Average?

    Science.gov (United States)

    Stack, Sue; Watson, Jane; Hindley, Sue; Samson, Pauline; Devlin, Robyn

    2010-01-01

    This paper reports on the experiences of a group of teachers engaged in an action research project to develop critical numeracy classrooms. The teachers initially explored how contexts in the media could be used as bases for activities to encourage student discernment and critical thinking about the appropriate use of the underlying mathematical…

  15. High Glucose-Induced PC12 Cell Death by Increasing Glutamate Production and Decreasing Methyl Group Metabolism

    Directory of Open Access Journals (Sweden)

    Minjiang Chen

    2016-01-01

    Full Text Available Objective. High glucose (HG)-induced neuronal cell death is responsible for the development of diabetic neuropathy. However, the effect of HG on metabolism in neuronal cells is still unclear. Materials and Methods. The neural-crest-derived PC12 cells were cultured for 72 h in the HG (75 mM) or control (25 mM) groups. We used NMR-based metabolomics to examine both intracellular and extracellular metabolic changes in HG-treated PC12 cells. Results. We found that the reduction in intracellular lactate may be due to excretion of more lactate into the extracellular medium under the HG condition. HG also induced changes in other energy-related metabolites, such as increased succinate and creatine phosphate. Our results also reveal that the synthesis of glutamate from the branched-chain amino acids (isoleucine and valine) may be enhanced under HG. Increased levels of intracellular alanine, phenylalanine, myoinositol, and choline were observed in HG-treated PC12 cells. In addition, HG-induced decreases in intracellular dimethylamine, dimethylglycine, and 3-methylhistidine may indicate a downregulation of methyl group metabolism. Conclusions. Our metabolomic results suggest that HG-induced neuronal cell death may be attributed to a series of metabolic changes, involving energy metabolism, amino acid metabolism, osmoregulation and membrane metabolism, and methyl group metabolism.

  16. Early colonization with a group of Lactobacilli decreases the risk for allergy at five years of age despite allergic heredity.

    Directory of Open Access Journals (Sweden)

    Maria A Johansson

    Full Text Available BACKGROUND: Microbial deprivation early in life can potentially influence immune-mediated disease development such as allergy. The aims of this study were to investigate the influence of parental allergy on infant gut colonization and associations between the infant gut microbiota and allergic disease at five years of age. METHODS AND FINDINGS: Fecal samples were collected from 58 infants, with allergic or non-allergic parents respectively, at one and two weeks as well as at one, two and twelve months of life. DNA was extracted from the fecal samples and real-time PCR, using species-specific primers, was used for detection of Bifidobacterium (B.) adolescentis, B. breve, B. bifidum, Clostridium (C.) difficile, a group of Lactobacilli (Lactobacillus (L.) casei, L. paracasei and L. rhamnosus) as well as Staphylococcus (S.) aureus. Infants with non-allergic parents were more frequently colonized by Lactobacilli compared to infants with allergic parents (p = 0.014). However, non-allergic five-year-olds acquired Lactobacilli more frequently during their first weeks of life than their allergic counterparts, irrespective of parental allergy (p = 0.009, p = 0.028). Further, the non-allergic children were colonized with Lactobacilli on more occasions during the first two months of life (p = 0.038). Also, significantly more non-allergic children were colonized with B. bifidum at one week of age than the children allergic at five years (p = 0.048). CONCLUSION: In this study we show that heredity for allergy has an impact on the gut microbiota in infants, but also that early colonization with Lactobacilli (L. casei, L. paracasei, L. rhamnosus) seems to decrease the risk for allergy at five years of age despite allergic heredity.

  17. [The mortality of patients with diabetes mellitus using oral antidiabetic drugs in the Czech Republic decreased over the decade of 2003-2013 and came closer to the population average].

    Science.gov (United States)

    Brož, Jan; Honěk, Petr; Dušek, Ladislav; Pavlík, Tomáš; Kvapil, Milan

    2015-11-01

    Every year, official data is published describing the care of patients with diabetes mellitus in the Czech Republic: the overall number of individuals with diabetes, the number of newly reported cases and the number of patient deaths. However, this data does not allow us to identify differences in mortality between individual cohorts of diabetic patients in relation to therapy. We compared the development of mortality in the periods 2002-2006 and 2010-2013 in a representative sample of the patient population with type 2 diabetes mellitus using oral antidiabetic drugs, kept in the database of the General Health Insurance Company of the Czech Republic (VZP), which provided health care coverage for 63% of the Czech population in 2013. A retrospective epidemiologic analysis. We identified all individuals in the VZP database who had a record of a DM diagnosis (E10-E16 based on ICD 10) or who had any antidiabetic therapy prescribed (ATC group A10) in the periods 2002-2008 and 2009-2013. We only selected those patients for the analysis who were treated with oral antidiabetic medicines (in the given year or the preceding years they had a record of treatment with at least one medicine from the A10B group, while having no record of treatment with medicines from the A10A group within both years). 237,665 individuals met the selected criteria in 2003 and 315,418 individuals in 2013. Mortality rates dropped for all age groups from 2003 to 2013: for 50-59 year olds from 1.2% to 0.7%; for 60-69 year olds from 2.6% to 1.6%; for 70-79 year olds from 5.8% to 3.5%. In 2013 mortality rates came close to those of the general population, where for the same age groups they reached 0.6%, 1.5% and 3.4% respectively. Expressed in relative terms, from 2003 the mortality among 50-59 year olds declined by 42% (Czechia by 25%), among 60-69 year olds by 39% (Czechia by 17%) and among 70-79 year olds by 40% (Czechia by 28%). 
The decline in mortality among the patients with DM treated with
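The relative declines quoted above follow from the absolute rates; a quick check (the middle figure differs slightly from the reported 39% because the published rates are themselves rounded):

```python
# Relative decline in mortality from the absolute rates quoted above
# (e.g. 50-59 year olds: 1.2% in 2003 down to 0.7% in 2013).
def relative_decline(before: float, after: float) -> float:
    return (before - after) / before * 100

for before, after in [(1.2, 0.7), (2.6, 1.6), (5.8, 3.5)]:
    print(round(relative_decline(before, after)))
# → 42, 38, 40 (the abstract reports 42%, 39% and 40%; the gap for the
#   middle group comes from rounding of the underlying rates)
```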

  18. One angry woman: Anger expression increases influence for men, but decreases influence for women, during group deliberation.

    Science.gov (United States)

    Salerno, Jessica M; Peter-Hagene, Liana C

    2015-12-01

    We investigated whether expressing anger increases social influence for men, but diminishes social influence for women, during group deliberation. In a deception paradigm, participants believed they were engaged in a computer-mediated mock jury deliberation about a murder case. In actuality, the interaction was scripted. The script included 5 other mock jurors who provided verdicts and comments in support of the verdicts; 4 agreed with the participant and 1 was a "holdout" dissenter. Holdouts expressed their opinions with no emotion, anger, or fear and had either male or female names. Holdouts exerted no influence on participants' opinions when they expressed no emotion or fear. Participants' confidence in their own verdict dropped significantly, however, after male holdouts expressed anger. Yet, anger expression undermined female holdouts: Participants became significantly more confident in their original verdicts after female holdouts expressed anger-even though they were expressing the exact same opinion and emotion as the male holdouts. Mediation analyses revealed that participants drew different inferences from male versus female anger, which created a gender gap in influence during group deliberation. The current study has implications for group decisions in general, and jury deliberations in particular, by suggesting that expressing anger might lead men to gain influence, but women to lose influence over others (even when making identical arguments). These diverging consequences might result in women potentially having less influence on societally important decisions than men, such as jury verdicts.

  19. A Shift in Task Routines during the Learning of a Motor Skill: Group-Averaged Data May Mask Critical Phases in the Individuals' Acquisition of Skilled Performance

    Science.gov (United States)

    Adi-Japha, Esther; Karni, Avi; Parnes, Ariel; Loewenschuss, Iris; Vakil, Eli

    2008-01-01

    The authors describe a transient phase during training on a movement sequence wherein, after an initial improvement in speed and decrease in variability, individual participants' performance showed a significant increase in variability without change in mean performance speed. Subsequent to this phase, as practice continued, variability again…

  20. The average numbers of outliers over groups of various splits into training and test sets: A criterion of the reliability of a QSPR? A case of water solubility

    Science.gov (United States)

    Toropova, Alla P.; Toropov, Andrey A.; Benfenati, Emilio; Gini, Giuseppina; Leszczynska, Danuta; Leszczynski, Jerzy

    2012-07-01

    The validation of quantitative structure-property/activity relationships (QSPR/QSAR) is an important challenge of modern theoretical chemistry. Analysis of QSPRs obtained with various splits of the available data into training and test sub-systems can be a useful approach to estimating the reliability of QSPR predictions. The balance of correlations is an approach to building up a QSPR using three components of the available data: (a) a sub-training set (developer), (b) a calibration set (critic), and (c) a test set (estimator). Computational experiments have shown that there is a probabilistic interdependence between the distribution of the available data into sub-training, calibration, and test sets and the average number of outliers in the test set.
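The idea of averaging outlier counts over many random splits can be sketched as below. The synthetic data and the one-variable least-squares "model" are stand-ins for a real QSPR, and the calibration set is left unused in this minimal version:

```python
# Sketch: repeat random splits into sub-training / calibration / test
# sets, fit a simple model, and report the average number of test-set
# outliers (absolute error above a fixed cutoff). All data are synthetic.
import random

random.seed(0)
data = [(x, 2.0 * x + random.gauss(0, 0.5)) for x in range(60)]

def fit_line(pts):
    """Ordinary least squares for y = a*x + b."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def outliers_over_splits(data, n_splits=20, cutoff=1.0):
    counts = []
    for _ in range(n_splits):
        pts = data[:]
        random.shuffle(pts)
        sub_train, calib, test = pts[:30], pts[30:45], pts[45:]
        a, b = fit_line(sub_train)          # calibration set unused here
        counts.append(sum(abs(a * x + b - y) > cutoff for x, y in test))
    return sum(counts) / len(counts)        # average number of outliers

print(outliers_over_splits(data))
```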

  1. High-Mobility Group Box-1 Induces Decreased Brain-Derived Neurotrophic Factor-Mediated Neuroprotection in the Diabetic Retina

    Directory of Open Access Journals (Sweden)

    Ahmed M. Abu El-Asrar

    2013-01-01

    Full Text Available To test the hypothesis that brain-derived neurotrophic factor (BDNF)-mediated neuroprotection is reduced by high-mobility group box-1 (HMGB1) in the diabetic retina, paired vitreous and serum samples from 46 patients with proliferative diabetic retinopathy and 34 nondiabetic patients were assayed for BDNF, HMGB1, soluble receptor for advanced glycation end products (sRAGE), soluble intercellular adhesion molecule-1 (sICAM-1), monocyte chemoattractant protein-1 (MCP-1), and TBARS. We also examined the retinas of diabetic rats and of rats injected intravitreally with HMGB1. The effect of the HMGB1 inhibitor glycyrrhizin on diabetes-induced changes in retinal BDNF expression was studied. Western blot, ELISA, and TBARS assays were used. BDNF was not detected in vitreous samples. BDNF levels were significantly lower in serum samples from diabetic patients compared with nondiabetics, whereas HMGB1, sRAGE, sICAM-1, and TBARS levels were significantly higher in diabetic serum samples. MCP-1 levels did not differ significantly. There was a significant inverse correlation between serum levels of BDNF and HMGB1. Diabetes and intravitreal administration of HMGB1 induced significant upregulation of the expression of HMGB1, TBARS, and cleaved caspase-3, whereas the expression of BDNF and synaptophysin was significantly downregulated in rat retinas. Glycyrrhizin significantly attenuated the diabetes-induced downregulation of BDNF. Our results suggest that HMGB1-induced downregulation of BDNF might be involved in the pathogenesis of diabetic retinal neurodegeneration.

  2. Differential Effects on Student Demographic Groups of Using ACT® College Readiness Assessment Composite Score, Act Benchmarks, and High School Grade Point Average for Predicting Long-Term College Success through Degree Completion. ACT Research Report Series, 2013 (5)

    Science.gov (United States)

    Radunzel, Justine; Noble, Julie

    2013-01-01

    In this study, we evaluated the differential effects on racial/ethnic, family income, and gender groups of using ACT® College Readiness Assessment Composite score and high school grade point average (HSGPA) for predicting long-term college success. Outcomes included annual progress towards a degree (based on cumulative credit-bearing hours…

  3. Evaluation of the antihypertensive effect of barnidipine, a dihydropyridine calcium entry blocker, as determined by the ambulatory blood pressure level averaged for 24 h, daytime, and nighttime. Barnidipine Study Group.

    Science.gov (United States)

    Imai, Y; Abe, K; Nishiyama, A; Sekino, M; Yoshinaga, K

    1997-12-01

    We evaluated the effect of barnidipine, a dihydropyridine calcium antagonist, administered once daily in the morning in a dose of 5, 10, or 15 mg, on ambulatory blood pressure (BP) in 34 patients (51.3±9.6 years). Hypertension was diagnosed based on the clinic BP. The patients were classified into groups according to the ambulatory BP: group 1, dippers with true hypertension; group 2, nondippers with true hypertension; group 3, dippers with false hypertension; and group 4, nondippers with false hypertension. Barnidipine reduced the clinic systolic BP (SBP) and diastolic BP (DBP) in all groups and significantly reduced the average 24 h ambulatory BP (133.0±16.5/90.7±12.3 mm Hg v 119.7±13.7/81.8±10.3 mm Hg, P…). Barnidipine significantly reduced the daytime ambulatory SBP in groups 1, 2, and 3, but not in group 4, and significantly reduced the daytime ambulatory DBP in group 1 but not in groups 2, 3, and 4. Barnidipine significantly reduced the nighttime ambulatory SBP only in group 2 and the nighttime ambulatory DBP in groups 2 and 4. Once-a-day administration of barnidipine lowered the 24 h BP in true hypertensives (the ratio of the trough to peak effect > 50%), but had minimal effect on low BP, such as the nocturnal BP in dippers and the ambulatory BP in false hypertensives. These findings suggest that barnidipine can be used safely in patients with isolated clinic ("white coat") hypertension and in those with dipping patterns of circadian BP variation whose nocturnal BP is low before treatment.
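The trough-to-peak (T/P) criterion mentioned above compares the BP reduction just before the next dose (trough) with the maximum reduction (peak); a sustained 24 h effect requires T/P > 50%. A minimal sketch with assumed reductions (not the trial's data):

```python
# Trough-to-peak ratio: BP reduction at trough divided by reduction at
# peak, in percent. The 8 and 14 mm Hg reductions are assumed values
# for illustration only.
def trough_to_peak(trough_reduction_mmhg: float, peak_reduction_mmhg: float) -> float:
    return trough_reduction_mmhg / peak_reduction_mmhg * 100

tp = trough_to_peak(8.0, 14.0)
print(round(tp, 1), tp > 50)   # → 57.1 True
```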

  4. A Randomized Controlled Trial to Decrease Job Burnout in First-Year Internal Medicine Residents Using a Facilitated Discussion Group Intervention.

    Science.gov (United States)

    Ripp, Jonathan A; Fallar, Robert; Korenstein, Deborah

    2016-05-01

    Background: Burnout is common in internal medicine (IM) trainees and is associated with depression and suboptimal patient care. Facilitated group discussion reduces burnout among practicing clinicians. Objective: We hypothesized that this type of intervention would reduce incident burnout among first-year IM residents. Methods: Between June 2013 and May 2014, participants from a convenience sample of 51 incoming IM residents were randomly assigned (in groups of 3) to the intervention or a control. Twice-monthly theme-based discussion sessions (18 total) led by expert facilitators were held for intervention groups. Surveys were administered at study onset and completion. Demographic and personal characteristics were collected. Burnout and burnout domains were the primary outcomes. Following convention, we defined burnout as a high emotional exhaustion or depersonalization score on the Maslach Burnout Inventory. Results: All 51 eligible residents participated; 39 (76%) completed both surveys. Initial burnout prevalence (10 of 21 [48%] versus 7 of 17 [41%], P = .69), incidence of burnout at year end (9 of 11 [82%] versus 5 of 10 [50%], P = .18), and secondary outcomes were similar in the intervention and control arms. More residents in the intervention group had high year-end depersonalization scores (18 of 21 [86%] versus 9 of 17 [53%], P = .04). Many intervention residents revealed that sessions did not truly free them from clinical or educational responsibilities. Conclusions: A facilitated group discussion intervention did not decrease burnout in resident physicians. Future discussion-based interventions for reducing resident burnout should be voluntary and effectively free participants from clinical duties.

  5. Aggregation and Averaging.

    Science.gov (United States)

    Siegel, Irving H.

    The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)

  6. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...

  7. Your Average Nigga

    Science.gov (United States)

    Young, Vershawn Ashanti

    2004-01-01

    "Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…

  8. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...

  9. Covariant approximation averaging

    CERN Document Server

    Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2014-01-01

    We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
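The AMA construction can be illustrated numerically: the improved estimator O_AMA = O_exact − O_approx + ⟨O_approx⟩_G subtracts the cheap approximation at the original source and adds back its average over symmetry-transformed (e.g. translated) sources, so the approximation's bias cancels in expectation. The sketch below uses synthetic numbers as stand-ins for lattice correlators:

```python
# Toy illustration of all-mode averaging (AMA): the approximation's
# systematic offset cancels between the subtracted and added-back terms,
# while the cheap, widely averaged term carries most of the statistics.
# All numbers are synthetic, not lattice QCD data.
import random

random.seed(1)
true_vals = [random.gauss(1.0, 0.2) for _ in range(1000)]  # per-config "exact" values
approx_bias = 0.05   # assumed systematic offset of the cheap approximation

def ama_estimate(exact, approx_here, approx_avg_over_G):
    return exact - approx_here + approx_avg_over_G

samples = []
for v in true_vals:
    approx_here = v + approx_bias + random.gauss(0, 0.02)    # one cheap solve
    approx_avg = v + approx_bias + random.gauss(0, 0.005)    # averaged over many sources
    samples.append(ama_estimate(v, approx_here, approx_avg))

mean = sum(samples) / len(samples)
print(round(mean, 2))   # close to the true mean ~1.0: the bias cancels
```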

  10. Negative Average Preference Utilitarianism

    Directory of Open Access Journals (Sweden)

    Roger Chao

    2012-03-01

    Full Text Available For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the “harmful” event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current “positive” forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).

  11. Physical Theories with Average Symmetry

    OpenAIRE

    Alamino, Roberto C.

    2013-01-01

    This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violat...

  12. Rigidity spectrum of Forbush decrease

    Science.gov (United States)

    Sakakibara, S.; Munakata, K.; Nagashima, K.

    1985-01-01

    Using data from neutron monitors and muon telescopes at surface and underground stations, the average rigidity spectrum of Forbush decreases (Fds) during the period 1978-1982 was obtained. Thirty-eight Fd events are classified into two groups, Hard Fd and Soft Fd, according to the size of the Fd at Sakashita station. It is found that a spectral form of fractional-power type, P^(-γ1)(P+P_c)^(-γ2), is more suitable for the present purpose than one of power-exponential type or of power type with an upper limiting rigidity. The best-fitted spectrum of fractional-power type is given by γ1 = 0.37, γ2 = 0.89 and P_c = 10 GV for Hard Fd, and γ1 = 0.77, γ2 = 1.02 and P_c = 14 GV for Soft Fd.
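The fitted fractional-power spectrum, f(P) ∝ P^(−γ1)(P + P_c)^(−γ2), can be evaluated directly with the parameters quoted in the abstract:

```python
# Fractional-power rigidity spectrum of a Forbush decrease, with the
# fitted Hard-Fd and Soft-Fd parameters from the abstract (P, P_c in GV).
def fd_spectrum(P: float, gamma1: float, gamma2: float, Pc: float) -> float:
    return P ** (-gamma1) * (P + Pc) ** (-gamma2)

hard = dict(gamma1=0.37, gamma2=0.89, Pc=10.0)
soft = dict(gamma1=0.77, gamma2=1.02, Pc=14.0)

for P in (5.0, 20.0, 100.0):   # rigidity in GV
    print(P, fd_spectrum(P, **hard), fd_spectrum(P, **soft))
```

With its larger exponents, the Soft-Fd spectrum falls off more steeply with rigidity than the Hard-Fd one.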

  13. Average Convexity in Communication Situations

    NARCIS (Netherlands)

    Slikker, M.

    1998-01-01

    In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graphrestricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the underlyin

  14. Sampling Based Average Classifier Fusion

    Directory of Open Access Journals (Sweden)

    Jian Hou

    2014-01-01

    fusion algorithms have been proposed in the literature, average fusion is almost always selected as the baseline for comparison. Little has been done to explore the potential of average fusion and to propose a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. As a result, we find that, by proper sampling of soft labels and classifiers, the average fusion performance can be evidently improved. This result presents sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
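The fusion rule itself is simple: average the per-class soft labels across classifiers, optionally over a sampled subset of them, as the abstract suggests. The soft-label values below are invented for illustration:

```python
# Average fusion of soft labels, with an optional sampling step over the
# classifier pool (per the abstract's idea). Values are invented.
import random

random.seed(2)
# Soft labels from 5 hypothetical classifiers for one sample, 3 classes.
soft_labels = [
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.2, 0.7, 0.1],   # a weaker classifier that sampling may drop
    [0.7, 0.2, 0.1],
    [0.6, 0.2, 0.2],
]

def average_fusion(labels):
    k = len(labels[0])
    return [sum(l[c] for l in labels) / len(labels) for c in range(k)]

def sampled_average_fusion(labels, n_keep):
    return average_fusion(random.sample(labels, n_keep))

full = average_fusion(soft_labels)
sub = sampled_average_fusion(soft_labels, 3)
print([round(p, 2) for p in full], full.index(max(full)))
# → [0.52, 0.36, 0.12] 0  (class 0 wins under full average fusion)
```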

  15. Physical Theories with Average Symmetry

    CERN Document Server

    Alamino, Roberto C

    2013-01-01

    This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise, and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this to possible violations of physical symmetries, for instance Lorentz invariance in some quantum gravity theories, is briefly commented on.

  16. Quantized average consensus with delay

    NARCIS (Netherlands)

    Jafarian, Matin; De Persis, Claudio

    2012-01-01

    The average consensus problem is a special case of cooperative control in which the agents of the network asymptotically converge to the average state (i.e., position) of the network by transferring information via a communication topology. One of the issues in large-scale networks is the cost of co…
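
    A minimal simulation of synchronous average consensus (not from the paper), assuming a four-agent path network with Metropolis weights:

```python
import numpy as np

# Path graph 1-2-3-4 with Metropolis weights: W is symmetric and doubly
# stochastic, so repeated local averaging drives every agent to the mean.
W = np.array([
    [2/3, 1/3, 0.0, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 1/3, 1/3, 1/3],
    [0.0, 0.0, 1/3, 2/3],
])

x = np.array([4.0, 0.0, 2.0, 6.0])  # initial agent states; mean is 3.0
for _ in range(200):                # synchronous consensus iterations
    x = W @ x                       # each agent averages with its neighbours

print(x)
```

    Quantization and delays, the subject of the paper, complicate exactly this basic iteration.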

  17. Gaussian moving averages and semimartingales

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas

    2008-01-01

    In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results...... are constructive - meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...

  18. Role of spatial averaging in multicellular gradient sensing

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-06-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  19. Vocal attractiveness increases by averaging.

    Science.gov (United States)

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for vocal attractiveness increasing by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.

  20. Age group athletes in inline skating: decrease in overall and increase in master athlete participation in the longest inline skating race in Europe – the Inline One-Eleven

    Directory of Open Access Journals (Sweden)

    Teutsch U

    2013-05-01

    Uwe Teutsch,1 Beat Knechtle,1,2 Christoph Alexander Rüst,1 Thomas Rosemann,1 Romuald Lepers3 (1Institute of General Practice and Health Services Research, University of Zurich, Zurich, Switzerland; 2Gesundheitszentrum St Gallen, St Gallen, Switzerland; 3INSERM U1093, Faculty of Sport Sciences, University of Burgundy, Dijon, France). Background: Participation and performance trends in age group athletes have been investigated in endurance and ultraendurance races in swimming, cycling, running, and triathlon, but not in long-distance inline skating. The aim of this study was to investigate trends in participation, age, and performance in the longest inline race in Europe, the Inline One-Eleven over 111 km, held between 1998 and 2009. Methods: The total number, age distribution, age at the time of the competition, and race times of male and female finishers at the Inline One-Eleven were analyzed. Results: Overall participation increased until 2003 but decreased thereafter. During the 12-year period, the relative participation of skaters younger than 40 years old decreased while relative participation increased for skaters older than 40 years. The mean top ten skating time was 199 ± 9 minutes (range: 189–220 minutes) for men and 234 ± 17 minutes (range: 211–271 minutes) for women, respectively. The gender difference in performance remained stable at 17% ± 5% across years. Conclusion: To summarize, although the participation of master long-distance inline skaters increased, the overall participation decreased across years in the Inline One-Eleven. The race times of the best female and male skaters stabilized across years with a gender difference in performance of 17% ± 5%. Further studies should focus on the participation in the international World Inline Cup races. Keywords: endurance, men, women, gender

  1. Averaged Electroencephalic Audiometry in Infants

    Science.gov (United States)

    Lentz, William E.; McCandless, Geary A.

    1971-01-01

    Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)

  2. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary ...

  3. Cosmic structure, averaging and dark energy

    CERN Document Server

    Wiltshire, David L

    2013-01-01

    These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...

  4. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    J C Travers

    2010-11-01

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems, with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 and 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.

  5. Dependability in Aggregation by Averaging

    CERN Document Server

    Jesus, Paulo; Almeida, Paulo Sérgio

    2010-01-01

    Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues, when prospecting their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised, when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...
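
    As an illustrative sketch (not from the paper), iterative pairwise ("gossip") averaging conserves the global sum, which is the fundamental invariant this class of algorithms relies on, while driving every node toward the network-wide average:

```python
import random

# Gossip averaging: two random nodes repeatedly replace their values with
# their mutual average. The global sum is invariant under each exchange,
# so all values converge to the true network average.
random.seed(1)
values = [10.0, 0.0, 4.0, 2.0, 9.0]
target = sum(values) / len(values)  # the true average, 5.0

for _ in range(2000):
    i, j = random.sample(range(len(values)), 2)
    avg = (values[i] + values[j]) / 2
    values[i] = values[j] = avg

print(values)
```

    Node failures or message loss break exactly this sum invariant, which is the dependability issue the paper examines.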

  6. Measuring Complexity through Average Symmetry

    OpenAIRE

    Alamino, Roberto C.

    2015-01-01

    This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle - measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalised, including to continuous cases an...

  7. Mirror averaging with sparsity priors

    CERN Document Server

    Dalalyan, Arnak

    2010-01-01

    We consider the problem of aggregating the elements of a (possibly infinite) dictionary for building a decision procedure, that aims at minimizing a given criterion. Along with the dictionary, an independent identically distributed training sample is available, on which the performance of a given procedure can be tested. In a fairly general set-up, we establish an oracle inequality for the Mirror Averaging aggregate based on any prior distribution. This oracle inequality is applied in the context of sparse coding for different problems of statistics and machine learning such as regression, density estimation and binary classification.
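
    A hypothetical sketch of the exponential-weighting idea behind mirror-averaging-style aggregation under a uniform prior (the risks and temperature below are invented for illustration):

```python
import math

# Dictionary elements with lower empirical risk receive exponentially
# larger aggregation weights; beta is a temperature parameter.
risks = [0.30, 0.10, 0.50]  # empirical risks of three candidate procedures
beta = 0.1                  # assumed temperature

raw = [math.exp(-r / beta) for r in risks]
total = sum(raw)
weights = [w / total for w in raw]
print(weights)
```

    The aggregate is then the weights-weighted mixture of the candidate procedures; a non-uniform sparsity prior would multiply each raw weight by its prior mass.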

  8. (Average-) convexity of common pool and oligopoly TU-games

    NARCIS (Netherlands)

    Driessen, T.S.H.; Meinhardt, H.

    2000-01-01

    The paper studies both the convexity and average-convexity properties for a particular class of cooperative TU-games called common pool games. The common pool situation involves a cost function as well as a (weakly decreasing) average joint production function. Firstly, it is shown that, if the relevant…

  9. High-Average Power Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Dowell, David H.; /SLAC; Power, John G.; /Argonne

    2012-09-05

    There has been significant progress in the development of high-power facilities in recent years yet major challenges remain. The task of WG4 was to identify which facilities were capable of addressing the outstanding R&D issues presently preventing high-power operation. To this end, information from each of the facilities represented at the workshop was tabulated and the results are presented herein. A brief description of the major challenges is given, but the detailed elaboration can be found in the other three working group summaries.

  10. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  11. The average visual response in patients with cerebrovascular disease

    NARCIS (Netherlands)

    Oostehuis, H.J.G.H.; Ponsen, E.J.; Jonkman, E.J.; Magnus, O.

    1969-01-01

    The average visual response (AVR) was recorded in thirty patients after a cerebrovascular accident and in fourteen control subjects from the same age group. The AVR was obtained with the aid of a 16-channel EEG machine, a Computer of Average Transients, and a tape recorder with 13 FM channels. This…

  12. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    7 CFR (Agriculture), Regulations of the Department of Agriculture, Agricultural Marketing Service (Marketing Agreements…), § 1209.12 On average. On average means a rolling average of production or imports during the last two…
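
    A minimal sketch of the rolling two-period average that the definition describes (the function name and figures are hypothetical):

```python
def rolling_two_period_average(history):
    """Average of the last two recorded annual values (hypothetical helper)."""
    window = history[-2:]  # rolling window: the two most recent periods
    return sum(window) / len(window)

# E.g. annual production figures; only the last two years count.
print(rolling_two_period_average([120.0, 80.0, 100.0]))  # (80 + 100) / 2 = 90.0
```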

  13. Task performance and perceptions of anxiety: averaging and summation in an evaluative setting.

    Science.gov (United States)

    Seta, J J; Crisson, J E; Seta, C E; Wang, M A

    1989-03-01

    Suggests that individuals' "stage fright," or perceptions of anxiety and performance, is a function of tendencies to both average and summate the impact of audience members. We found that under certain conditions adding an evaluative member to an audience decreased anxiety, whereas in other conditions the addition of evaluative members increased anxiety. These results are not expected from social impact theory or social facilitation research and suggest that individuals do not react to groups of individuals in a manner analogous to the way in which trait information is typically averaged in forming impressions of individuals (Anderson, 1981). An averaging-summation model that does account for these findings is presented. This research has implications for research on crowding, stress, social influence, and affective responses.

  14. Decreasing relative risk premium

    DEFF Research Database (Denmark)

    Hansen, Frank

    2007-01-01

    ...such that the corresponding relative risk premium is a decreasing function of present wealth, and we determine the set of associated utility functions. We find a new characterization of risk vulnerability and determine a large set of utility functions, closed under summation and composition, which are both risk vulnerable...... and have decreasing relative risk premium. We finally introduce the notion of partial risk neutral preferences on binary lotteries and show that partial risk neutrality is equivalent to preferences with decreasing relative risk premium...

  15. Level sets of multiple ergodic averages

    CERN Document Server

    Ai-Hua, Fan; Ma, Ji-Hua

    2011-01-01

    We propose to study multiple ergodic averages from a multifractal analysis point of view. In some special cases in symbolic dynamics, Hausdorff dimensions of the level sets of the multiple ergodic average limit are determined by using Riesz products.

  16. Accurate Switched-Voltage voltage averaging circuit

    OpenAIRE

    金光, 一幸; 松本, 寛樹

    2006-01-01

    This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for NMOS mismatch error in a MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging and an SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. The performance of this circuit is verified by PSpice simulations.

  17. Spectral averaging techniques for Jacobi matrices

    CERN Document Server

    del Rio, Rafael; Schulz-Baldes, Hermann

    2008-01-01

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  18. Decreasing Relative Risk Premium

    DEFF Research Database (Denmark)

    Hansen, Frank

    We consider the risk premium demanded by a decision maker with wealth x in order to be indifferent between obtaining a new level of wealth y1 with certainty, or to participate in a lottery which either results in unchanged present wealth or a level of wealth y2 > y1. We define the relative risk...... premium as the quotient between the risk premium and the increase in wealth y1–x which the decision maker puts on the line by choosing the lottery in place of receiving y1 with certainty. We study preferences such that the relative risk premium is a decreasing function of present wealth, and we determine...... relative risk premium in the small implies decreasing relative risk premium in the large, and decreasing relative risk premium everywhere implies risk aversion. We finally show that preferences with decreasing relative risk premium may be equivalently expressed in terms of certain preferences on risky...

  19. Decreasing Serial Cost Sharing

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Østerdal, Lars Peter

    The increasing serial cost sharing rule of Moulin and Shenker [Econometrica 60 (1992) 1009] and the decreasing serial rule of de Frutos [Journal of Economic Theory 79 (1998) 245] have attracted attention due to their intuitive appeal and striking incentive properties. An axiomatic characterization...... of the increasing serial rule was provided by Moulin and Shenker [Journal of Economic Theory 64 (1994) 178]. This paper gives an axiomatic characterization of the decreasing serial rule...

  20. Decreasing the stigma burden of chronic pain.

    Science.gov (United States)

    Monsivais, Diane B

    2013-10-01

    To describe stigmatizing experiences in a group of Mexican-American women with chronic pain and provide clinical implications for decreasing stigma. This focused ethnographic study derived data from semistructured interviews, participant observations, and fieldwork. Participants provided detailed descriptions of communicating about chronic pain symptoms, treatment, and management. The sample consisted of 15 English-speaking Mexican-American women 21-65 years old (average age = 45.6 years) who had nonmalignant chronic pain symptoms for 1 year or more. The cultural and social norm in the United States is the expectation for objective evidence (such as an injury) to be present if a pain condition exists. In this study, this norm created suspicion and subsequent stigmatization on the part of family, co-workers, and even those with the pain syndromes, that the painful condition was imagined instead of real. To decrease stigmatization of chronic pain, providers must understand their own misconceptions about chronic pain, possess the skills and resources to access and use the highest level of practice evidence available, and become an advocate for improved pain care at local, state, and national levels. ©2013 The Author(s) ©2013 American Association of Nurse Practitioners.

  1. Average-Time Games on Timed Automata

    OpenAIRE

    Jurdzinski, Marcin; Trivedi, Ashutosh

    2009-01-01

    An average-time game is played on the infinite graph of configurations of a finite timed automaton. The two players, Min and Max, construct an infinite run of the automaton by taking turns to perform a timed transition. Player Min wants to minimise the average time per transition and player Max wants to maximise it. A solution of average-time games is presented using a reduction to an average-price game on a finite graph. A direct consequence is an elementary proof of determinacy for average-tim...

  2. On the average exponent of elliptic curves modulo $p$

    CERN Document Server

    Freiberg, Tristan

    2012-01-01

    Given an elliptic curve $E$ defined over $\mathbb{Q}$ and a prime $p$ of good reduction, let $\tilde{E}(\mathbb{F}_p)$ denote the group of $\mathbb{F}_p$-points of the reduction of $E$ modulo $p$, and let $e_p$ denote the exponent of said group. Assuming a certain form of the Generalized Riemann Hypothesis (GRH), we study the average of $e_p$ as $p \le X$ ranges over primes of good reduction, and find that the average exponent essentially equals $p \cdot c_{E}$, where the constant $c_{E} > 0$ depends on $E$. For $E$ without complex multiplication (CM), $c_{E}$ can be written as a rational number (depending on $E$) times a universal constant. Without assuming GRH, we can determine the average exponent when $E$ has CM, as well as give an upper bound on the average in the non-CM case.

  3. Low and decreasing vaccine effectiveness against influenza A(H3) in 2011/12 among vaccination target groups in Europe: results from the I-MOVE multicentre case-control study.

    LENUS (Irish Health Repository)

    Kissling, E

    2013-01-01

    Within the Influenza Monitoring Vaccine Effectiveness in Europe (I-MOVE) project we conducted a multicentre case–control study in eight European Union (EU) Member States to estimate the 2011/12 influenza vaccine effectiveness against medically attended influenza-like illness (ILI) laboratory-confirmed as influenza A(H3) among the vaccination target groups. Practitioners systematically selected ILI / acute respiratory infection patients to swab within seven days of symptom onset. We restricted the study population to those meeting the EU ILI case definition and compared influenza A(H3) positive to influenza laboratory-negative patients. We used logistic regression with study site as fixed effect and calculated adjusted influenza vaccine effectiveness (IVE), controlling for potential confounders (age group, sex, month of symptom onset, chronic diseases and related hospitalisations, number of practitioner visits in the previous year). Adjusted IVE was 25% (95% confidence intervals (CI): -6 to 47) among all ages (n=1,014), 63% (95% CI: 26 to 82) in adults aged between 15 and 59 years and 15% (95% CI: -33 to 46) among those aged 60 years and above. Adjusted IVE was 38% (95% CI: -8 to 65) in the early influenza season (up to week 6 of 2012) and -1% (95% CI: -60 to 37) in the late phase. The results suggested a low adjusted IVE in 2011/12. The lower IVE in the late season could be due to virus changes through the season or waning immunity. Virological surveillance should be enhanced to quantify change over time and understand its relation with duration of immunological protection. Seasonal influenza vaccines should be improved to achieve acceptable levels of protection.

  4. Low and decreasing vaccine effectiveness against influenza A(H3) in 2011/12 among vaccination target groups in Europe: results from the I-MOVE multicentre case-control study.

    Science.gov (United States)

    Kissling, E; Valenciano, M; Larrauri, A; Oroszi, B; Cohen, J M; Nunes, B; Pitigoi, D; Rizzo, C; Rebolledo, J; Paradowska-Stankiewicz, I; Jiménez-Jorge, S; Horváth, J K; Daviaud, I; Guiomar, R; Necula, G; Bella, A; O'Donnell, J; Głuchowska, M; Ciancio, B C; Nicoll, A; Moren, A

    2013-01-31

    Within the Influenza Monitoring Vaccine Effectiveness in Europe (I-MOVE) project we conducted a multicentre case–control study in eight European Union (EU) Member States to estimate the 2011/12 influenza vaccine effectiveness against medically attended influenza-like illness (ILI) laboratory-confirmed as influenza A(H3) among the vaccination target groups. Practitioners systematically selected ILI / acute respiratory infection patients to swab within seven days of symptom onset. We restricted the study population to those meeting the EU ILI case definition and compared influenza A(H3) positive to influenza laboratory-negative patients. We used logistic regression with study site as fixed effect and calculated adjusted influenza vaccine effectiveness (IVE), controlling for potential confounders (age group, sex, month of symptom onset, chronic diseases and related hospitalisations, number of practitioner visits in the previous year). Adjusted IVE was 25% (95% confidence intervals (CI): -6 to 47) among all ages (n=1,014), 63% (95% CI: 26 to 82) in adults aged between 15 and 59 years and 15% (95% CI: -33 to 46) among those aged 60 years and above. Adjusted IVE was 38% (95%CI: -8 to 65) in the early influenza season (up to week 6 of 2012) and -1% (95% CI: -60 to 37) in the late phase. The results suggested a low adjusted IVE in 2011/12. The lower IVE in the late season could be due to virus changes through the season or waning immunity. Virological surveillance should be enhanced to quantify change over time and understand its relation with duration of immunological protection. Seasonal influenza vaccines should be improved to achieve acceptable levels of protection.

  5. Decreasing strabismus surgery

    Science.gov (United States)

    Arora, A; Williams, B; Arora, A K; McNamara, R; Yates, J; Fielder, A

    2005-01-01

    Aim: To determine whether there has been a consistent change across countries and healthcare systems in the frequency of strabismus surgery in children over the past decade. Methods: Retrospective analysis of data on all strabismus surgery performed in NHS hospitals in England and Wales, on children aged 0–16 years between 1989 and 2000, and between 1994 and 2000 in Ontario (Canada) hospitals. These were compared with published data for Scotland, 1989–2000. Results: Between 1989 and 1999–2000 the number of strabismus procedures performed on children, 0–16 years, in England decreased by 41.2% from 15 083 to 8869. Combined medial rectus recession with lateral rectus resection decreased from 5538 to 3013 (45.6%) in the same period. Bimedial recessions increased from 489 to 762, oblique tenotomies from 43 to 121, and the use of adjustable sutures from 29 to 44, in 2000. In Ontario, operations for squint decreased from 2280 to 1685 (26.1%) among 0–16 year olds between 1994 and 2000. Conclusion: The clinical impression of decrease in the frequency of paediatric strabismus surgery is confirmed. In the authors’ opinion this cannot be fully explained by a decrease in births or by the method of healthcare funding. Two factors that might have contributed are better conservative strabismus management and increased subspecialisation that has improved the quality of surgery and the need for re-operation. This finding has a significant impact upon surgical services and also on the training of ophthalmologists. PMID:15774914

  6. Grade Point Average and Changes in (Great) Grade Expectations.

    Science.gov (United States)

    Wendorf, Craig A.

    2002-01-01

    Examines student grade expectations throughout a semester in which students offered their expectations three times during the course: (1) within the first week; (2) midway through the semester; and (3) the week before the final examination. Finds that their expectations decreased, stating that their cumulative grade point average was related to the…

  7. Decreasing relative risk premium

    DEFF Research Database (Denmark)

    Hansen, Frank

    2007-01-01

    We consider the risk premium demanded by a decision maker in order to be indifferent between obtaining a new level of wealth with certainty, or to participate in a lottery which either results in unchanged wealth or an even higher level than what can be obtained with certainty. We study preferences...... such that the corresponding relative risk premium is a decreasing function of present wealth, and we determine the set of associated utility functions. We find a new characterization of risk vulnerability and determine a large set of utility functions, closed under summation and composition, which are both risk vulnerable...... and have decreasing relative risk premium. We finally introduce the notion of partial risk neutral preferences on binary lotteries and show that partial risk neutrality is equivalent to preferences with decreasing relative risk premium...

  8. A new approach for Bayesian model averaging

    Institute of Scientific and Technical Information of China (English)

    TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun

    2012-01-01

    Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that requires that the BMA weights add to one, and then use a limited-memory quasi-Newtonian algorithm for solving the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
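
    As a hedged illustration of how a trained BMA ensemble combines its members (the weights, forecasts, and variances below are invented, not taken from the paper):

```python
import numpy as np

# Assume the weights and per-model predictive variances have already been
# estimated by some training scheme (EM, MCMC, or a BFGS-style optimizer).
weights = np.array([0.5, 0.3, 0.2])    # BMA weights; must sum to one
forecasts = np.array([1.0, 2.0, 4.0])  # individual model forecasts
sigma2 = np.array([0.4, 0.4, 0.4])     # per-model predictive variances

mean = weights @ forecasts
# Law of total variance: within-model variance plus between-model spread
var = weights @ sigma2 + weights @ (forecasts - mean) ** 2
print(mean, var)
```

    The between-model term is what makes the BMA predictive distribution wider than any single member when the models disagree.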

  9. Decreasing serial cost sharing

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Østerdal, Lars Peter Raahave

    2009-01-01

    The increasing serial cost sharing rule of Moulin and Shenker (Econometrica 60:1009-1037, 1992) and the decreasing serial rule of de Frutos (J Econ Theory 79:245-275, 1998) are known for their intuitive appeal and striking incentive properties. An axiomatic characterization of the increasing serial...

  10. Decreasing Serial Cost Sharing

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Østerdal, Lars Peter

    The increasing serial cost sharing rule of Moulin and Shenker [Econometrica 60 (1992) 1009] and the decreasing serial rule of de Frutos [Journal of Economic Theory 79 (1998) 245] have attracted attention due to their intuitive appeal and striking incentive properties. An axiomatic characterization...

  11. Six-month feeding of low-dose fish oil decreases vascular expression of high mobility group box1 and receptor for advanced glycation end-products in rat chronic allograft vasculopathy.

    Science.gov (United States)

    Wei, W; Zhu, Y; Wang, J; Guo, M; Li, Y; Li, Ji

    2013-06-01

    Chronic allograft vasculopathy (CAV) is a typical feature of chronic rejection of small bowel transplantations (SBTx). Our previous studies revealed that feeding fish oil, a natural source of n-3 polyunsaturated fatty acids (PUFAs), protected against CAV. The underlying mechanism remains to be clarified. The pathway mediated by the receptor for advanced glycation end products (RAGE) and its ligand, high mobility group box-1 (HMGB1), which may contribute to the pathogenesis of CAV, is potentially regulated by n-3 PUFAs. Using a chronic rejection model of rat SBTx, the present study investigated whether amelioration of CAV by fish oil feeding was associated with regulation of the RAGE signaling pathway. Moreover, our previous studies also showed that feeding low-dose fish oil for 3 months had no effect. Since a relatively short duration of treatment might fail to produce a visible response, we fed low-dose fish oil for 190 postoperative days. Male inbred Lewis rats and F344 rats were used to establish a chronic rejection model of SBTx. The recipient rats were administered phosphate-buffered saline or fish oil at a daily dose of 3 mL/kg. All rats survived over 190 postoperative days. The expression of HMGB1 and RAGE increased in CAV-bearing vessels. Feeding low-dose fish oil for 6 months attenuated CAV and significantly reduced HMGB1 and RAGE expression, indicating that the beneficial effects of low-dose fish oil on CAV may occur via down-regulation of the HMGB1-RAGE pathway. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. WIDTHS AND AVERAGE WIDTHS OF SOBOLEV CLASSES

    Institute of Scientific and Technical Information of China (English)

    刘永平; 许贵桥

    2003-01-01

    This paper concerns the problem of the Kolmogorov n-width, the linear n-width, the Gel'fand n-width and the Bernstein n-width of Sobolev classes of the periodic multivariate functions in the space Lp(Td), and the average Bernstein σ-width, average Kolmogorov σ-width, and average linear σ-width of Sobolev classes of the multivariate quantities.

  13. A procedure to average 3D anatomical structures.

    Science.gov (United States)

    Subramanya, K; Dean, D

    2000-12-01

    Creating a feature-preserving average of three dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface delineating wires represent high curvature crestlines. By adding tile boundaries in flatter areas the 3D image surface is parametrized into anatomically labeled (homology mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. Our result provides an average that well represents the source images and may be useful clinically as a deformable model or for animation.
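    The shape-averaging step can be illustrated in miniature: once surfaces are homology-mapped and superimposed, averaging reduces to combining homologous samples pointwise. A hedged sketch (plain 3D point lists stand in for the B-spline space curves of the actual procedure):

```python
def average_curves(curves):
    """Pointwise average of homology-mapped sampled curves.

    curves: list of curves; each curve is a list of (x, y, z) samples taken
    at homologous parameter values (a stand-in for the B-spline machinery).
    All curves are assumed to be already superimposed and equally sampled.
    """
    n = len(curves)
    length = len(curves[0])
    return [
        tuple(sum(curve[i][d] for curve in curves) / n for d in range(3))
        for i in range(length)
    ]
```

    The real procedure does the careful part before this step: superimposing specimens and parametrizing each surface so that sample i on one specimen is anatomically homologous to sample i on every other.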

  14. Stochastic averaging of quasi-Hamiltonian systems

    Institute of Scientific and Technical Information of China (English)

    朱位秋

    1996-01-01

    A stochastic averaging method is proposed for quasi-Hamiltonian systems (Hamiltonian systems with light dampings subject to weakly stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method of quasi-Hamiltonian systems and that the results obtained by this method for several examples prove its effectiveness.

  15. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  16. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    Energy Technology Data Exchange (ETDEWEB)

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
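    The two averaging operations at the core of DMA can be sketched as follows (a 1-D toy illustration under assumed names, not the reported implementation):

```python
def update_time_average(avg, sample, n):
    """Running time average after the n-th sample (n starts at 1),
    as maintained during the short DNS stage."""
    return avg + (sample - avg) / n

def volume_average(fine, factor):
    """Block-average a 1-D fine-grid field onto a grid coarser by `factor`,
    the spatial step that passes the field to the next mesh level."""
    return [sum(fine[i:i + factor]) / factor
            for i in range(0, len(fine), factor)]
```

    In the method itself these operations are applied to the describing equations, generating correlation (coupling) terms between adjacent scales that are computed from the flow field rather than modeled.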

  17. Hyperhomocysteinemia decreases bone blood flow

    Directory of Open Access Journals (Sweden)

    Neetu T

    2011-01-01

    Neetu Tyagi*, Thomas P Vacek*, John T Fleming, Jonathan C Vacek, Suresh C Tyagi, Department of Physiology and Biophysics, School of Medicine, University of Louisville, Louisville, KY, USA. *These authors contributed equally. Abstract: Elevated plasma levels of homocysteine (Hcy), known as hyperhomocysteinemia (HHcy), are associated with osteoporosis. A decrease in bone blood flow is a potential cause of compromised bone mechanical properties. Therefore, we hypothesized that HHcy decreases bone blood flow and biomechanical properties. To test this hypothesis, male Sprague–Dawley rats were treated with Hcy (0.67 g/L) in drinking water for 8 weeks. Age-matched rats served as controls. At the end of the treatment period, the rats were anesthetized. Blood samples were collected from experimental and control rats. Biochemical turnover markers (body weight, Hcy, vitamin B12, and folate) were measured. Systolic blood pressure was measured from the right carotid artery. Tibia blood flow was measured by a laser Doppler flow probe. The results indicated that Hcy levels were significantly higher in the Hcy-treated group than in control rats, whereas vitamin B12 levels were lower in the Hcy-treated group compared with control rats. There was no significant difference in folate concentration or blood pressure in Hcy-treated versus control rats. The tibial blood flow index of the control group was significantly higher (0.78 ± 0.09 flow units) than that of the Hcy-treated group (0.51 ± 0.09). The tibial mass was 1.1 ± 0.1 g in the control group and 0.9 ± 0.1 g in the Hcy-treated group. Tibia bone density was unchanged in Hcy-treated rats. These results suggest that Hcy causes a reduction in bone blood flow, which contributes to compromised bone biomechanical properties. Keywords: homocysteine, tibia, bone density

  18. Average Transmission Probability of a Random Stack

    Science.gov (United States)

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

  19. Average sampling theorems for shift invariant subspaces

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The sampling theorem is one of the most powerful results in signal analysis. In this paper, we study the average sampling on shift invariant subspaces, e.g. wavelet subspaces. We show that if a subspace satisfies certain conditions, then every function in the subspace is uniquely determined and can be reconstructed by its local averages near certain sampling points. Examples are given.

  20. Testing linearity against nonlinear moving average models

    NARCIS (Netherlands)

    de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.

    1998-01-01

    Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small sample performance of the proposed tests are evaluated in a Monte Carlo study and compared

  1. Averaging Einstein's equations : The linearized case

    NARCIS (Netherlands)

    Stoeger, William R.; Helmi, Amina; Torres, Diego F.

    2007-01-01

    We introduce a simple and straightforward averaging procedure, which is a generalization of one which is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW situations.

  3. Average excitation potentials of air and aluminium

    NARCIS (Netherlands)

    Bogaardt, M.; Koudijs, B.

    1951-01-01

    By means of a graphical method the average excitation potential I may be derived from experimental data. Average values for Iair and IAl have been obtained. It is shown that in representing range/energy relations by means of Bethe's well-known formula, I has to be taken as a continuously changing function...

  5. New results on averaging theory and applications

    Science.gov (United States)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function at it is zero, the classical averaging theory does not provide information about the periodic solution associated to a non-simple zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
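    For orientation, the classical first-order result being generalized can be stated as follows (standard textbook form in assumed notation, not quoted from the paper):

```latex
% Periodic system in standard form:
%   \dot{x} = \varepsilon F(t, x), \qquad F(t + T, x) = F(t, x).
% Associated averaged function:
\[
  f(z) = \frac{1}{T} \int_0^T F(t, z)\, \mathrm{d}t .
\]
% If z_0 is a simple zero of f, i.e. f(z_0) = 0 and \det Df(z_0) \neq 0,
% then for sufficiently small \varepsilon the system has a T-periodic
% solution x(t, \varepsilon) with x(0, \varepsilon) \to z_0 as \varepsilon \to 0.
```

    The paper's contribution is precisely the case where the non-degeneracy condition det Df(z_0) ≠ 0 fails.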

  6. Analogue Divider by Averaging a Triangular Wave

    Science.gov (United States)

    Selvam, Krishnagiri Chinnathambi

    2017-08-01

    A new analogue divider circuit based on averaging a triangular wave with operational amplifiers is explained in this paper. The reference triangular waveform is shifted from the zero-voltage level up towards the positive power supply voltage level. Its positive portion is obtained by a positive rectifier and its average value is obtained by a low-pass filter. The same triangular waveform is shifted from the zero-voltage level down towards the negative power supply voltage level. Its negative portion is obtained by a negative rectifier and its average value is obtained by another low-pass filter. Both averaged voltages are combined in a summing amplifier, and the summed voltage is applied to the negative input of an op-amp configured in a closed negative-feedback loop. The op-amp output is the divider output.

  7. Simple Moving Average: A Method of Reporting Evolving Complication Rates.

    Science.gov (United States)

    Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J

    2016-09-01

    Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). Simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics. 2016; 39(5):e869-e876.].
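    The simple moving average described here is straightforward to compute: track the complication rate over a sliding window of the most recent cases (a sketch with assumed names; the study found a lag of 75 patients most informative):

```python
def simple_moving_average(events, lag):
    """Sliding-window complication rate over consecutive cases.

    events: list of 0/1 indicators (1 = complication) in chronological order
    lag: window size, e.g. the 75 most recent cases
    Returns one rate per fully populated window.
    """
    rates = []
    for i in range(lag, len(events) + 1):
        window = events[i - lag:i]
        rates.append(sum(window) / lag)
    return rates
```

    The last value of the returned series is the "current" complication rate the abstract contrasts with the overall (cumulative) rate.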

  8. Averaged Lemaître-Tolman-Bondi dynamics

    CERN Document Server

    Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried

    2016-01-01

    We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor which represents a derivation of the simplest phenomenological solution of Buchert's equations in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model but it does not adequately describe our "real" Universe.

  9. Average-passage flow model development

    Science.gov (United States)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model described the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, is discussed.

  10. FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW

    Institute of Scientific and Technical Information of China (English)

    Haiying WANG; Xinyu ZHANG; Guohua ZOU

    2009-01-01

    In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without consideration of the uncertainty from the selection process. This often leads to underreporting of variability and overly optimistic confidence sets. Model averaging estimation is an alternative to this procedure, which incorporates model uncertainty into the estimation process. In recent years, there has been rising interest in model averaging from the frequentist perspective, and important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.
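    As one concrete instance of frequentist model averaging (smoothed AIC weighting, a widely used scheme; a hedged sketch, not necessarily among the estimators surveyed here):

```python
import math

def smoothed_aic_weights(aics):
    """Weights proportional to exp(-AIC_k / 2), normalized to sum to one."""
    m = min(aics)  # subtract the minimum for numerical stability
    raw = [math.exp(-(a - m) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def model_averaged_estimate(estimates, weights):
    """Weighted combination of the candidate models' estimates of a parameter."""
    return sum(e * w for e, w in zip(estimates, weights))
```

    Instead of committing to the single best-AIC model, every candidate contributes in proportion to its support, which is one way of carrying model uncertainty into the final estimate.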

  11. Averaging of Backscatter Intensities in Compounds

    Science.gov (United States)

    Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.

    2002-01-01

    Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging. PMID:27446752
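    The proposed averaging rule can be sketched as follows (the electron-fraction formula and the elemental backscatter-yield values are illustrative assumptions based on the abstract, not the paper's exact formulation):

```python
def electron_fractions(atoms):
    """Electron fractions for a compound given (atom count, atomic number Z) pairs.

    Each element's share is the fraction of all electrons it contributes:
    n_i * Z_i / sum_j(n_j * Z_j) -- a sketch of the 'electron fraction' idea.
    """
    total = sum(n * z for n, z in atoms)
    return [n * z / total for n, z in atoms]

def averaged_backscatter(fractions, etas):
    """Electron-fraction-weighted average of (hypothetical) elemental yields."""
    return sum(f * eta for f, eta in zip(fractions, etas))
```

    For H2O, oxygen contributes 8 of the molecule's 10 electrons, so it dominates the weighted average far more than its 16/18 mass share alone would suggest for lighter partners in other compounds.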

  12. Experimental Demonstration of Squeezed State Quantum Averaging

    CERN Document Server

    Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
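    The harmonic mean of variances that the protocol implements is easy to contrast with the arithmetic mean numerically (a minimal sketch; function names are assumptions):

```python
def harmonic_mean(variances):
    """Harmonic mean of quadrature variances, n / sum(1/v_k)."""
    n = len(variances)
    return n / sum(1.0 / v for v in variances)

def arithmetic_mean(variances):
    """Standard arithmetic mean, for comparison."""
    return sum(variances) / len(variances)
```

    The harmonic mean never exceeds the arithmetic mean and is pulled toward the smallest (least noisy) variance, which is why it can outperform the arithmetic strategy for fluctuating squeezed sources.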

  13. The Average Lower Connectivity of Graphs

    Directory of Open Access Journals (Sweden)

    Ersin Aslan

    2014-01-01

    For a vertex v of a graph G, the lower connectivity, denoted by sv(G), is the smallest number of vertices in a set that contains v and whose deletion from G produces a disconnected or a trivial graph. The average lower connectivity, denoted by κav(G), is the value (∑v∈V(G) sv(G))/|V(G)|. It is shown that this parameter can be used to measure the vulnerability of networks. This paper contains results on bounds for the average lower connectivity and obtains the average lower connectivity of some graphs.
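    The definitions above can be checked by brute force on small graphs (an illustrative sketch; exponential in graph size, so suitable only for toy examples):

```python
from itertools import combinations

def is_connected(vertices, edges):
    """Depth-first connectivity check on the subgraph induced by `vertices`."""
    verts = list(vertices)
    if len(verts) <= 1:
        return True
    adj = {u: set() for u in verts}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = {verts[0]}, [verts[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(verts)

def lower_connectivity(vertices, edges, v):
    """s_v(G): size of a smallest set containing v whose deletion
    disconnects G or leaves a trivial (single-vertex) graph."""
    for size in range(1, len(vertices) + 1):
        for cut in combinations(vertices, size):
            if v not in cut:
                continue
            rest = [u for u in vertices if u not in cut]
            if len(rest) == 1 or (len(rest) > 1 and not is_connected(rest, edges)):
                return size
    return len(vertices)

def average_lower_connectivity(vertices, edges):
    """kappa_av(G): mean of s_v(G) over all vertices."""
    return sum(lower_connectivity(vertices, edges, v) for v in vertices) / len(vertices)
```

    On the path a-b-c, deleting the middle vertex alone disconnects the graph (s_b = 1), while each endpoint needs a second vertex (s_a = s_c = 2), giving κav = 5/3.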

  14. Cosmic inhomogeneities and averaged cosmological dynamics.

    Science.gov (United States)

    Paranjape, Aseem; Singh, T P

    2008-10-31

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.

  15. Changing mortality and average cohort life expectancy

    DEFF Research Database (Denmark)

    Schoen, Robert; Canudas-Romo, Vladimir

    2005-01-01

    of survivorship. An alternative aggregate measure of period mortality, which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate...

  16. 40 CFR 63.1332 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group IV Polymers and Resins § 63.1332 Emissions averaging... if pollution prevention measures are used to control five or more of the emission points included in... additional emission points if pollution prevention measures are used to control five or more of the...

  17. Mortgage loan accessibility for risk groups decreased / Aleksei Gunter

    Index Scriptorium Estoniae

    Gunter, Aleksei, 1979-

    2003-01-01

    The summary of the study "Eluaseme kättesaadavus riskirühmadele" (Housing Accessibility for Risk Groups) concludes that there is no shortage of housing in Estonia, but most dwellings are outdated. The housing market is also oriented primarily toward people in a good economic position.

  19. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  20. Appeals Council Requests - Average Processing Time

    Data.gov (United States)

    Social Security Administration — This dataset provides annual data from 1989 through 2015 for the average processing time (elapsed time in days) for dispositions by the Appeals Council (AC) (both...

  1. Average Vegetation Growth 1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  2. Average Vegetation Growth 1997 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  3. Average Vegetation Growth 1992 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  4. Average Vegetation Growth 2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  5. Average Vegetation Growth 1995 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1995 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  6. Average Vegetation Growth 2000 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2000 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  7. Average Vegetation Growth 1998 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  8. Average Vegetation Growth 1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  9. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  10. Average Vegetation Growth 1996 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1996 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  11. Average Vegetation Growth 2005 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2005 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  12. Average Vegetation Growth 1993 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  13. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  14. Spacetime Average Density (SAD) Cosmological Measures

    CERN Document Server

    Page, Don N

    2014-01-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmolo...

  15. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  16. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
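The simplest even-rank case of such a rotational average, rank two (one-photon absorption with linearly polarized light), is a standard identity and may help orient the reader; it is an editorial illustration, not an equation quoted from the abstract above:

```latex
% Isotropic (rotational) average of a rank-2 Cartesian tensor T_{ij}:
% only the rank-2 isotropic tensor \delta_{ij} survives the orientational average,
\left\langle T_{ij} \right\rangle_{\Omega} = \tfrac{1}{3}\,\delta_{ij}\,T_{kk},
% so, e.g., the one-photon absorption cross section averages to
% \langle\sigma\rangle \propto \tfrac{1}{3}\,|\boldsymbol{\mu}|^{2}.
```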

  17. Rotational averaging of multiphoton absorption cross sections

    Science.gov (United States)

    Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth

    2014-11-01

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  18. Monthly snow/ice averages (ISCCP)

    Data.gov (United States)

    National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...

  19. Average Annual Precipitation (PRISM model) 1961 - 1990

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...

  20. Symmetric Euler orientation representations for orientational averaging.

    Science.gov (United States)

    Mayerhöfer, Thomas G

    2005-09-01

    A new kind of orientation representation called symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes that are based on conventional Euler orientation representations. To that end, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations leads to a dependence of the result on the specific Euler orientation representation that was utilized and on the initial position of the crystal. The latter problem can be overcome partly by the introduction of a weighing factor, but only for two-axes-type Euler orientation representations. In the case of a numerical evaluation of the average, a residual difference remains even if a two-axes-type Euler orientation representation is used, despite the utilization of a weighing factor. In contrast, this problem does not occur as a matter of principle if a symmetric Euler orientation representation is used, while the result of the averaging for both types of orientation representations converges with increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighing factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.
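The advantage of quaternion-based orientation representations for averaging can be sketched numerically: uniformly random unit quaternions (normalized 4-D Gaussians) sample orientations without the bias that equally spaced Euler-angle grids introduce. The Monte Carlo sketch below is an editorial illustration, not the SEOR scheme itself; it rotationally averages a rank-2 tensor and recovers the isotropic result.

```python
import numpy as np

def random_rotation(rng):
    """Rotation matrix from a uniformly random unit quaternion (a normalized
    4-D Gaussian is uniform on S^3), avoiding Euler-angle sampling bias."""
    q = rng.standard_normal(4)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def orientational_average(T, n=20000, seed=0):
    """Monte Carlo orientational average of a rank-2 tensor: the mean of
    R T R^T over random rotations tends to (tr T / 3) * I."""
    rng = np.random.default_rng(seed)
    return sum(random_rotation(rng) @ T @ random_rotation(rng).T * 0
               for _ in range(0)) if n == 0 else \
           sum((lambda R: R @ T @ R.T)(random_rotation(rng))
               for _ in range(n)) / n
```

For any input tensor the average converges to an isotropic tensor, which is why orientation representations that sample uniformly matter for averaging procedures.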

  1. Cosmic Inhomogeneities and the Average Cosmological Dynamics

    OpenAIRE

    Paranjape, Aseem; Singh, T. P.

    2008-01-01

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a `dark energy'. However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic ini...

  2. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We verify the model's output with examples and simulation results obtained using the NS2 simulator.
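The classical weighted-fair-share rule behind such calculations can be sketched as follows. This is a generic illustration of WFQ-style sharing with iterative surplus redistribution, not the specific model of the paper:

```python
def wfq_shares(link_speed, weights, demands):
    """Iteratively assign bandwidth under WFQ-style weighted fair sharing.

    Illustrative sketch only: a flow whose demand is below its weighted
    share keeps just its demand, and the surplus capacity is redistributed
    among the remaining flows in proportion to their weights.
    """
    shares = {}
    active = set(range(len(weights)))
    capacity = link_speed
    while active:
        total_w = sum(weights[i] for i in active)
        # Flows already satisfied by their weighted fair share.
        satisfied = {i for i in active
                     if demands[i] <= capacity * weights[i] / total_w}
        if not satisfied:
            for i in active:
                shares[i] = capacity * weights[i] / total_w
            return shares
        for i in satisfied:
            shares[i] = demands[i]
            capacity -= demands[i]
        active -= satisfied
    return shares
```

For example, on a 10 Mb/s link with weights (1, 1, 2) and demands (1, 10, 10), flow 0 keeps its 1 Mb/s and the remaining 9 Mb/s splits 3/6 between the other two flows.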

  3. Optimum orientation versus orientation averaging description of cluster radioactivity

    CERN Document Server

    Seif, W M; Refaie, A I; Amer, L H

    2016-01-01

    Background: The deformation of the nuclei involved in the cluster decay of heavy nuclei seriously affects their half-lives against the decay. Purpose: We investigate the description of the different decay stages in both the optimum-orientation and the orientation-averaged pictures of the cluster decay process. Method: We consider the decays of 232,233,234U and 236,238Pu isotopes. The quantum mechanical knocking frequency and penetration probability based on the Wentzel-Kramers-Brillouin approximation are used to find the decay width. Results: We found that the orientation-averaged decay width is one or two orders of magnitude less than its value along the non-compact optimum orientation. The difference between the two values increases with decreasing mass number of the emitted cluster. Correspondingly, the preformation probability extracted from the averaged decay width increases by the same orders of magnitude compared to its value obtained considering the optimum orientation. The cluster preformati...

  4. Perceptual averaging in individuals with Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jennifer Elise Corbett

    2016-11-01

    Full Text Available There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above-chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.

  5. Feedlot cattle with calm temperaments have higher average daily gains than cattle with excitable temperaments.

    Science.gov (United States)

    Voisinet, B D; Grandin, T; Tatum, J D; O'Connor, S F; Struthers, J J

    1997-04-01

    This study was conducted to assess the effect of temperament on the average daily gains of feedlot cattle. Cattle (292 steers and 144 heifers) were transported to Colorado feedlot facilities. Breeds studied included Braford (n = 177), Simmental x Red Angus (n = 92), Red Brangus (n = 70), Simbrah (n = 65), Angus (n = 18), and Tarentaise x Angus (n = 14). Cattle were temperament rated on a numerical scale (chute score) during routine weighing and processing. Data were separated into two groups based on breed, Brahman cross (> or = 25% Brahman) and non-Brahman breeding. Animals that had Brahman breeding had a higher mean temperament rating (3.45 +/- .09), i.e., were more excitable, than animals that had no Brahman influence (1.80 +/- .10) (P < .001). These data also show that heifers have a higher mean temperament rating than steers (P < .05). Temperament scores evaluated for each breed group also showed that an increased temperament score resulted in decreased average daily gains (P < .05). These data show that cattle that were quieter and calmer during handling had greater average daily gains than cattle that became agitated during routine handling.

  6. Working group on the modes of a TGAP remission or decrease concerning the intermediate energy consumptions; Groupe de travail sur les modalites d'exoneration et d'attenuation d'une tgap sur les consommations intermediaires d'energie

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-04-01

    The French government decided to examine the conditions of application of the TGAP tax on intermediate energy consumption, in order to strengthen the fight against the greenhouse effect and to better control energy consumption. A working group was created to examine the possibilities of TGAP remission. This report presents the conclusions of the working group and analyses different scenarios. (A.L.B.)

  7. Averaged controllability of parameter dependent conservative semigroups

    Science.gov (United States)

    Lohéac, Jérôme; Zuazua, Enrique

    2017-02-01

    We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.

  8. Average Temperatures in the Southwestern United States, 2000-2015 Versus Long-Term Average

    Data.gov (United States)

    U.S. Environmental Protection Agency — This indicator shows how the average air temperature from 2000 to 2015 has differed from the long-term average (1895–2015). To provide more detailed information,...

  9. Books average previous decade of economic misery.

    Directory of Open Access Journals (Sweden)

    R Alexander Bentley

    Full Text Available For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
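The quantities involved are simple to compute. The sketch below (with hypothetical numbers, not the paper's data) shows the misery index and the trailing moving average that the study correlates with literary misery:

```python
def misery_index(inflation, unemployment):
    """Economic misery index: the sum of inflation and unemployment rates."""
    return [i + u for i, u in zip(inflation, unemployment)]

def trailing_average(series, window=11):
    """Average of the `window` values ending at each position; the paper's
    best fit used a moving average over the previous 11 years."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]
```

The literary misery index for a given year would then be compared against `trailing_average(misery_index(...))` at that year.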

  10. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged descriptions of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  11. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and by the aim of applying similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  12. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  13. High Average Power Yb:YAG Laser

    Energy Technology Data Exchange (ETDEWEB)

    Zapata, L E; Beach, R J; Payne, S A

    2001-05-23

    We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, and thermal management with our first-generation cooler. Progress was also made in the design of a second-generation laser.

  14. The modulated average structure of mullite.

    Science.gov (United States)

    Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X

    2015-06-01

    Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3 a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real

  15. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    J M M Senovilla

    2007-07-01

    Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear, decisive difference between singular and non-singular cosmologies.

  16. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  17. SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS

    Energy Technology Data Exchange (ETDEWEB)

    K. L. Goluoglu

    2000-06-09

    The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.

  18. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach...

  19. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can ... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
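The element-wise trimmed average at the core of this robustness claim can be sketched in a few lines. This is a simplified ingredient of the idea, not the full TGA algorithm:

```python
import numpy as np

def trimmed_average(X, trim=0.2):
    """Per-coordinate trimmed mean: in each pixel/column, drop the `trim`
    fraction of smallest and largest values before averaging, so isolated
    outliers cannot dominate the estimate.  (Simplified sketch, not the
    full Trimmed Grassmann Average.)"""
    X = np.sort(np.asarray(X, dtype=float), axis=0)  # rows = observations
    k = int(trim * X.shape[0])
    return X[k:X.shape[0] - k].mean(axis=0)
```

With one grossly corrupted observation in five, the plain mean is pulled far off while the trimmed average stays near the clean data.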

  20. Model averaging and muddled multimodel inferences.

    Science.gov (United States)

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t
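The partial-standard-deviation rescaling described above can be sketched as follows, assuming the Bring-style formula s*_j = s_j * sqrt(1/VIF_j) * sqrt((n-1)/(n-p)); treat the exact constants as an assumption of this illustration rather than a quotation from the paper:

```python
import numpy as np

def partial_sd(X):
    """Partial standard deviations of the columns of predictor matrix X
    (n observations x p predictors, n > p): the ordinary sd of each column
    deflated by its variance inflation factor, so coefficient estimates can
    be put on commensurate scales across models.  Illustrative sketch."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    # VIF_j is the j-th diagonal of the inverse predictor correlation matrix.
    vif = np.diag(np.linalg.inv(np.corrcoef(X, rowvar=False)))
    return X.std(axis=0, ddof=1) * np.sqrt(1.0 / vif) * np.sqrt((n - 1) / (n - p))
```

Multiplying each regression coefficient by its predictor's partial standard deviation yields standardized estimates whose scales no longer shift with the multicollinearity of a particular model.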

  1. Model averaging and muddled multimodel inferences

    Science.gov (United States)

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the

  2. Parameterized Traveling Salesman Problem: Beating the Average

    NARCIS (Netherlands)

    Gutin, G.; Patel, V.

    2016-01-01

    In the traveling salesman problem (TSP), we are given a complete graph Kn together with an integer weighting w on the edges of Kn, and we are asked to find a Hamilton cycle of Kn of minimum weight. Let h(w) denote the average weight of a Hamilton cycle of Kn for the weighting w. Vizing in 1973 asked
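For the complete graph, h(w) has a closed form by symmetry: every edge of Kn lies in the same fraction 2/(n-1) of Hamilton cycles, so h(w) = 2W/(n-1), where W is the total edge weight. A small editorial sketch verifying this against brute-force enumeration:

```python
from itertools import permutations

def h_formula(n, w):
    """Average Hamilton-cycle weight in K_n: each edge appears in a fraction
    2/(n-1) of Hamilton cycles, so h(w) = 2 * (total edge weight) / (n - 1)."""
    total = sum(w[i][j] for i in range(n) for j in range(i + 1, n))
    return 2 * total / (n - 1)

def h_bruteforce(n, w):
    """Average over all Hamilton cycles, enumerated as permutations fixing
    vertex 0 (each undirected cycle is counted twice, once per direction,
    which leaves the average unchanged)."""
    weights = []
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        weights.append(sum(w[tour[i]][tour[(i + 1) % n]] for i in range(n)))
    return sum(weights) / len(weights)
```

Beating the average, in the sense of the paper, means finding a Hamilton cycle of weight strictly below this easily computed baseline.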

  3. On averaging methods for partial differential equations

    NARCIS (Netherlands)

    Verhulst, F.

    2001-01-01

    The analysis of weakly nonlinear partial differential equations, both qualitatively and quantitatively, is emerging as an exciting field of investigation. In this report we consider specific results related to averaging, but we do not aim at completeness. The sections ... and ... contain important material which

  4. Discontinuities and hysteresis in quantized average consensus

    NARCIS (Netherlands)

    Ceragioli, Francesca; Persis, Claudio De; Frasca, Paolo

    2011-01-01

    We consider continuous-time average consensus dynamics in which the agents’ states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of ‘‘practical consensus’’. To cope with undesired chattering
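A minimal discrete-time sketch of quantized average consensus may help; the paper treats the continuous-time, Krasowskii-solution setting, and the gain, quantizer step, and graph below are illustrative assumptions, not the paper's:

```python
import math

def quantize(x, step=1.0):
    # Uniform quantizer: round to the nearest multiple of `step`.
    return step * math.floor(x / step + 0.5)

def consensus_step(x, neighbors, gain=0.1, step=1.0):
    """One synchronous step of average consensus in which each agent sees
    only the quantized states of its neighbors (and quantizes its own)."""
    return [xi + gain * sum(quantize(x[j], step) - quantize(xi, step)
                            for j in neighbors[i])
            for i, xi in enumerate(x)]
```

On an undirected graph the symmetric exchanges conserve the sum of the states, and iterating drives the states to within roughly one quantization step of each other, i.e., to "practical consensus" rather than exact agreement.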

  5. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation...

  6. A Functional Measurement Study on Averaging Numerosity

    Science.gov (United States)

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  7. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic li...

  8. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...

  9. Quantum Averaging of Squeezed States of Light

    DEFF Research Database (Denmark)

    Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...

  10. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  11. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models in which some agents employ technical trading rules of the type…
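
Rules of this type are easy to state concretely. As an illustrative sketch (a generic short/long crossover rule, not the specific model of the paper), a buy signal is generated when the short-window moving average rises above the long-window one:

```python
import statistics

def ma_crossover_signals(prices, short=3, long=5):
    """Illustrative MA trading rule: +1 (buy) when the short moving average
    is above the long one, -1 (sell) when below, 0 otherwise or during warm-up."""
    signals = []
    for t in range(len(prices)):
        if t + 1 < long:
            signals.append(0)  # not enough history yet
            continue
        short_ma = statistics.mean(prices[t + 1 - short: t + 1])
        long_ma = statistics.mean(prices[t + 1 - long: t + 1])
        signals.append(1 if short_ma > long_ma else -1 if short_ma < long_ma else 0)
    return signals
```

On a steadily rising price series the rule ends in a buy state; on a falling one, in a sell state.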

  12. Average utility maximization: A preference foundation

    NARCIS (Netherlands)

    A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)

    2014-01-01

    This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the rich structure naturally provided by the variable length of the sequen…
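
The evaluation criterion itself is simple to state: a sequence (x_1, …, x_n) of variable length n is ranked by the average of the utilities of its outcomes. A minimal sketch (the square-root utility function is an arbitrary stand-in, not from the paper):

```python
import math

def average_utility(sequence, u=math.sqrt):
    """Average utility of a variable-length sequence: (1/n) * sum of u(x_i)."""
    return sum(u(x) for x in sequence) / len(sequence)

# Sequences of different lengths are directly comparable:
# average_utility([4, 9]) -> 2.5, while appending a poor outcome lowers it:
# average_utility([4, 9, 1]) -> 2.0
```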

  13. High average-power induction linacs

    Energy Technology Data Exchange (ETDEWEB)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.

    1989-03-15

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.
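
The duty-factor arithmetic behind "high average power" is straightforward: average beam power is peak power (beam current times accelerating voltage) scaled by the fraction of time the beam is on. A sketch with illustrative numbers (the 1 kHz repetition rate below is an assumption, not a figure from the paper):

```python
def average_beam_power(current_a, energy_ev, pulse_s, rep_rate_hz):
    """Average beam power = peak power * duty factor."""
    peak_power_w = current_a * energy_ev   # beam energy in eV acts as volts
    duty_factor = pulse_s * rep_rate_hz    # fraction of time the beam is on
    return peak_power_w * duty_factor

# 2000 A at 100 MeV in 50 ns pulses at an assumed 1 kHz repetition rate:
# peak power 2e11 W, duty factor 5e-5, so about 10 MW average beam power.
```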

  14. High Average Power Optical FEL Amplifiers

    CERN Document Server

    Ben-Zvi, I; Litvinenko, V

    2005-01-01

    Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Li...

  15. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;

    2011-01-01

      We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...

  16. Full averaging of fuzzy impulsive differential inclusions

    Directory of Open Access Journals (Sweden)

    Natalia V. Skripnik

    2010-09-01

    Full Text Available In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).

  17. Materials for high average power lasers

    Energy Technology Data Exchange (ETDEWEB)

    Marion, J.E.; Pertica, A.J.

    1989-01-01

    Unique materials properties requirements for solid state high average power (HAP) lasers dictate a materials development research program. A review of the desirable laser, optical and thermo-mechanical properties for HAP lasers precedes an assessment of the development status for crystalline and glass hosts optimized for HAP lasers. 24 refs., 7 figs., 1 tab.

  19. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  20. Average Emissivity Curve of Batse Gamma-Ray Bursts with Different Intensities

    Science.gov (United States)

    Mitrofanov, Igor G.; Litvak, Maxim L.; Briggs, Michael S.; Paciesas, William S.; Pendleton, Geoffrey N.; Preece, Robert D.; Meegan, Charles A.

    1999-01-01

    Six intensity groups with approximately 150 BATSE gamma-ray bursts each are compared using average emissivity curves. Time stretch factors for each of the dimmer groups are estimated with respect to the brightest group, which serves as the reference, taking into account the systematics of counts-produced noise effects and choice statistics. A stretching/intensity anticorrelation is found with good statistical significance during the average back slopes of bursts. A stretch factor of approximately 2 is found between the 150 dimmest bursts, with peak flux less than 0.45 photons/(cm² s), and the 147 brightest bursts, with peak flux greater than 4.1 photons/(cm² s). On the other hand, while a trend of increasing stretch factor may exist for the rise fronts of bursts with peak flux decreasing from greater than 4.1 photons/(cm² s) down to 0.7 photons/(cm² s), the magnitude of the stretch factor is less than approximately 1.4 and is therefore inconsistent with the stretch factor of the back slope.

  1. Averaged Extended Tree Augmented Naive Classifier

    Directory of Open Access Journals (Sweden)

    Aaron Meehan

    2015-07-01

    Full Text Available This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN, which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN and Averaged One-Dependence Estimator (AODE classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.

  2. Trajectory averaging for stochastic approximation MCMC algorithms

    CERN Document Server

    Liang, Faming

    2010-01-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...

  3. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependences between factors), which differ in each economic sector, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the factors underlying average labour productivity in agriculture, forestry and fishing. The analysis takes into account data on the economically active population and the gross value added in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average labour productivity across the factors affecting it is carried out by means of the substitution method.
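
The substitution method mentioned above decomposes a change in average productivity W = GVA / L by swapping in the new value of one factor at a time. A hedged sketch with purely illustrative numbers (substituting labour first, then gross value added):

```python
def productivity_decomposition(gva0, l0, gva1, l1):
    """Change in average labour productivity W = GVA / L, split by substitution:
    first replace labour by its new level, then replace GVA."""
    total = gva1 / l1 - gva0 / l0
    labour_effect = gva0 / l1 - gva0 / l0  # effect of the change in labour
    gva_effect = gva1 / l1 - gva0 / l1     # effect of the change in GVA
    return total, gva_effect, labour_effect
```

By construction the two factor effects sum exactly to the total change.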

  4. Time-average dynamic speckle interferometry

    Science.gov (United States)

    Vladimirov, A. P.

    2014-05-01

    For studying microscopic processes occurring at the structural level in solids and in thin biological objects, the method of dynamic speckle interferometry has been successfully applied. However, the method has disadvantages. The purpose of this report is to acquaint colleagues with a time-averaging method in dynamic speckle interferometry of microscopic processes that eliminates these shortcomings. The main idea of the method is to choose an averaging time that exceeds the characteristic correlation (relaxation) time of the most rapid process. The theory of the method is given for a thin phase object and for a reflecting object. Results are presented from an experiment on the high-cycle fatigue of steel and from an experiment estimating the biological activity of a monolayer of cells cultivated on a transparent substrate. It is shown that the method allows real-time visualization of the accumulation of fatigue damage and reliable estimation of the activity of cells with and without viruses.

  5. Average Annual Rainfall over the Globe

    Science.gov (United States)

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  6. Endogenous average cost based access pricing

    OpenAIRE

    Fjell, Kenneth; Foros, Øystein; Pal, Debashis

    2006-01-01

    We consider an industry where a downstream competitor requires access to an upstream facility controlled by a vertically integrated and regulated incumbent. The literature on access pricing assumes the access price to be exogenously fixed ex-ante. We analyze an endogenous average cost based access pricing rule, where both firms realize the interdependence among their quantities and the regulated access price. Endogenous access pricing neutralizes the artificial cost advantag...
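
The fixed-point flavour of "endogenous" average-cost pricing can be sketched: the access price equals average cost, but total quantity (and hence average cost) depends on the price itself. A toy model (the linear demand function and all numbers below are assumptions for illustration, not the paper's model):

```python
def endogenous_access_price(fixed_cost, marginal_cost, demand, iters=200):
    """Iterate a = AC(Q(a)) = (F + c*Q)/Q until the access price is
    consistent with the total quantity it induces (a fixed point)."""
    a = marginal_cost + 1.0  # start above marginal cost
    for _ in range(iters):
        q = demand(a)
        a = (fixed_cost + marginal_cost * q) / q
    return a

# Example: demand(a) = 100 - 10a, F = 90, c = 1: the iteration converges to a
# price at which revenue a*Q exactly covers total cost F + c*Q.
```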

  7. The Ghirlanda-Guerra identities without averaging

    CERN Document Server

    Chatterjee, Sourav

    2009-01-01

    The Ghirlanda-Guerra identities are one of the most mysterious features of spin glasses. We prove the GG identities in a large class of models that includes the Edwards-Anderson model, the random field Ising model, and the Sherrington-Kirkpatrick model in the presence of a random external field. Previously, the GG identities were rigorously proved only `on average' over a range of temperatures or under small perturbations.

  9. Average Light Intensity Inside a Photobioreactor

    Directory of Open Access Journals (Sweden)

    Herby Jean

    2011-01-01

    Full Text Available For energy production, microalgae are one of the few alternatives with high potential. Like plants, algae require energy acquired from light sources to grow. This project uses calculus to determine the light intensity inside a photobioreactor filled with algae. Under preset conditions and estimated parameter values, we apply the Lambert-Beer law to formulate an equation for how much light escapes the photobioreactor and to determine the average light intensity inside the reactor.
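
The calculation the abstract describes reduces to integrating the Lambert-Beer decay I(z) = I0·e^(−kz) across the reactor depth L. A sketch (the symbols I0, k, L are generic stand-ins, not the paper's values):

```python
import math

def average_intensity(i0, k, depth):
    """Depth-averaged light intensity under Lambert-Beer attenuation:
    (1/L) * integral of I0*exp(-k z) over [0, L] = I0*(1 - exp(-k L))/(k L)."""
    return i0 * (1.0 - math.exp(-k * depth)) / (k * depth)

def escaping_intensity(i0, k, depth):
    """Light intensity that escapes through the far wall: I0*exp(-k L)."""
    return i0 * math.exp(-k * depth)
```

In the transparent limit (k → 0) the average tends to I0, as expected.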

  10. Geomagnetic effects on the average surface temperature

    Science.gov (United States)

    Ballatore, P.

    Several previous results have shown that solar activity can be related to cloudiness and surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and the solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database) that represent the averaged surface temperature with a spatial resolution of 0.5°×0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.

  11. On Backus average for generally anisotropic layers

    CERN Document Server

    Bos, Len; Slawinski, Michael A; Stanoev, Theodore

    2016-01-01

    In this paper, following the Backus (1962) approach, we examine expressions for elasticity parameters of a homogeneous generally anisotropic medium that is long-wave-equivalent to a stack of thin generally anisotropic layers. These expressions reduce to the results of Backus (1962) for the case of isotropic and transversely isotropic layers. In the more than half a century since the publication of Backus (1962), there have been numerous publications applying and extending that formulation. However, neither George Backus nor the authors of the present paper are aware of further examinations of the mathematical underpinnings of the original formulation; hence, this paper. We prove that---within the long-wave approximation---if the thin layers obey stability conditions then so does the equivalent medium. We examine---within the Backus-average context---the approximation of the average of a product as the product of averages, and express it as a proposition in terms of an upper bound. In the presented examination we use the e...

  12. A simple algorithm for averaging spike trains.

    Science.gov (United States)

    Julienne, Hannah; Houghton, Conor

    2013-02-25

    Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested for a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
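
The three steps (map spike trains to functions, average the functions, greedily map the average back to spikes) can be sketched as follows; the Gaussian smoothing kernel and the peak-picking greedy criterion are illustrative choices, not necessarily those of the paper:

```python
import math

def smooth(spikes, ts, sigma=0.01):
    """Map a spike train to a function: a sum of Gaussian bumps at the spikes."""
    return [sum(math.exp(-(t - s) ** 2 / (2 * sigma ** 2)) for s in spikes)
            for t in ts]

def central_spike_train(trains, n_spikes, t_max, sigma=0.01, grid=200):
    """Average the smoothed trains, then greedily place spikes where the
    residual averaged function is largest, subtracting each placed bump."""
    ts = [i * t_max / grid for i in range(grid + 1)]
    smoothed = [smooth(tr, ts, sigma) for tr in trains]
    residual = [sum(vals) / len(trains) for vals in zip(*smoothed)]
    centre = []
    for _ in range(n_spikes):
        i = max(range(len(ts)), key=lambda j: residual[j])
        centre.append(ts[i])
        bump = smooth([ts[i]], ts, sigma)
        residual = [r - b for r, b in zip(residual, bump)]
    return sorted(centre)
```

Two trials with spikes near 0.1/0.12 and a shared spike at 0.5 yield a central train with spikes near 0.11 and 0.5.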

  13. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
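
The contrast between the measures can be made concrete. Period life expectancy summarizes a single year's death rates, while CAL sums, over ages, the survival each living cohort actually experienced. A simplified sketch (single-year age groups, constant hazard within intervals; the rate surface in the example is hypothetical):

```python
import math

def period_e0(mx):
    """Period life expectancy from age-specific death rates mx[0..],
    using l_{x+1} = l_x * exp(-m_x) and trapezoidal person-years."""
    l, e0 = 1.0, 0.0
    for m in mx:
        l_next = l * math.exp(-m)
        e0 += (l + l_next) / 2.0
        l = l_next
    return e0

def cal(m, t, max_age):
    """Cross-sectional average length of life at time t: the sum over ages a
    of the survival to age a of the cohort currently aged a, using the rates
    m[(year, age)] that the cohort actually lived through."""
    total = 0.0
    for a in range(1, max_age + 1):
        hazard = sum(m[(t - a + k, k)] for k in range(a))
        total += math.exp(-hazard)
    return total
```

Under mortality that is constant in time the two measures nearly coincide; they diverge when death rates change.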

  14. Disk-averaged synthetic spectra of Mars

    Science.gov (United States)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and the European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  15. Spatial averaging infiltration model for layered soil

    Institute of Scientific and Technical Information of China (English)

    HU HePing; YANG ZhiYong; TIAN FuQiang

    2009-01-01

    To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.
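
The overestimation caused by assuming horizontal homogeneity can be illustrated with a stripped-down, single-layer Green-Ampt point model (all parameter values below are arbitrary): because actual infiltration is the minimum of rainfall intensity and infiltration capacity, averaging the conductivity first ignores the points that are already rainfall-limited.

```python
def point_infiltration(k_sat, psi, dtheta, f_cum, rain):
    """Point-scale infiltration: min(rainfall, Green-Ampt capacity), with
    capacity K * (1 + psi*dtheta / F) for cumulative infiltration F."""
    capacity = k_sat * (1.0 + psi * dtheta / max(f_cum, 1e-6))
    return min(rain, capacity)

def spatially_averaged_infiltration(k_values, psi, dtheta, f_cum, rain):
    """Average the point-scale rates over the horizontal K distribution."""
    rates = [point_infiltration(k, psi, dtheta, f_cum, rain) for k in k_values]
    return sum(rates) / len(rates)
```

With a heterogeneous K field of mean 1.0, the spatially averaged rate is strictly below the rate computed from the single "homogeneous" mean K, consistent with the overestimation noted in the abstract.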

  17. Disk-averaged synthetic spectra of Mars

    CERN Document Server

    Tinetti, G; Fong, W; Meadows, V S; Snively, H; Velusamy, T; Crisp, David; Fong, William; Meadows, Victoria S.; Snively, Heather; Tinetti, Giovanna; Velusamy, Thangasamy

    2004-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk-averaged synthetic spectra, light-cur...

  19. Bayesian Model Averaging and Weighted Average Least Squares : Equivariance, Stability, and Numerical Issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    This article is concerned with the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals which implement, respectively, the exact Bayesian Model Averaging (BMA) estimator and the Weighted Average Least Squa

  20. A sixth order averaged vector field method

    OpenAIRE

    Li, Haochen; Wang, Yushun; Qin, Mengzhao

    2014-01-01

    In this paper, based on the theory of rooted trees and B-series, we propose the concrete formulas of the substitution law for the trees of order =5. With the help of the new substitution law, we derive a B-series integrator extending the averaged vector field (AVF) method to high order. The new integrator turns out to be of order six and exactly preserves energy for Hamiltonian systems. Numerical experiments are presented to demonstrate the accuracy and the energy-preserving property of the s...
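
The base (second-order) AVF method that the paper extends replaces f(y) by its average along the chord from y_n to y_{n+1}. A sketch for the harmonic oscillator, with the line integral evaluated by 3-point Gauss-Legendre quadrature and the implicit equation solved by fixed-point iteration; for this quadratic Hamiltonian the quadrature is exact, so energy is conserved up to the iteration tolerance. The sixth-order B-series extension itself is not reproduced here.

```python
import math

def avf_step(f, y, h, iters=50):
    """One step of the averaged vector field method:
    y1 = y0 + h * integral_0^1 f((1-xi) y0 + xi y1) dxi,
    with the integral done by 3-point Gauss-Legendre quadrature."""
    nodes = [0.5 - math.sqrt(15) / 10, 0.5, 0.5 + math.sqrt(15) / 10]
    weights = [5 / 18, 8 / 18, 5 / 18]
    y1 = list(y)
    for _ in range(iters):  # fixed-point iteration for the implicit equation
        avg = [0.0] * len(y)
        for w, xi in zip(weights, nodes):
            fy = f([(1 - xi) * a + xi * b for a, b in zip(y, y1)])
            avg = [s + w * v for s, v in zip(avg, fy)]
        y1 = [a + h * v for a, v in zip(y, avg)]
    return y1

# Harmonic oscillator H = (q^2 + p^2)/2, so f(q, p) = (p, -q)
def oscillator(y):
    return [y[1], -y[0]]
```

Running many steps and checking H(q, p) directly verifies the energy-preserving property.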

  1. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  2. Sparsity averaging for radio-interferometric imaging

    CERN Document Server

    Carrillo, Rafael E; Wiaux, Yves

    2014-01-01

    We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.

  3. Fluctuations of wavefunctions about their classical average

    Energy Technology Data Exchange (ETDEWEB)

    Benet, L [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Flores, J [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Hernandez-Saldana, H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Izrailev, F M [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Leyvraz, F [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Seligman, T H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico)

    2003-02-07

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  4. The average free volume model for liquids

    CERN Document Server

    Yu, Yang

    2014-01-01

    In this work, the molar volume thermal expansion coefficient of 59 room temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with attractive forces, is proposed to explain the correlation. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.

  5. Fluctuations of wavefunctions about their classical average

    CERN Document Server

    Bénet, L; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  6. Grassmann Averages for Scalable Robust PCA

    OpenAIRE

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium sized datasets. To address this, we introduce the Grassmann Average (GA), whic...

  7. Detrending moving average algorithm for multifractals

    Science.gov (United States)

    Gu, Gao-Feng; Zhou, Wei-Xing

    2010-07-01

    The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces; it contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, as a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. The backward MFDMA algorithm is also found to outperform multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
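    A minimal sketch of the backward (θ=0) variant for one-dimensional series, assuming the usual DMA recipe (cumulative profile, moving-average detrending, q-th order averaging of segment fluctuations); the function name and scale choices are illustrative:

```python
import numpy as np

def mfdma_backward(x, scales, q):
    """q-th order fluctuation function of the backward (theta=0) MFDMA.

    The profile is detrended by a moving average taken over the n most
    recent points, and squared residuals are averaged with exponent q.
    """
    y = np.cumsum(x - np.mean(x))          # profile of the series
    F = []
    for n in scales:
        # backward moving average: window covers the n preceding points
        ma = np.convolve(y, np.ones(n) / n, mode="valid")
        resid = y[n - 1:] - ma             # detrended residuals
        # split residuals into non-overlapping segments of length n
        m = len(resid) // n
        seg = resid[: m * n].reshape(m, n)
        F2 = np.mean(seg ** 2, axis=1)     # per-segment variance
        if q == 0:
            F.append(np.exp(0.5 * np.mean(np.log(F2))))
        else:
            F.append(np.mean(F2 ** (q / 2)) ** (1.0 / q))
    return np.array(F)

# uncorrelated noise should give a generalized Hurst exponent near 0.5
rng = np.random.default_rng(0)
x = rng.standard_normal(2 ** 14)
scales = np.array([16, 32, 64, 128, 256])
F = mfdma_backward(x, scales, q=2)
h2 = np.polyfit(np.log(scales), np.log(F), 1)[0]   # slope of log F vs log n
```

For a multifractal series, repeating the fit over a range of q values yields the generalized Hurst exponents h(q), from which τ(q) and f(α) follow by the standard Legendre transform.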

  8. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  9. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  10. MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert

    2003-05-01

    A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high-current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
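    The mode-matrix logic described above can be pictured as a lookup table from (machine mode, beam mode) pairs to power limits, with any pair outside the programmed matrix inhibiting the beam. The mode names and limits below are invented for illustration and are not the FEL's actual table:

```python
# Sketch of a machine-protection mode matrix: each allowed
# (machine mode, beam mode) pair maps to a maximum average power (W).
# Pairs not in the table are unsafe and yield 0.0 (beam inhibited).
POWER_LIMIT_W = {
    ("straight_ahead", "tune_up"): 0.0,          # no beam
    ("straight_ahead", "low_power"): 100.0,
    ("full_recirculation", "low_power"): 1e3,
    ("full_recirculation", "full_power"): 2e6,   # 2 MW ceiling
}

def allowed_power(machine_mode, beam_mode):
    """Return the power limit for a mode pair; unknown pairs inhibit beam."""
    return POWER_LIMIT_W.get((machine_mode, beam_mode), 0.0)
```

In hardware this table lives in gate arrays rather than software, but the fail-safe default (unknown combination means zero power) is the same design choice.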

  11. Intensity contrast of the average supergranule

    CERN Document Server

    Langfellner, J; Gizon, L

    2016-01-01

    While the velocity fluctuations of supergranulation dominate the spectrum of solar convection at the solar surface, very little is known about the fluctuations in other physical quantities like temperature or density at supergranulation scale. Using SDO/HMI observations, we characterize the intensity contrast of solar supergranulation at the solar surface. We identify the positions of ${\\sim}10^4$ outflow and inflow regions at supergranulation scales, from which we construct average flow maps and co-aligned intensity and magnetic field maps. In the average outflow center, the maximum intensity contrast is $(7.8\\pm0.6)\\times10^{-4}$ (there is no corresponding feature in the line-of-sight magnetic field). This corresponds to a temperature perturbation of about $1.1\\pm0.1$ K, in agreement with previous studies. We discover an east-west anisotropy, with a slightly deeper intensity minimum east of the outflow center. The evolution is asymmetric in time: the intensity excess is larger 8 hours before the reference t...

  12. Local average height distribution of fluctuating interfaces

    Science.gov (United States)

    Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.

    2017-01-01

    Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1 +1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1 +1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2 +1 dimensions.

  13. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
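    The time-average/frequency-distribution equivalence is easy to see on a fixed sample path; a minimal numerical illustration for a discrete-time path (the path and the function f are invented for the example):

```python
import numpy as np

# A fixed sample path: a cyclic deterministic process visiting states 0,1,2,2
path = np.tile([0, 1, 2, 2], 1000)        # one realization, T = 4000
f = lambda s: s ** 2                       # a measurable function of the state

# Long-run time average of f along the path
time_avg = np.mean(f(path))

# Long-run frequency distribution of the path, then expectation of f under it
states, counts = np.unique(path, return_counts=True)
freq = counts / counts.sum()
freq_avg = np.sum(f(states) * freq)
```

For this bounded path the two quantities coincide exactly; the paper's conditions characterize when the same equality holds for general (possibly unbounded) processes.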

  14. Effect on long-term average spectrum of pop singers' vocal warm-up with vocal function exercises.

    Science.gov (United States)

    Guzman, Marco; Angulo, Mabel; Muñoz, Daniel; Mayerhoff, Ross

    2013-04-01

    This case-control study aimed to investigate whether there is any change in the spectral slope declination immediately after vocal function exercises (VFE) vs. traditional vocal warm-up exercises in normal singers. Thirty-eight pop singers with perceptually normal voices were divided into two groups: an experimental group (n = 20) and a control group (n = 18). A single session of VFE for the experimental group and traditional singing warm-up exercises for the control group was applied. Voice was recorded before and after the exercises. The recorded tasks were reading a phonetically balanced text and singing a song. Long-term average spectrum (LTAS) analysis included the alpha ratio, the L1-L0 ratio, and the singing power ratio (SPR). Acoustic parameters of voice samples pre- and post-training were compared, as were the VFE and control groups. Significant changes after treatment included the alpha ratio and SPR in the speaking voice, and SPR in the singing voice, for the VFE group. The traditional vocal warm-up of the control group also showed pre-post changes. Significant differences between the VFE and control groups for the alpha ratio and SPR were found in the speaking voice samples. This study demonstrates that VFE have an immediate effect on the spectrum of the voice, specifically a decrease in the spectral slope declination. The results provide support for the advantageous effect of VFE as a vocal warm-up on voice quality.
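    The alpha ratio used above can be sketched as a band-energy level difference on the long-term average spectrum. The 50 Hz-1 kHz vs. 1-5 kHz band edges follow common LTAS practice and are an assumption here, as are the synthetic test signals:

```python
import numpy as np

def alpha_ratio(signal, fs):
    """Alpha-ratio sketch: level difference (dB) between the energy in the
    1-5 kHz band and the 50 Hz-1 kHz band of the spectrum. A more negative
    value corresponds to a steeper spectral slope."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    low = spec[(freqs >= 50) & (freqs < 1000)].sum()
    high = spec[(freqs >= 1000) & (freqs < 5000)].sum()
    return 10.0 * np.log10(high / low)

# a steeper spectral slope (weaker highs) gives a more negative alpha ratio
fs = 16000
t = np.arange(fs) / fs
steep = np.sin(2 * np.pi * 200 * t) + 0.05 * np.sin(2 * np.pi * 2000 * t)
flat = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
```

A decrease in spectral slope declination after VFE would then show up as an increase in the alpha ratio of the recording.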

  15. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
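    A weighted harmonic average is dominated by its smallest (closest) terms, which is what makes it a natural closeness measure for networks; a minimal illustration of that property (not the paper's full GEN recursion):

```python
import numpy as np

def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / v). Unlike the arithmetic
    mean, it is dominated by the smallest values, so one short, strongly
    weighted connection outweighs many long, weak ones."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights.sum() / np.sum(weights / values)

# a strongly weighted short connection (distance 1, weight 10) dominates
# a weakly weighted long one (distance 100, weight 1)
close = weighted_harmonic_mean([1.0, 100.0], [10.0, 1.0])
```

Here `close` stays near 1 despite the distance-100 term; an arithmetic mean of the same data would be pulled far away, which is why the harmonic form better captures "felt" closeness.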

  16. Averaged Null Energy Condition from Causality

    CERN Document Server

    Hartman, Thomas; Tajdini, Amirhossein

    2016-01-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, $\\int du T_{uu}$, must be positive. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to $n$-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form $\\int du X_{uuu\\cdots u} \\geq 0$. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment ...

  17. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient and less memory-consuming feature extraction method for gait-based recognition.
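    A minimal sketch of the AGDI construction as described (accumulate absolute silhouette differences between adjacent frames, then average); the array shapes and toy sequence are assumptions for illustration:

```python
import numpy as np

def agdi(silhouettes):
    """Average gait differential image: mean absolute difference between
    adjacent binary silhouette frames (a sketch of the method above)."""
    frames = np.asarray(silhouettes, dtype=float)   # (T, H, W) frame stack
    diffs = np.abs(np.diff(frames, axis=0))          # adjacent-frame changes
    return diffs.mean(axis=0)                        # average over frame pairs

# toy sequence: a 1-pixel-wide vertical bar sweeping across a 4x4 frame
seq = np.zeros((4, 4, 4))
for t in range(4):
    seq[t, :, t] = 1.0
out = agdi(seq)
```

Static body parts (unchanged between frames) contribute zero to the AGDI, while moving parts accumulate, so the image emphasizes the dynamics of the walk; 2DPCA features would then be extracted from `out`.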

  18. Geographic Gossip: Efficient Averaging for Sensor Networks

    CERN Document Server

    Dimakis, Alexandros G; Wainwright, Martin J

    2007-01-01

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log ...
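    For contrast with the geographic scheme, the baseline pairwise gossip on a ring (the slow-mixing case the paper improves on) can be simulated in a few lines. Each averaging step preserves the global sum exactly, so all values contract toward the true network average:

```python
import numpy as np

# Baseline pairwise gossip on a ring of n nodes: at each step a random
# node averages its value with a random ring neighbor.
rng = np.random.default_rng(1)
n = 20
x = rng.uniform(0.0, 10.0, n)      # initial node values
true_avg = x.mean()

for _ in range(10000):
    i = int(rng.integers(n))
    j = (i + int(rng.choice([-1, 1]))) % n   # a random ring neighbor of i
    x[i] = x[j] = 0.5 * (x[i] + x[j])        # pairwise averaging step
```

On the ring this needs on the order of n^2 pairwise exchanges to converge; the geographic scheme above cuts that cost by routing values to far-away nodes instead of only to neighbors.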

  19. Bivariate phase-rectified signal averaging

    CERN Document Server

    Schumann, Aicko Y; Bauer, Axel; Schmidt, Georg

    2008-01-01

    Phase-Rectified Signal Averaging (PRSA) was shown to be a powerful tool for the study of quasi-periodic oscillations and nonlinear effects in non-stationary signals. Here we present a bivariate PRSA technique for the study of the inter-relationship between two simultaneous data recordings. Its performance is compared with traditional cross-correlation analysis, which, however, does not work well for non-stationary data and cannot distinguish the coupling directions in complex nonlinear situations. We show that bivariate PRSA allows the analysis of events in one signal at times where the other signal is in a certain phase or state; it is stable in the presence of noise and insensitive to non-stationarities.

  20. Data mining for average images in a digital hand atlas

    Science.gov (United States)

    Zhang, Aifeng; Cao, Fei; Pietka, Ewa; Liu, Brent J.; Huang, H. K.

    2004-04-01

    Bone age assessment is a procedure performed on pediatric patients to quickly evaluate parameters of maturation and growth from a left hand and wrist radiograph. Pietka and Cao have developed a computer-aided diagnosis (CAD) method of bone age assessment based on a digital hand atlas. The aim of this paper is to extend their work by automatically selecting the best representative image from a group of normal children, based on specific bony features that reflect skeletal maturity. The group can be of any ethnic origin and gender, from one to 18 years old, in the digital atlas. This best representative image is defined as the "average" image of the group, which can be added to Pietka and Cao's method to facilitate the bone age assessment process.

  1. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulsewidth ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  2. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
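    The constrained-simulation route described at the end (average force at discrete locations along the coordinate, then integrate) can be sketched on a toy 1-D harmonic "system"; the spring constant, noise level, and sample counts below are arbitrary choices for illustration:

```python
import numpy as np

# Toy thermodynamic-integration sketch: estimate the mean force at discrete
# values of a coordinate xi, then integrate -<force> to recover the free
# energy profile. The "system" is a 1-D harmonic well with k = 2, so the
# exact profile (relative to xi = -1) is F(xi) = xi**2 - 1.
rng = np.random.default_rng(2)
k = 2.0
xi = np.linspace(-1.0, 1.0, 21)

# "measured" instantaneous forces: true force -k*xi plus thermal noise,
# averaged over many samples at each constrained value of xi
mean_force = np.array(
    [np.mean(-k * x + rng.normal(0.0, 0.5, 20000)) for x in xi])

# dF/dxi = -<force>; integrate with the trapezoid rule along xi
free_energy = -np.concatenate(([0.0], np.cumsum(
    0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi))))
free_energy -= free_energy.min()   # shift so the minimum is zero
```

The recovered profile is quadratic with its minimum at xi = 0 and endpoints near 1, matching the exact result up to the sampling noise in the mean force.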

  3. Photocatalytic performances of BiFeO{sub 3} particles with the average size in nanometer, submicrometer, and micrometer

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Chunxue; Wen, Fusheng, E-mail: wenfsh03@126.com; Xiang, Jianyong; Hou, Hang; Lv, Weiming; Lv, Yifei; Hu, Wentao; Liu, Zhongyuan

    2014-02-01

    Highlights: • Three different synthesis routes were taken to successfully prepare BiFeO{sub 3} particles with different morphologies and average sizes of 50 nm, 500 nm, and 15 μm. • For photodegradation of dyes under visible irradiation in the presence of BiFeO{sub 3}, the photocatalytic efficiency increases quickly with decreasing size. • The enhanced photocatalytic efficiency of BiFeO{sub 3} nanoparticles may be attributed to more surface active catalytic sites and shorter distances carriers have to migrate to the surface reaction sites. - Abstract: Three different synthesis routes were taken to successfully prepare BiFeO{sub 3} particles with different morphologies and average sizes of 50 nm, 500 nm, and 15 μm, respectively. The crystal structure was recognized to be a distorted rhombohedral one with the space group R3c. With the decrease in particle size, an obvious decrease in peak intensity and a redshift in peak position were observed for the Raman active bands. A narrow band gap was determined from the UV–vis absorption spectra, indicating the semiconducting nature of BiFeO{sub 3}. For photodegradation of dyes under visible irradiation in the presence of BiFeO{sub 3}, the photocatalytic efficiency increased quickly with decreasing size, which may be attributed to more surface active catalytic sites and shorter distances carriers had to migrate to the surface reaction sites.

  4. Binary Mixtures of SH- and CH3-Terminated Self-Assembled Monolayers to Control the Average Spacing Between Aligned Gold Nanoparticles

    Directory of Open Access Journals (Sweden)

    Pavelka Laura

    2009-01-01

    This paper presents a method to control the average spacing between organometallic chemical vapor deposition (OMCVD) grown gold nanoparticles (Au NPs) in a line. Focused-ion-beam-patterned CH3-terminated self-assembled monolayers are refilled systematically with different mixtures of SH- and CH3-terminated silanes. The average spacing between OMCVD Au NPs is demonstrated to decrease systematically with an increasing v/v% ratio of the thiols in the binary silane mixtures with SH- and CH3-terminated groups.

  5. Obesity Decreases Perioperative Tissue Oxygenation

    Science.gov (United States)

    Kabon, Barbara; Nagele, Angelika; Reddy, Dayakar; Eagon, Chris; Fleshman, James W.; Sessler, Daniel I.; Kurz, Andrea

    2005-01-01

    Background: Obesity is an important risk factor for surgical site infections. The incidence of surgical wound infections is directly related to tissue perfusion and oxygenation. Fat tissue mass expands without a concomitant increase in blood flow per cell, which might result in a relative hypoperfusion with decreased tissue oxygenation. Consequently, we tested the hypothesis that perioperative tissue oxygen tension is reduced in obese surgical patients. Furthermore, we compared the effect of supplemental oxygen administration on tissue oxygenation in obese and non-obese patients. Methods: Forty-six patients undergoing major abdominal surgery were assigned to one of two groups according to their body mass index (BMI): BMI < 30 kg/m2 (non-obese) and BMI ≥ 30 kg/m2 (obese). Intraoperative oxygen administration was adjusted to arterial oxygen tensions of ≈150 mmHg and ≈300 mmHg in random order. Anesthesia technique and perioperative fluid management were standardized. Subcutaneous tissue oxygen tension was measured with a polarographic electrode positioned within a subcutaneous tonometer in the lateral upper arm during surgery, in the recovery room, and on the first postoperative day. Postoperative tissue oxygen was also measured adjacent to the wound. Data were compared with unpaired two-tailed t-tests and Wilcoxon rank-sum tests; P < 0.05 was considered statistically significant. Results: Intraoperative subcutaneous tissue oxygen tension was significantly less in the obese patients at baseline (36 vs. 57 mmHg, P = 0.002) and with supplemental oxygen administration (47 vs. 76 mmHg, P = 0.014). Immediate postoperative tissue oxygen tension was also significantly less in subcutaneous tissue of the upper arm (43 vs. 54 mmHg, P = 0.011) as well as near the incision (42 vs. 62 mmHg, P = 0.012) in obese patients. In contrast, tissue oxygen tension was comparable in each group on the first postoperative morning. Conclusion: Wound and tissue hypoxia were common in obese

  6. Average dimension of fixed point spaces with applications

    CERN Document Server

    Guralnick, Robert M

    2010-01-01

    Let $G$ be a finite group, $F$ a field, and $V$ a finite dimensional $FG$-module such that $G$ has no trivial composition factor on $V$. Then the arithmetic average dimension of the fixed point spaces of elements of $G$ on $V$ is at most $(1/p) \\dim V$ where $p$ is the smallest prime divisor of the order of $G$. This answers and generalizes a 1966 conjecture of Neumann which also appeared in a paper of Neumann and Vaughan-Lee and also as a problem in The Kourovka Notebook posted by Vaughan-Lee. Our result also generalizes a recent theorem of Isaacs, Keller, Meierfrankenfeld, and Moret\\'o. Various applications are given. For example, another conjecture of Neumann and Vaughan-Lee is proven and some results of Segal and Shalev are improved and/or generalized concerning BFC groups.
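    The bound is easy to check numerically on a small case, e.g. G = S_3 acting on the sum-zero subspace of R^3 (so dim V = 2 with no trivial composition factor, and p = 2 is the smallest prime dividing |G|), where the average fixed-space dimension works out to 5/6 ≤ (1/2)·2 = 1:

```python
import numpy as np
from itertools import permutations

def fixed_space_dim(perm):
    """Dimension of the fixed space of a permutation acting on the
    sum-zero subspace of R^n. The full fixed space of a permutation
    matrix has dimension equal to its number of cycles; restricting to
    the sum-zero subspace removes the always-fixed all-ones direction."""
    n = len(perm)
    P = np.zeros((n, n))
    P[np.arange(n), perm] = 1.0
    full_fixed = n - np.linalg.matrix_rank(P - np.eye(n))
    return full_fixed - 1

# average over all of S_3: identity fixes 2 dims, the three transpositions
# fix 1 each, the two 3-cycles fix 0 each
dims = [fixed_space_dim(p) for p in permutations(range(3))]
avg = sum(dims) / len(dims)
```

The average 5/6 is strictly below the theorem's bound of 1, as expected.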

  7. Interpreting Sky-Averaged 21-cm Measurements

    Science.gov (United States)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions.I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. 
Finally, (3) the independent constraints most likely to aid in the interpretation

  8. Signal-averaged electrocardiogram in chronic Chagas' heart disease

    Directory of Open Access Journals (Sweden)

    Aguinaldo Pereira de Moraes

    The aim of the study was to register the prevalence of late potentials (LP) in patients with chronic Chagas' heart disease (CCD) and their relationship with sustained ventricular tachycardia (SVT). 192 patients (96 males, mean age 42.9 years) with CCD were studied through a signal-averaged ECG using time-domain analysis. According to the presence or absence of bundle branch block (BBB) and SVT, four groups of patients were created: Group I (n = 72): without SVT (VT-) and without BBB (BBB-); Group II (n = 27): with SVT (VT+) and BBB-; Group III (n = 63): VT- and with BBB (BBB+); and Group IV (n = 30): VT+ and BBB+. LP was admitted, with a 40 Hz filter, in the groups without BBB using the standard criteria of the method. In the groups with BBB, the root-mean-square amplitude of the last 40 ms (RMS ≤ 14 µV) was considered an indicator of LP. RESULTS: In groups I and II, LP was present in 21 (78%) of the patients with SVT and in 22 (31%) of the patients without SVT (p < 0.001), with sensitivity (S) 78%, specificity (SP) 70%, and accuracy (Ac) 72%. In groups III and IV, LP was present in 30 (48%) of the patients without and 20 (67%) of the patients with SVT (p = 0.066), with S = 66%, SP = 52%, and Ac = 57%. In the follow-up, there were 4 deaths unrelated to arrhythmic events; none of these patients had LP. Eight (29.6%) of the patients from group II and 4 (13%) from group IV presented recurrence of SVT, and 91.6% of these patients had LP. CONCLUSIONS: LP occurred in 77.7% of the patients with SVT and without BBB. In the groups with BBB, LP was associated with SVT in 66.6% of the cases. Recurrence of SVT was present in 21% of the cases, of which 91.6% had LP.

  9. Decreasing Free Radicals Level on High Risk Person After Vitamin C and E Supplement Treatment

    Science.gov (United States)

    Sitorus, M. S.; Anggraini, D. R.; Hidayat

    2017-03-01

    It has become a global issue that the increase in global warming is mainly caused by high air pollution levels, to which motor vehicle emissions are a major contributor. As a rapidly developing country, Indonesia is vulnerable to health problems related to air pollution. Excessive free radicals produced by air pollution can initiate oxidative stress, which is known to trigger many health problems. Vitamins C and E are non-enzymatic antioxidants that can neutralize free radicals. This study investigates the decrease in free radical levels after administering vitamins C and E, using a pre- and post-test experimental design. The samples were 24 Pertamina gasoline station operators, with an average age of 26 years, divided into 4 groups: group 1 (control), group 2 given vitamin C at 500 mg/day, group 3 given vitamin E at 250 IU/day, and group 4 given a combination of vitamins C and E. The treatment was given for 30 days. Free radical levels were obtained from malondialdehyde (MDA) levels measured by spectrophotometer. Before treatment the average MDA level was 5.540 µM; after treatment it significantly decreased to 3.992 µM (t-test), supporting vitamins C and E as a safety aid and supplement.
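    The pre/post comparison described above amounts to a paired t-test on each group's MDA readings; the readings below are invented for illustration, since the abstract reports only the group means:

```python
import numpy as np

# Hypothetical pre/post MDA readings for one treatment group (made up for
# illustration; the study's raw per-subject data are not in the abstract).
pre = np.array([5.1, 5.8, 5.3, 6.0, 5.6, 5.4])
post = np.array([3.7, 4.2, 3.9, 4.4, 4.0, 3.8])

# Paired t statistic: mean of the per-subject differences divided by the
# standard error of those differences
d = pre - post
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```

With n = 6 pairs (5 degrees of freedom), a t statistic above the critical value 2.571 indicates a significant decrease at the 0.05 level.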

  10. New Nordic diet versus average Danish diet

    DEFF Research Database (Denmark)

    Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco

    2016-01-01

    A previous study has shown effects of the New Nordic Diet (NND) to stimulate weight loss and lower systolic and diastolic blood pressure in obese Danish women and men in a randomized, controlled dietary intervention study. This work demonstrates long-term metabolic effects of the NND as compared...... metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis, formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects higher levels of vaccenic acid...... diets high in fish, vegetables, fruit, and wholegrain facilitated weight loss and improved insulin sensitivity by increasing ketosis and gluconeogenesis in the fasting state....

  11. Occurrence and average behavior of pulsating aurora

    Science.gov (United States)

    Partamies, N.; Whiter, D.; Kadokura, A.; Kauristie, K.; Nesse Tyssøy, H.; Massetti, S.; Stauning, P.; Raita, T.

    2017-05-01

    Motivated by recent event studies and modeling efforts on pulsating aurora, which conclude that the precipitation energy during these events is high enough to cause significant chemical changes in the mesosphere, this study looks for the bulk behavior of auroral pulsations. Based on about 400 pulsating aurora events, we outline the typical duration, geomagnetic conditions, and change in the peak emission height for the events. We show that the auroral peak emission height for both green and blue emission decreases by about 8 km at the start of the pulsating aurora interval. This brings the hardest 10% of the electrons down to about 90 km altitude. The median duration of pulsating aurora is about 1.4 h. This value is a conservative estimate since in many cases the end of event is limited by the end of auroral imaging for the night or the aurora drifting out of the camera field of view. The longest durations of auroral pulsations are observed during events which start within the substorm recovery phases. As a result, the geomagnetic indices are not able to describe pulsating aurora. Simultaneous Antarctic auroral images were found for 10 pulsating aurora events. In eight cases auroral pulsations were seen in the southern hemispheric data as well, suggesting an equatorial precipitation source and a frequent interhemispheric occurrence. The long lifetimes of pulsating aurora, their interhemispheric occurrence, and the relatively high-precipitation energies make this type of aurora an effective energy deposition process which is easy to identify from the ground-based image data.

  12. Impulsive synchronization schemes of stochastic complex networks with switching topology: average time approach.

    Science.gov (United States)

    Li, Chaojie; Yu, Wenwu; Huang, Tingwen

    2014-06-01

In this paper, a novel impulsive control law is proposed for the synchronization of stochastic discrete complex networks with time delays and switching topologies, where the average dwell time and the average impulsive interval are taken into account. The side effect of time delays is estimated by the Lyapunov-Razumikhin technique, which quantitatively gives an upper bound on the rate of increase of the Lyapunov function. By considering the compensation of the decreasing interval, a better impulsive control law is recast in terms of the average dwell time and the average impulsive interval. Detailed results from a numerical illustrative example are presented and discussed. Finally, some relevant conclusions are drawn.
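The "average impulsive interval" used above has a standard formal definition: an impulse sequence has average impulsive interval τa (with chatter bound N0) if the number of impulses N(t, s) on every interval (s, t) satisfies (t − s)/τa − N0 ≤ N(t, s) ≤ (t − s)/τa + N0. A small numerical sketch of that condition (the function names and the sampled grid are our own illustrative choices, not anything from the paper):

```python
def count_impulses(times, s, t):
    """Number of impulse instants strictly inside the interval (s, t)."""
    return sum(1 for tk in times if s < tk < t)

def has_average_impulsive_interval(times, tau_a, n0, horizon, step=0.5):
    """Check (t - s)/tau_a - n0 <= N(t, s) <= (t - s)/tau_a + n0
    for every pair of grid points s < t in [0, horizon]."""
    grid = [i * step for i in range(int(horizon / step) + 1)]
    for i, s in enumerate(grid):
        for t in grid[i + 1:]:
            n = count_impulses(times, s, t)
            if not ((t - s) / tau_a - n0 <= n <= (t - s) / tau_a + n0):
                return False
    return True

# Evenly spaced impulses satisfy tau_a = 1, N0 = 1 ...
uniform = [float(k) for k in range(1, 10)]
bursty = [0.5, 1.0, 1.5, 2.0, 2.5]   # ... but a dense burst violates them
print(has_average_impulsive_interval(uniform, tau_a=1.0, n0=1.0, horizon=10.0))  # True
print(has_average_impulsive_interval(bursty, tau_a=1.0, n0=1.0, horizon=3.0))    # False
```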

  13. Preparation of high viscosity average molecular mass poly-L-lactide

    Institute of Scientific and Technical Information of China (English)

    ZHOU Zhi-hua; RUAN Jian-ming; ZOU Jian-peng; ZHOU Zhong-cheng; SHEN Xiong-jun

    2006-01-01

Poly-L-lactide (PLLA) was synthesized by ring-opening polymerization from high-purity L-lactide with tin octoate as initiator, and characterized by means of infrared and 1H-nuclear magnetic resonance spectroscopy. The influences of initiator concentration, polymerization temperature and polymerization time on the viscosity average molecular mass of PLLA were investigated. The effects of different purification methods on the concentration of initiator and the viscosity average molecular mass were also studied. PLLA with a viscosity average molecular mass of about 50.5×10^4 was obtained when polymerization was conducted for 24 h at 140 ℃ with a monomer-to-initiator molar ratio of 12 000. After purification, the concentration of tin octoate decreases; however, the effect of different purification methods on the viscosity average molecular mass of PLLA differs, and the obtained PLLA is a typical amorphous polymeric material. The crystallinity of PLLA decreases with the increase of viscosity average molecular mass.

  14. Rapidity dependence of the average transverse momentum in hadronic collisions

    Science.gov (United States)

    Durães, F. O.; Giannini, A. V.; Gonçalves, V. P.; Navarra, F. S.

    2016-08-01

The energy and rapidity dependence of the average transverse momentum ⟨pT⟩ in p p and p A collisions at energies currently available at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC) are estimated using the color glass condensate (CGC) formalism. We update previous predictions for the pT spectra using the hybrid formalism of the CGC approach and two phenomenological models for the dipole-target scattering amplitude. We demonstrate that these models are able to describe the RHIC and LHC data for hadron production in p p , d Au , and p Pb collisions at pT ≤ 20 GeV. Moreover, we present our predictions for ⟨pT⟩ and demonstrate that the corresponding ratio decreases with rapidity, with a behavior similar to that predicted by hydrodynamical calculations.

  15. Hearing Office Average Processing Time Ranking Report, February 2016

    Data.gov (United States)

    Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...

  16. Yearly average performance of the principal solar collector types

    Energy Technology Data Exchange (ETDEWEB)

    Rabl, A.

    1981-01-01

The results of hour-by-hour simulations for 26 meteorological stations are used to derive universal correlations for the yearly total energy that can be delivered by the principal solar collector types: flat plate, evacuated tubes, CPC, single- and dual-axis tracking collectors, and central receiver. The correlations are first- and second-order polynomials in yearly average insolation, latitude, and threshold (= heat loss/optical efficiency). With these correlations, the yearly collectible energy can be found by multiplying the coordinates of a single graph by the collector parameters, which reproduces the results of hour-by-hour simulations with an accuracy (rms error) of 2% for flat plates and 2% to 4% for concentrators. This method can be applied to collectors that operate year-round in such a way that no collected energy is discarded, including photovoltaic systems, solar-augmented industrial process heat systems, and solar thermal power systems. The method is also recommended for rating collectors of different types or manufacturers by yearly average performance, evaluating the effects of collector degradation, the benefits of collector cleaning, and the gains from collector improvements (due to enhanced optical efficiency or decreased heat loss per absorber surface). For most of these applications, the method is accurate enough to replace a system simulation.
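A correlation of the kind described reduces yearly output to a one-line evaluation. The sketch below is a hypothetical instance only: the function name and the coefficient values are ours, not Rabl's fitted constants, and the latitude dependence is assumed folded into the coefficients.

```python
def yearly_collectible_energy(yearly_insolation, threshold, coeffs):
    """Evaluate a second-order polynomial correlation of the generic form
    deliverable energy = I_year * (c0 + c1*X + c2*X**2),
    where X = (heat loss)/(optical efficiency) is the threshold.
    Coefficients here are placeholders, not values from the report."""
    c0, c1, c2 = coeffs
    return yearly_insolation * (c0 + c1 * threshold + c2 * threshold ** 2)

# Hypothetical numbers: 2000 kWh/m^2/yr insolation, threshold X = 0.2
q = yearly_collectible_energy(2000.0, 0.2, (0.7, -1.0, 0.3))
print(f"{q:.1f}")   # 1024.0
```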

  17. Average density and porosity of high-strength lightweight concrete

    Directory of Open Access Journals (Sweden)

    A.S. Inozemtcev

    2014-11-01

The analysis results of high-strength lightweight concrete (HSLWC) structure are presented in this paper. X-ray tomography, optical microscopy and other methods are used to study average density and porosity. It has been revealed that HSLWC mixtures with density 1300…1500 kg/m3 have a homogeneous structure. The developed concrete has a uniform distribution of the hollow filler and a uniform layer of cement-mineral matrix. The highly saturated gas phase, which is divided by denser large particles of quartz sand and products of cement hydration in the contact area, allows forming a composite material with low average density, high porosity (up to 40%) and high strength (compressive strength of more than 40 MPa). Special modifiers increase adhesion, compact the structure in the contact area, decrease the water absorption of high-strength lightweight concrete (up to 1%) and ensure its high water resistance (water resistance coefficient of more than 0.95).

  18. ANTINOMY OF THE MODERN AVERAGE PROFESSIONAL EDUCATION

    Directory of Open Access Journals (Sweden)

    A. A. Listvin

    2017-01-01

of ways to address them and of options for a genuine upgrade of the SPE (secondary professional education) system that answers the requirements of the economy. The inefficiency of the concept of single-level SPE, and its lack of competitiveness against the background of the development of applied bachelor's degrees in higher education, is shown. It is proposed to differentiate basic-level programs for training skilled workers from advanced-level programs, built on the basic level, for training mid-level specialists (technicians, technologists), so as to form a single system of continuous professional training and ensure the effective functioning of regional systems of professional education. Such a system would help to eliminate disproportions in the triad "worker – technician – engineer" and would increase the quality of professional education. Furthermore, the need for polyprofessional education is indicated, for which integrated educational structures are required, differing in the degree of consolidation of multilevel educational institutions on the basis of network interaction, convergence and integration. According to the author, two types of SPE organizations should be developed in the regions: territorial multi-profile colleges with flexible variable programs, and organizations realizing educational programs of applied qualifications in specific industries (metallurgical, chemical, construction, etc.) according to the specifics of the economy of the territorial subjects. Practical significance. The results of the research can be useful to education management specialists, heads and pedagogical staff of SPE institutions, and also representatives of regional administrations and employers in organizing a multilevel network system of training skilled workers and mid-level specialists.

  19. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  20. Temporal Decrease in Upper Atmospheric Chlorine

    Science.gov (United States)

    Froidevaux, L.; Livesey, N. J.; Read, W. G.; Salawitch, R. J.; Waters, J. W.; Drouin, B.; MacKenzie, I. A.; Pumphrey, H. C.; Bernath, P.; Boone, C.; hide

    2006-01-01

We report a steady decrease in the upper stratospheric and lower mesospheric abundances of hydrogen chloride (HCl) from August 2004 through January 2006, as measured by the Microwave Limb Sounder (MLS) aboard the Aura satellite. For 60°S to 60°N zonal means, the average yearly change in the 0.7 to 0.1 hPa (approx. 50 to 65 km) region is -27 +/- 3 pptv/year, or -0.78 +/- 0.08 percent/year. This is consistent with surface abundance decrease rates (about 6 to 7 years earlier) in chlorine source gases. The MLS data confirm that international agreements to reduce global emissions of ozone-depleting industrial gases are leading to global decreases in the total gaseous chlorine burden. Tracking stratospheric HCl variations on a seasonal basis is now possible with MLS data. Inferred stratospheric total chlorine (ClTOT) has a value of 3.60 ppbv at the beginning of 2006, with a (2-sigma) accuracy estimate of 7%; the stratospheric chlorine loading has decreased by about 43 pptv in the 18-month period studied here. We discuss the MLS HCl measurements in the context of other satellite-based HCl data, as well as expectations from surface chlorine data. A mean age of air of approx. 5.5 years and an age spectrum width of 2 years or less provide a fairly good fit to the ensemble of measurements.
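The reported numbers can be cross-checked with a little arithmetic. Note the denominator below is the inferred total stratospheric chlorine (3.60 ppbv), while the -27 pptv/year rate is for HCl in the 0.7 to 0.1 hPa layer, so this is only a rough consistency check, not a reproduction of the paper's analysis:

```python
# Rough consistency check on the reported chlorine trend (values from the abstract).
rate_pptv_per_year = 27.0   # reported decrease, pptv/year
total_cl_ppbv = 3.60        # inferred stratospheric ClTOT at the start of 2006

percent_per_year = rate_pptv_per_year / (total_cl_ppbv * 1000) * 100
print(f"{percent_per_year:.2f} %/yr")  # 0.75 %/yr, near the reported 0.78 +/- 0.08

# 18-month cumulative decrease implied by the annual rate:
print(f"{rate_pptv_per_year * 1.5:.1f} pptv over 18 months")  # ~40.5 vs reported ~43
```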

  1. The monthly-averaged and yearly-averaged cosine effect factor of a heliostat field

    Energy Technology Data Exchange (ETDEWEB)

    Al-Rabghi, O.M.; Elsayed, M.M. (King Abdulaziz Univ., Jeddah (Saudi Arabia). Dept. of Thermal Engineering)

    1992-01-01

Calculations are carried out to determine the dependence of the monthly-averaged and the yearly-averaged daily cosine effect factor on the pertinent parameters. The results are plotted on charts for each month and for the full year. These results cover latitude angles between 0 and 45°N, for fields with radii up to 50 tower heights. In addition, the results are expressed in mathematical correlations to facilitate using them in computer applications. A procedure is outlined to use the present results to prepare a preliminary layout of the heliostat field, and to predict the rated MWth reflected by the heliostat field during a period of a month, several months, or a year. (author)

  2. Lagrangian averages, averaged Lagrangians, and the mean effects of fluctuations in fluid dynamics.

    Science.gov (United States)

    Holm, Darryl D.

    2002-06-01

    We begin by placing the generalized Lagrangian mean (GLM) equations for a compressible adiabatic fluid into the Euler-Poincare (EP) variational framework of fluid dynamics, for an averaged Lagrangian. This is the Lagrangian averaged Euler-Poincare (LAEP) theorem. Next, we derive a set of approximate small amplitude GLM equations (glm equations) at second order in the fluctuating displacement of a Lagrangian trajectory from its mean position. These equations express the linear and nonlinear back-reaction effects on the Eulerian mean fluid quantities by the fluctuating displacements of the Lagrangian trajectories in terms of their Eulerian second moments. The derivation of the glm equations uses the linearized relations between Eulerian and Lagrangian fluctuations, in the tradition of Lagrangian stability analysis for fluids. The glm derivation also uses the method of averaged Lagrangians, in the tradition of wave, mean flow interaction. Next, the new glm EP motion equations for incompressible ideal fluids are compared with the Euler-alpha turbulence closure equations. An alpha model is a GLM (or glm) fluid theory with a Taylor hypothesis closure. Such closures are based on the linearized fluctuation relations that determine the dynamics of the Lagrangian statistical quantities in the Euler-alpha equations. Thus, by using the LAEP theorem, we bridge between the GLM equations and the Euler-alpha closure equations, through the small-amplitude glm approximation in the EP variational framework. We conclude by highlighting a new application of the GLM, glm, and alpha-model results for Lagrangian averaged ideal magnetohydrodynamics. (c) 2002 American Institute of Physics.

  3. 40 CFR 1033.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1033.710... Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only as allowed by § 1033.740. (b) You may certify one or more engine...

  4. 7 CFR 51.577 - Average midrib length.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...

  5. 7 CFR 760.640 - National average market price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average... average quality loss factors that are reflected in the market by county or part of a county. (c)...

  6. Average gluon and quark jet multiplicities at higher orders

    Energy Technology Data Exchange (ETDEWEB)

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q^2 terms by the renormalization group, in excellent agreement with the present world average.

  7. Colorectal Cancer Screening in Average Risk Populations: Evidence Summary

    Directory of Open Access Journals (Sweden)

    Jill Tinmouth

    2016-01-01

Introduction. The objectives of this systematic review were to evaluate the evidence for different CRC screening tests and to determine the most appropriate ages of initiation and cessation for CRC screening and the most appropriate screening intervals for selected CRC screening tests in people at average risk for CRC. Methods. Electronic databases were searched for studies that addressed the research objectives. Meta-analyses were conducted with clinically homogeneous trials. A working group reviewed the evidence to develop conclusions. Results. Thirty RCTs and 29 observational studies were included. Flexible sigmoidoscopy (FS) prevented CRC and led to the largest reduction in CRC mortality, with a smaller but significant reduction in CRC mortality with the use of guaiac fecal occult blood tests (gFOBTs). There was insufficient or low-quality evidence to support the use of other screening tests, including colonoscopy, as well as changing the ages of initiation and cessation for CRC screening with gFOBTs in Ontario. Either annual or biennial screening using gFOBT reduces CRC-related mortality. Conclusion. The evidentiary base supports the use of FS or FOBT (either annual or biennial) to screen patients at average risk for CRC. This work will guide the development of the provincial CRC screening program.

  8. Kinetic energy equations for the average-passage equation system

    Science.gov (United States)

    Johnson, Richard W.; Adamczyk, John J.

    1989-01-01

    Important kinetic energy equations derived from the average-passage equation sets are documented, with a view to their interrelationships. These kinetic equations may be used for closing the average-passage equations. The turbulent kinetic energy transport equation used is formed by subtracting the mean kinetic energy equation from the averaged total instantaneous kinetic energy equation. The aperiodic kinetic energy equation, averaged steady kinetic energy equation, averaged unsteady kinetic energy equation, and periodic kinetic energy equation, are also treated.

  10. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, 2Eliassen Group, New London, CT, 3Outcomes Research Consultant, New York, NY, 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain) and the highest probability was 76.4% (on cold/heat for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner.
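The ridit-style probability described in the Methods (the chance that a random subject from one severity level has a more favorable item score than a random subject from another level) can be sketched as a simple pairwise comparison. The data below are hypothetical scores, not values from the study:

```python
def favorable_probability(group_a, group_b):
    """Probability that a random subject from group_a has a lower (more
    favorable) score than a random subject from group_b, ties counted half."""
    wins = ties = 0
    for a in group_a:
        for b in group_b:
            if a < b:
                wins += 1
            elif a == b:
                ties += 1
    return (wins + 0.5 * ties) / (len(group_a) * len(group_b))

mild = [1, 2, 2, 3]      # hypothetical item scores, mild average pain
severe = [3, 4, 4, 5]    # hypothetical item scores, severe average pain
print(favorable_probability(mild, severe))   # 0.96875
```

With ties counted half, this quantity equals the Mann-Whitney U statistic divided by the number of pairs, which is why the ridit step follows naturally from the Wilcoxon rank sum test.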

  11. Measurement properties of painDETECT by average pain severity

    Directory of Open Access Journals (Sweden)

    Cappelleri JC

    2014-11-01

Joseph C Cappelleri,1 E Jay Bienen,2 Vijaya Koduru,3 Alesia Sadosky4 1Pfizer, Groton, CT, 2Outcomes research consultant, New York, NY, 3Eliassen Group, New London, CT, USA; 4Pfizer, New York, NY, USA Background: Since the burden of neuropathic pain (NeP) increases with pain severity, it is important to characterize and quantify pain severity when identifying NeP patients. This study evaluated whether painDETECT, a screening questionnaire to identify patients with NeP, can distinguish pain severity. Materials and methods: Subjects (n=614, 55.4% male, 71.8% white, mean age 55.5 years) with confirmed NeP were identified during office visits to US community-based physicians. The Brief Pain Inventory – Short Form stratified subjects by mild (score 0–3, n=110), moderate (score 4–6, n=297), and severe (score 7–10, n=207) average pain. Scores on the nine-item painDETECT (seven pain-symptom items, one pain-course item, one pain-irradiation item) range from -1 to 38 (worst NeP); the seven-item painDETECT scores (pain symptoms only) range from 0 to 35. The ability of painDETECT to discriminate average pain-severity levels, based on the average pain item from the Brief Pain Inventory – Short Form (0–10 scale), was evaluated using analysis of variance or covariance models to obtain unadjusted and adjusted (age, sex, race, ethnicity, time since NeP diagnosis, number of comorbidities) mean painDETECT scores. Cumulative distribution functions of painDETECT scores by average pain severity were compared (Kolmogorov–Smirnov test). Cronbach's alpha assessed internal consistency reliability. Results: Unadjusted mean scores were 15.2 for mild, 19.8 for moderate, and 24.0 for severe pain for the nine items, and 14.3, 18.6, and 22.7, respectively, for the seven items. Adjusted nine-item mean scores for mild, moderate, and severe pain were 17.3, 21.3, and 25.3, respectively; adjusted seven-item mean scores were 16.4, 20.1, and 24.0, respectively.

  12. BIHOURLY DIAGRAMS OF FORBUSH DECREASES

    Science.gov (United States)

    Bihourly diagrams were made of Forbush decreases of cosmic ray intensity as observed at Uppsala from 31 Aug 56 to 31 Dec 59, at Kiruna from Nov 56 to 31 Dec 59, and at Murchison Bay from 26 Aug 57 to 30 Apr 59. (Author)

  13. The average crossing number of equilateral random polygons

    Science.gov (United States)

    Diao, Y.; Dobay, A.; Kusner, R. B.; Millett, K.; Stasiak, A.

    2003-11-01

In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16) n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN(K)⟩ for each knot type K can be described by a function of the form ⟨ACN(K)⟩ = a(n - n0) ln(n - n0) + b(n - n0) + c, where a, b and c are constants depending on K and n0 is the minimal number of segments required to form K. The ⟨ACN(K)⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN(K)⟩ than less complex knots. Moreover, the ⟨ACN(K)⟩ profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length ne(K) at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type K′) does not show a tendency to increase or decrease ⟨ACN(K′)⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.
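The leading term of the mean ACN formula above is easy to evaluate numerically. A small sketch, dropping the O(n) correction (which shifts the absolute values but not the growth rate):

```python
import math

def mean_acn_leading(n):
    """Leading-order mean average crossing number of an equilateral
    random walk of n segments: (3/16) * n * ln n, O(n) term dropped."""
    return 3.0 / 16.0 * n * math.log(n)

# The (3/16) n ln n term grows slightly faster than linearly:
for n in (50, 100, 500):
    print(n, round(mean_acn_leading(n), 1))   # 36.7, 86.3, 582.6
```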

  14. Average years of life lost due to breast and cervical cancer and the association with the marginalization index in Mexico in 2000 and 2010

    Directory of Open Access Journals (Sweden)

    Claudio Alberto Dávila Cervantes

    2014-05-01

The objective of this study was to calculate average years of life lost due to breast and cervical cancer in Mexico in 2000 and 2010. Data on mortality in women aged between 20 and 84 years was obtained from the National Institute for Statistics and Geography. Age-specific mortality rates and average years of life lost, which is an estimate of the number of years that a person would have lived if he or she had not died prematurely, were estimated for both diseases. Data was disaggregated into five-year age groups and socioeconomic status based on the 2010 marginalization index obtained from the National Population Council. A decrease in average years of life lost due to cervical cancer (37.4%) and an increase in average years of life lost due to breast cancer (8.9%) was observed during the period studied. Average years of life lost due to cervical cancer was greater among women living in areas with a high marginalization index, while average years of life lost due to breast cancer was greater in women from areas with a low marginalization index.

  15. Average years of life lost due to breast and cervical cancer and the association with the marginalization index in Mexico in 2000 and 2010.

    Science.gov (United States)

    Cervantes, Claudio Alberto Dávila; Botero, Marcela Agudelo

    2014-05-01

The objective of this study was to calculate average years of life lost due to breast and cervical cancer in Mexico in 2000 and 2010. Data on mortality in women aged between 20 and 84 years was obtained from the National Institute for Statistics and Geography. Age-specific mortality rates and average years of life lost, which is an estimate of the number of years that a person would have lived if he or she had not died prematurely, were estimated for both diseases. Data was disaggregated into five-year age groups and socioeconomic status based on the 2010 marginalization index obtained from the National Population Council. A decrease in average years of life lost due to cervical cancer (37.4%) and an increase in average years of life lost due to breast cancer (8.9%) was observed during the period studied. Average years of life lost due to cervical cancer was greater among women living in areas with a high marginalization index, while average years of life lost due to breast cancer was greater in women from areas with a low marginalization index.
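The "average years of life lost" measure described above can be sketched as a deaths-weighted average. The numbers below are made up for illustration, and a real YLL calculation uses residual life expectancy at each age from a life table rather than a single reference age:

```python
def average_years_of_life_lost(deaths_by_age, reference_age):
    """Simplified average years of life lost: each death at age x
    contributes (reference_age - x) years lost; average over all deaths."""
    total_deaths = sum(deaths_by_age.values())
    total_years = sum(d * max(reference_age - age, 0)
                      for age, d in deaths_by_age.items())
    return total_years / total_deaths

# Hypothetical deaths keyed by the midpoint of each five-year age group:
deaths = {42: 10, 52: 25, 62: 30, 72: 20}
print(round(average_years_of_life_lost(deaths, reference_age=84), 1))   # 24.9
```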

  16. Development of new group members' in-group and out-group stereotypes: changes in perceived group variability and ethnocentrism.

    Science.gov (United States)

    Ryan, C S; Bogart, L M

    1997-10-01

    Changes in new members' in-group and out-group stereotypes were examined, distinguishing among three stereotype components: stereotypicality, dispersion, and ethnocentrism. Pledges in 4 sororities judged their in-group and out-groups 4 times during their 8-month induction. Overall, out-groups were judged more stereotypically than in-groups at every wave. Although out-groups were initially perceived as more dispersed than in-groups, decreased out-group dispersion resulted in a shift toward out-group homogeneity. Ethnocentrism was present at every wave but decreased because of decreased in-group positivity. The authors discuss implications of these results for existing explanations of stereotype development. It is suggested that other aspects of group socialization (R.L. Moreland & J.M. Levine, 1982) are needed to explain fully the development of intergroup perceptions for new group members.

  17. Decreasing incidence rates of bacteremia

    DEFF Research Database (Denmark)

    Nielsen, Stig Lønberg; Pedersen, C; Jensen, T G

    2014-01-01

    BACKGROUND: Numerous studies have shown that the incidence rate of bacteremia has been increasing over time. However, few studies have distinguished between community-acquired, healthcare-associated and nosocomial bacteremia. METHODS: We conducted a population-based study among adults with first-time bacteremia in Funen County, Denmark, during 2000-2008 (N = 7786). We reported mean and annual incidence rates (per 100,000 person-years), overall and by place of acquisition. Trends were estimated using a Poisson regression model. RESULTS: The overall incidence rate was 215.7, including 99.0 for community-acquired, 50.0 for healthcare-associated and 66.7 for nosocomial bacteremia. During 2000-2008, the overall incidence rate decreased significantly by 23.3% from 254.1 to 198.8 (3.3% annually), and community-acquired bacteremia decreased by 25.6% from 119.0 to 93.8 (3.7% annually).
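The annual percentage change implied by the endpoint rates can be checked without the full Poisson model; a rough endpoint-only approximation (not the paper's regression, which fits all years):

```python
import math

# Constant-rate approximation: if a rate falls from r0 to r1 over k years,
# the implied constant annual change is (r1/r0)**(1/k) - 1.
# Using the overall rates quoted above: 254.1 in 2000, 198.8 in 2008.
r0, r1, years = 254.1, 198.8, 8
annual_change = (r1 / r0) ** (1 / years) - 1
# About -3.0% per year, close to the reported 3.3% annual decrease
# estimated by Poisson regression over the full series.
```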

  18. Bundling of Reimbursement for Inferior Vena Cava Filter Placement Resulted in Significantly Decreased Utilization between 2012 and 2014.

    Science.gov (United States)

    Glocker, Roan J; TerBush, Matthew J; Hill, Elaine L; Guido, Joseph J; Doyle, Adam; Ellis, Jennifer L; Raman, Kathleen; Morrow, Gary R; Stoner, Michael C

    2017-01-01

    On January 1, 2012, reimbursement for inferior vena cava filters (IVCFs) became bundled by the Centers for Medicare and Medicaid Services. As a result, IVCF placement (CPT code 37191) now yields 4.71 relative value units (RVUs), a decrease from 15.6 RVUs for placement and associated procedures (CPT codes 37620, 36010, 75825-26, 75940-26). Our hypothesis was that IVCF utilization would decrease in response to this change, as utilization of other procedures had once they became bundled. Including data from 2010 to 2011 (before bundling) and 2012 to 2014 (after bundling), we utilized 5% inpatient, outpatient, and carrier files of Medicare limited data sets and analyzed IVCF utilization before and after bundling across specialty types, controlling for total diagnoses of deep vein thrombosis (DVT) and pulmonary embolism (PE) (ICD-9 codes 453.xx and 415.xx, respectively) and placement location. In 2010 and 2011, the rates/10,000 DVT/PE diagnoses were 918 and 1,052, respectively (average 985). In 2012, 2013, and 2014, rates were 987, 877, and 605, respectively (average 823). Comparing each year individually, there is a significant difference in utilization. Comparing averages in the 2010-2011 and 2012-2014 groups, there is also a significant decrease in utilization after bundling. Overall, utilization decreased significantly. More data from subsequent years will be needed to show whether this decrease in utilization persists. Copyright © 2016 Elsevier Inc. All rights reserved.
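The period averages quoted in the abstract are simple unweighted means of the yearly rates; a quick arithmetic check:

```python
# IVCF placements per 10,000 DVT/PE diagnoses, from the abstract above.
pre_bundling = [918, 1052]        # 2010, 2011
post_bundling = [987, 877, 605]   # 2012, 2013, 2014

avg_pre = sum(pre_bundling) / len(pre_bundling)     # -> 985
avg_post = sum(post_bundling) / len(post_bundling)  # -> 823
```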

  19. Life satisfaction decreases during adolescence.

    Science.gov (United States)

    Goldbeck, Lutz; Schmitz, Tim G; Besier, Tanja; Herschbach, Peter; Henrich, Gerhard

    2007-08-01

    Adolescence is a developmental phase associated with significant somatic and psychosocial changes. So far there are few studies on developmental aspects of life satisfaction. This cross-sectional study examines the effects of age and gender on adolescents' life satisfaction. 1,274 German adolescents (aged 11-16 years) participated in a school-based survey study. They completed the adolescent version of the Questions on Life Satisfaction (FLZ(M) - Fragen zur Lebenszufriedenheit), a multidimensional instrument measuring the subjective importance of and satisfaction with eight domains of general and eight domains of health-related life satisfaction. Effects of gender and age were analysed using ANOVAs. Girls reported significantly lower general (F = 5.0; p = .025) and health-related life satisfaction (F = 25.3) than boys. Across life domains, there was a significant age-related decrease in both general (F = 14.8) and health-related (F = 8.0) life satisfaction. Satisfaction with friends remained on a high level, whereas satisfaction with family relations decreased. Only satisfaction with partnership/sexuality increased slightly; however, this effect cannot compensate for the general loss of satisfaction. Decreasing life satisfaction has to be considered a developmental phenomenon. Associations with the increasing prevalence of depression and suicidal ideation during adolescence are discussed. Life satisfaction should be considered a relevant aspect of adolescents' well-being and functioning.

  20. Average capacity for optical wireless communication systems over exponentiated Weibull distribution non-Kolmogorov turbulent channels.

    Science.gov (United States)

    Cheng, Mingjian; Zhang, Yixin; Gao, Jie; Wang, Fei; Zhao, Fengsheng

    2014-06-20

    We model the average channel capacity of optical wireless communication systems for cases of weak to strong turbulence channels, using the exponentiated Weibull distribution model. The joint effects of beam wander and spread, pointing errors, atmospheric attenuation, and the spectral index of non-Kolmogorov turbulence on system performance are included. Our results show that the average capacity decreases steeply as the propagation length L increases from 0 to 200 m, and decreases slowly or tends to a stable value once the propagation length L is greater than 200 m. In the weak turbulence region, increasing the detection aperture improves the average channel capacity, and atmospheric visibility is an important factor affecting the average channel capacity. In the strong turbulence region, increasing the radius of the detection aperture cannot reduce the effects of atmospheric turbulence on the average channel capacity, and the effect of atmospheric visibility on the channel information capacity can be ignored. The effect of the spectral power exponent on the average channel capacity is higher in the strong turbulence region than in the weak turbulence region. Irrespective of the details determining the turbulent channel, we can say that pointing errors have a significant effect on the average channel capacity of optical wireless communication systems in turbulence channels.

  1. Average annual runoff in the United States, 1951-80

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is a line coverage of average annual runoff in the conterminous United States, 1951-1980. Subject terms: surface runoff; average runoff; surface waters; United States.

  2. Seasonal Sea Surface Temperature Averages, 1985-2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of four images showing seasonal sea surface temperature (SST) averages for the entire earth. Data for the years 1985-2001 are averaged to...

  3. Average American 15 Pounds Heavier Than 20 Years Ago

    Science.gov (United States)

    ... page: https://medlineplus.gov/news/fullstory_160233.html Average American 15 Pounds Heavier Than 20 Years Ago ... since the late 1980s and early 1990s, the average American has put on 15 or more additional ...

  4. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    Full Text Available BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors’ experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results on indicators of visual attention pace such as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis of lower abilities of children with intelligence below average in terms of concentration, work pace, efficiency and perception.

  5. Trait valence and the better-than-average effect.

    Science.gov (United States)

    Gold, Ron S; Brown, Mark G

    2011-12-01

    People tend to regard themselves as having superior personality traits compared to their average peer. To test whether this "better-than-average effect" varies with trait valence, participants (N = 154 students) rated both themselves and the average student on traits constituting either positive or negative poles of five trait dimensions. In each case, the better-than-average effect was found, but trait valence had no effect. Results were discussed in terms of Kahneman and Tversky's prospect theory.

  6. Investigating Averaging Effect by Using Three Dimension Spectrum

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The averaging effect of the eddy current displacement sensor has been investigated in this paper, and the frequency spectrum property of the averaging effect is also deduced. It indicates that the averaging effect has no influence on measuring a rotor's rotating error, but it has a visible influence on measuring the rotor's profile error. According to the frequency spectrum of the averaging effect, the actual sampling data can be adjusted reasonably, thus improving measuring precision.

  7. Average of Distribution and Remarks on Box-Splines

    Institute of Scientific and Technical Information of China (English)

    LI Yue-sheng

    2001-01-01

    A class of generalized moving average operators is introduced, and the integral representations of an average function are provided. It has been shown that the average of Dirac δ-distribution is just the well known box-spline. Some remarks on box-splines, such as their smoothness and the corresponding partition of unity, are made. The factorization of average operators is derived. Then, the subdivision algorithm for efficient computing of box-splines and their linear combinations follows.
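In the univariate case, the statement that the average of the Dirac δ-distribution is a box-spline can be made concrete; a standard sketch (notation assumed here, not quoted from the paper):

```latex
% Moving average operator as convolution with the indicator of [0,1]:
(Af)(x) = \int_{0}^{1} f(x - t)\, dt = \bigl(f * \chi_{[0,1]}\bigr)(x).
% Applied to the Dirac distribution:
(A\delta)(x) = \chi_{[0,1]}(x) = B_{1}(x),
% and iterating the average n times yields the cardinal B-spline of order n,
A^{n}\delta = \underbrace{\chi_{[0,1]} * \cdots * \chi_{[0,1]}}_{n\ \text{factors}} = B_{n},
% the univariate box-spline. The factorization A^n is an instance of the
% factorization of average operators mentioned in the abstract.
```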

  8. Fault Scaling Relationships Depend on the Average Geological Slip Rate

    Science.gov (United States)

    Anderson, J. G.; Biasi, G. P.; Wesnousky, S. G.

    2016-12-01

    This study addresses whether knowing the geological slip rates on a fault in addition to the rupture length improves estimates of magnitude (Mw) of continental earthquakes that rupture the surface, based on a database of 80 events that includes 57 strike-slip, 12 reverse, and 11 normal faulting events. Three functional forms are tested to relate rupture length L to magnitude Mw: linear, bilinear, and a shape with constant static stress drop. The slip rate dependence is tested as a perturbation to the estimates of magnitude from rupture length. When the data are subdivided by fault mechanism, magnitude predictions from rupture length are improved for strike-slip faults when slip rate is included, but not for reverse or normal faults. This conclusion is robust, independent of the functional form used to relate L to Mw. Our preferred model is the constant stress drop model, because teleseismic observations of earthquakes favor that result. Because a dependence on slip rate is only significant for strike-slip events, a combined relationship for all rupture mechanisms is not appropriate. The observed effect of slip rate for strike-slip faults implies that the static stress drop, on average, tends to decrease as the fault slip rate increases.

  9. Voice examination in patients with decreased high pitch after thyroidectomy.

    Science.gov (United States)

    Kim, Sung Won; Kim, Seung Tae; Park, Hyo Sang; Lee, Hyoung Shin; Hong, Jong Chul; Kwon, Soon Bok; Lee, Kang Dae

    2012-06-01

    Decreased high pitch after thyroidectomy due to injury of the external branch of the superior laryngeal nerve (EBSLN) may be critical, especially for professional voice users. The authors studied the usefulness of the VRP (voice range profile) and MDVP (multi-dimensional voice program) to evaluate patients with decreased high pitch after thyroidectomy. A study was performed with 58 females and 9 males who underwent voice assessment between January 2008 and June 2009. The patients were classified into a group of females with no decreased high pitch (group A, n = 52), a group of females with decreased high pitch (group B, n = 6) and a group of males with no decreased high pitch (group C, n = 9). VRP and laryngeal electromyogram (EMG) were performed in group B. The preoperative frequency ranges of groups A and B were not statistically different. In group B, VRP showed that the frequency range was 443.11 ± 83.97, 246.67 ± 49.41, and 181.37 ± 80.13 Hz, a significant decrease after surgery compared with the preoperative result. In the other groups, VRP revealed no significant difference between the preoperative and postoperative results. VRP is a noninvasive, quick, and practical test that demonstrates a decreased frequency range visually and helps to evaluate EBSLN injury in patients after thyroidectomy.

  10. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust...
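The averaging idea can be illustrated for a single leading 1D subspace. The sketch below is a hypothetical simplification (sign-aligned averaging of unit vectors, iterated to a fixed point), not the authors' exact Robust Grassmann Average:

```python
# Illustrative sketch of a Grassmann-average-style iteration for the leading
# 1D subspace. On the Grassmannian of lines, u and -u are the same point, so
# each unit vector is flipped into the half-space of the current estimate q
# before ordinary averaging. This is a simplified stand-in for the paper's GA.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def grassmann_average(points, iters=20):
    units = [normalize(p) for p in points]
    q = normalize(points[0])  # initial guess
    for _ in range(iters):
        acc = [0.0] * len(q)
        for u in units:
            # Flip u so it points into the same half-space as q, then accumulate.
            s = 1.0 if sum(a * b for a, b in zip(u, q)) >= 0 else -1.0
            for i, x in enumerate(u):
                acc[i] += s * x
        q = normalize(acc)
    return q

# Lines clustered around the x-axis, with mixed orientations:
data = [[1.0, 0.1], [-1.0, 0.05], [0.9, -0.1], [-1.1, 0.0]]
q = grassmann_average(data)  # close to [±1, 0]
```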

  11. Averaging and Globalising Quotients of Informetric and Scientometric Data.

    Science.gov (United States)

    Egghe, Leo; Rousseau, Ronald

    1996-01-01

    Discussion of impact factors for "Journal Citation Reports" subject categories focuses on the difference between an average of quotients and a global average, obtained as a quotient of averages. Applications in the context of informetrics and scientometrics are given, including journal prices and subject discipline influence scores.…
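The distinction drawn here is easy to see numerically; a toy example with two hypothetical journals (invented counts):

```python
# Journal 1: 10 citations over 10 articles; Journal 2: 300 citations over 100 articles.
citations = [10, 300]
articles = [10, 100]

# Average of quotients: the mean of the per-journal impact factors.
avg_of_quotients = sum(c / a for c, a in zip(citations, articles)) / len(citations)

# Global average: the quotient of the totals (a quotient of averages).
global_average = sum(citations) / sum(articles)

# The two summaries disagree (2.0 vs ~2.82) because the global average
# implicitly weights each journal by its article count.
```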

  12. Perturbation resilience and superiorization methodology of averaged mappings

    Science.gov (United States)

    He, Hongjin; Xu, Hong-Kun

    2017-04-01

    We first prove the bounded perturbation resilience for the successive fixed point algorithm of averaged mappings, which extends the string-averaging projection and block-iterative projection methods. We then apply the superiorization methodology to a constrained convex minimization problem where the constraint set is the intersection of fixed point sets of a finite family of averaged mappings.
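For context, the standard definition underlying these results (textbook notation, assumed rather than quoted from the paper): a mapping is averaged when it blends the identity with a nonexpansive map.

```latex
% T : H \to H is \alpha-averaged (0 < \alpha < 1) if
T = (1 - \alpha) I + \alpha N,
% where N is nonexpansive, i.e. \|Nx - Ny\| \le \|x - y\|.
% The successive fixed point algorithm is the iteration
x_{k+1} = T x_{k},
% which, by the Krasnosel'skii--Mann theorem, converges weakly to a point of
% \mathrm{Fix}(T) whenever \mathrm{Fix}(T) \neq \emptyset. Bounded perturbation
% resilience asserts that convergence survives perturbed steps
x_{k+1} = T\,(x_{k} + \beta_{k} v_{k}), \qquad \sum_k \beta_k < \infty,\ \|v_k\| \le M.
```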

  13. Spectral averaging techniques for Jacobi matrices with matrix entries

    CERN Document Server

    Sadel, Christian

    2009-01-01

    A Jacobi matrix with matrix entries is a self-adjoint block tridiagonal matrix with invertible blocks on the off-diagonals. Averaging over boundary conditions leads to explicit formulas for the averaged spectral measure which can potentially be useful for spectral analysis. Furthermore another variant of spectral averaging over coupling constants for these operators is presented.

  14. 76 FR 6161 - Annual Determination of Average Cost of Incarceration

    Science.gov (United States)

    2011-02-03

    DEPARTMENT OF JUSTICE, Bureau of Prisons. Annual Determination of Average Cost of Incarceration. AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an...

  15. 20 CFR 226.62 - Computing average monthly compensation.

    Science.gov (United States)

    2010-04-01

    § 226.62 Computing average monthly compensation. The employee's average monthly compensation is computed by first determining the employee's highest 60 months of railroad compensation...

  16. 40 CFR 1042.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    § 1042.710 Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. (b) You may certify one or more engine families to...

  17. 27 CFR 19.37 - Average effective tax rate.

    Science.gov (United States)

    2010-04-01

    § 19.37 Average effective tax rate. (a) The proprietor may establish an average effective tax rate for any...

  18. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  19. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...

  20. 7 CFR 1410.44 - Average adjusted gross income.

    Science.gov (United States)

    2010-01-01

    § 1410.44 Average adjusted gross income. (a) Benefits under this part will not be available to persons or legal entities whose average adjusted gross income exceeds $1,000,000 or as further specified in part...

  1. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...

  2. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    § 80.759 Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth.... (d) Average the values by adding them and dividing by the number of readings along each radial....

  3. 34 CFR 668.196 - Average rates appeals.

    Science.gov (United States)

    2010-07-01

    § 668.196 Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under... calculated as an average rate under § 668.183(d)(2). (2) You may appeal a notice of a loss of...

  4. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    § 404.221 Computing your average monthly wage. (a) General. Under the...

  5. 34 CFR 668.215 - Average rates appeals.

    Science.gov (United States)

    2010-07-01

    § 668.215 Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under § 668... as an average rate under § 668.202(d)(2). (2) You may appeal a notice of a loss of eligibility...

  6. 7 CFR 51.2548 - Average moisture content determination.

    Science.gov (United States)

    2010-01-01

    § 51.2548 Average moisture content determination. (a) Determining average moisture content of the lot is not a requirement...

  7. Decreasing Fires in Mediterranean Europe.

    Directory of Open Access Journals (Sweden)

    Marco Turco

    Full Text Available Forest fires are a serious environmental hazard in southern Europe. Quantitative assessment of recent trends in fire statistics is important for assessing the possible shifts induced by climate and other environmental/socioeconomic changes in this area. Here we analyse recent fire trends in Portugal, Spain, southern France, Italy and Greece, building on a homogenized fire database integrating official fire statistics provided by several national/EU agencies. During the period 1985-2011, the total annual burned area (BA) displayed a general decreasing trend, with the exception of Portugal, where a heterogeneous signal was found. Considering all countries globally, we found that BA decreased by about 3020 km2 over the 27-year-long study period (i.e. about -66% of the mean historical value). These results are consistent with those obtained on longer time scales when data were available, also yielding predominantly negative trends in Spain and France (1974-2011) and a mixed trend in Portugal (1980-2011). Similar overall results were found for the annual number of fires (NF), which globally decreased by about 12600 in the study period (about -59%), except for Spain where, excluding the provinces along the Mediterranean coast, an upward trend was found for the longer period. We argue that the negative trends can be explained, at least in part, by an increased effort in fire management and prevention after the big fires of the 1980s, while positive trends may be related to recent socioeconomic transformations leading to more hazardous landscape configurations, as well as to the observed warming of recent decades. We stress the importance of fire data homogenization prior to analysis, in order to alleviate spurious effects associated with non-stationarities in the data due to temporal variations in fire detection efforts.

  8. Metformin Attenuates 131I-Induced Decrease in Peripheral Blood Cells in Patients with Differentiated Thyroid Cancer.

    Science.gov (United States)

    Bikas, Athanasios; Van Nostrand, Douglas; Jensen, Kirk; Desale, Sameer; Mete, Mihriye; Patel, Aneeta; Wartofsky, Leonard; Vasko, Vasyl; Burman, Kenneth D

    2016-02-01

    131I treatment (tx) of differentiated thyroid cancer (DTC) is associated with hematopoietic toxicity. It was hypothesized that metformin could have radioprotective effects on bone-marrow function. The objective was to determine whether metformin prevents 131I-induced changes in complete blood counts (CBC) in patients with DTC. A retrospective analysis was performed of CBC values in DTC patients who were (40 patients: metformin group) or were not taking metformin (39 patients: control group) at the time of administration of 131I. Repeated measures analysis of variance was used for the analysis of the differences in the averages of CBC values documented at baseline and at 1, 6, and 12 months post 131I tx. The groups were comparable in terms of age, sex, stage of DTC, 131I dose administered, and baseline CBC values. In the control group, the decrease in white blood cells (WBC) was 35.8%, whereas in the metformin group the decrease in WBC was 17.1%; the decrease in platelets in the control group was 15.5%. Metformin attenuated the 131I-induced decrease in CBC parameters, and its radioprotective properties were most prominent for WBC. Patients who were taking metformin during 131I tx also experienced a faster recovery in their blood counts when compared to the control group. Further study is warranted to examine whether the radioprotective properties of metformin observed in the current study for 131I tx can also apply to other forms of therapeutic chemo- and radiotherapy.

  9. Hawaiian craniofacial morphometrics: average Mokapuan skull, artificial cranial deformation, and the "rocker" mandible.

    Science.gov (United States)

    Schendel, S A; Walker, G; Kamisugi, A

    1980-05-01

    Craniofacial morphology and cultural cranial deformation were analyzed by the computer morphometric system in 79 adult Hawaiian skulls from Mokapu, Oahu. The average Hawaiian male was large, but similar in shape to the female. Both were larger than the present Caucasian, showed a greater dental protrusion, and possessed a larger ANB angle, flatter cranial base, and larger facial heights. Correlations in Hawaiian craniofacial structure were found between an increasing mandibular plane angle and 1) shorter posterior facial height, 2) larger gonial angle, 3) larger cranial base angle, and 4) smaller SNA and SNB angles. Of the 79 skulls studied, 8.9% were found to have severe head molding or intentional cranial deformation. Significant statistical differences between the molded group and the nonmolded group are, in decreasing significance: 1) larger upper face height, 2) smaller glabella to occiput distance, and 3) increased lower face height with deformation. The morphometric differences were readily seen by graphic comparison between groups. It is postulated that external forces to the neurocranium result in redirection of the growth vectors in the neurocranial functional matrix, including the cranial base, and secondarily, to the orofacial functional matrix. There is a possibility that the cranial deformation is a retention of the normal birth molding changes. The Polynesian "rocker jaw" was found in 81% to 95% of this populace. This mandibular form occurs only with attainment of adult stature and craniofacial form. This data agrees with the hypothesis that mandibular form is modified by the physical forces present and their direction in the orofacial functional matrix.

  10. Influence of packing interactions on the average conformation of B-DNA in crystalline structures.

    Science.gov (United States)

    Tereshko, V; Subirana, J A

    1999-04-01

    The molecular interactions in crystals of oligonucleotides in the B form have been analysed and in particular the end-to-end interactions. Phosphate-phosphate interactions in dodecamers are also reviewed. A strong influence of packing constraints on the average conformation of the double helix is found. There is a strong relationship between the space group, the end-to-end interactions and the average conformation of DNA. Dodecamers must have a B-form average conformation with 10 +/- 0.1 base pairs per turn in order to crystallize in the P212121 and related space groups usually found. Decamers show a wider range of conformational variation, with 9.7-10.6 base pairs per turn, depending on the terminal sequence and the space group. The influence of the space group in decamers is quite striking and remains unexplained. Only small variations are allowed in each case. Thus, crystal packing is strongly related to the average DNA conformation in the crystals and deviations from the average are rather limited. The constraints imposed by the crystal lattice explain why the average twist of the DNA in solution (10.6 base pairs per turn) is seldom found in oligonucleotides crystallized in the B form.

  11. Technologies for Decreasing Mining Losses

    Science.gov (United States)

    Valgma, Ingo; Väizene, Vivika; Kolats, Margit; Saarnak, Martin

    2013-12-01

    In the case of stratified deposits, such as the oil shale deposit in Estonia, mining losses depend on the mining technology. The current research focuses on extraction and separation possibilities for mineral resources. Selective mining, selective crushing and separation tests have been performed, showing possibilities for decreasing mining losses. Rock crushing and screening process simulations were used to optimize rock fractions. In addition, mine backfilling, fine separation, and optimized drilling and blasting have been analyzed. All tested methods show potential; their applicability depends on how the mineral is used, which in turn depends on the utilization technology. Questions such as the stability of the material flow and the influence of quality fluctuations on the final yield are raised.

  12. A significant decrease in diagnosis of primary progressive multiple sclerosis: A cohort study.

    Science.gov (United States)

    Westerlind, Helga; Stawiarz, Leszek; Fink, Katharina; Hillert, Jan; Manouchehrinia, Ali

    2016-07-01

    Several reports indicate changes to the prevalence, incidence and female-to-male ratio in multiple sclerosis. Diagnostic criteria, course definitions and clinical management of the disease have also undergone change during recent decades. To investigate temporal trends in the diagnosis of primary progressive multiple sclerosis (PPMS) in Sweden, we investigated, through the Swedish MS registry, the proportion of PPMS diagnoses in birth, diagnosis and age period cohorts using Poisson regression. A total of 16,915 patients were categorised into six birth-cohorts from 1946 to 1975 and seven date-of-diagnosis-cohorts from 1980 to 2014. In the uncorrected analysis, the proportion of PPMS diagnoses decreased from 19.2% to 2.2%, an average decrease of 23% per diagnosis-cohort. In the age-specific diagnosis period cohorts the same decreasing trend of PPMS diagnosis was observed in almost all groups. The diagnosis of PPMS has significantly decreased in Sweden, specifically after the introduction of disease-modifying treatments. Such a decrease can have severe impacts on future research on PPMS. Our data also suggest that the current trend to emphasise the presence or absence of inflammatory activity is already reflected in clinical practice. © The Author(s), 2016.

  13. Statins Decrease Oxidative Stress and ICD Therapies

    Directory of Open Access Journals (Sweden)

    Heather L. Bloom

    2010-01-01

    Full Text Available Recent studies demonstrate that statins decrease ventricular arrhythmias in internal cardioverter defibrillator (ICD) patients. The mechanism is unknown, but evidence links increased inflammatory and oxidative states with increased arrhythmias. We hypothesized that statin use decreases oxidation. Methods. 304 subjects with ICDs were surveyed for ventricular arrhythmia. Blood was analyzed for derivatives of reactive oxygen species (DROMs) and interleukin-6 (IL-6). Results. Subjects included 252 (83%) men; 58% were on statins, and 20% had ventricular arrhythmias. Average age was 63 years and ejection fraction (EF) 20%. ICD implant duration was 29 ± 27 months. Use of statins correlated with lower ICD events (r=0.12, P=.02). Subjects on statins had lower hsCRP (5.2 versus 6.3; P=.05) and DROM levels (373 versus 397; P=.03). Other factors, including IL-6 and EF, did not differ between statin and nonstatin use, nor did beta-blocker or antiarrhythmic use. Multivariate cross-correlation analysis demonstrated that DROMs, statins, IL-6 and EF were strongly associated with ICD events. Multivariate regression shows DROMs to be the dominant predictor. Conclusion. ICD event rate correlates with DROMs, a measure of lipid peroxides. Use of statins is associated with reduced DROMs and fewer ICD events, suggesting that statins exert their effect through reducing oxidation.

  14. Hyper-arousal decreases human visual thresholds.

    Directory of Open Access Journals (Sweden)

    Adam J Woods

    Full Text Available Arousal has long been known to influence behavior and serves as an underlying component of cognition and consciousness. However, the consequences of hyper-arousal for visual perception remain unclear. The present study evaluates the impact of hyper-arousal on two aspects of visual sensitivity: visual stereoacuity and contrast thresholds. Sixty-eight participants participated in two experiments. Thirty-four participants were randomly divided into two groups in each experiment: Arousal Stimulation or Sham Control. The Arousal Stimulation group underwent a 50-second cold pressor stimulation (immersing the foot in 0-2° C water), a technique known to increase arousal. In contrast, the Sham Control group immersed their foot in room temperature water. Stereoacuity thresholds (Experiment 1) and contrast thresholds (Experiment 2) were measured before and after stimulation. The Arousal Stimulation groups demonstrated significantly lower stereoacuity and contrast thresholds following cold pressor stimulation, whereas the Sham Control groups showed no difference in thresholds. These results provide the first evidence that hyper-arousal from sensory stimulation can lower visual thresholds. Hyper-arousal's ability to decrease visual thresholds has important implications for survival, sports, and everyday life.

  15. Hyper-arousal decreases human visual thresholds.

    Science.gov (United States)

    Woods, Adam J; Philbeck, John W; Wirtz, Philip

    2013-01-01

    Arousal has long been known to influence behavior and serves as an underlying component of cognition and consciousness. However, the consequences of hyper-arousal for visual perception remain unclear. The present study evaluates the impact of hyper-arousal on two aspects of visual sensitivity: visual stereoacuity and contrast thresholds. Sixty-eight participants participated in two experiments. Thirty-four participants were randomly divided into two groups in each experiment: Arousal Stimulation or Sham Control. The Arousal Stimulation group underwent a 50-second cold pressor stimulation (immersing the foot in 0-2° C water), a technique known to increase arousal. In contrast, the Sham Control group immersed their foot in room temperature water. Stereoacuity thresholds (Experiment 1) and contrast thresholds (Experiment 2) were measured before and after stimulation. The Arousal Stimulation groups demonstrated significantly lower stereoacuity and contrast thresholds following cold pressor stimulation, whereas the Sham Control groups showed no difference in thresholds. These results provide the first evidence that hyper-arousal from sensory stimulation can lower visual thresholds. Hyper-arousal's ability to decrease visual thresholds has important implications for survival, sports, and everyday life.

  16. Group X

    Energy Technology Data Exchange (ETDEWEB)

    Fields, Susannah

    2007-08-16

    This project is currently under contract for research through the Department of Homeland Security until 2011. The group I was responsible for studying has to remain confidential so as not to affect the current project. All dates, reference links and authors, and other distinguishing characteristics of the original group have been removed from this report. All references to the name of this group or the individual splinter groups have been changed to 'Group X'. I have been collecting texts from a variety of sources intended for the use of recruiting and radicalizing members for Group X splinter groups, for the purpose of researching the motivation and intent of leaders of those groups and their influence over the likelihood of group radicalization. This work included visiting many Group X websites to find information on splinter group leaders and finding their statements to new and old members. This proved difficult because the splinter groups of Group X are united in beliefs but differ in public opinion. They are eager to tear each other down and prove their superiority, and yet remain anonymous. After a few weeks of intense searching, a list of eight recruiting texts and eight radicalizing texts from a variety of Group X leaders was compiled.

  17. Averages of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties as of summer 2014

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2014-12-23

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.
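    The correlation-aware combination described above can be illustrated with a best linear unbiased estimate (BLUE). This is a minimal sketch assuming a known covariance matrix; it omits HFAG's rescaling of common input parameters, and `blue_average` and its inputs are hypothetical:

```python
import numpy as np

def blue_average(measurements, covariance):
    """Best linear unbiased estimate of correlated measurements:
    weights w = C^-1 1 / (1^T C^-1 1), so known correlations enter
    through the off-diagonal covariance terms."""
    x = np.asarray(measurements, dtype=float)
    C = np.asarray(covariance, dtype=float)
    cinv_one = np.linalg.solve(C, np.ones_like(x))
    weights = cinv_one / cinv_one.sum()
    return weights @ x, 1.0 / cinv_one.sum()  # combined value and its variance

# Two uncorrelated, equal-precision measurements: plain mean, halved variance.
avg, var = blue_average([1.0, 3.0], [[1.0, 0.0], [0.0, 1.0]])  # → 2.0, 0.5
```

    With correlated inputs the off-diagonal terms shift the weights away from the naive inverse-variance values.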

  18. Averages of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties as of summer 2016 arXiv

    CERN Document Server

    Amhis, Y.; Ben-Haim, E.; Bernlochner, F.; Bozek, A.; Bozzi, C.; Chrząszcz, M.; Dingfelder, J.; Duell, S.; Gersabeck, M.; Gershon, T.; Goldenzweig, P.; Harr, R.; Hayasaka, K.; Hayashii, H.; Kenzie, M.; Kuhr, T.; Leroy, O.; Lusiani, A.; Lyu, X.R.; Miyabayashi, K.; Naik, P.; Nanut, T.; Oyanguren Campos, A.; Patel, M.; Pedrini, D.; Petrič, M.; Rama, M.; Roney, M.; Rotondo, M.; Schneider, O.; Schwanda, C.; Schwartz, A.J.; Serrano, J.; Shwartz, B.; Tesarek, R.; Trabelsi, K.; Urquijo, P.; Van Kooten, R.; Yelton, J.; Zupanc, A.

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.

  19. Averages of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties as of summer 2014

    CERN Document Server

    Amhis, Y; Ben-Haim, E; Blyth, S; Bozek, A; Bozzi, C; Carbone, A; Chistov, R; Chrząszcz, M; Cibinetto, G; Dingfelder, J; Gelb, M; Gersabeck, M; Gershon, T; Gibbons, L; Golob, B; Harr, R; Hayasaka, K; Hayashii, H; Kuhr, T; Leroy, O; Lusiani, A; Miyabayashi, K; Naik, P; Nishida, S; Campos, A Oyanguren; Patel, M; Pedrini, D; Petrič, M; Rama, M; Roney, M; Rotondo, M; Schneider, O; Schwanda, C; Schwartz, A J; Shwartz, B; Smith, J G; Tesarek, R; Trabelsi, K; Urquijo, P; Van Kooten, R; Zupanc, A

    2014-01-01

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.

  20. Averages of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties as of summer 2016

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2016-12-21

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.

  1. Decreased 24-hour urinary MHPG in childhood autism.

    Science.gov (United States)

    Young, J G; Cohen, D J; Caparulo, B K; Brown, S L; Maas, J W

    1979-08-01

    The authors compared a group of boys with childhood autism and a group of normal boys of similar age and found a decrease in urinary 3-methoxy-4-hydroxyphenethylene glycol (MHPG) in the autistic group. They hypothesize that autistic children might have an alteration in central and peripheral noradrenergic function, which might be related to impaired regulation of attention, arousal, and anxiety.

  2. Data mining and visualization of average images in a digital hand atlas

    Science.gov (United States)

    Zhang, Aifeng; Gertych, Arkadiusz; Liu, Brent J.; Huang, H. K.

    2005-04-01

    We have collected a digital hand atlas containing digitized left hand radiographs of normally developed children, grouped by age, sex, and race. A set of features reflecting each patient's stage of skeletal development has been calculated by automatic image processing procedures and stored in a database. This paper addresses a new concept, the "average" image in the digital hand atlas. The "average" reference image is selected for each group of normally developed children as the image that best represents the group's skeletal maturity based on bony features. A data mining procedure was designed and applied to find the average image through average feature vector matching. It also provides a temporary solution for the missing-feature problem through polynomial regression. As more cases are added to the digital hand atlas, it can grow to provide clinicians accurate reference images to aid the bone age assessment process.
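    The average-image selection step can be sketched as nearest-to-mean matching over the stored feature vectors. This is a minimal illustration assuming plain Euclidean distance; the atlas's actual bony features and distance measure are not specified here:

```python
import numpy as np

def select_average_image(feature_vectors):
    """Pick the 'average' reference image for a group: the image whose
    feature vector lies closest to the group's mean feature vector."""
    X = np.asarray(feature_vectors, dtype=float)
    mean_vec = X.mean(axis=0)                      # group's average feature vector
    distances = np.linalg.norm(X - mean_vec, axis=1)
    return int(np.argmin(distances))               # index of best-matching image

# Toy example: the third vector is nearest to the mean of the three.
idx = select_average_image([[1.0, 2.0], [10.0, 10.0], [1.5, 2.5]])  # → 2
```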

  3. Moderate systemic hypothermia decreases burn depth progression.

    Science.gov (United States)

    Rizzo, Julie A; Burgess, Pamela; Cartie, Richard J; Prasad, Balakrishna M

    2013-05-01

    Therapeutic hypothermia has been proposed to be beneficial in an array of human pathologies including cardiac arrest, stroke, traumatic brain and spinal cord injury, and hemorrhagic shock. Burn depth progression is multifactorial, but inflammation plays a large role. Because hypothermia is known to reduce inflammation, we hypothesized that moderate hypothermia would decrease burn depth progression. We used a second-degree 15% total body surface area thermal injury model in rats. Burn depth was assessed by histology of biopsy sections. Moderate hypothermia in the range of 31-33°C was applied for 4 h immediately after burn and in a delayed fashion, starting 2 h after burn. In order to gain insight into the beneficial effects of hypothermia, we analyzed global gene expression in the burned skin. Immediate hypothermia decreased burn depth progression at 6 h post injury, and this protective effect was sustained for at least 24 h. Burn depth was 18% lower in rats subjected to immediate hypothermia compared to control rats at both 6 and 24 h post injury. Rats in the delayed hypothermia group did not show any significant decrease in burn depth at 6 h, but had 23% lower burn depth than controls at 24 h. Increased expression of several skin-protective genes such as CCL4, CCL6 and CXCL13 and decreased expression of tissue remodeling genes such as matrix metalloprotease-9 were discovered in the skin biopsy samples of rats subjected to immediate hypothermia. Systemic hypothermia decreases burn depth progression in a rodent model, and up-regulation of skin-protective genes and down-regulation of detrimental tissue remodeling genes by hypothermia may contribute to its beneficial effects. Published by Elsevier Ltd.

  4. Group morphology

    NARCIS (Netherlands)

    Roerdink, Jos B.T.M.

    2000-01-01

    In its original form, mathematical morphology is a theory of binary image transformations which are invariant under the group of Euclidean translations. This paper surveys and extends constructions of morphological operators which are invariant under a more general group T, such as the motion group.

  5. Impedance group summary

    Science.gov (United States)

    Blaskiewicz, M.; Dooling, J.; Dyachkov, M.; Fedotov, A.; Gluckstern, R.; Hahn, H.; Huang, H.; Kurennoy, S.; Linnecar, T.; Shaposhnikova, E.; Stupakov, G.; Toyama, T.; Wang, J. G.; Weng, W. T.; Zhang, S. Y.; Zotter, B.

    1999-12-01

    The impedance working group was charged with replying to the following 8 questions relevant to the design of high-intensity proton machines such as the SNS or the FNAL driver. These questions were first discussed one by one in the whole group, then each one of them was assigned to one member to summarize. On the last morning these contributions were publicly read, re-discussed and re-written where required—hence they are not the opinion of a particular person, but rather the averaged opinion of all members of the working group. (AIP)

  6. Group Assessment and Structured Learning.

    Science.gov (United States)

    Lambert, Warren; And Others

    1980-01-01

    Two new techniques that were used with a group of seven blind, multiply handicapped young adults in a half-way house are described. Structured learning therapy is a social skills training technique; group assessment is a method of averaging psychological data on a group of clients to facilitate program planning based on client needs.

  7. Declining average daily census. Part 2: Possible solutions.

    Science.gov (United States)

    Weil, T P

    1986-01-01

    Several possible solutions are available to hospitals experiencing a declining average daily census, including: Closure of some U.S. hospitals; Joint ventures between physicians and hospitals; Development of integrated and coordinated medical-fiscal-management information systems; Improvements in the hospital's short-term marketing strategy; Reduction of the facility's internal operation expenses; Vertical more than horizontal diversification to develop a multilevel (acute through home care) regional health care system with an alternative health care payment system that is a joint venture with the medical staff(s); Acquisition or management by a not-for-profit or investor-owned multihospital system (emphasis on horizontal versus vertical integration). Many reasons exist for an institution to choose the solution of developing a regional multilevel health care system rather than being part of a large, geographically scattered, multihospital system. Geographic proximity, lenders' preferences, service integration, management recruitment, and local remedies to a declining census all favor the regional system. More answers lie in emphasizing the basics of health care regionalization and focusing on vertical integration, including a prepayment plan, rather than stressing large multihospital systems with institutions in several states or selling out to the investor-owned groups.

  8. Perceptual learning in Williams syndrome: looking beyond averages.

    Directory of Open Access Journals (Sweden)

    Patricia Gervan

    Full Text Available Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.

  9. Increasing Age Is a Risk Factor for Decreased Postpartum Pelvic Floor Strength.

    Science.gov (United States)

    Quiroz, Lieschen H; Pickett, Stephanie D; Peck, Jennifer D; Rostaminia, Ghazaleh; Stone, Daniel E; Shobeiri, S Abbas

    This study aimed to determine factors associated with decreased pelvic floor strength (PFS) after the first vaginal delivery (VD) in a cohort of low-risk women. This is a secondary analysis of a prospective study examining the risk of pelvic floor injury in a cohort of primiparous women. All recruited participants underwent an examination, three-dimensional ultrasound and measurement of PFS in the third trimester, repeated at 4 weeks to 6 months postpartum using a perineometer. There were 84 women recruited for the study, and 70 completed the postpartum assessment. Average age was 28.4 years (standard deviation, 4.8). There were 46 (66%) subjects with a VD and 24 (34%) with a cesarean delivery who labored. Decreased PFS was observed more frequently in the VD group compared with the cesarean delivery group (68% vs 42%, P = 0.03). In modified Poisson regression models controlling for mode of delivery and time of postpartum assessment, women who were aged 25 to 29 years (risk ratio = 2.80; 95% confidence interval, 1.03-7.57) and 30 years and older (risk ratio = 2.53; 95% confidence interval, 0.93-6.86) were over 2.5 times more likely to have decreased postpartum PFS compared with women younger than 25 years. In this population, women aged 25 years and older were more than twice as likely to have a decrease in postpartum PFS.

  10. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
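    The two standard hourly-value types compared above can be sketched from 1-min data as follows; a synthetic sinusoid stands in for real observatory records:

```python
import numpy as np

# One day of synthetic 1-min geomagnetic variation (nT), as a proxy for
# continuous field variation.
minutes = np.arange(24 * 60)
field = 50.0 * np.sin(2 * np.pi * minutes / (24 * 60))

one_min = field.reshape(24, 60)   # one row of 60 one-minute values per hour
boxcar = one_min.mean(axis=1)     # simple 1-h "boxcar" average per hour
spot = one_min[:, 0]              # instantaneous "spot" sample at each hour mark

assert boxcar.shape == spot.shape == (24,)
```

    A spot sample preserves the amplitude range but aliases sub-hourly variation; the boxcar average suppresses it at the cost of some amplitude distortion, which is the trade-off the analysis quantifies.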

  11. Statin Intake Is Associated With Decreased Insulin Sensitivity During Cardiac Surgery

    Science.gov (United States)

    Sato, Hiroaki; Carvalho, George; Sato, Tamaki; Hatzakorzian, Roupen; Lattermann, Ralph; Codere-Maruyama, Takumi; Matsukawa, Takashi; Schricker, Thomas

    2012-01-01

    OBJECTIVE Surgical trauma impairs intraoperative insulin sensitivity and is associated with postoperative adverse events. Recently, preprocedural statin therapy has been recommended for patients with coronary artery disease. However, statin therapy is reported to increase insulin resistance and the risk of new-onset diabetes. Thus, we investigated the association between preoperative statin therapy and intraoperative insulin sensitivity in nondiabetic, dyslipidemic patients undergoing coronary artery bypass grafting. RESEARCH DESIGN AND METHODS In this prospective, nonrandomized trial, patients taking lipophilic statins were assigned to the statin group and hypercholesterolemic patients not receiving any statins were allocated to the control group. Insulin sensitivity was assessed by the hyperinsulinemic-normoglycemic clamp technique during surgery. The mean and SD of blood glucose and the coefficient of variation (CV) after surgery were calculated for each patient. The association between statin use and intraoperative insulin sensitivity was tested by multiple regression analysis. RESULTS We studied 120 patients. In both groups, insulin sensitivity gradually decreased during surgery, with values on average ∼20% lower in the statin than in the control group. In the statin group, the mean blood glucose in the intensive care unit was higher than in the control group (153 ± 20 vs. 140 ± 20 mg/dL), and blood glucose variability (SD and CV) was greater. Statin use was independently associated with intraoperative insulin sensitivity (β = −0.16; P = 0.03). CONCLUSIONS Preoperative use of lipophilic statins is associated with increased insulin resistance during cardiac surgery in nondiabetic, dyslipidemic patients. PMID:22829524

  12. Baroclinic pressure gradient difference schemes of subtracting the local averaged density stratification in sigma coordinates models

    Institute of Scientific and Technical Information of China (English)

    ZHU Shouxian; ZHANG Wenjing

    2008-01-01

    Much has been written of the error in computing the baroclinic pressure gradient (BPG) with sigma coordinates in ocean or atmospheric numerical models. The usual way to reduce the error is to subtract the area-averaged density stratification of the whole computation region. But if there is a great difference between the area-averaged and the locally averaged density stratification, the error will be obvious. An example is given to show that the error from this method may sometimes be larger than that from no correction at all. The definition of local area is put forward. Then, four improved BPG difference schemes that subtract the locally averaged density stratification are designed to reduce the error. Two of them are for diagnostic calculation (the density field is fixed), and the others are for prognostic calculation (the density field is not fixed). The results show that the errors from these schemes all decrease significantly.
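    The correction idea, subtracting an averaged background stratification so that only the density anomaly enters the differenced BPG terms, can be sketched as below. The arrays and the choice of "local" columns are illustrative only, not the paper's schemes:

```python
import numpy as np

# Synthetic density field rho(z, x): a stratified profile plus a weak
# horizontal gradient, on 10 levels and 3 columns.
rho = np.linspace(1025.0, 1028.0, 10).reshape(10, 1) + 0.01 * np.arange(3)

# Whole-area averaged stratification (the usual correction) ...
rho_bar_area = rho.mean(axis=1, keepdims=True)
# ... versus a "local area" average over a subset of nearby columns.
rho_bar_local = rho[:, :2].mean(axis=1, keepdims=True)

# Only the anomaly is differenced in the corrected BPG terms.
anomaly = rho - rho_bar_local
```

    When the local average tracks the stratification better than the whole-area average, the anomaly (and hence the truncation error of its horizontal difference) is smaller.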

  13. Averaging VMAT treatment plans for multi-criteria navigation

    CERN Document Server

    Craft, David; Unkelbach, Jan

    2013-01-01

    The main approach to smooth Pareto surface navigation for radiation therapy multi-criteria treatment planning involves taking real-time averages of pre-computed treatment plans. In fluence-based treatment planning, fluence maps themselves can be averaged, which leads to the dose distributions being averaged due to the linear relationship between fluence and dose. This works for fluence-based photon plans and proton spot scanning plans. In this technical note, we show that two or more sliding window volumetric modulated arc therapy (VMAT) plans can be combined by averaging leaf positions in a certain way, and we demonstrate that the resulting dose distribution for the averaged plan is approximately the average of the dose distributions of the original plans. This leads to the ability to do Pareto surface navigation, i.e. interactive multi-criteria exploration of VMAT plan dosimetric tradeoffs.
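    The linearity argument behind plan averaging (the dose of an averaged fluence map equals the average of the doses) can be checked numerically. `M`, `f1`, and `f2` below are random stand-ins for a dose-influence matrix and two pre-computed fluence maps, not clinical data:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((5, 4))                  # hypothetical dose-influence matrix
f1, f2 = rng.random(4), rng.random(4)   # two pre-computed fluence maps

# Because dose = M @ fluence is linear, averaging fluence averages dose.
dose_of_average = M @ ((f1 + f2) / 2)
average_of_doses = (M @ f1 + M @ f2) / 2
assert np.allclose(dose_of_average, average_of_doses)
```

    The technical note's contribution is showing that a corresponding leaf-position averaging for sliding-window VMAT plans approximately inherits this property, even though leaf positions enter the dose nonlinearly.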

  14. Averaging and exact perturbations in LTB dust models

    CERN Document Server

    Sussman, Roberto A

    2012-01-01

    We introduce a scalar weighted average ("q-average") acting on concentric comoving domains in spherically symmetric Lemaitre-Tolman-Bondi (LTB) dust models. The resulting averaging formalism allows for an elegant coordinate independent dynamical study of the models, providing as well a valuable theoretical insight on the properties of scalar averaging in inhomogeneous spacetimes. The q-averages of those covariant scalars common to FLRW models (the "q-scalars") identically satisfy FLRW evolution laws and determine for every domain a unique FLRW background state. All curvature and kinematic proper tensors and their invariant contractions are expressible in terms of the q-scalars and their linear and quadratic local fluctuations, which convey the effects of inhomogeneity through the ratio of Weyl to Ricci curvature invariants and the magnitude of radial gradients. We define also non-local fluctuations associated with the intuitive notion of a "contrast" with respect to FLRW reference averaged values assigned to a...

  15. Distributed Weighted Parameter Averaging for SVM Training on Big Data

    OpenAIRE

    Das, Ayan; Bhattacharya, Sourangshu

    2015-01-01

    Two popular approaches for distributed training of SVMs on big data are parameter averaging and ADMM. Parameter averaging is efficient but suffers from loss of accuracy with an increase in the number of partitions, while ADMM in the feature space is accurate but suffers from slow convergence. In this paper, we report a hybrid approach called weighted parameter averaging (WPA), which optimizes the regularized hinge loss with respect to weights on parameters. The problem is shown to be the same as solving...
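    The combination step can be sketched as follows. With uniform weights this is plain parameter averaging; WPA as described above instead learns the weights by minimizing the regularized hinge loss, which this hypothetical sketch does not implement:

```python
import numpy as np

def weighted_parameter_average(partition_params, weights=None):
    """Combine per-partition SVM weight vectors into a single model.
    partition_params has shape (num_partitions, num_features)."""
    W = np.asarray(partition_params, dtype=float)
    if weights is None:
        weights = np.full(len(W), 1.0 / len(W))  # uniform: plain averaging
    return weights @ W                            # weighted sum of parameter vectors

# Two partitions, uniform weights: the combined model is the plain average.
combined = weighted_parameter_average([[1.0, 0.0], [0.0, 1.0]])  # → [0.5, 0.5]
```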

  16. On the average crosscap number Ⅱ: Bounds for a graph

    Institute of Scientific and Technical Information of China (English)

    Yi-chao CHEN; Yan-pei LIU

    2007-01-01

    The bounds are obtained for the average crosscap number. Let G be a graph which is not a tree. It is shown that the average crosscap number of G is not less than (2^(β(G)-1)/(2^(β(G))-1))β(G) and not larger than β(G). Furthermore, we also describe the structure of the graphs which attain the bounds of the average crosscap number.

  17. On the average crosscap number II: Bounds for a graph

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The bounds are obtained for the average crosscap number. Let G be a graph which is not a tree. It is shown that the average crosscap number of G is not less than (2^(β(G)-1)/(2^(β(G))-1))β(G) and not larger than β(G). Furthermore, we also describe the structure of the graphs which attain the bounds of the average crosscap number.

  18. Group devaluation and group identification

    NARCIS (Netherlands)

    Leach, C.W.; Rodriguez Mosquera, P.M.; Vliek, M.L.W.; Hirt, E.

    2010-01-01

    In three studies, we showed that increased in-group identification after (perceived or actual) group devaluation is an assertion of a (preexisting) positive social identity that counters the negative social identity implied in societal devaluation. Two studies with real-world groups used order manipulation...

  19. The effect of group composition and age on social behavior and competition in groups of weaned dairy calves.

    Science.gov (United States)

    Faerevik, G; Jensen, M B; Bøe, K E

    2010-09-01

    The objective of the present study was to investigate how group composition affects behavior and weight gain of newly weaned dairy calves and how age within heterogeneous groups affects behavior and competition. Seventy-two calves were introduced into 6 groups of 12 calves, of which 3 groups were homogeneous and 3 groups were heterogeneous (including 6 young and 6 old calves). The 9.8 m × 9.5 m experimental pen had 4 separate lying areas as well as a feeding area. Behavior and subgrouping were recorded on d 1, 7, and 14 after grouping, and calves were weighed before and after the experimental period of 14 d. Analysis of the effect of group composition on behavior and weight gain included young calves in heterogeneous groups and calves in homogeneous groups within the same age range at grouping (30 to 42 d). Irrespective of group composition, time spent feeding and lying increased, whereas time spent active decreased from d 1 to 7. In homogeneous groups, calves were more explorative on d 1 after grouping. Finally, calves in homogeneous groups had a higher average daily weight gain than calves in heterogeneous groups. Analysis of the effect of age included young and old calves of heterogeneous groups. Young calves were less explorative than old calves. Young calves were more active than old calves on d 1 but less active on d 7. Time spent lying and lying alone increased over time. More displacements from the feed manger were performed by old calves than by young calves. An analysis including all calves in both homogeneous and heterogeneous groups showed that when lying, calves were evenly distributed on the 4 lying areas and formed subgroups of on average 3 calves. In conclusion, age heterogeneity leads to increased competition, which may have a negative influence on the young calves' performance.

  20. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
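    The fraction quoted above works out as follows, alongside the information-theoretic lower bound log2(8!) on the average number of comparisons for sorting 8 distinct elements:

```python
import math

# Minimum average depth of a comparison decision tree for sorting 8 elements,
# as stated in the abstract: 620160 / 8!.
min_avg_depth = 620160 / math.factorial(8)   # 620160 / 40320

# Information-theoretic lower bound: log2 of the number of permutations.
info_bound = math.log2(math.factorial(8))

print(round(min_avg_depth, 4))  # 15.381
print(round(info_bound, 4))     # 15.2992
```

    The optimal average depth thus sits only about 0.08 comparisons above the entropy bound.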

  1. Practical definition of averages of tensors in general relativity

    CERN Document Server

    Boero, Ezequiel F

    2016-01-01

    We present a definition of tensor fields which are averages of tensors over a manifold, with a straightforward and natural definition of derivative for the averaged fields, which in turn makes a suitable and practical construction for the study of averages of tensor fields that satisfy differential equations. Although we have in mind applications to general relativity, our presentation is applicable to a general n-dimensional manifold. The definition is based on the integration of scalars constructed from a physically motivated basis, making use of the least amount of geometrical structure. We also present definitions of the covariant derivative of the averaged tensors and the Lie derivative.

  2. A conversion formula for comparing pulse oximeter desaturation rates obtained with different averaging times.

    Directory of Open Access Journals (Sweden)

    Jan Vagedes

    Full Text Available OBJECTIVE: The number of desaturations determined in recordings of pulse oximeter saturation (SpO2) primarily depends on the time over which values are averaged. As the averaging time in pulse oximeters is not standardized, it varies considerably between centers. To make SpO2 data comparable, it is thus desirable to have a formula that allows conversion between desaturation rates obtained using different averaging times for various desaturation levels and minimal durations. METHODS: Oxygen saturation was measured for 170 hours in 12 preterm infants with a mean number of 65 desaturations <90% per hour of arbitrary duration by using a pulse oximeter in a 2-4 s averaging mode. Using 7 different averaging times between 3 and 16 seconds, the raw red-to-infrared data were reprocessed to determine the number of desaturations (D). The whole procedure was carried out for 7 different minimal desaturation durations (≥ 1, ≥ 5, ≥ 10, ≥ 15, ≥ 20, ≥ 25, ≥ 30 s) below SpO2 threshold values of 80%, 85% or 90% to finally reach a conversion formula. The formula was validated by splitting the infants into two groups of six children each and using one group each as a training set and the other one as a test set. RESULTS: Based on the linear relationship found between the logarithm of the desaturation rate and the logarithm of the averaging time, the conversion formula is: D2 = D1 (T2/T1)^c, where D2 is the desaturation rate for the desired averaging time T2, and D1 is the desaturation rate for the original averaging time T1, with the exponent c depending on the desaturation threshold and the minimal desaturation duration. The median error when applying this formula was 2.6%. CONCLUSION: This formula enables the conversion of desaturation rates between different averaging times for various desaturation thresholds and minimal desaturation durations.
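    The conversion formula can be applied directly; a minimal sketch, where the exponent value used in the example is illustrative only (the study tabulates c per threshold and minimal duration):

```python
def convert_desaturation_rate(d1, t1, t2, c):
    """Convert a desaturation rate d1 obtained with averaging time t1 (s)
    to the rate expected for averaging time t2 (s), via the power law
    D2 = D1 * (T2 / T1) ** c."""
    return d1 * (t2 / t1) ** c

# Example: 65 desaturations/h recorded with a 3 s averaging time, converted
# to a 16 s averaging time with an assumed (illustrative) exponent c = -0.8.
rate_16s = convert_desaturation_rate(65.0, 3.0, 16.0, -0.8)
```

    A negative exponent reflects that longer averaging smooths out brief desaturations, so the converted rate drops as the averaging time grows.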

  3. Charging for computer usage with average cost pricing

    CERN Document Server

    Landau, K

    1973-01-01

    This preliminary report, which is mainly directed to commercial computer centres, gives an introduction to the application of average cost pricing when charging for using computer resources. A description of the cost structure of a computer installation shows advantages and disadvantages of average cost pricing. This is completed by a discussion of the different charging-rates which are possible. (10 refs).

  4. On the Average-Case Complexity of Shellsort

    NARCIS (Netherlands)

    Vitányi, P.M.B.

    2015-01-01

We prove a lower bound expressed in the increment sequence on the average-case complexity (number of inversions, which is proportional to the running time) of Shellsort. This lower bound is sharp in every case where it could be checked. We obtain new results e.g. determining the average-case complexity…

  5. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  6. Analytic computation of average energy of neutrons inducing fission

    Energy Technology Data Exchange (ETDEWEB)

    Clark, Alexander Rich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-12

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  7. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....

  8. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  9. A Simple Geometrical Derivation of the Spatial Averaging Theorem.

    Science.gov (United States)

    Whitaker, Stephen

    1985-01-01

    The connection between single phase transport phenomena and multiphase transport phenomena is easily accomplished by means of the spatial averaging theorem. Although different routes to the theorem have been used, this paper provides a route to the averaging theorem that can be used in undergraduate classes. (JN)

  10. Averaged EMG profiles in jogging and running at different speeds

    NARCIS (Netherlands)

    Gazendam, Marnix G. J.; Hof, At L.

    2007-01-01

    EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking 1.25-2.25 m s(-1) and running 1.25-4.5 m s(-1). The EMGs were rectified, interpolated in 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed in

  11. Average widths of anisotropic Besov-Wiener classes

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

This paper concerns the problem of average σ-K width and average σ-L width of some anisotropic Besov-Wiener classes S^r_{pqθ}b(R^d) and S^r_{pqθ}B(R^d) in L_q(R^d) (1≤q≤p<∞). The weak asymptotic behavior is established for the corresponding quantities.

  12. 7 CFR 701.17 - Average adjusted gross income limitation.

    Science.gov (United States)

    2010-01-01

    ... 9003), each applicant must meet the provisions of the Adjusted Gross Income Limitations at 7 CFR part... 7 Agriculture 7 2010-01-01 2010-01-01 false Average adjusted gross income limitation. 701.17... RELATED PROGRAMS PREVIOUSLY ADMINISTERED UNDER THIS PART § 701.17 Average adjusted gross income...

  13. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...

  14. Average widths of anisotropic Besov-Wiener classes

    Institute of Scientific and Technical Information of China (English)

    蒋艳杰

    2000-01-01

This paper concerns the problem of average σ-K width and average σ-L width of some anisotropic Besov-Wiener classes S^r_{pqθ}b(R^d) and S^r_{pqθ}B(R^d) in L_q(R^d) (1≤q≤p<∞). The weak asymptotic behavior is established for the corresponding quantities.

  15. Remarks on the Lower Bounds for the Average Genus

    Institute of Scientific and Technical Information of China (English)

    Yi-chao Chen

    2011-01-01

    Let G be a graph of maximum degree at most four. By using the overlap matrix method which is introduced by B. Mohar, we show that the average genus of G is not less than 1/3 of its maximum genus, and the bound is best possible. Also, a new lower bound of average genus in terms of girth is derived.

  16. Delineating the Average Rate of Change in Longitudinal Models

    Science.gov (United States)

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…

  17. Average cross-responses in correlated financial markets

    Science.gov (United States)

    Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas

    2016-09-01

    There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.

  18. The Optimal Selection for Restricted Linear Models with Average Estimator

    Directory of Open Access Journals (Sweden)

    Qichang Xie

    2014-01-01

Full Text Available The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.

  19. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    Energy Technology Data Exchange (ETDEWEB)

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

  20. The effect of the behavior of an average consumer on the public debt dynamics

    Science.gov (United States)

    De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele

    2017-09-01

    An important issue within the present economic crisis is understanding the dynamics of the public debt of a given country, and how the behavior of average consumers and tax payers in that country affects it. Starting from a model of the average consumer behavior introduced earlier by the authors, we propose a simple model to quantitatively address this issue. The model is then studied and analytically solved under some reasonable simplifying assumptions. In this way we obtain a condition under which the public debt steadily decreases.

  1. Active cooling of pulse compression diffraction gratings for high energy, high average power ultrafast lasers.

    Science.gov (United States)

    Alessi, David A; Rosso, Paul A; Nguyen, Hoang T; Aasen, Michael D; Britten, Jerald A; Haefner, Constantin

    2016-12-26

    Laser energy absorption and subsequent heat removal from diffraction gratings in chirped pulse compressors poses a significant challenge in high repetition rate, high peak power laser development. In order to understand the average power limitations, we have modeled the time-resolved thermo-mechanical properties of current and advanced diffraction gratings. We have also developed and demonstrated a technique of actively cooling Petawatt scale, gold compressor gratings to operate at 600W of average power - a 15x increase over the highest average power petawatt laser currently in operation. Combining this technique with low absorption multilayer dielectric gratings developed in our group would enable pulse compressors for petawatt peak power lasers operating at average powers well above 40kW.

  2. Algebraic Groups

    DEFF Research Database (Denmark)

    2007-01-01

The workshop continued a series of Oberwolfach meetings on algebraic groups, started in 1971 by Tonny Springer and Jacques Tits, who both attended the present conference. This time, the organizers were Michel Brion, Jens Carsten Jantzen, and Raphaël Rouquier. During the last years, the subject of algebraic groups (in a broad sense) has seen important developments in several directions, also related to representation theory and algebraic geometry. The workshop aimed at presenting some of these developments in order to make them accessible to a "general audience" of algebraic group-theorists, and to stimulate contacts between participants. Each of the first four days was dedicated to one area of research that has recently seen decisive progress: structure and classification of wonderful varieties, finite reductive groups and character sheaves, quantum cohomology…

  3. Group Grammar

    Science.gov (United States)

    Adams, Karen

    2015-01-01

    In this article Karen Adams demonstrates how to incorporate group grammar techniques into a classroom activity. In the activity, students practice using the target grammar to do something they naturally enjoy: learning about each other.

  4. Exploring the Best Classification from Average Feature Combination

    Directory of Open Access Journals (Sweden)

    Jian Hou

    2014-01-01

    Full Text Available Feature combination is a powerful approach to improve object classification performance. While various combination algorithms have been proposed, average combination is almost always selected as the baseline algorithm to be compared with. In previous work we have found that it is better to use only a sample of the most powerful features in average combination than using all. In this paper, we continue this work and further show that the behaviors of features in average combination can be integrated into the k-Nearest-Neighbor (kNN framework. Based on the kNN framework, we then propose to use a selection based average combination algorithm to obtain the best classification performance from average combination. Our experiments on four diverse datasets indicate that this selection based average combination performs evidently better than the ordinary average combination, and thus serves as a better baseline. Comparing with this new and better baseline makes the claimed superiority of newly proposed combination algorithms more convincing. Furthermore, the kNN framework is helpful in understanding the underlying mechanism of feature combination and motivating novel feature combination algorithms.
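A minimal sketch of average combination of per-feature scores, with an optional subset in the spirit of the selection-based variant; the function name, array shapes, and selection interface are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def average_combination(score_matrices, selected=None):
    """Average combination of per-feature classifier scores.

    score_matrices: list of (n_samples, n_classes) arrays, one per feature.
    selected: optional indices of features to keep (the selection-based
    variant averages only the most powerful features; the criterion for
    choosing them is left to the caller).
    """
    mats = score_matrices if selected is None else [score_matrices[i] for i in selected]
    return np.mean(mats, axis=0)

# Classification then follows by taking the best class per sample:
# labels = average_combination(mats).argmax(axis=1)
```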

  5. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
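The entropy lower bound described above can be computed directly; this is a sketch of the stated bound, not code from the book:

```python
import math

def entropy_lower_bound(probabilities, k=2):
    """Lower bound on the minimum average depth of a decision tree for a
    diagnostic problem: the entropy of the probability distribution,
    scaled by 1/log2(k) for a k-valued information system."""
    h = -sum(p * math.log2(p) for p in probabilities if p > 0)
    return h / math.log2(k)

# Uniform distribution over 8 outcomes with binary attributes: bound is 3.
bound = entropy_lower_bound([1 / 8] * 8)
```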

  6. MUYANG GROUP

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

With its headquarters in the historic city of Yangzhou, Jiangsu Muyang Group Co., Ltd has since its founding in 1967 grown into a well-known group corporation whose activities cover research & development, project design, manufacturing, installation and services in a multitude of industries including feed machinery and engineering, storage engineering, grain machinery and engineering, environmental protection, conveying equipment and automatic control systems.

  7. Abelian groups

    CERN Document Server

    Fuchs, László

    2015-01-01

    Written by one of the subject’s foremost experts, this book focuses on the central developments and modern methods of the advanced theory of abelian groups, while remaining accessible, as an introduction and reference, to the non-specialist. It provides a coherent source for results scattered throughout the research literature with lots of new proofs. The presentation highlights major trends that have radically changed the modern character of the subject, in particular, the use of homological methods in the structure theory of various classes of abelian groups, and the use of advanced set-theoretical methods in the study of undecidability problems. The treatment of the latter trend includes Shelah’s seminal work on the undecidability in ZFC of Whitehead’s Problem; while the treatment of the former trend includes an extensive (but non-exhaustive) study of p-groups, torsion-free groups, mixed groups, and important classes of groups arising from ring theory. To prepare the reader to tackle these topics, th...

  8. Unbiased Cultural Transmission in Time-Averaged Archaeological Assemblages

    CERN Document Server

    Madsen, Mark E

    2012-01-01

Unbiased models are foundational in the archaeological study of cultural transmission. Applications have assumed that archaeological data represent synchronic samples, despite the accretional nature of the archaeological record. I document the circumstances under which time-averaging alters the distribution of model predictions. Richness is inflated in long-duration assemblages, and evenness is "flattened" compared to unaveraged samples. Tests of neutrality, employed to differentiate biased and unbiased models, suffer serious problems with Type I error under time-averaging. Finally, the time-scale over which time-averaging alters predictions is determined by the mean trait lifetime, providing a way to evaluate the impact of these effects upon archaeological samples.

  9. Average-Case Analysis of Algorithms Using Kolmogorov Complexity

    Institute of Scientific and Technical Information of China (English)

    姜涛; 李明

    2000-01-01

Analyzing the average-case complexity of algorithms is a very practical but very difficult problem in computer science. In the past few years, we have demonstrated that Kolmogorov complexity is an important tool for analyzing the average-case complexity of algorithms. We have developed the incompressibility method. In this paper, several simple examples are used to further demonstrate the power and simplicity of this method. We prove bounds on the average-case number of stacks (queues) required for sorting sequential or parallel Queuesort or Stacksort.

  10. Sample Selected Averaging Method for Analyzing the Event Related Potential

    Science.gov (United States)

    Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki

The event related potential (ERP) is often measured through the oddball task. In the oddball task, subjects are given a “rare stimulus” and a “frequent stimulus”. Measured ERPs were analyzed by the averaging technique. In the results, the amplitude of the ERP P300 becomes large when the “rare stimulus” is given. However, measured ERPs include samples that lack the original ERP features. Thus, it is necessary to reject unsuitable measured ERPs when using the averaging technique. In this paper, we propose a rejection method for unsuitable measured ERPs for the averaging technique. Moreover, we combine the proposed method with Woody's adaptive filter method.
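A minimal sketch of averaging with trial rejection; the amplitude-threshold criterion below is a stand-in assumption, not the rejection rule proposed in the paper:

```python
import numpy as np

def selective_average(trials, reject):
    """Average single-trial ERPs after discarding unsuitable trials.

    trials: (n_trials, n_samples) array of measured ERPs.
    reject: callable returning True for a trial that should be discarded.
    """
    kept = np.array([t for t in trials if not reject(t)])
    return kept.mean(axis=0)

# Stand-in criterion: reject any trial whose peak amplitude exceeds 100 µV.
reject_large = lambda t: np.max(np.abs(t)) > 100.0
```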

  11. Epigallocatechin gallate improves insulin signaling by decreasing toll-like receptor 4 (TLR4) activity in adipose tissues of high-fat diet rats.

    Science.gov (United States)

    Bao, Suqing; Cao, Yanli; Fan, Chenling; Fan, Yuxin; Bai, Shuting; Teng, Weiping; Shan, Zhongyan

    2014-04-01

In this study, we investigated the beneficial effects and the underlying mechanism of epigallocatechin gallate (EGCG) in adipose tissues of rats fed with a high-fat diet (HFD). Fasting plasma insulin, epididymal fat coefficient and free fatty acids, homeostasis model assessment-insulin resistance index, and the average glucose infusion rate were determined. EGCG significantly decreased free fatty acids, fasting insulin, homeostasis model assessment-insulin resistance index, and epididymal fat coefficient, and increased the glucose infusion rate in the HFD group. The levels of toll-like receptor 4, TNF receptor associated factor 6, inhibitor-kappa-B kinase β, p-nuclear factor κB, tumor necrosis factor α, and IL-6 in the EGCG group were all significantly lower than in the HFD control group. EGCG also decreased the level of phosphorylated insulin receptor substrate 1 and increased phosphoinositide-3-kinase and glucose transporter isoform 4 in the HFD group. Macrophage infiltration was decreased in the EGCG group versus the HFD group, and the protein level of CD68 in the EGCG group was also significantly lower than that of the HFD group. EGCG attenuated inflammation by decreasing the content of macrophages and interfered with the toll-like receptor 4-mediated inflammatory response pathway, thus improving insulin signaling in adipose tissues. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Grade Point Average: Report of the GPA Pilot Project 2013-14

    Science.gov (United States)

    Higher Education Academy, 2015

    2015-01-01

    This report is published as the result of a range of investigations and debates involving many universities and colleges and a series of meetings, presentations, discussions and consultations. Interest in a grade point average (GPA) system was originally initiated by a group of interested universities, progressing to the systematic investigation…

  13. Effects of Contingency Contracting on Decreasing Student Tardiness.

    Science.gov (United States)

    Din, Feng S.; Isack, Lori R.; Rietveld, Jill

    A contingency contract program was implemented in this study to determine the effects of contingency contracting on decreasing student tardiness in high school classrooms. The participants were 32 high school students. Of the 32 participants, 16 were randomly assigned to the experimental group and the other 16 to the control group. The…

  14. Grade-Average Method: A Statistical Approach for Estimating ...

    African Journals Online (AJOL)

Grade-Average Method: A Statistical Approach for Estimating Missing Value for Continuous Assessment Marks. Journal of the Nigerian Association of Mathematical Physics.

  15. United States Average Annual Precipitation, 2000-2004 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2000-2004. Parameter-elevation...

  16. On the average sensitivity of laced Boolean functions

    CERN Document Server

    jiyou, Li

    2011-01-01

    In this paper we obtain the average sensitivity of the laced Boolean functions. This confirms a conjecture of Shparlinski. We also compute the weights of the laced Boolean functions and show that they are almost balanced.

  17. Distribution of population-averaged observables in stochastic gene expression

    Science.gov (United States)

    Bhattacharyya, Bhaswati; Kalay, Ziya

    2014-01-01

Observation of phenotypic diversity in a population of genetically identical cells is often linked to the stochastic nature of chemical reactions involved in gene regulatory networks. We investigate the distribution of population-averaged gene expression levels as a function of population, or sample, size for several stochastic gene expression models to find out to what extent population-averaged quantities reflect the underlying mechanism of gene expression. We consider three basic gene regulation networks corresponding to transcription with and without gene state switching and translation. Using analytical expressions for the probability generating function of observables and large deviation theory, we calculate the distribution and first two moments of the population-averaged mRNA and protein levels as a function of model parameters, population size, and the number of measurements contained in a data set. We validate our results using stochastic simulations and also report exact results on the asymptotic properties of population averages, which show qualitative differences among the models.

  18. on the performance of Autoregressive Moving Average Polynomial ...

    African Journals Online (AJOL)

    Timothy Ademakinwa

Moving Average Polynomial Distributed Lag (ARMAPDL) model. The parameters of these models were estimated using least squares and Newton-Raphson iterative methods. Global Journal of Mathematics and Statistics. Vol. 1. No.

  19. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturers ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  20. The Partial Averaging of Fuzzy Differential Inclusions on Finite Interval

    Directory of Open Access Journals (Sweden)

    Andrej V. Plotnikov

    2014-01-01

    Full Text Available The substantiation of a possibility of application of partial averaging method on finite interval for differential inclusions with the fuzzy right-hand side with a small parameter is considered.

  1. United States Average Annual Precipitation, 2005-2009 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2005-2009. Parameter-elevation...

  2. SAM: A Simple Averaging Model of Impression Formation

    Science.gov (United States)

    Lewis, Robert A.

    1976-01-01

    Describes the Simple Averaging Model (SAM) which was developed to demonstrate impression-formation computer modeling with less complex and less expensive procedures than are required by most established programs. (RC)

  3. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
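The Hargreaves step and the water balance described above can be sketched as follows; the study's exact calibration is not reproduced, and the extraterrestrial radiation Ra is assumed to be expressed in mm/day of equivalent evaporation:

```python
def hargreaves_et0(ra, tmax, tmin):
    """Reference evaporative demand (mm/day) via the standard Hargreaves
    model: ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * (tmax - tmin) ** 0.5

def water_balance(precipitation, et0):
    """Average climatic water balance: precipitation minus atmospheric
    evaporative demand, evaluated per cell and per month."""
    return precipitation - et0
```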

  4. United States Average Annual Precipitation, 1961-1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...

  5. The average-shadowing property and topological ergodicity for flows

    Energy Technology Data Exchange (ETDEWEB)

    Gu Rongbao [School of Finance, Nanjing University of Finance and Economics, Nanjing 210046 (China)]. E-mail: rbgu@njue.edu.cn; Guo Wenjing [School of Finance, Nanjing University of Finance and Economics, Nanjing 210046 (China)

    2005-07-01

    In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic.

  6. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
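The time averaged MSD can be sketched for a single discrete trajectory as follows; the function and lag convention are illustrative, not taken from the paper:

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time averaged mean squared displacement of one trajectory x:
    TAMSD(lag) = mean over t of (x[t + lag] - x[t]) ** 2.
    Lags must be positive integers smaller than len(x)."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])
```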

  7. Ensemble vs. time averages in financial time series analysis

    Science.gov (United States)

    Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2012-12-01

Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble averaging approaches will yield new insight into the study of financial markets' dynamics.

  8. Model Averaging Software for Dichotomous Dose Response Risk Estimation

    Directory of Open Access Journals (Sweden)

    Matthew W. Wheeler

    2008-02-01

Full Text Available Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models that are also used in the US Environmental Protection Agency benchmark dose software suite, and generates a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are obtained. The software fulfills a need for risk assessors, allowing them to go beyond a single model in their risk assessments based on quantal data by focusing on a set of models that describes the experimental data.

  9. United States Average Annual Precipitation, 1995-1999 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1995-1999. Parameter-elevation...

  10. United States Average Annual Precipitation, 1990-1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-1994. Parameter-elevation...

  11. United States Average Annual Precipitation, 1990-2009 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-2009. Parameter-elevation...

  12. Does subduction zone magmatism produce average continental crust

    Science.gov (United States)

    Ellam, R. M.; Hawkesworth, C. J.

    1988-01-01

    The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.

  13. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  14. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions are used to specify dependence between random variables, with dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
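The ARL estimation underlying such a study can be sketched by Monte Carlo for the plain (independent-observation) case; the smoothing constant, control-limit width, and run counts below are assumed example values, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch: in-control ARL of an EWMA chart monitoring exponential(mean=1)
# observations, estimated by Monte Carlo (parameters are illustrative).
lam, mean0 = 0.1, 1.0
L = 3.0
# Asymptotic EWMA std for iid data is sigma*sqrt(lam/(2-lam)); sd of exp(1) is 1.
half_width = L * np.sqrt(lam / (2 - lam))
ucl, lcl = mean0 + half_width, mean0 - half_width

def run_length(max_n=20_000):
    """Steps until the EWMA statistic first leaves the control limits."""
    z = mean0
    for n in range(1, max_n + 1):
        x = rng.exponential(mean0)
        z = lam * x + (1 - lam) * z
        if z > ucl or z < lcl:
            return n
    return max_n

arl = np.mean([run_length() for _ in range(500)])
print(round(arl, 1))
```

Repeating this with shifted distributions (or with copula-generated dependent observations) gives the ARL comparison the paper performs.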

  15. Time averages, recurrence and transience in the stochastic replicator dynamics

    CERN Document Server

    Hofbauer, Josef; 10.1214/08-AAP577

    2009-01-01

    We investigate the long-run behavior of a stochastic replicator process, which describes game dynamics for a symmetric two-player game under aggregate shocks. We establish an averaging principle that relates time averages of the process and Nash equilibria of a suitably modified game. Furthermore, a sufficient condition for transience is given in terms of mixed equilibria and definiteness of the payoff matrix. We also present necessary and sufficient conditions for stochastic stability of pure equilibria.

  16. On the relativistic mass function and averaging in cosmology

    CERN Document Server

    Ostrowski, Jan J; Roukema, Boudewijn F

    2016-01-01

    The general relativistic description of cosmological structure formation is an important challenge from both the theoretical and the numerical points of view. In this paper we present a brief prescription for a general relativistic treatment of structure formation and a resulting mass function on galaxy cluster scales in a highly generic scenario. To obtain this we use an exact scalar averaging scheme together with the relativistic generalization of Zel'dovich's approximation (RZA), which serves as a closure condition for the averaged equations.

  17. Use of a Correlation Coefficient for Conditional Averaging.

    Science.gov (United States)

    1997-04-01

    A method of collecting ensembles for conditional averaging is presented that uses data collected from a plane mixing layer. Selection of the sine function period and of a correlation coefficient threshold is discussed. Also examined are the effects of the period and threshold level on the number of ensembles captured for inclusion in conditional averaging.
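The procedure can be sketched as follows, with assumed values for the period, threshold, and a synthetic signal (the original study's data and parameter choices are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of conditional averaging: slide a window over a noisy signal, keep
# windows that correlate strongly with one period of a sine template, and
# average the captured ensembles. Period/threshold values are assumptions.
period = 64
template = np.sin(2 * np.pi * np.arange(period) / period)

# Synthetic stand-in for the measured signal: intermittent sine bursts in noise.
n = 8000
signal = 0.3 * rng.standard_normal(n)
for start in range(0, n - period, 5 * period):
    signal[start:start + period] += template

threshold = 0.6
ensembles = []
for i in range(0, n - period, period // 4):
    window = signal[i:i + period]
    r = np.corrcoef(window, template)[0, 1]
    if r > threshold:                      # capture only well-correlated windows
        ensembles.append(window)

conditional_avg = np.mean(ensembles, axis=0)
print(len(ensembles))
```

Raising the threshold captures fewer but cleaner ensembles; lengthening the period changes which events qualify, which is exactly the trade-off the report examines.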

  18. Estimation of annual average daily traffic with optimal adjustment factors

    OpenAIRE

    Alonso Oreña, Borja; Moura Berodia, José Luis; Ibeas Portilla, Ángel; Romero Junquera, Juan Pablo

    2014-01-01

    This study aimed to estimate the annual average daily traffic in inter-urban networks determining the best correlation (affinity) between the short period traffic counts and permanent traffic counters. A bi-level optimisation problem is proposed in which an agent in an upper level prefixes the affinities between short period traffic counts and permanent traffic counters stations and looks to minimise the annual average daily traffic calculation error while, in a lower level, an origin–destina...

  19. The averaging of nonlocal Hamiltonian structures in Whitham's method

    Directory of Open Access Journals (Sweden)

    Andrei Ya. Maltsev

    2002-01-01

    We consider the m-phase Whitham's averaging method and propose a procedure for “averaging” nonlocal Hamiltonian structures. The procedure is based on the existence of a sufficient number of local commuting integrals of the system and gives a Poisson bracket of Ferapontov type for Whitham's system. The method can be considered as a generalization of the Dubrovin-Novikov procedure for local field-theoretical brackets.

  20. Separability criteria with angular and Hilbert space averages

    Science.gov (United States)

    Fujikawa, Kazuo; Oh, C. H.; Umetsu, Koichiro; Yu, Sixia

    2016-05-01

    The practically useful criteria of separable states ρ = ∑_k w_k ρ_k in d = 2 × 2 are discussed. The equality G(a, b) = 4[⟨P(a)P(b)⟩ − ⟨P(a)⟩⟨P(b)⟩] = 0 for any two projection operators P(a) and P(b) provides a necessary and sufficient separability criterion in the case of a separable pure state ρ = |ψ⟩⟨ψ|. When applied to the Werner state in two-photon systems, it is shown that the Hilbert space average can judge its inseparability but not the geometrical angular average.

  1. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; 
Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; 
Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH with the silicon vertex detector fully operational. The measurement uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  2. Average life of oxygen vacancies of quartz in sediments

    Institute of Scientific and Technical Information of China (English)

    DIAO; Shaobo(刁少波); YE; Yuguang(业渝光)

    2002-01-01

    Average life of oxygen vacancies of quartz in sediments is estimated by using the ESR (electron spin resonance) signals of E′ centers from the thermal activation technique. The experimental results show that the second-order kinetics equation is more applicable to the life estimation than the first-order equation. The average life of oxygen vacancies of quartz from the 4895 to 4908 deep sediments in the Tarim Basin is about 10^18 a at 27°C.

  3. Cycle Average Peak Fuel Temperature Prediction Using CAPP/GAMMA+

    Energy Technology Data Exchange (ETDEWEB)

    Tak, Nam-il; Lee, Hyun Chul; Lim, Hong Sik [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In order to obtain a cycle average maximum fuel temperature without rigorous effort, a neutronics/thermo-fluid coupled calculation with depletion capability is needed. Recently, a CAPP/GAMMA+ coupled code system has been developed, and the initial core of PMR200 was analyzed with it. The GAMMA+ code is a system thermo-fluid analysis code and the CAPP code is a neutronics code. General Atomics proposed that the design limit of the fuel temperature under normal operating conditions should be a cycle-averaged maximum value. Nonetheless, the existing works of the Korea Atomic Energy Research Institute (KAERI) only calculated the maximum fuel temperature at a fixed time point, e.g., the beginning of cycle (BOC), because the calculation capability for a cycle average value was not yet available. In this work, a cycle average maximum fuel temperature has been calculated using the CAPP/GAMMA+ code system for the equilibrium core of PMR200. The CAPP/GAMMA+ coupled calculation was carried out for the equilibrium core of PMR200 from BOC to the end of cycle (EOC) to obtain a cycle average peak fuel temperature. The peak fuel temperature was predicted to be 1372 °C near the middle of cycle (MOC). However, the cycle average peak fuel temperature was calculated as 1181 °C, which is below the design target of 1250 °C.

  4. Group Anonymity

    CERN Document Server

    Chertov, Oleg; 10.1007/978-3-642-14058-7_61

    2010-01-01

    In recent years the amount of digital data in the world has risen immensely. But the more information exists, the greater is the possibility of its unwanted disclosure. Thus, data privacy protection has become a pressing problem of the present time. The task of preserving individual privacy is being thoroughly studied nowadays. At the same time, the problem of statistical disclosure control for collective (or group) data is still open. In this paper we propose an effective and relatively simple (wavelet-based) way to provide group anonymity in collective data. We also provide a real-life example to illustrate the method.

  5. Quantum black hole wave packet: Average area entropy and temperature dependent width

    Directory of Open Access Journals (Sweden)

    Aharon Davidson

    2014-09-01

    A quantum Schwarzschild black hole is described, at the minisuperspace level, by a non-singular wave packet composed of plane wave eigenstates of the momentum Dirac-conjugate to the mass operator. The entropy of the mass spectrum then acquires independent contributions from the average mass and the width. Hence, Bekenstein's area entropy is formulated using the ⟨mass²⟩ average, leaving the ⟨mass⟩ average to set the Hawking temperature. The width function peaks at the Planck scale for an elementary (zero entropy, zero free energy) micro black hole of finite rms size, and decreases Doppler-like towards the classical limit.

  6. Children’s Attitudes and Stereotype Content Toward Thin, Average-Weight, and Overweight Peers

    Directory of Open Access Journals (Sweden)

    Federica Durante

    2014-05-01

    Six- to 11-year-old children’s attitudes toward thin, average-weight, and overweight targets were investigated with associated warmth and competence stereotypes. The results showed positive attitudes toward average-weight targets and negative attitudes toward overweight peers: Both attitudes decreased as a function of children’s age. Thin targets were perceived more positively than overweight ones but less positively than average-weight targets. Notably, social desirability concerns predicted the decline of anti-fat bias in older children. Finally, the results showed ambivalent stereotypes toward thin and overweight targets—particularly among older children—mirroring the stereotypes observed in adults. This result suggests that by the end of elementary school, children manage the two fundamental dimensions of social judgment similarly to adults.

  7. Average Synchronization and Temporal Order in a Noisy Neuronal Network with Coupling Delay

    Institute of Scientific and Technical Information of China (English)

    WANG Qing-Yun; DUAN Zhi-Sheng; LU Qi-Shao

    2007-01-01

    Average synchronization and temporal order characterized by the rate of firing are studied in a spatially extended network system with coupling time delay, which is locally modelled by a two-dimensional Rulkov map neuron. It is shown that there exists an optimal noise level at which average synchronization and temporal order are maximal irrespective of the coupling time delay. Furthermore, it is found that temporal order is weakened when the coupling time delay appears. However, the coupling time delay has a twofold effect on average synchronization, one associated with its increase, the other with its decrease. This clearly manifests that random perturbations and time delay play a complementary role in synchronization and temporal order.
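As a concrete reference for the local unit, a minimal simulation of a two-dimensional Rulkov map neuron (the smooth, non-chaotic form; parameter values here are illustrative, not those of the paper) looks like this:

```python
import numpy as np

# Minimal sketch of a two-dimensional Rulkov map neuron (smooth form):
#   x_{n+1} = alpha / (1 + x_n^2) + y_n        (fast, membrane-like variable)
#   y_{n+1} = y_n - mu * (x_n - sigma)         (slow variable, mu << 1)
# alpha, sigma, mu below are assumed example values.
alpha, sigma, mu = 4.5, 0.3, 0.001
n_steps = 50_000

x, y = -1.0, -3.5
xs = np.empty(n_steps)
for n in range(n_steps):
    # simultaneous update: both right-hand sides use the old (x, y)
    x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
    xs[n] = x

print(round(xs.min(), 2), round(xs.max(), 2))
```

The slow variable drifts while the fast one is quiescent and pulls it back during firing episodes, producing the alternation of rest and activity that the network study builds on; noise and delayed coupling terms would be added to the x-update.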

  8. An average/deprivation/inequality (ADI) analysis of chronic disease outcomes and risk factors in Argentina

    Science.gov (United States)

    De Maio, Fernando G; Linetzky, Bruno; Virgolini, Mario

    2009-01-01

    Background Recognition of the global economic and epidemiological burden of chronic non-communicable diseases has increased in recent years. However, much of the research on this issue remains focused on individual-level risk factors and neglects the underlying social patterning of risk factors and disease outcomes. Methods Secondary analysis of Argentina's 2005 Encuesta Nacional de Factores de Riesgo (National Risk Factor Survey, N = 41,392) using a novel analytical strategy first proposed by the United Nations Development Programme (UNDP), which we here refer to as the Average/Deprivation/Inequality (ADI) framework. The analysis focuses on two risk factors (unhealthy diet and obesity) and one related disease outcome (diabetes), a notable health concern in Latin America. Logistic regression is used to examine the interplay between socioeconomic and demographic factors. The ADI analysis then uses the results from the logistic regression to identify the most deprived, the best-off, and the difference between the two ideal types. Results Overall, 19.9% of the sample reported being in poor/fair health, 35.3% reported not eating any fruits or vegetables in five days of the week preceding the interview, 14.7% had a BMI of 30 or greater, and 8.5% indicated that a health professional had told them that they have diabetes or high blood pressure. However, significant variation is hidden by these summary measures. Educational attainment displayed the strongest explanatory power throughout the models, followed by household income, with both factors highlighting the social patterning of risk factors and disease outcomes. As educational attainment and household income increase, the probability of poor health, unhealthy diet, obesity, and diabetes decrease. The analyses also point toward important provincial effects and reinforce the notion that both compositional factors (i.e., characteristics of individuals) and contextual factors (i.e., characteristics of places) are
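The ADI logic of contrasting ideal-type profiles from a fitted logistic regression can be sketched as follows; the coefficients below are hypothetical illustrations, not estimates from the Argentine survey:

```python
import math

# Hypothetical numbers to illustrate the Average/Deprivation/Inequality idea
# (NOT estimates from the survey): a fitted logistic model for obesity, with
# dummy effects for low education and low household income.
intercept = -1.2                        # logit for the best-off reference profile
b_low_education, b_low_income = 0.6, 0.4

def prob(*effects):
    """Predicted probability from the logistic model for a given profile."""
    logit = intercept + sum(effects)
    return 1.0 / (1.0 + math.exp(-logit))

best_off = prob()                                  # reference categories only
deprived = prob(b_low_education, b_low_income)     # most deprived ideal type
inequality = deprived - best_off                   # gap between the ideal types

print(round(best_off, 3), round(deprived, 3), round(inequality, 3))
```

The population average sits between the two ideal types; reporting all three components is what distinguishes the ADI framework from a summary prevalence alone.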

  9. Microbes make average 2 nanometer diameter crystalline UO2 particles.

    Science.gov (United States)

    Suzuki, Y.; Kelly, S. D.; Kemner, K. M.; Banfield, J. F.

    2001-12-01

    It is well known that phylogenetically diverse groups of microorganisms are capable of catalyzing the reduction of highly soluble U(VI) to highly insoluble U(IV), which rapidly precipitates as uraninite (UO2). Because biological uraninite is highly insoluble, microbial uranyl reduction is being intensively studied as the basis for a cost-effective in-situ bioremediation strategy. Previous studies have described UO2 biomineralization products as amorphous or poorly crystalline. The objective of this study is to characterize the nanocrystalline uraninite in detail in order to determine the particle size, crystallinity, and size-related structural characteristics, and to examine the implications of these for reoxidation and transport. In this study, we obtained U-contaminated sediment and water from an inactive U mine and incubated them anaerobically with nutrients to stimulate reductive precipitation of UO2 by indigenous anaerobic bacteria, mainly Gram-positive spore-forming Desulfosporosinus and Clostridium spp. as revealed by RNA-based phylogenetic analysis. A Desulfosporosinus sp. was isolated from the sediment, and UO2 was precipitated by this isolate from a simple solution containing only U and electron donors. We characterized the UO2 formed in both experiments by high-resolution TEM (HRTEM) and X-ray absorption fine structure (XAFS) analysis. The HRTEM results showed that both the pure and the mixed cultures of microorganisms precipitated crystalline UO2 particles of around 1.5-3 nm. Some particles as small as around 1 nm could be imaged. Rare particles around 10 nm in diameter were also present. Particles adhere to cells and form colloidal aggregates with low fractal dimension. In some cases, coarsening by oriented attachment on {111} is evident. Our preliminary XAFS results for the incubated U-contaminated sample also indicated an average UO2 diameter of 2 nm. In nanoparticles, the U-U distance obtained by XAFS was 0.373 nm, 0.012 nm

  10. Informal groups

    NARCIS (Netherlands)

    E. van den Berg; P. van Houwelingen; J. de Hart

    2011-01-01

    Original title: Informele groepen Going out running with a group of friends, rather than joining an official sports club. Individuals who decide to take action themselves rather than giving money to good causes. Maintaining contact with others not as a member of an association, but through an Inter

  11. Decreasing activated sludge thermal hydrolysis temperature reduces product colour, without decreasing degradability.

    Science.gov (United States)

    Dwyer, Jason; Starrenburg, Daniel; Tait, Stephan; Barr, Keith; Batstone, Damien J; Lant, Paul

    2008-11-01

    Activated sludges are becoming more difficult to degrade in anaerobic digesters, due to the implementation of stricter nitrogen limits, longer sludge ages, and removal of primary sedimentation units. Thermal hydrolysis is a popular method to enhance degradability of long-age activated sludge, and involves pressure and heat treatment of the process fluid (150-160 °C saturated steam). However, as documented in this study, in a full-scale system, the use of thermal hydrolysis produces coloured, recalcitrant compounds that can have downstream impacts (e.g., failure of UV disinfection, and increased effluent nitrogen). The coloured compounds formed during thermal hydrolysis were found to be melanoidins. These are coloured recalcitrant compounds produced by polymerisation of low molecular weight intermediates, such as carbohydrates and amino compounds, at elevated temperature (Maillard reaction). By decreasing the THP operating temperature from 165 °C to 140 °C, THP effluent colour decreased from 12,677 mg-PtCo L⁻¹ to 3837 mg-PtCo L⁻¹. The change in THP operating temperature from 165 °C to 140 °C was shown to have no significant impact on anaerobic biodegradability of the sludge. The rate and extent of COD biodegradation remained largely unaffected by the temperature change, with an average first-order hydrolysis rate of 0.19 d⁻¹ and a conversion extent of 0.43 g-COD(CH4) g-COD⁻¹.
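Taking the reported kinetics at face value, and assuming a simple first-order approach toward the ultimate conversion extent, the quoted rate implies the following time course:

```python
import math

# Worked example using the values quoted in the abstract, under the assumption
# that degradation follows B(t) = extent * (1 - exp(-k*t)).
k = 0.19         # d^-1, average first-order hydrolysis rate
extent = 0.43    # g-COD(CH4) per g-COD fed, ultimate conversion extent

def conversion(t_days):
    """Fraction of fed COD converted to methane-COD after t_days."""
    return extent * (1.0 - math.exp(-k * t_days))

for t in (5, 10, 20):
    print(t, round(conversion(t), 3))
```

Under this assumed model, roughly 85% of the ultimate extent is reached within ten days, which is why the unchanged rate and extent imply no practical loss of digester performance at the lower THP temperature.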

  12. The relationship between limit of Dysphagia and average volume per swallow in patients with Parkinson's disease.

    Science.gov (United States)

    Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes

    2014-08-01

    The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints, and in normal subjects, and to investigate the relationship between the two measures. We hypothesized a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing 100 ml of water by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significantly moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but determining the average volume per swallow is much quicker and simpler.

  13. Exact Averaging of Stochastic Equations for Flow in Porous Media

    Energy Technology Data Exchange (ETDEWEB)

    Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi

    2008-03-15

    It is well known that, at present, exact averaging of the equations for flow and transport in random porous media has been proposed only for limited special fields. Moreover, approximate averaging methods (for example, the convergence behavior and accuracy of truncated perturbation series) are not well studied, and calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact and sufficiently general forms of averaged equations exist? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or invoking the usual assumptions about small parameters. For the common case of a stochastically homogeneous conductivity field, we present a new, exactly averaged nonlocal basic equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can in the same way derive the exact averaged nonlocal equations with a unique kernel-tensor for three-dimensional and two-dimensional flow. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.

  14. Icodextrin decreases technique failure and improves patient survival in peritoneal dialysis patients.

    Science.gov (United States)

    Wang, I-Kuan; Li, Yu-Fen; Chen, Jin-Hua; Liang, Chih-Chia; Liu, Yao-Lung; Lin, Hsin-Hung; Chang, Chiz-Tzung; Tsai, Wen-Chen; Yen, Tzung-Hai; Huang, Chiu-Ching

    2015-03-01

    It remains unclear whether long-term daily icodextrin use can decrease technique failure and improve survival in peritoneal dialysis (PD) patients. The aim of the present study was to investigate whether icodextrin use, once daily, can decrease technique failure and prolong patient survival in incident PD patients. Incident PD patients who survived more than 90 days were recruited from the China Medical University Hospital, Taiwan, between 1 January 2007 and 31 December 2011. All patients were followed until transfer to haemodialysis (HD), renal transplantation, transfer to another centre, death, or 31 December 2011. A total of 306 incident PD patients (89 icodextrin users, 217 icodextrin non-users) were recruited during the study period. Icodextrin users were more likely to have hypertension, diabetes and high or high-average peritoneal transport compared with non-users. During the follow-up period, 43 patients were transferred to HD: seven (7.87%) of the icodextrin group, and 36 (16.59%) of the non-icodextrin group. Thirty-two patients died during the follow-up period: five (5.62%) of the icodextrin group, and 27 (12.44%) of the non-icodextrin group. Icodextrin use was significantly associated with a better prognosis, in terms of technique failure (adjusted HR = 0.32; 95% CI = 0.14-0.72). With regard to patient survival, icodextrin use (adjusted HR = 0.33; 95% CI = 0.12-0.87) was associated with a significantly lower risk of death. The use of icodextrin once daily may decrease technique failure and improve survival in incident PD patients. © 2014 Asian Pacific Society of Nephrology.

  15. Distribution of population averaged observables in stochastic gene expression

    Science.gov (United States)

    Bhattacharyya, Bhaswati; Kalay, Ziya

    2014-03-01

    Observation of phenotypic diversity in a population of genetically identical cells is often linked to the stochastic nature of chemical reactions involved in gene regulatory networks. We investigate the distribution of population-averaged gene expression levels as a function of population, or sample, size for several stochastic gene expression models, to find out to what extent population-averaged quantities reflect the underlying mechanism of gene expression. We consider three basic gene regulation networks corresponding to transcription with and without gene state switching, and translation. Using analytical expressions for the probability generating function (pgf) of observables and Large Deviation Theory, we calculate the distribution of population-averaged mRNA and protein levels as a function of model parameters and population size. We validate our results using stochastic simulations and also report exact results on the asymptotic properties of population averages, which show qualitative differences between models. We calculate the skewness and coefficient of variation to estimate the sample size required for a population average to contain information about the underlying gene expression model. This is relevant to experiments where a large number of data points are unavailable.
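The dependence of population averages on sample size can be sketched for the simplest case (constitutive transcription, where steady-state mRNA counts are Poisson); the rate values are assumed for illustration and the paper's switching models are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch (deliberately simpler than the paper's models): steady-state mRNA
# copy numbers for a constitutive gene are Poisson with mean k_m / gamma_m.
# We ask how the distribution of the population average narrows with sample size.
k_m, gamma_m = 20.0, 1.0            # synthesis and degradation rates (assumed)
mean_copies = k_m / gamma_m

def avg_std(n_cells, n_trials=2000):
    """Std of the population-averaged copy number over many simulated populations."""
    samples = rng.poisson(mean_copies, size=(n_trials, n_cells))
    return samples.mean(axis=1).std()

stds = {n: avg_std(n) for n in (10, 100, 1000)}
for n, s in stds.items():
    print(n, round(s, 3))            # shrinks roughly as 1/sqrt(n_cells)
```

For bursty or switching models the spread at a given sample size is wider and more skewed, which is what makes the sample-size dependence informative about the underlying mechanism.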

  16. Averaged universe confronted to cosmological observations: a fully covariant approach

    CERN Document Server

    Wijenayake, Tharake; Ishak, Mustapha

    2016-01-01

    One of the outstanding problems in general relativistic cosmology is that of averaging. That is, how the lumpy universe that we observe at small scales averages out to a smooth Friedmann-Lemaitre-Robertson-Walker (FLRW) model. The root of the problem is that averaging does not commute with the Einstein equations that govern the dynamics of the model. This leads to the well-known question of backreaction in cosmology. In this work, we approach the problem using the covariant framework of Macroscopic Gravity (MG). We use its cosmological solution with a flat FLRW macroscopic background where the result of averaging cosmic inhomogeneities has been encapsulated into a backreaction density parameter denoted $\Omega_\mathcal{A}$. We constrain this averaged universe using available cosmological data sets of expansion and growth including, for the first time, a full CMB analysis from Planck temperature anisotropy and polarization data, the supernovae data from Union 2.1, the galaxy power spectrum from WiggleZ, the...

  17. Transversely isotropic higher-order averaged structure tensors

    Science.gov (United States)

    Hashlamoun, Kotaybah; Federico, Salvatore

    2017-08-01

    For composites or biological tissues reinforced by statistically oriented fibres, a probability distribution function is often used to describe the orientation of the fibres. The overall effect of the fibres on the material response is accounted for by evaluating averaging integrals over all possible directions in space. The directional average of the structure tensor (tensor product of the unit vector describing the fibre direction by itself) is of high significance. Higher-order averaged structure tensors feature in several models and carry similarly important information. However, their evaluation has a quite high computational cost. This work proposes to introduce mathematical techniques to minimise the computational cost associated with the evaluation of higher-order averaged structure tensors, for the case of a transversely isotropic probability distribution of orientation. A component expression is first introduced, using which a general tensor expression is obtained, in terms of an orthonormal basis in which one of the vectors coincides with the axis of symmetry of transverse isotropy. Then, a higher-order transversely isotropic averaged structure tensor is written in an appropriate basis, constructed starting from the basis of the space of second-order transversely isotropic tensors, which is constituted by the structure tensor and its complement to the identity.

  18. COMMUNICATIONS GROUP

    CERN Multimedia

    L. Taylor

    2011-01-01

    The CMS Communications Group, established at the start of 2010, has been busy in all three areas of its responsibility: (1) Communications Infrastructure, (2) Information Systems, and (3) Outreach and Education. Communications Infrastructure There are now 55 CMS Centres worldwide that are well used by physicists working on remote CMS shifts, Computing operations, data quality monitoring, data analysis and outreach. The CMS Centre@CERN in Meyrin, is the centre of the CMS offline and computing operations, hosting dedicated analysis efforts such as during the CMS Heavy Ion lead-lead running. With a majority of CMS sub-detectors now operating in a “shifterless” mode, many monitoring operations are now routinely performed from there, rather than in the main Control Room at P5. The CMS Communications Group, CERN IT and the EVO team are providing excellent videoconferencing support for the rapidly-increasing number of CMS meetings. In parallel, CERN IT and ...

  19. Lego Group

    DEFF Research Database (Denmark)

    Møller Larsen, Marcus; Pedersen, Torben; Slepniov, Dmitrij

    2010-01-01

    The last years’ rather adventurous journey from 2004 to 2009 had taught the fifth-largest toy-maker in the world - the LEGO Group - the importance of managing the global supply chain effectively. In order to survive the largest internal financial crisis in its roughly 70 years of existence, the management had, among many initiatives, decided to offshore and outsource a major chunk of its production to Flextronics. In this pursuit of rapid cost-cutting sourcing advantages, the LEGO Group planned to license out as much as 80 per cent of its production besides closing down major parts of the production in high cost countries. Confident with the prospects of the new partnership, the company signed a long-term contract with Flextronics. This decision eventually proved itself to have been too hasty, however. Merely three years after the contracts were signed, LEGO management announced that it would...

  20. Group play

    DEFF Research Database (Denmark)

    Tychsen, Anders; Hitchens, Michael; Brolund, Thea

    2008-01-01

    Role-playing games (RPGs) are a well-known game form, existing in a number of formats, including tabletop, live action, and various digital forms. Despite their popularity, empirical studies of these games are relatively rare. In particular there have been few examinations of the effects of the various formats used by RPGs on the gaming experience. This article presents the results of an empirical study, examining how multi-player tabletop RPGs are affected as they are ported to the digital medium. Issues examined include the use of disposition assessments to predict play experience, the effect of group dynamics, the influence of the fictional game characters and the comparative play experience between the two formats. The results indicate that group dynamics and the relationship between the players and their digital characters are integral to the quality of the gaming experience in multiplayer...

  1. Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages

    Directory of Open Access Journals (Sweden)

    Maureen Fontaine

    2017-07-01

    The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices, whether trained-to-familiar (Experiment 1) or famous (Experiment 2), are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In Experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several “speaker averages,” created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In Experiment 2, the same participants performed the same task on voice samples produced by familiar speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of averaging in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.

  2. How do children form impressions of persons? They average.

    Science.gov (United States)

    Hendrick, C; Franz, C M; Hoving, K L

    1975-05-01

    The experiment reported was concerned with impression formation in children. Twelve subjects in each of Grades K, 2, 4, and 6 rated several sets of single trait words and trait pairs. The response scale consisted of a graded series of seven schematic faces which ranged from a deep frown to a happy smile. A basic question was whether children use an orderly integration rule in forming impressions of trait pairs. The answer was clear. At all grade levels a simple averaging model adequately accounted for pair ratings. A second question concerned how children resolve semantic inconsistencies. Responses to two highly inconsistent trait pairs suggested that subjects responded in the same fashion, essentially averaging the two traits in a pair. Overall, the data strongly supported an averaging model, and indicated that impression formation of children is similar to previous results obtained from adults.
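The averaging rule the study supports can be contrasted with an additive rule in a few lines. A hedged sketch: the trait ratings, the neutral point 4.0 on the seven-face scale, and the optional initial-impression weight are illustrative assumptions, not data from the experiment.

```python
# Hypothetical single-trait likableness ratings on the 1-7 face scale.
ratings = {"kind": 6.2, "honest": 6.5, "rude": 1.8}

def average_model(traits, w0=0.0, s0=4.0):
    """Averaging model: the pair impression is the (optionally weighted)
    mean of the single-trait values; w0 weights a neutral initial
    impression s0, and w0 = 0 reduces to a simple average."""
    vals = [ratings[t] for t in traits]
    return (w0 * s0 + sum(vals)) / (w0 + len(vals))

def additive_model(traits, baseline=4.0):
    """Additive model: deviations from the neutral point accumulate."""
    return baseline + sum(ratings[t] - baseline for t in traits)

pair = ["kind", "honest"]
print(average_model(pair))   # stays between the two single-trait values (~6.35)
print(additive_model(pair))  # more extreme than either trait alone (~8.7)
```

The diagnostic difference is visible directly: averaging two favorable traits never yields an impression more extreme than the better trait, whereas adding does.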

  3. Testing averaged cosmology with type Ia supernovae and BAO data

    CERN Document Server

    Santos, B; Devi, N Chandrachani; Alcaniz, J S

    2016-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard $\\Lambda$CDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  4. An n-ary λ-averaging based similarity classifier

    Directory of Open Access Journals (Sweden)

    Kurama Onesfole

    2016-06-01

    We introduce a new n-ary λ similarity classifier that is based on a new n-ary λ-averaging operator for the aggregation of similarities. This work is a natural extension of earlier research on similarity-based classification, in which aggregation is commonly performed using the OWA operator. So far, λ-averaging has been used only in binary aggregation. Here the λ-averaging operator is extended to the n-ary aggregation case by using t-norms and t-conorms. We examine four different n-ary norms and test the new similarity classifier on five medical data sets. The new method seems to perform well when compared with the similarity classifier.

  5. Genuine non-self-averaging and ultraslow convergence in gelation

    Science.gov (United States)

    Cho, Y. S.; Mazza, M. G.; Kahng, B.; Nagler, J.

    2016-08-01

    In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.

  6. The Conservation of Area Integrals in Averaging Transformations

    Science.gov (United States)

    Kuznetsov, E. D.

    2010-06-01

    It is shown for the two-planetary version of the weakly perturbed two-body problem that, in a system defined by a finite part of a Poisson expansion of the averaged Hamiltonian, only one of the three components of the area vector is conserved: the component corresponding to the plane from which the longitudes are measured. The variability of the other two components is demonstrated in two ways. The first is based on calculating the Poisson bracket of the averaged Hamiltonian and the components of the area vector written in closed form. In the second, an echeloned Poisson series processor (EPSP) is used when calculating the Poisson bracket. The averaged Hamiltonian is taken to second order in the small parameter of the problem, and the components of the area vector are expanded in a Poisson series.

  7. Evolution of the average avalanche shape with the universality class.

    Science.gov (United States)

    Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J

    2013-01-01

    A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics.
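The two-parameter scaling form described in the abstract can be written down directly: the exponent γ relating average avalanche size to duration sets how flat the shape is, and an asymmetry parameter quantifies the broken time-reversal symmetry. A sketch of that functional form (the parameter values are illustrative, and the duration-dependent prefactor is omitted):

```python
import numpy as np

def avalanche_shape(x, gamma=1.7, a=0.0):
    """Scaling form for the average temporal avalanche shape.

    x = t/T is time normalized by the avalanche duration, gamma is the
    size-versus-duration scaling exponent, and a is the temporal-asymmetry
    parameter (a = 0 gives a time-reversal-symmetric shape).
    """
    return (x * (1.0 - x)) ** (gamma - 1.0) * (1.0 - a * (x - 0.5))

x = np.linspace(0.0, 1.0, 1001)
sym = avalanche_shape(x)            # symmetric: identical forwards and backwards
asym = avalanche_shape(x, a=0.3)    # leans towards early times for a > 0
print(np.allclose(sym, sym[::-1]))
print(asym[:500].sum() > asym[-500:].sum())
```

Fitting γ and a to rescaled experimental avalanche profiles is then a two-parameter regression, which is how such a form can be compared across universality classes.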

  8. Group Connections: Whole Group Teaching.

    Science.gov (United States)

    Griffiths, Dorothy

    2002-01-01

    A learner-centered approach to adult group instruction involved learners in investigating 20th-century events. The approach allowed learners to concentrate on different activities according to their abilities and gave them opportunities to develop basic skills and practice teamwork. (SK)

  9. The Role of the Harmonic Vector Average in Motion Integration

    Directory of Open Access Journals (Sweden)

    Alan Johnston

    2013-10-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always underestimates the true global speed. The harmonic vector average, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the intersection-of-constraints direction for Gabor arrays with a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the harmonic vector average.
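The harmonic vector average itself is a short computation: invert each local velocity about the unit circle (v → v/|v|²), take the ordinary vector mean, and invert back. The sketch below checks this on a synthetic unbiased sample; the global velocity and the set of contour normals are made-up values, and the construction v = (V·n)n of the component (normal) velocities follows the cosine dependence described in the abstract.

```python
import numpy as np

def harmonic_vector_average(vels):
    """Harmonic vector average of 2-D local velocities: invert each
    vector (v -> v/|v|^2), take the ordinary vector mean, invert back."""
    inv = vels / np.sum(vels ** 2, axis=1, keepdims=True)
    m = inv.mean(axis=0)
    return m / np.dot(m, m)

# Hypothetical global object velocity: speed 5 in direction (3,4)/5.
V = np.array([3.0, 4.0])
vhat = V / np.linalg.norm(V)

# Contour normals sampled symmetrically about the motion direction
# (an "unbiased" sample); local normal velocities are v = (V.n) n.
angles = np.deg2rad([-60, -30, 0, 30, 60])
c, s = np.cos(angles), np.sin(angles)
normals = np.stack([c * vhat[0] - s * vhat[1],
                    s * vhat[0] + c * vhat[1]], axis=1)
local_vels = (normals @ V)[:, None] * normals

print(harmonic_vector_average(local_vels))  # recovers V = [3, 4] up to rounding
print(local_vels.mean(axis=0))              # plain vector average: shorter than V
```

The comparison on the last two lines is the abstract's point in miniature: the plain vector average of the normal components underestimates the global speed, while the HVA recovers it exactly for a symmetric sample.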

  10. Decreases in Smoking-Related Cancer Mortality Rates Are Associated with Birth Cohort Effects in Korean Men.

    Science.gov (United States)

    Jee, Yon Ho; Shin, Aesun; Lee, Jong-Keun; Oh, Chang-Mo

    2016-12-05

    Background: This study aimed to examine trends in smoking-related cancer mortality rates and to investigate the effect of birth cohort on smoking-related cancer mortality in Korean men. Methods: The number of smoking-related cancer deaths and corresponding population numbers were obtained from Statistics Korea for the period 1984-2013. Joinpoint regression analysis was used to detect changes in trends in age-standardized mortality rates. Birth-cohort-specific mortality rates were illustrated by 5-year age groups. Results: The age-standardized mortality rates for oropharyngeal cancer decreased from 2003 to 2013 (annual percent change (APC): -3.1 (95% CI, -4.6 to -1.6)) and for lung cancer from 2002 to 2013 (APC -2.4 (95% CI -2.7 to -2.2)). The mortality rates for esophageal cancer declined from 1994 to 2002 (APC -2.5 (95% CI -4.1 to -0.8)) and from 2002 to 2013 (APC -5.2 (95% CI -5.7 to -4.7)), and for laryngeal cancer from 1995 to 2013 (average annual percent change (AAPC): -3.3 (95% CI -4.7 to -1.8)). By age group, the trends for smoking-related cancer mortality, except for oropharyngeal cancer, turned downward earlier in the younger age groups. The birth-cohort-specific mortality rates and age-period-cohort analysis consistently showed reduced mortality from smoking-related cancers in all birth cohorts born after 1930. Conclusions: In Korean men, smoking-related cancer mortality rates have decreased. Our findings also indicate that the current decreases in smoking-related cancer mortality rates have mainly been due to a birth cohort effect, which suggests a decrease in smoking rates.

  11. Measurement of the average lifetime of b hadrons

    Science.gov (United States)

    Adriani, O.; Aguilar-Benitez, M.; Ahlen, S.; Alcaraz, J.; Aloisio, A.; Alverson, G.; Alviggi, M. G.; Ambrosi, G.; An, Q.; Anderhub, H.; Anderson, A. L.; Andreev, V. P.; Angelescu, T.; Antonov, L.; Antreasyan, D.; Arce, P.; Arefiev, A.; Atamanchuk, A.; Azemoon, T.; Aziz, T.; Baba, P. V. K. S.; Bagnaia, P.; Bakken, J. A.; Ball, R. C.; Banerjee, S.; Bao, J.; Barillère, R.; Barone, L.; Baschirotto, A.; Battiston, R.; Bay, A.; Becattini, F.; Bechtluft, J.; Becker, R.; Becker, U.; Behner, F.; Behrens, J.; Bencze, Gy. L.; Berdugo, J.; Berges, P.; Bertucci, B.; Betev, B. L.; Biasini, M.; Biland, A.; Bilei, G. M.; Bizzarri, R.; Blaising, J. J.; Bobbink, G. J.; Bock, R.; Böhm, A.; Borgia, B.; Bosetti, M.; Bourilkov, D.; Bourquin, M.; Boutigny, D.; Bouwens, B.; Brambilla, E.; Branson, J. G.; Brock, I. C.; Brooks, M.; Bujak, A.; Burger, J. D.; Burger, W. J.; Busenitz, J.; Buytenhuijs, A.; Cai, X. D.; Capell, M.; Caria, M.; Carlino, G.; Cartacci, A. M.; Castello, R.; Cerrada, M.; Cesaroni, F.; Chang, Y. H.; Chaturvedi, U. K.; Chemarin, M.; Chen, A.; Chen, C.; Chen, G.; Chen, G. M.; Chen, H. F.; Chen, H. S.; Chen, M.; Chen, W. Y.; Chiefari, G.; Chien, C. Y.; Choi, M. T.; Chung, S.; Civinini, C.; Clare, I.; Clare, R.; Coan, T. E.; Cohn, H. O.; Coignet, G.; Colino, N.; Contin, A.; Costantini, S.; Cotorobai, F.; Cui, X. T.; Cui, X. Y.; Dai, T. S.; D'Alessandro, R.; de Asmundis, R.; Degré, A.; Deiters, K.; Dénes, E.; Denes, P.; DeNotaristefani, F.; Dhina, M.; DiBitonto, D.; Diemoz, M.; Dimitrov, H. R.; Dionisi, C.; Ditmarr, M.; Djambazov, L.; Dova, M. T.; Drago, E.; Duchesneau, D.; Duinker, P.; Duran, I.; Easo, S.; El Mamouni, H.; Engler, A.; Eppling, F. J.; Erné, F. C.; Extermann, P.; Fabbretti, R.; Fabre, M.; Falciano, S.; Fan, S. J.; Fackler, O.; Fay, J.; Felcini, M.; Ferguson, T.; Fernandez, D.; Fernandez, G.; Ferroni, F.; Fesefeldt, H.; Fiandrini, E.; Field, J. H.; Filthaut, F.; Fisher, P. 
H.; Forconi, G.; Fredj, L.; Freudenreich, K.; Friebel, W.; Fukushima, M.; Gailloud, M.; Galaktionov, Yu.; Gallo, E.; Ganguli, S. N.; Garcia-Abia, P.; Gele, D.; Gentile, S.; Gheordanescu, N.; Giagu, S.; Goldfarb, S.; Gong, Z. F.; Gonzalez, E.; Gougas, A.; Goujon, D.; Gratta, G.; Gruenewald, M.; Gu, C.; Guanziroli, M.; Guo, J. K.; Gupta, V. K.; Gurtu, A.; Gustafson, H. R.; Gutay, L. J.; Hangarter, K.; Hartmann, B.; Hasan, A.; Hauschildt, D.; He, C. F.; He, J. T.; Hebbeker, T.; Hebert, M.; Hervé, A.; Hilgers, K.; Hofer, H.; Hoorani, H.; Hu, G.; Hu, G. Q.; Ille, B.; Ilyas, M. M.; Innocente, V.; Janssen, H.; Jezequel, S.; Jin, B. N.; Jones, L. W.; Josa-Mutuberria, I.; Kasser, A.; Khan, R. A.; Kamyshkov, Yu.; Kapinos, P.; Kapustinsky, J. S.; Karyotakis, Y.; Kaur, M.; Khokhar, S.; Kienzle-Focacci, M. N.; Kim, J. K.; Kim, S. C.; Kim, Y. G.; Kinnison, W. W.; Kirkby, A.; Kirkby, D.; Kirsch, S.; Kittel, W.; Klimentov, A.; Klöckner, R.; König, A. C.; Koffeman, E.; Kornadt, O.; Koutsenko, V.; Koulbardis, A.; Kraemer, R. W.; Kramer, T.; Krastev, V. R.; Krenz, W.; Krivshich, A.; Kuijten, H.; Kumar, K. S.; Kunin, A.; Landi, G.; Lanske, D.; Lanzano, S.; Lebedev, A.; Lebrun, P.; Lecomte, P.; Lecoq, P.; Le Coultre, P.; Lee, D. M.; Lee, J. S.; Lee, K. Y.; Leedom, I.; Leggett, C.; Le Goff, J. M.; Leiste, R.; Lenti, M.; Leonardi, E.; Li, C.; Li, H. T.; Li, P. J.; Liao, J. Y.; Lin, W. T.; Lin, Z. Y.; Linde, F. L.; Lindemann, B.; Lista, L.; Liu, Y.; Lohmann, W.; Longo, E.; Lu, Y. S.; Lubbers, J. M.; Lübelsmeyer, K.; Luci, C.; Luckey, D.; Ludovici, L.; Luminari, L.; Lustermann, W.; Ma, J. M.; Ma, W. G.; MacDermott, M.; Malik, R.; Malinin, A.; Maña, C.; Maolinbay, M.; Marchesini, P.; Marion, F.; Marin, A.; Martin, J. P.; Martinez-Laso, L.; Marzano, F.; Massaro, G. G. G.; Mazumdar, K.; McBride, P.; McMahon, T.; McNally, D.; Merk, M.; Merola, L.; Meschini, M.; Metzger, W. J.; Mi, Y.; Mihul, A.; Mills, G. 
B.; Mir, Y.; Mirabelli, G.; Mnich, J.; Möller, M.; Monteleoni, B.; Morand, R.; Morganti, S.; Moulai, N. E.; Mount, R.; Müller, S.; Nadtochy, A.; Nagy, E.; Napolitano, M.; Nessi-Tedaldi, F.; Newman, H.; Neyer, C.; Niaz, M. A.; Nippe, A.; Nowak, H.; Organtini, G.; Pandoulas, D.; Paoletti, S.; Paolucci, P.; Pascale, G.; Passaleva, G.; Patricelli, S.; Paul, T.; Pauluzzi, M.; Paus, C.; Pauss, F.; Pei, Y. J.; Pensotti, S.; Perret-Gallix, D.; Perrier, J.; Pevsner, A.; Piccolo, D.; Pieri, M.; Piroué, P. A.; Plasil, F.; Plyaskin, V.; Pohl, M.; Pojidaev, V.; Postema, H.; Qi, Z. D.; Qian, J. M.; Qureshi, K. N.; Raghavan, R.; Rahal-Callot, G.; Rancoita, P. G.; Rattaggi, M.; Raven, G.; Razis, P.; Read, K.; Ren, D.; Ren, Z.; Rescigno, M.; Reucroft, S.; Ricker, A.; Riemann, S.; Riemers, B. C.; Riles, K.; Rind, O.; Rizvi, H. A.; Ro, S.; Rodriguez, F. J.; Roe, B. P.; Röhner, M.; Romero, L.; Rosier-Lees, S.; Rosmalen, R.; Rosselet, Ph.; van Rossum, W.; Roth, S.; Rubbia, A.; Rubio, J. A.; Rykaczewski, H.; Sachwitz, M.; Salicio, J.; Salicio, J. M.; Sanders, G. S.; Santocchia, A.; Sarakinos, M. S.; Sartorelli, G.; Sassowsky, M.; Sauvage, G.; Schegelsky, V.; Schmitz, D.; Schmitz, P.; Schneegans, M.; Schopper, H.; Schotanus, D. J.; Shotkin, S.; Schreiber, H. J.; Shukla, J.; Schulte, R.; Schulte, S.; Schultze, K.; Schwenke, J.; Schwering, G.; Sciacca, C.; Scott, I.; Sehgal, R.; Seiler, P. G.; Sens, J. C.; Servoli, L.; Sheer, I.; Shen, D. Z.; Shevchenko, S.; Shi, X. R.; Shumilov, E.; Shoutko, V.; Son, D.; Sopczak, A.; Soulimov, V.; Spartiotis, C.; Spickermann, T.; Spillantini, P.; Starosta, R.; Steuer, M.; Stickland, D. P.; Sticozzi, F.; Stone, H.; Strauch, K.; Stringfellow, B. C.; Sudhakar, K.; Sultanov, G.; Sun, L. Z.; Susinno, G. F.; Suter, H.; Swain, J. D.; Syed, A. A.; Tang, X. W.; Taylor, L.; Terzi, G.; Ting, Samuel C. C.; Ting, S. M.; Tonutti, M.; Tonwar, S. C.; Tóth, J.; Tsaregorodtsev, A.; Tsipolitis, G.; Tully, C.; Tung, K. 
L.; Ulbricht, J.; Urbán, L.; Uwer, U.; Valente, E.; Van de Walle, R. T.; Vetlitsky, I.; Viertel, G.; Vikas, P.; Vikas, U.; Vivargent, M.; Vogel, H.; Vogt, H.; Vorobiev, I.; Vorobyov, A. A.; Vuilleumier, L.; Wadhwa, M.; Wallraff, W.; Wang, C.; Wang, C. R.; Wang, X. L.; Wang, Y. F.; Wang, Z. M.; Warner, C.; Weber, A.; Weber, J.; Weill, R.; Wenaus, T. J.; Wenninger, J.; White, M.; Willmott, C.; Wittgenstein, F.; Wright, D.; Wu, S. X.; Wynhoff, S.; Wysłouch, B.; Xie, Y. Y.; Xu, J. G.; Xu, Z. Z.; Xue, Z. L.; Yan, D. S.; Yang, B. Z.; Yang, C. G.; Yang, G.; Ye, C. H.; Ye, J. B.; Ye, Q.; Yeh, S. C.; Yin, Z. W.; You, J. M.; Yunus, N.; Yzerman, M.; Zaccardelli, C.; Zaitsev, N.; Zemp, P.; Zeng, M.; Zeng, Y.; Zhang, D. H.; Zhang, Z. P.; Zhou, B.; Zhou, G. J.; Zhou, J. F.; Zhu, R. Y.; Zichichi, A.; van der Zwaan, B. C. C.; L3 Collaboration

    1993-11-01

    The average lifetime of b hadrons has been measured using the L3 detector at LEP, running at √s ≈ M_Z. A b-enriched sample was obtained from 432538 hadronic Z events collected in 1990 and 1991 by tagging electrons and muons from semileptonic b hadron decays. From maximum likelihood fits to the electron and muon impact parameter distributions, the average b hadron lifetime was measured to be τ_b = (1535 ± 35 ± 28) fs, where the first error is statistical and the second includes both the experimental and the theoretical systematic uncertainties.

  12. Calculations of canonical averages from the grand canonical ensemble.

    Science.gov (United States)

    Kosov, D S; Gelin, M F; Vdovin, A I

    2008-02-01

    Grand canonical and canonical ensembles become equivalent in the thermodynamic limit, but when the system size is finite the results obtained in the two ensembles deviate from each other. In many important cases, the canonical ensemble provides an appropriate physical description but it is often much easier to perform the calculations in the corresponding grand canonical ensemble. We present a method to compute averages in the canonical ensemble based on calculations of the expectation values in the grand canonical ensemble. The number of particles, which is fixed in the canonical ensemble, is not necessarily the same as the average number of particles in the grand canonical ensemble.
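The abstract's idea — compute in the grand canonical ensemble, then recover fixed-N canonical quantities — can be illustrated with the standard particle-number projection identity for independent fermionic levels: Z_N = (1/2π) ∫₀^{2π} dφ e^{-iNφ} Z_GC(φ), with Z_GC(φ) = Π_k (1 + e^{-βε_k} e^{iφ}). This is a generic textbook construction, not necessarily the specific method of the paper, and the level energies below are made up.

```python
import cmath
import itertools
import math

# Hypothetical single-particle levels, in units of kT (so beta = 1).
energies = [0.0, 0.5, 1.2, 2.0, 3.1]
beta = 1.0

def canonical_Z_direct(N):
    """Canonical partition function by brute-force enumeration of all
    N-particle (fermionic, occupation 0/1) configurations."""
    return sum(math.exp(-beta * sum(subset))
               for subset in itertools.combinations(energies, N))

def canonical_Z_projected(N, n_phi=4096):
    """Canonical Z extracted from the grand canonical ensemble by
    particle-number projection over the gauge angle phi."""
    total = 0.0 + 0.0j
    for j in range(n_phi):
        phi = 2.0 * math.pi * j / n_phi
        zgc = 1.0 + 0.0j
        for e in energies:
            zgc *= 1.0 + math.exp(-beta * e) * cmath.exp(1j * phi)
        total += cmath.exp(-1j * N * phi) * zgc
    return (total / n_phi).real

for N in range(len(energies) + 1):
    print(N, canonical_Z_direct(N), canonical_Z_projected(N))
```

Because Z_GC(φ) is a polynomial of degree 5 in e^{iφ}, the discrete Fourier sum extracts each fixed-N coefficient essentially exactly, so the two columns agree to rounding error.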

  13. Averaging in Parametrically Excited Systems – A State Space Formulation

    Directory of Open Access Journals (Sweden)

    Pfau Bastian

    2016-01-01

    Parametric excitation can lead to instabilities as well as to an improved stability behaviour, depending on whether a parametric resonance or anti-resonance is induced. In order to calculate the stability domains and boundaries, the method of averaging is applied. The problem is reformulated in a state-space representation, which allows a general handling of the averaging method, especially for systems with non-symmetric system matrices. It is highlighted that this approach can enhance the first-order approximation significantly. Two example systems are investigated: a generic mechanical system and a flexible rotor in journal bearings with adjustable geometry.

  14. HAT AVERAGE MULTIRESOLUTION WITH ERROR CONTROL IN 2-D

    Institute of Scientific and Technical Information of China (English)

    Sergio Amat

    2004-01-01

    Multiresolution representations of data are a powerful tool in data compression. For a proper adaptation to singularities, it is crucial to develop nonlinear methods which are not based on tensor products. The hat-average framework permits the development of adapted schemes for all types of singularities. In contrast with the wavelet framework, these representations cannot be considered a change of basis, and the stability theory requires different considerations. In this paper, non-separable two-dimensional hat-average multiresolution processing algorithms that ensure stability are introduced. Explicit error bounds are presented.
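The average-based (Harten-type) multiresolution structure behind such schemes can be sketched in one dimension with the crudest possible prediction operator: piecewise-constant prediction from coarse cell averages. The paper's hat-average and non-separable 2-D operators are more elaborate; this only shows the decompose/predict/detail mechanics and the perfect-reconstruction property.

```python
import numpy as np

def decompose(averages):
    """One level of an average-based multiresolution transform: coarse
    cell averages plus prediction details (perfect reconstruction)."""
    coarse = 0.5 * (averages[0::2] + averages[1::2])
    # Crudest prediction: each fine average is predicted by its coarse cell.
    details = averages[0::2] - coarse
    return coarse, details

def reconstruct(coarse, details):
    fine = np.empty(2 * coarse.size)
    fine[0::2] = coarse + details
    # Conservation of the cell average fixes the odd entries:
    fine[1::2] = 2.0 * coarse - fine[0::2]
    return fine

data = np.sin(np.linspace(0, np.pi, 16)) ** 2
c, d = decompose(data)
print(np.allclose(reconstruct(c, d), data))  # exact reconstruction
```

Compression comes from thresholding the small details; the stability question the paper addresses is how such thresholding errors propagate when the prediction operator is nonlinear.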

  15. Light shift averaging in paraffin-coated alkali vapor cells

    CERN Document Server

    Zhivun, Elena; Sudyka, Julia; Pustelny, Szymon; Patton, Brian; Budker, Dmitry

    2015-01-01

    Light shifts are an important source of noise and systematics in optically pumped magnetometers. We demonstrate that the long spin coherence time in paraffin-coated cells leads to spatial averaging of the light shifts over the entire cell volume. This renders the averaged light shift independent, under certain approximations, of the light-intensity distribution within the sensor cell. These results and the underlying mechanism can be extended to other spatially varying phenomena in anti-relaxation-coated cells with long coherence times.

  16. Modification of averaging process in GR: Case study flat LTB

    CERN Document Server

    Khosravi, Shahram; Mansouri, Reza

    2007-01-01

    We study the volume averaging of inhomogeneous metrics within GR and discuss its shortcomings such as gauge dependence, singular behavior as a result of caustics, and causality violations. To remedy these shortcomings, we suggest some modifications to this method. As a case study we focus on the inhomogeneous model of structured FRW based on a flat LTB metric. The effect of averaging is then studied in terms of an effective backreaction fluid. This backreaction fluid turns out to behave like a dark matter component, instead of dark energy as claimed in the literature.

  17. Generalized Sampling Series Approximation of Random Signals from Local Averages

    Institute of Scientific and Technical Information of China (English)

    SONG Zhanjie; HE Gaiyun; YE Peixin; YANG Deyun

    2007-01-01

    Signals are often of random character: since they cannot bear any information if they are predictable for all times t, they are usually modelled as stationary random processes. On the other hand, because of the inertia of the measurement apparatus, the sampled values obtained in practice may not be the precise values of the signal X(t) at times tk (k ∈ Z), but only local averages of X(t) near tk. In this paper, it is shown that a wide-sense (or weak-sense) stationary stochastic process can be approximated by generalized sampling series with local average samples.
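The premise — that an inertial device returns a local average rather than X(tk) — is easy to quantify for a smooth deterministic test signal: a symmetric local average differs from the point value by O(δ²) in the window half-width δ. A small numeric check, where the signal, the sample instant, and the window sizes are arbitrary choices, and a mean over a fine grid stands in for the averaging integral:

```python
import numpy as np

def local_average(f, t, delta, n=2001):
    """Mean of f over [t - delta, t + delta], approximated on a fine grid;
    a stand-in for the smeared sample a sluggish device returns."""
    u = np.linspace(t - delta, t + delta, n)
    return np.mean(f(u))

def signal(t):
    return np.sin(2.0 * np.pi * t) + 0.3 * np.cos(6.0 * np.pi * t)

tk = 0.37  # an arbitrary sampling instant
errors = [abs(local_average(signal, tk, d) - signal(tk))
          for d in (0.1, 0.05, 0.025)]
print(errors)  # shrinks roughly like delta**2
```

For a stochastic process the same smearing acts realization by realization, which is why reconstruction series built from such averaged samples can still converge to the process.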

  18. THEORETICAL CALCULATION OF THE RELATIVISTIC SUBCONFIGURATION-AVERAGED TRANSITION ENERGIES

    Institute of Scientific and Technical Information of China (English)

    张继彦; 杨向东; 杨国洪; 张保汉; 雷安乐; 刘宏杰; 李军

    2001-01-01

    A method for calculating the average energies of relativistic subconfigurations in highly ionized heavy atoms has been developed in the framework of the multiconfigurational Dirac-Fock theory. The method is then used to calculate the average transition energies of the spin-orbit-split 3d-4p transition of Co-like tungsten, the 3d-5f transition of Cu-like tantalum, and the 3d-5f transitions of Cu-like and Zn-like gold samples. The calculated results are in good agreement with those calculated with the relativistic parametric potential method and also with the experimental results.

  19. Quantum state discrimination using the minimum average number of copies

    CERN Document Server

    Slussarenko, Sergei; Li, Jun-Gang; Campbell, Nicholas; Wiseman, Howard M; Pryde, Geoff J

    2016-01-01

    In the task of discriminating between nonorthogonal quantum states from multiple copies, the key parameters are the error probability and the resources (number of copies) used. Previous studies have considered the task of minimizing the average error probability for fixed resources. Here we consider minimizing the average resources for a fixed admissible error probability. We derive a detection scheme optimized for the latter task, and experimentally test it, along with schemes previously considered for the former task. We show that, for our new task, our new scheme outperforms all previously considered schemes.

  20. Average quantum dynamics of closed systems over stochastic Hamiltonians

    CERN Document Server

    Yu, Li

    2011-01-01

    We develop a master equation formalism to describe the evolution of the average density matrix of a closed quantum system driven by a stochastic Hamiltonian. The average over random processes generally results in decoherence effects in closed system dynamics, in addition to the usual unitary evolution. We then show that, for an important class of problems in which the Hamiltonian is proportional to a Gaussian random process, the 2nd-order master equation yields exact dynamics. The general formalism is applied to study the examples of a two-level system, two atoms in a stochastic magnetic field and the heating of a trapped ion.
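For the "Hamiltonian proportional to a Gaussian random process" class, the simplest limiting case is a two-level system with a static Gaussian-distributed level splitting: each realization evolves unitarily, but the ensemble-averaged coherence decays under a Gaussian envelope. A Monte Carlo sketch of that average (the frequency and noise-strength values are made up, and a static random frequency stands in for a general random process):

```python
import numpy as np

rng = np.random.default_rng(1)

omega0, sigma = 1.0, 0.4   # hypothetical mean splitting and noise strength
t = np.linspace(0.0, 5.0, 101)

# For H = (omega/2) sigma_z with a static Gaussian-random omega, the
# off-diagonal density-matrix element of each realization evolves as
# exp(-i omega t); decoherence appears only after ensemble averaging.
omegas = rng.normal(omega0, sigma, size=100000)
coherence_mc = np.array([np.exp(-1j * omegas * ti).mean() for ti in t])

# The Gaussian average done analytically: oscillation times a Gaussian envelope.
coherence_exact = np.exp(-1j * omega0 * t - 0.5 * sigma ** 2 * t ** 2)

print(np.max(np.abs(coherence_mc - coherence_exact)))  # small Monte Carlo error
```

This is the sense in which averaging over randomness produces decoherence-like dynamics in a closed system: no single trajectory loses purity, only the averaged density matrix does.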

  1. MAIN STAGES OF THE SCIENTIFIC AND INDUSTRIAL DEVELOPMENT OF THE MIDDLE URALS TERRITORY

    Directory of Open Access Journals (Sweden)

    V.S. Bochko

    2006-09-01

    Full Text Available The article considers the formation of the Middle Urals as an industrial territory on the basis of its scientific study and industrial development. It is shown that Russian and foreign scientists studied the resources of the Urals and the living conditions of its population in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to a systematic organizational-economic study of the productive forces, society and nature of the Middle Urals. More attention is now directed to the new problems of the region and the need for their scientific solution.

  2. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
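The distinction between the two averages can be illustrated numerically: when the particle number n fluctuates between realizations and correlates with the averaged quantity, the phasic and mass-weighted averages differ (toy numbers, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "realization" of the control volume holds a random number of grains n_k,
# with the per-realization mean grain velocity u_k correlated with n_k.
n = rng.integers(5, 50, size=10000).astype(float)       # grain counts
u = 1.0 + 0.01 * n + rng.normal(0, 0.05, size=n.size)   # velocity vs. n

phasic = u.mean()                        # plain ensemble (phasic) average
mass_weighted = (n * u).sum() / n.sum()  # concentration-weighted average

# They differ by Cov(n, u) / mean(n), nonzero because n fluctuates
print(phasic, mass_weighted)
```

With a single realization (a single fixed n) the two expressions collapse to the same number, exactly as the abstract argues.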

  3. Averages of B-Hadron, C-Hadron, and tau-lepton properties as of early 2012

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2012-07-01

    This article reports world averages of measurements of b-hadron, c-hadron, and tau-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through the end of 2011. In some cases results available in the early part of 2012 are included. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays and CKM matrix elements.
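The kind of correlation-aware combination described here can be illustrated with a best linear unbiased estimator (BLUE) average; the numerical inputs below are invented for illustration and are not HFAG values:

```python
import numpy as np

def blue_average(values, cov):
    """Best linear unbiased estimate of one quantity from correlated
    measurements: weights w = C^{-1} 1 / (1^T C^{-1} 1)."""
    cinv = np.linalg.inv(np.asarray(cov, dtype=float))
    ones = np.ones(len(values))
    w = cinv @ ones / (ones @ cinv @ ones)
    avg = w @ np.asarray(values, dtype=float)
    err = np.sqrt(1.0 / (ones @ cinv @ ones))
    return avg, err

# two hypothetical lifetime measurements with a correlated systematic
vals = [1.519, 1.530]
cov = [[0.004**2, 0.5 * 0.004 * 0.006],
       [0.5 * 0.004 * 0.006, 0.006**2]]
print(blue_average(vals, cov))
```

For uncorrelated inputs this reduces to the familiar inverse-variance weighted mean; the off-diagonal covariance terms are how "known correlations are taken into account".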

  4. CB(1) blockade-induced weight loss over 48 weeks decreases liver fat in proportion to weight loss in humans.

    Science.gov (United States)

    Bergholm, R; Sevastianova, K; Santos, A; Kotronen, A; Urjansson, M; Hakkarainen, A; Lundbom, J; Tiikkainen, M; Rissanen, A; Lundbom, N; Yki-Järvinen, H

    2013-05-01

    Studies in mice have suggested that endocannabinoid blockade using the cannabinoid receptor type 1 (CB1) blocker rimonabant prevents obesity-induced hepatic steatosis. To determine the effects of rimonabant on liver fat in humans, we measured liver fat content by proton magnetic resonance spectroscopy in 37 subjects who used either the CB1 blocker rimonabant or placebo in a double-blind, randomized manner. This was retrospectively compared with a historical hypocaloric diet weight loss group (n=23). Weight loss averaged 8.5±1.4 kg in the rimonabant, 1.7±1.0 kg in the placebo and 7.5±0.2 kg in the hypocaloric diet group. Liver fat decreased more in the rimonabant group (5.9% (2.5-14.6%) vs 1.8% (0.9-3.5%), before vs after) than in the placebo group (6.8% (2.2-15.7%) vs 4.9% (1.6-7.8%), before vs after). Weight loss was correlated with the loss of liver fat (r=0.70, P<0.0001). The decreases in liver fat were comparable between the rimonabant and the historical hypocaloric diet groups. We conclude that, unlike in mice, in humans rimonabant decreases liver fat in proportion to weight loss.

  5. COMMUNICATIONS GROUP

    CERN Multimedia

    L. Taylor

    2011-01-01

    The CMS Communications Group has been busy in all three areas of its responsibility: (1) Communications Infrastructure, (2) Information Systems, and (3) Outreach and Education. Communications Infrastructure The 55 CMS Centres worldwide are well used by physicists working on remote CMS shifts, Computing operations, data quality monitoring, data analysis and outreach. The CMS Centre@CERN in Meyrin is the centre of the CMS Offline and Computing operations, and a number of subdetector shifts can now take place there, rather than in the main Control Room at P5. A new CMS meeting room has been equipped for videoconferencing in building 42, next to building 40. Our building 28 meeting room and the facilities at P5 will be refurbished soon and plans are underway to steadily upgrade the ageing equipment in all 15 CMS meeting rooms at CERN. The CMS evaluation of the Vidyo tool indicates that it is not yet ready to be considered as a potential replacement for EVO. The Communications Group provides the CMS-TV (web) cha...

  6. COMMUNICATIONS GROUP

    CERN Multimedia

    L. Taylor

    2010-01-01

    The CMS Communications Group, established at the start of 2010, has been strengthening the activities in all three areas of its responsibility: (1) Communications Infrastructure, (2) Information Systems, and (3) Outreach and Education. Communications Infrastructure The Communications Group has invested a lot of effort to support the operations needs of CMS. Hence, the CMS Centres where physicists work on remote CMS shifts, Data Quality Monitoring, and Data Analysis are running very smoothly. There are now 55 CMS Centres worldwide, up from just 16 at the start of CMS data-taking. The latest to join are Imperial College London, the University of Iowa, and the Università di Napoli. The CMS Centre@CERN in Meyrin, which is now fully repaired after the major flooding at the beginning of the year, has been at the centre of CMS offline and computing operations, most recently hosting a large fraction of the CMS Heavy Ion community during the lead-lead run. A number of sub-detector shifts can now take pla...

  7. Evaluation of Average Wall Thickness of Organically Modified Mesoporous Silica

    Institute of Scientific and Technical Information of China (English)

    Yan Jun GONG; Zhi Hong LI; Bao Zhong DONG

    2005-01-01

    The small angle X-ray scattering of organically modified MSU-X silica, prepared by co-condensation of tetraethoxysilane (TEOS) and methyltriethoxysilane (MTES), shows a negative deviation from Debye's theory due to the existence of the organic interface layer. By correcting for this negative deviation, the Debye relation can be recovered, and the average wall thickness of the material evaluated.

  8. Climate Prediction Center (CPC) Zonally Average 500 MB Temperature Anomalies

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is one of the CPC's Monthly Atmospheric and SST Indices. It is the 500-hPa temperature anomalies averaged over the latitude band 20°N–20°S. The anomalies are...

  9. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotics to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth.

  10. 75 FR 78157 - Farmer and Fisherman Income Averaging

    Science.gov (United States)

    2010-12-15

    ... computing income tax liability. The regulations reflect changes made by the American Jobs Creation Act of 2004 and the Tax Extenders and Alternative Minimum Tax Relief Act of 2008. The regulations provide...) relating to the averaging of farm and fishing income in computing tax liability. A notice of proposed...

  11. The First Steps with Alexia, the Average Lexicographic Value

    NARCIS (Netherlands)

    Tijs, S.H.

    2005-01-01

    The new value AL for balanced games is discussed, which is based on averaging the lexicographic maxima of the core. Exactifications of games play a special role in finding interesting relations of AL with other solution concepts for various classes of games, such as convex games, big boss games, and simplex games.

  12. Maximum Likelihood Estimation of Multivariate Autoregressive-Moving Average Models.

    Science.gov (United States)

    1977-02-01

    maximizing the same have been proposed i) in the time domain by Box and Jenkins [4], Astrom [3], Wilson [23], and Phadke [16], and ii) in the frequency domain by... moving average residuals and other covariance matrices with linear structure", Annals of Statistics, 3. 3. Astrom, K. J. (1970), Introduction to

  13. Fixed Points of Averages of Resolvents: Geometry and Algorithms

    CERN Document Server

    Bauschke, Heinz H; Wylie, Calvin J S

    2011-01-01

    To provide generalized solutions if a given problem admits no actual solution is an important task in mathematics and the natural sciences. It has a rich history dating back to the early 19th century when Carl Friedrich Gauss developed the method of least squares of a system of linear equations - its solutions can be viewed as fixed points of averaged projections onto hyperplanes. A powerful generalization of this problem is to find fixed points of averaged resolvents (i.e., firmly nonexpansive mappings). This paper concerns the relationship between the set of fixed points of averaged resolvents and certain fixed point sets of compositions of resolvents. It partially extends recent work for two mappings on a question of C. Byrne. The analysis suggests a reformulation in a product space. Furthermore, two new algorithms are presented. A complete convergence proof that is based on averaged mappings is provided for the first algorithm. The second algorithm, which currently has no convergence proof, iterates a map...

  14. Averaging of random sets based on their distance functions

    NARCIS (Netherlands)

    Baddeley, A.J.; Molchanov, I.S.

    1995-01-01

    A new notion of expectation (or distance average) of random closed sets based on their distance function representation is introduced. A general concept of the distance function is exploited to define the expectation, which is the set whose distance function is closest to the expected distance funct

  15. Modeling of Sokoto Daily Average Temperature: A Fractional ...

    African Journals Online (AJOL)

    the daily average temperature (DAT) series of Sokoto metropolis for the period 01/01/2003 to 03/04/2007. ... in Melbourne, Australia, for the period 1981–1990 ... Advances in Meteorology, 1-2. Period ... paper, Department of Economics.

  16. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    -averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995...

  17. Ensemble Bayesian model averaging using Markov Chain Monte Carlo sampling

    NARCIS (Netherlands)

    Vrugt, J.A.; Diks, C.G.H.; Clark, M.

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In t

  18. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Full Text Available Before 1998, the definition of wages in Poland did not include social security contributions. The changed definition produces a higher level of reported wages, but was expected not to affect take-home pay. Nevertheless, after a short period, the trend of average wages returned to its previous path. This effect is interpreted in terms of money illusion.

  19. Discrete Averaging Relations for Micro to Macro Transition

    Science.gov (United States)

    Liu, Chenchen; Reina, Celia

    2016-05-01

    The well-known Hill's averaging theorems for stresses and strains as well as the so-called Hill-Mandel principle of macrohomogeneity are essential ingredients for the coupling and the consistency between the micro and macro scales in multiscale finite element procedures (FE$^2$). We show in this paper that these averaging relations hold exactly under standard finite element discretizations, even if the stress field is discontinuous across elements and the standard proofs based on the divergence theorem are no longer suitable. The discrete averaging results are derived for the three classical types of boundary conditions (affine displacement, periodic and uniform traction boundary conditions) using the properties of the shape functions and the weak form of the microscopic equilibrium equations. The analytical proofs are further verified numerically through a simple finite element simulation of an irregular representative volume element undergoing large deformations. Furthermore, the proofs are extended to include the effects of body forces and inertia, and the results are consistent with those in the smooth continuum setting. This work provides a solid foundation to apply Hill's averaging relations in multiscale finite element methods without introducing an additional error in the scale transition due to the discretization.
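For reference, the averaging relations at issue can be stated in their standard continuum form (notation assumed, not copied from the paper): the macroscopic stress and strain are volume averages over the representative volume element, and the Hill–Mandel condition equates the average of the microscopic work with the work of the averages.

```latex
% Volume averages over the representative volume element \Omega
\langle \boldsymbol{\sigma} \rangle
  = \frac{1}{|\Omega|}\int_{\Omega} \boldsymbol{\sigma}\,\mathrm{d}V ,
\qquad
\langle \boldsymbol{\varepsilon} \rangle
  = \frac{1}{|\Omega|}\int_{\Omega} \boldsymbol{\varepsilon}\,\mathrm{d}V
% Hill--Mandel macrohomogeneity condition
\langle \boldsymbol{\sigma} : \boldsymbol{\varepsilon} \rangle
  = \langle \boldsymbol{\sigma} \rangle : \langle \boldsymbol{\varepsilon} \rangle
```

The paper's contribution is showing that these identities continue to hold exactly for the discretized fields of a standard finite element computation.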

  20. The background effective average action approach to quantum gravity

    DEFF Research Database (Denmark)

    D’Odorico, G.; Codello, A.; Pagani, C.

    2016-01-01

    of a UV-attractive non-Gaussian fixed point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016.

  1. Average subentropy, coherence and entanglement of random mixed quantum states

    Science.gov (United States)

    Zhang, Lin; Singh, Uttam; Pati, Arun K.

    2017-02-01

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states by invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource: the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and the distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this class of mixed states.
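The growth of average pure-state coherence with dimension can be checked with a quick Monte Carlo sketch (relative entropy of coherence computed in nats; the sampling scheme and sample sizes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def avg_pure_state_coherence(d, n_samples=2000):
    """Average relative entropy of coherence of Haar-random pure states.

    For a pure state, C(rho) = S(diag(rho)) - S(rho) reduces to the Shannon
    entropy of the populations |psi_i|^2, since S(rho) = 0.
    """
    total = 0.0
    for _ in range(n_samples):
        # normalized complex Gaussian vector ~ Haar-random pure state
        psi = rng.normal(size=d) + 1j * rng.normal(size=d)
        p = np.abs(psi) ** 2
        p /= p.sum()
        total += -(p * np.log(p)).sum()
    return total / n_samples

c4, c16 = avg_pure_state_coherence(4), avg_pure_state_coherence(16)
print(c4, c16)  # average coherence grows with dimension
```

This reproduces the qualitative claim for pure states; the mixed-state bound discussed in the abstract requires the induced-measure sampling treated in the paper.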

  2. Precalculating the average luminance of road surface in public lighting.

    NARCIS (Netherlands)

    Schreuder, D.A.

    1967-01-01

    The influence of the reflection properties of the road surface on the aspect of the street lighting and the importance of the use of luminance has been shown. A method is described with which the value to be expected of the average road surface luminance can be easily found.

  3. Average Number of Coherent Modes for Pulse Random Fields

    CERN Document Server

    Lazaruk, A M; Lazaruk, Alexander M.; Karelin, Nikolay V.

    1997-01-01

    Some consequences of spatio-temporal symmetry for the deterministic decomposition of complex light fields into factorized components are considered. This makes it possible to reveal interrelations between the spatial and temporal coherence properties of the wave. An estimate of the average number of decomposition terms is obtained for the case of a statistical ensemble of light pulses.

  4. Fully variational average atom model with ion-ion correlations.

    Science.gov (United States)

    Starrett, C E; Saumon, D

    2012-02-01

    An average atom model for dense ionized fluids that includes ion correlations is presented. The model assumes spherical symmetry and is based on density functional theory, the integral equations for uniform fluids, and a variational principle applied to the grand potential. Starting from density functional theory for a mixture of classical ions and quantum mechanical electrons, an approximate grand potential is developed, with an external field being created by a central nucleus fixed at the origin. Minimization of this grand potential with respect to electron and ion densities is carried out, resulting in equations for effective interaction potentials. A third condition resulting from minimizing the grand potential with respect to the average ion charge determines the noninteracting electron chemical potential. This system is coupled to a system of point ions and electrons with an ion fixed at the origin, and a closed set of equations is obtained. Solution of these equations results in a self-consistent electronic and ionic structure for the plasma as well as the average ionization, which is continuous as a function of temperature and density. Other average atom models are recovered by application of simplifying assumptions.

  5. Average local values and local variances in quantum mechanics

    CERN Document Server

    Muga, J G; Sala, P R

    1998-01-01

    Several definitions for the average local value and local variance of a quantum observable are examined and compared with their classical counterparts. An explicit way to construct an infinite number of these quantities is provided. It is found that different classical conditions may be satisfied by different definitions, but none of the quantum definitions examined is entirely consistent with all classical requirements.

  6. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new datamining problem: finding the subset of the input space with the relatively highest output, where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of asymmetry in the error distribution. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.
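A brute-force toy version of the stated problem, restricted to one input dimension and axis-aligned intervals (this is an illustration of the objective only, not the paper's fencing algorithm):

```python
import numpy as np

def best_interval(x, y, m):
    """Search over 1-D intervals [x_i, x_j] containing at least m points,
    maximizing the mean output inside the interval."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    csum = np.concatenate([[0.0], np.cumsum(ys)])
    best = (-np.inf, None)
    n = len(xs)
    for i in range(n):
        for j in range(i + m - 1, n):            # at least m points inside
            mean = (csum[j + 1] - csum[i]) / (j + 1 - i)
            if mean > best[0]:
                best = (mean, (xs[i], xs[j]))
    return best

rng = np.random.default_rng(7)
x = rng.uniform(-3, 3, 400)
y = np.exp(-(x - 1.0) ** 2) + rng.normal(0, 0.2, 400)    # peak near x = 1
mean, (lo, hi) = best_interval(x, y, m=40)
print(mean, lo, hi)  # the "fenced" interval concentrates around the peak
```

The minimum-size constraint m is what distinguishes this from simply picking the single best point, and it is the constraint the fencing algorithm has to respect.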

  7. A Formula of Average Path Length for Unweighted Networks

    Institute of Scientific and Technical Information of China (English)

    LIU Chun-Ping; LIU Yu-Rong; HE Da-Ren; ZHU Lu-Jin

    2008-01-01

    In this paper, based on the adjacency matrix of the network and its powers, the formulas are derived for the shortest path and the average path length, and an effective algorithm is presented. Furthermore, an example is provided to demonstrate the proposed method.
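The adjacency-power idea can be sketched as follows: d(i, j) is the smallest k for which (A^k)_{ij} > 0. This is a naive implementation for small unweighted, connected graphs, written from that definition rather than from the paper:

```python
import numpy as np

def average_path_length(A):
    """Average shortest-path length over all ordered pairs i != j."""
    n = A.shape[0]
    dist = np.zeros((n, n), dtype=int)
    found = np.eye(n, dtype=bool)       # pairs whose distance is known
    Ak = np.eye(n, dtype=int)
    for k in range(1, n):
        Ak = (Ak @ A > 0).astype(int)   # support of A^k (avoids overflow)
        newly = (Ak > 0) & ~found
        dist[newly] = k
        found |= newly
        if found.all():
            break
    return dist.sum() / (n * (n - 1))

# path graph 1-2-3-4: unordered-pair distances 1,1,1,2,2,3, average 10/6
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(average_path_length(A))  # → 1.666...
```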

  8. HIGH AVERAGE POWER UV FREE ELECTRON LASER EXPERIMENTS AT JLAB

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle D; Tennant, Christopher

    2012-07-01

    Having produced 14 kW of average power at ~2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  9. Multiscale Gossip for Efficient Decentralized Averaging in Wireless Packet Networks

    CERN Document Server

    Tsianos, Konstantinos I

    2010-01-01

    This paper describes and analyzes a hierarchical gossip algorithm for solving the distributed average consensus problem in wireless sensor networks. The network is recursively partitioned into subnetworks. Initially, nodes at the finest scale gossip to compute local averages. Then, using geographic routing to enable gossip between nodes that are not directly connected, these local averages are progressively fused up the hierarchy until the global average is computed. We show that the proposed hierarchical scheme with $k$ levels of hierarchy is competitive with state-of-the-art randomized gossip algorithms in terms of message complexity, achieving $\epsilon$-accuracy with high probability after $O\big(n \log \log n \log \frac{kn}{\epsilon}\big)$ messages. Key to our analysis is the way in which the network is recursively partitioned. We find that the optimal scaling law is achieved when subnetworks at scale $j$ contain $O(n^{(2/3)^j})$ nodes; then the message complexity at any individual scale is $O(n \log \...
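For context, the baseline pairwise randomized gossip that such hierarchical schemes improve upon can be sketched as follows (ring topology and iteration count invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def pairwise_gossip(x, edges, n_rounds=50000):
    """Baseline randomized gossip: repeatedly pick a random edge and replace
    both endpoint values by their mean. The sum is preserved, so all nodes
    converge to the global average."""
    x = x.astype(float).copy()
    for _ in range(n_rounds):
        i, j = edges[rng.integers(len(edges))]
        m = 0.5 * (x[i] + x[j])
        x[i] = x[j] = m
    return x

n = 20
edges = [(i, (i + 1) % n) for i in range(n)]   # ring topology
x0 = rng.normal(size=n)
x = pairwise_gossip(x0, edges)
print(x0.mean(), x.max() - x.min())  # spread collapses toward zero
```

On poorly connected topologies like the ring this baseline needs many exchanges, which is exactly the inefficiency the multiscale partitioning is designed to avoid.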

  10. An averaging method for nonlinear laminar Ekman layers

    DEFF Research Database (Denmark)

    Andersen, Anders Peter; Lautrup, B.; Bohr, T.

    2003-01-01

    We study steady laminar Ekman boundary layers in rotating systems using an averaging method similar to the technique of von Karman and Pohlhausen. The method allows us to explore nonlinear corrections to the standard Ekman theory even at large Rossby numbers. We consider both the standard self

  12. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
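The core idea can be demonstrated in a few lines: averaging N synchronized repetitions leaves the coherent signal intact while the residual noise standard deviation falls roughly as 1/sqrt(N) (toy data, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(4)

# A repeating deterministic signal buried in Gaussian noise.
t = np.linspace(0, 1, 500, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)            # the "evoked response"
n_trials = 400
trials = signal + rng.normal(0, 2.0, size=(n_trials, t.size))  # SNR << 1

avg = trials.mean(axis=0)   # coherent part adds, noise averages out

resid_single = np.std(trials[0] - signal)   # noise level of one trial
resid_avg = np.std(avg - signal)            # residual noise after averaging
print(resid_single, resid_avg, resid_single / np.sqrt(n_trials))
```

The last two printed numbers should be close, reflecting the sqrt(N) noise-reduction law that motivates the technique.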

  13. Enhancing the performance of exponentially weighted moving average charts: discussion

    NARCIS (Netherlands)

    Abbas, N.; Riaz, M.; Does, R.J.M.M.

    2015-01-01

    Abbas et al. (Abbas N, Riaz M, Does RJMM. Enhancing the performance of EWMA charts. Quality and Reliability Engineering International 2011; 27(6):821-833) proposed the use of signaling schemes with exponentially weighted moving average charts (named as 2/2 and modified − 2/3 schemes) for their impro
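The EWMA chart statistic that such signaling schemes build on can be sketched as follows (a minimal textbook implementation with standard time-varying limits; this is not the 2/2 or 2/3 runs-rule scheme of the discussed paper, and the parameter names are assumptions):

```python
import numpy as np

def ewma_chart(x, lam=0.2, mu0=0.0, sigma=1.0, L=3.0):
    """EWMA control chart: z_i = lam*x_i + (1-lam)*z_{i-1}, z_0 = mu0,
    with limits mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2i)))."""
    z = np.empty(len(x))
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
    i = np.arange(1, len(x) + 1)
    half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, mu0 - half, mu0 + half

# In-control data, then a mean shift of 1.5 sigma at observation 50.
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.5, 1, 20)])
z, lcl, ucl = ewma_chart(x)
signals = np.nonzero((z < lcl) | (z > ucl))[0]
print(signals[0] if signals.size else None)  # index of first signal, if any
```

Runs-rule enhancements of the kind discussed by Abbas et al. add signaling conditions on consecutive points inside warning zones, on top of this basic statistic.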

  14. Relaxing monotonicity in the identification of local average treatment effects

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

    In heterogeneous treatment effect models with endogeneity, the identification of the local average treatment effect (LATE) typically relies on an instrument that satisfies two conditions: (i) joint independence of the potential post-instrument variables and the instrument and (ii) monotonicity...

  15. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  16. Multiscale correlations and conditional averages in numerical turbulence

    NARCIS (Netherlands)

    Grossmann, Siegfried; Lohse, Detlef; Reeh, Achim

    2000-01-01

    The equations of motion for the nth order velocity differences raise the interest in correlation functions containing both large and small scales simultaneously. We consider the scaling of such objects and also their conditional average representation with emphasis on the question of whether they be

  17. Model uncertainty and Bayesian model averaging in vector autoregressive processes

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2006-01-01

    textabstractEconomic forecasts and policy decisions are often informed by empirical analysis based on econometric models. However, inference based upon a single model, when several viable models exist, limits its usefulness. Taking account of model uncertainty, a Bayesian model averaging procedure i

  18. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  19. Average and local structure of selected metal deuterides

    Energy Technology Data Exchange (ETDEWEB)

    Soerby, Magnus H.

    2005-07-01

    The main topic of this thesis is improved understanding of site preferences and mutual interactions of deuterium (D) atoms in selected metallic metal deuterides. The work was partly motivated by reports of abnormally short D-D distances in RENiInD1.33 compounds (RE = rare-earth element; D-D ≈ 1.6 Å), which show that the so-called Switendick criterion, demanding a D-D separation of at least 2 Å, is not a universal rule. The work is experimental and heavily based on scattering measurements using x-rays (lab and synchrotron) and neutrons. In order to enhance data quality, deuterium is almost exclusively used instead of natural hydrogen in sample preparations. The data analyses are in some cases taken beyond "conventional" analysis of the Bragg scattering, as the diffuse scattering contains important information on D-D distances in disordered deuterides (Papers 3 and 4). A considerable part of this work is devoted to the determination of the crystal structure of saturated Zr2Ni deuteride, Zr2NiD4.8. The structure remained unsolved when only a few months of the scholarship remained. The route to the correct structure was found at the last moment. In Chapter II this winding road towards the structure determination is described; an interesting exercise in how to cope with triclinic superstructures of metal hydrides. The solution emerged by combining data from synchrotron radiation powder x-ray diffraction (SR-PXD), powder neutron diffraction (PND) and electron diffraction (ED). The triclinic crystal structure, described in space group P1, is fully ordered with composition Zr4Ni2D9 (Zr2NiD4.5). The unit cell is doubled compared to lower Zr2Ni deuterides due to a deuterium superstructure: a_super = a, b_super = b - c, c_super = b + c. The deviation from higher symmetry is very small. The metal lattice is pseudo-I-centred tetragonal and the deuterium lattice is pseudo-C-centred monoclinic. The deuterium site preference in Zr2Ni

  20. An extended car-following model accounting for the average headway effect in intelligent transportation system

    Science.gov (United States)

    Kuang, Hua; Xu, Zhi-Peng; Li, Xing-Li; Lo, Siu-Ming

    2017-04-01

    In this paper, an extended car-following model is proposed to simulate traffic flow by considering the average headway of the preceding vehicle group in an intelligent transportation systems environment. The stability condition of this model is obtained by using linear stability analysis. The phase diagram can be divided into three regions, classified as stable, metastable and unstable. The theoretical result shows that the average headway plays an important role in improving the stabilization of the traffic system. The mKdV equation near the critical point is derived to describe the evolution properties of traffic density waves by applying the reductive perturbation method. Furthermore, through simulation of the space-time evolution of the vehicle headway, it is shown that the traffic jam can be suppressed efficiently by taking into account the average headway effect, and the analytical result is consistent with the simulation.
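A minimal sketch of an optimal-velocity-type car-following rule in which the stimulus is evaluated at the average headway of the m preceding vehicles (the functional form and all parameters are assumptions for illustration, not the paper's model):

```python
import numpy as np

def optimal_velocity(h, v_max=2.0, h_c=4.0):
    """A common tanh-form optimal velocity function V(h) (parameters assumed)."""
    return 0.5 * v_max * (np.tanh(h - h_c) + np.tanh(h_c))

def acceleration(headways, v, n, m=3, kappa=0.5):
    """OV-type acceleration for vehicle n on a circular road, with the
    stimulus evaluated at the average headway of the m vehicles ahead."""
    N = len(headways)
    avg_h = np.mean([headways[(n + j) % N] for j in range(m)])
    return kappa * (optimal_velocity(avg_h) - v[n])

# Uniform flow (equal headways, v = V(h*)) is an equilibrium of the model.
N, h_star = 10, 4.0
headways = np.full(N, h_star)
v = np.full(N, optimal_velocity(h_star))
print(acceleration(headways, v, n=0))  # → 0.0
```

The linear stability analysis mentioned in the abstract perturbs exactly this kind of uniform-flow equilibrium; averaging over m > 1 headways smooths the stimulus, which is the mechanism behind the improved stability.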

  1. Oxidative Stress, Lipid Profile and Liver Functions in Average Egyptian Long Term Depo Medroxy Progesterone Acetate (DMPA Users

    Directory of Open Access Journals (Sweden)

    A. Bakeet

    2005-09-01

    Full Text Available Depo-medroxy progesterone acetate (DMPA, Depo-Provera®) is used in more than 80 countries as a long-acting contraceptive administered as a single intramuscular (i.m.) injection of 150 mg/3 months. The present study was set up to investigate the effects of DMPA on 80 average Egyptian women classified into four groups comprising those using the drug for one, two, three and four years, respectively, compared to a control group (N = 20) of married, non-hormonally treated women of similar ages. The drug showed a transient significant elevation of alanine aminotransferase (ALT) activity without an apparent effect on other liver indices, namely total bilirubin (T.Bil) level, and aspartate aminotransferase (AST) and alkaline phosphatase (ALP) activities. Only the low density/high density lipoprotein cholesterol ratio (LDLC/HDLC) was gradually and non-significantly increased in comparison to the control group; however, neither total cholesterol (TC) nor triglycerides (TG) were affected by the drug. The lipid peroxidation product malondialdehyde (MDA) was significantly and gradually elevated, with a corresponding decrease in reduced glutathione (GSH), without any change in blood nitric oxide (NO) levels. It can be concluded that DMPA may be considered a safe contraceptive medication for the studied group of women, but special care should be exercised for cardiovascular, hepatic and other patients more sensitive to the harmful effects of free radicals. Alternatively, supportive medications are advisable for each exposed case to guard against possible irreversible adverse effects of the drug with continuous use. In addition, annual re-evaluation is advisable despite the proven safety of the drug.

  2. Condition monitoring of gearboxes using synchronously averaged electric motor signals

    Science.gov (United States)

    Ottewill, J. R.; Orkisz, M.

    2013-07-01

    Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in minimizing potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is synchronous signal averaging, in which vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, we propose applying the synchronous signal averaging methodology to electric drive signals by synchronizing stator current signals with a shaft position estimated from current and voltage measurements. Initially, a test rig is introduced, based on an induction motor driving a two-stage reduction gearbox loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft-mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
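    The synchronous averaging step itself is independent of whether the signal is a vibration or a stator current: each rotation is resampled onto a common angular grid and the rotations are averaged, so shaft-synchronous components reinforce while asynchronous noise cancels. A minimal sketch, assuming a monotonic shaft-angle estimate is already available (function name and grid size are ours):

```python
import numpy as np

def synchronous_average(signal, angle, samples_per_rev=256):
    """Synchronously average `signal` against shaft `angle` (radians, monotonic).

    Each full rotation is resampled onto `samples_per_rev` equally spaced
    angular positions, then all rotations are averaged point-by-point.
    """
    revs = angle / (2 * np.pi)                    # angle in revolutions
    start = np.ceil(revs[0])
    n_revs = int(np.floor(revs[-1]) - start)
    grid = np.arange(samples_per_rev) / samples_per_rev   # fractions of a rev
    cycles = [np.interp(start + k + grid, revs, signal) for k in range(n_revs)]
    return np.mean(cycles, axis=0)
```

    Because the grid is angular rather than temporal, the average stays coherent even when the shaft accelerates, which is exactly the situation when the angle is estimated from drive signals rather than an encoder.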

  3. High-average-power diode-pumped Yb: YAG lasers

    Energy Technology Data Exchange (ETDEWEB)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-10-01

    A scalable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High-beam-quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 ns pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes; (2) compound laser rods with flanged non-absorbing endcaps fabricated by diffusion bonding; (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished-barrel rods.

  4. COGNITIVE THERAPY DECREASE THE LEVEL OF DEPRESSION

    Directory of Open Access Journals (Sweden)

    Ah. Yusuf

    2017-07-01

    Full Text Available Introduction: Aging is a natural process in individuals, and most of the elderly have problems dealing with it. Loss of occupation, loss of friends and loneliness may result in depression in this age group. Cognitive therapy changes the pessimistic ideas, unrealistic hopes and excessive self-evaluation that may produce and sustain depression. It may help the elderly to recognize problems in life, to develop positive life goals and to create a more positive personality. The aim of this study was to analyze the effect of cognitive therapy on reducing the level of depression. Method: This study used a pre-experimental pre-post test design. The sample comprised 10 elderly people who met the inclusion criteria. The independent variable was cognitive therapy and the dependent variable was the level of depression in the elderly. Data were collected using the Geriatric Depression Scale (GDS-15), then analyzed using the Wilcoxon Signed Rank Test with significance level α ≤ 0.05. Result: The result showed that cognitive therapy had an effect on reducing depression (p = 0.005). Discussion: It can be concluded that cognitive therapy was effective in reducing the depression level in the elderly. Further studies are recommended to analyze the effect of cognitive therapy on decreasing anxiety in the elderly by measuring catecholamine levels.

  5. GROUP LAZINESS: THE EFFECT OF SOCIAL LOAFING ON GROUP PERFORMANCE

    National Research Council Canada - National Science Library

    Xiangyu Ying; Huanhuan Li; Shan Jiang; Fei Peng; Zhongxin Lin

    2014-01-01

      Social loafing has been defined as a phenomenon in which people exhibit a sizable decrease in individual effort when performing in groups as compared to when they perform alone, and has been regarded...

  6. Can Diuretics Decrease Your Potassium Level?

    Science.gov (United States)

    ... and Conditions High blood pressure (hypertension) Can diuretics decrease your potassium level? Answers from Sheldon G. Sheps, ... D. Yes, some diuretics — also called water pills — decrease potassium in the blood. Diuretics are commonly used ...

  7. Glycine treatment decreases proinflammatory cytokines and increases interferon-gamma in patients with type 2 diabetes.

    Science.gov (United States)

    Cruz, M; Maldonado-Bernal, C; Mondragón-Gonzalez, R; Sanchez-Barrera, R; Wacher, N H; Carvajal-Sandoval, G; Kumate, J

    2008-08-01

    Amino acids have been shown to stimulate insulin secretion and decrease glycated hemoglobin (A1C) in patients with Type 2 diabetes. In vitro, glycine reduces tumor necrosis factor (TNF)-alpha secretion and increases interleukin-10 secretion in human monocytes stimulated with lipopolysaccharide. The aim of this study was to determine whether glycine modifies the proinflammatory profiles of patients with Type 2 diabetes. Seventy-four patients with Type 2 diabetes were enrolled in the study. The mean age was 58.5 yr, the average time since diagnosis was 5 yr, the mean body mass index was 28.5 kg/m2, the mean fasting glucose level was 175.5 mg/dl and the mean A1C level was 8%. They were allocated to one of two treatments, 5 g/d glycine or 5 g/d placebo, po tid, for 3 months. A1C levels of patients given glycine were significantly lower after 3 months of treatment than those of the placebo group. A significant reduction in TNF-receptor I levels was observed in patients given glycine compared with placebo. There was a decrease of 38% in the interferon (IFN)-gamma level of the group treated with placebo, whereas that of the group treated with glycine increased by up to 43%. These data showed that patients treated with glycine had a significant decrease in A1C and in proinflammatory cytokines, as well as an important increase in IFN-gamma. Treatment with glycine is likely to have a beneficial effect on innate and adaptive immune responses and may help prevent tissue damage caused by chronic inflammation in patients with Type 2 diabetes.

  8. Crystallization: A phase transition process driven by chemical potential decrease

    Science.gov (United States)

    Sun, Congting; Xue, Dongfeng

    2017-07-01

    A chemical bonding model is established to describe the chemical potential decrease during crystallization. In the nucleation stage, in situ molecular vibration spectroscopy shows increased vibration energy of the constituent groups, indicating shortened chemical bonds and a decreased chemical potential towards the formation of nuclei. Starting from the Gibbs free energy formula, the chemical potential decrease during crystallization is quantified; it depends on the chemical bonding energy released per unit phase-transition zone. During crystal growth, the direction-dependent growth rate of inorganic single crystals can be quantitatively determined, and their anisotropic thermodynamic morphology can thus be constructed on the basis of relative growth rates.

  9. Persistent Postconcussive Symptoms Are Accompanied by Decreased Functional Brain Oxygenation.

    Science.gov (United States)

    Helmich, Ingo; Saluja, Rajeet S; Lausberg, Hedda; Kempe, Mathias; Furley, Philip; Berger, Alisa; Chen, Jen-Kai; Ptito, Alain

    2015-01-01

    Reliable diagnostic methods remain a major concern in the determination of mild traumatic brain injury. The authors examined brain oxygenation patterns in subjects with severe and minor persistent postconcussive difficulties and in a healthy control group during working memory tasks in prefrontal brain regions, using functional near-infrared spectroscopy. The results demonstrated decreased working memory performance among concussed subjects with severe postconcussive symptoms, accompanied by decreased brain oxygenation patterns. An association appears to exist between decreased brain oxygenation, poor performance on working memory tasks, and increased symptom severity scores in subjects suffering from persistent postconcussive symptoms.

  10. Proton transport properties of poly(aspartic acid) with different average molecular weights

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Yuki, E-mail: ynagao@kuchem.kyoto-u.ac.j [Department of Mechanical Systems and Design, Graduate School of Engineering, Tohoku University, 6-6-01 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Imai, Yuzuru [Institute of Development, Aging and Cancer (IDAC), Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575 (Japan); Matsui, Jun [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan); Ogawa, Tomoyuki [Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, 6-6-05 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Miyashita, Tokuji [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan)

    2011-04-15

    Research highlights: Seven polymers with different average molecular weights were synthesized. The proton conductivity depended on the number-average degree of polymerization. The difference in proton conductivities was more than one order of magnitude. The number-average molecular weight contributed to the stability of the polymer. - Abstract: We synthesized seven partially protonated poly(aspartic acids)/sodium polyaspartates (P-Asp) with different average molecular weights to study their proton transport properties. The number-average degree of polymerization (DP) for each P-Asp was 30 (P-Asp30), 115 (P-Asp115), 140 (P-Asp140), 160 (P-Asp160), 185 (P-Asp185), 205 (P-Asp205), and 250 (P-Asp250). The proton conductivity depended on the number-average DP. The maximum and minimum proton conductivities at a relative humidity of 70% and 298 K were 1.7 × 10⁻³ S cm⁻¹ (P-Asp140) and 4.6 × 10⁻⁴ S cm⁻¹ (P-Asp250), respectively. Differential thermogravimetric analysis (TG-DTA) was carried out for each P-Asp. The results fell into two categories: one exhibited two endothermic peaks between t = 270 °C and 300 °C, the other exhibited only one peak. The P-Asp group with two endothermic peaks exhibited high proton conductivity. The high proton conductivity is related to the stability of the polymer. The number-average molecular weight also contributed to the stability of the polymer.

  11. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... fuel as determined in § 600.113-08(a) and (b); FEpet is the fuel economy while operated on petroleum... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS...
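    The fleet average defined in 40 CFR 600.510 is a production-weighted harmonic mean: total production is divided by the sum of production/FE ratios, which averages fuel consumption rather than fuel economy. A hedged sketch of that general form (the function name is ours, and this omits the petroleum/alternative-fuel weighting details elided in the fragment above):

```python
def fleet_average_fuel_economy(volumes, fuel_economies):
    """Production-weighted harmonic mean of per-model fuel economies.

    Dividing total production by the sum of production/FE ratios averages
    fuel *consumption* per distance, so low-FE models pull the fleet
    average down more than an arithmetic mean would.
    """
    total = sum(volumes)
    return total / sum(v / fe for v, fe in zip(volumes, fuel_economies))
```

    For example, equal volumes of a 20 mpg and a 30 mpg model average to 24 mpg, not 25 mpg.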

  12. Averaging and renormalization for the Korteveg–deVries–Burgers equation

    Science.gov (United States)

    Chorin, Alexandre J.

    2003-01-01

    We consider traveling wave solutions of the Korteveg–deVries–Burgers equation and set up an analogy between the spatial averaging of these traveling waves and real-space renormalization for Hamiltonian systems. The result is an effective equation that reproduces means of the unaveraged, highly oscillatory, solution. The averaging enhances the apparent diffusion, creating an “eddy” (or renormalized) diffusion coefficient; the relation between the eddy diffusion coefficient and the original diffusion coefficient is found numerically to be one of incomplete similarity, setting up an instance of Barenblatt's renormalization group. The results suggest a relation between self-similar solutions of differential equations on one hand and renormalization groups and optimal prediction algorithms on the other. An analogy with hydrodynamics is pointed out. PMID:12913126

  13. Averaging and renormalization for the Korteveg-deVries-Burgers equation.

    Science.gov (United States)

    Chorin, Alexandre J

    2003-08-19

    We consider traveling wave solutions of the Korteveg-deVries-Burgers equation and set up an analogy between the spatial averaging of these traveling waves and real-space renormalization for Hamiltonian systems. The result is an effective equation that reproduces means of the unaveraged, highly oscillatory, solution. The averaging enhances the apparent diffusion, creating an "eddy" (or renormalized) diffusion coefficient; the relation between the eddy diffusion coefficient and the original diffusion coefficient is found numerically to be one of incomplete similarity, setting up an instance of Barenblatt's renormalization group. The results suggest a relation between self-similar solutions of differential equations on one hand and renormalization groups and optimal prediction algorithms on the other. An analogy with hydrodynamics is pointed out.

  14. Update of Ireland's national average indoor radon concentration - Application of a new survey protocol.

    Science.gov (United States)

    Dowdall, A; Murphy, P; Pollard, D; Fenton, D

    2017-04-01

    In 2002, a National Radon Survey (NRS) in Ireland established that the geographically weighted national average indoor radon concentration was 89 Bq m⁻³. Since then a number of developments have taken place which are likely to have impacted the national average radon level. Key among these was the introduction of amending Building Regulations in 1998 requiring radon preventive measures in new buildings in High Radon Areas (HRAs). In 2014, the Irish Government adopted the National Radon Control Strategy (NRCS) for Ireland. A knowledge gap identified in the NRCS was the need to update the national average for Ireland given the developments since 2002. The updated national average would also be used as a baseline metric to assess the effectiveness of the NRCS over time. A new national survey protocol was required that would measure radon in a sample of homes representative of radon risk and geographical location. The design of the survey protocol took into account that it is not feasible to repeat the 11,319 measurements carried out for the 2002 NRS due to time and resource constraints. However, the existence of that comprehensive survey allowed a new protocol to be developed, involving measurements carried out in unbiased, randomly selected volunteer homes. This paper sets out the development and application of that survey protocol. The results of the 2015 survey showed that the current national average indoor radon concentration for homes in Ireland is 77 Bq m⁻³, a decrease from the 89 Bq m⁻³ reported in the 2002 NRS. Analysis of the results by build date demonstrates that the introduction of the amending Building Regulations in 1998 has led to a reduction in the average indoor radon level in Ireland. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Effects of number of animals monitored on representations of cattle group movement characteristics and spatial occupancy.

    Science.gov (United States)

    Liu, Tong; Green, Angela R; Rodríguez, Luis F; Ramirez, Brett C; Shike, Daniel W

    2015-01-01

    The number of animals required to represent the collective characteristics of a group remains a concern in animal movement monitoring with GPS. Monitoring a subset of animals from a group instead of all animals can reduce costs and labor; however, incomplete data may cause information loss and inaccuracy in subsequent data analyses. In cattle studies, little work has been conducted to determine how many cattle within a group need to be instrumented given the intended subsequent analyses. Two different groups of cattle (a mixed group of 24 beef cows and heifers, and another group of 8 beef cows) were monitored with GPS collars at 4 min intervals on intensively managed pastures and corn residue fields in 2011. The effects of subset group size on cattle movement characterization and spatial occupancy analysis were evaluated by comparing the results between subset groups and the entire group for a variety of summarization parameters. As expected, more animals yield better results for all parameters. Results show that the average group travel speed and daily travel distances are overestimated as subset group size decreases, while the average group radius is underestimated. The accuracy of group centroid locations and group radii improves linearly as subset group size increases. A kernel density estimation was performed to quantify spatial occupancy by cattle via GPS location data; the results show that animals within the group had highly similar spatial occupancy. The choice of an appropriate subset group size for monitoring depends on the specific use of the data in subsequent analysis: a small subset group may be adequate for identifying areas visited by cattle, whereas a larger subset group (e.g. containing more than 75% of the animals) is recommended to achieve better accuracy of group movement characteristics and spatial occupancy when correlating cattle locations with other environmental factors.
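    The group centroid and group radius summaries discussed above can be computed per GPS fix, and the effect of instrumenting only k of n animals can be mimicked by random subsetting. A minimal sketch (function names are ours, not the study's code):

```python
import numpy as np

def group_metrics(positions):
    """Centroid and group radius (mean distance to centroid) for one
    time step of GPS fixes; `positions` has shape (n_animals, 2)."""
    centroid = positions.mean(axis=0)
    radius = np.linalg.norm(positions - centroid, axis=1).mean()
    return centroid, radius

def subset_metrics(positions, k, rng):
    """Same metrics from a random subset of k animals, mimicking a
    study that collars only part of the herd."""
    idx = rng.choice(len(positions), size=k, replace=False)
    return group_metrics(positions[idx])
```

    Comparing `subset_metrics` against `group_metrics` over many fixes and many values of k reproduces the kind of subset-size sensitivity analysis the study reports (e.g. underestimated radius for small k).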

  16. [Weighted-averaging multi-planar reconstruction method for multi-detector row computed tomography].

    Science.gov (United States)

    Aizawa, Mitsuhiro; Nishikawa, Keiichi; Sasaki, Keita; Kobayashi, Norio; Yama, Mitsuru; Sano, Tsukasa; Murakami, Shin-ichi

    2012-01-01

    Development of multi-detector row computed tomography (MDCT) has enabled three-dimensional (3D) scanning with minute voxels. Minute voxels improve the spatial resolution of CT images but at the same time increase image noise. Multi-planar reconstruction (MPR) is an effective 3D image-processing technique. The conventional MPR technique can adjust the slice thickness of MPR images: a thick slice decreases image noise but deteriorates spatial resolution. To deal with this trade-off, we developed the weighted-averaging multi-planar reconstruction (W-MPR) technique to control the balance between spatial resolution and noise. The weighted average is determined by a Gaussian-type weighting function. In this study, we compared the performance of W-MPR with that of conventional simple-addition-averaging MPR. As a result, we confirmed that W-MPR can decrease image noise without significant deterioration of spatial resolution. W-MPR can freely adjust the weight for each slice by changing the shape of the weighting function; it therefore allows a proper balance of spatial resolution and noise to be selected while producing MPR images suitable for observation of the targeted anatomical structures.
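    The Gaussian-weighted slice average at the heart of W-MPR can be sketched in a few lines (the paper's exact weighting function and parameters may differ; names here are ours):

```python
import numpy as np

def weighted_mpr(volume, center, sigma, axis=0):
    """Gaussian-weighted average of slices around `center` along `axis`.

    Nearby slices dominate the average, so image noise drops without the
    blur of a flat-weighted (simple-addition-averaging) thick slab;
    `sigma` plays the role of the W-MPR shape parameter.
    """
    n = volume.shape[axis]
    z = np.arange(n)
    w = np.exp(-0.5 * ((z - center) / sigma) ** 2)
    w /= w.sum()                         # normalize to unit gain
    return np.tensordot(w, np.moveaxis(volume, axis, 0), axes=1)
```

    Letting `sigma → 0` recovers a single thin slice (full resolution, full noise), while a large `sigma` approaches the flat thick-slab average, which is exactly the trade-off the weighting function controls.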

  17. COMMUNICATIONS GROUP

    CERN Multimedia

    L. Taylor

    2011-01-01

    Communications Infrastructure The 55 CMS Centres worldwide are well used by physicists working on remote CMS shifts, Computing operations, data quality monitoring, data analysis and outreach. The CMS Centre@CERN in Meyrin is particularly busy at the moment, hosting about 50 physicists taking part in the heavy-ion data-taking and analysis. Three new CMS meeting rooms will be equipped for videoconferencing in early 2012: 40/5B-08, 42/R-031, and 28/S-029. The CMS-TV service showing LHC Page 1, CMS Page 1, etc. (http://cmsdoc.cern.ch/cmscc/projector/index.jsp) is now also available for mobile devices: http://cern.ch/mcmstv. Figure 12: Screenshots of CMS-TV for mobile devices Information Systems CMS has a new web site: (http://cern.ch/cms) using a modern web Content Management System to ensure content and links are managed and updated easily and coherently. It covers all CMS sub-projects and groups, replacing the iCMS internal pages. It also incorporates the existing CMS public web site (http:/...

  18. COMMUNICATIONS GROUP

    CERN Multimedia

    L. Taylor

    2012-01-01

  Outreach and Education We are fortunate that our research has captured the public imagination, even though this inevitably puts us under the global media spotlight, as we saw with the Higgs seminar at CERN in December, which had 110,000 distinct webcast viewers. The media interest was huge with 71 media organisations registering to come to CERN to cover the Higgs seminar, which was followed by a press briefing with the DG and Spokespersons. This event resulted in about 2,000 generally positive stories in the global media. For this seminar, the CMS Communications Group prepared up-to-date news and public material, including links to the CMS results, animations and event displays [http://cern.ch/go/Ch8t]. There were 44,000 page-views on the CMS public website, with the Higgs news article being by far the most popular item. CMS event displays from iSpy are fast becoming the iconic media images, featuring on numerous major news outlets (BBC, CNN, MSN...) as well as in the sci...

  19. COMMUNICATIONS GROUP

    CERN Multimedia

    L. Taylor

    2010-01-01

    The recently established CMS Communications Group, led by Lucas Taylor, has been busy in all three of its main areas of responsibility: Communications Infrastructure, Information Systems, and Outreach and Education. Communications Infrastructure The damage caused by the flooding of the CMS Centre@CERN on 21st December has been completely repaired and all systems are back in operation. Major repairs were made to the roofs and ceilings, and one third of the floor had to be completely replaced. Throughout these works, the CMS Centre was kept operating and even hosted a major press event for the first 7 TeV collisions, as described below. Incremental work behind the scenes is steadily improving the quality of the CMS communications infrastructure, particularly Webcasting, video conferencing, and meeting rooms at CERN. CERN/IT is also deploying a pilot service of a new videoconference tool called Vidyo, to assess whether it might provide an enhanced service at a lower cost, compared to the EVO tool currently in w...

  20. Scaling registration of multiview range scans via motion averaging

    Science.gov (United States)

    Zhu, Jihua; Zhu, Li; Jiang, Zutao; Li, Zhongyu; Li, Chen; Zhang, Fan

    2016-07-01

    Three-dimensional modeling of a scene or object requires the registration of multiple range scans, which are obtained by a range sensor from different viewpoints. An approach is proposed for scaling registration of multiview range scans via motion averaging. First, a method is presented to estimate the overlap percentages of all scan pairs involved in multiview registration. Then, a variant of the iterative closest point algorithm is presented to calculate relative motions (scaling transformations) for the scan pairs with high overlap percentages. Subsequently, the proposed motion averaging algorithm transforms these relative motions into the global motions of multiview registration. Parallel computation is also introduced to increase the efficiency of multiview registration. Furthermore, an error criterion is presented for accuracy evaluation of multiview registration results, which makes it easy to compare the results of different multiview registration approaches. Experimental results on publicly available datasets demonstrate its superiority over related approaches.
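    The core of motion averaging is turning noisy relative motions measured on a graph of view pairs into a consistent set of global motions. This can be illustrated on the simplest possible case, scale factors alone, solved by least squares in log space (a toy sketch under our own naming, not the paper's algorithm, which averages full scaling transformations):

```python
import numpy as np

def average_scales(pairs, ratios, n):
    """Recover global scale factors s_0..s_{n-1} (gauge-fixed s_0 = 1)
    from noisy relative scales r_ij ≈ s_j / s_i observed on scan pairs.

    Taking logs turns each observation into a linear constraint
    x_j - x_i = log r_ij, so motion averaging reduces to least squares.
    """
    A, b = [], []
    for (i, j), r in zip(pairs, ratios):
        row = np.zeros(n)
        row[j] += 1.0
        row[i] -= 1.0
        A.append(row)
        b.append(np.log(r))
    A.append(np.eye(n)[0])   # gauge fix: log s_0 = 0
    b.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.exp(x)
```

    With more pairwise observations than unknowns, inconsistent measurements are reconciled in the least-squares sense, which is the same redundancy that makes averaging over many high-overlap scan pairs robust in the full registration problem.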