WorldWideScience

Sample records for higher average test

  1. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
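
    A minimal numerical sketch of the first-order case (a standard moving average as the detrending trend), using plain Brownian motion rather than the paper's fractional Brownian series; the window sizes, series length, and the causal moving average are illustrative choices, not the authors' setup:

```python
# Detrending moving average (DMA) sketch: sigma(n) ~ n^H, so the Hurst
# exponent H is the log-log slope of the residual fluctuation vs. window.
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(100_000))   # Brownian path, expect H ~ 0.5

windows = np.unique(np.logspace(1, 3, 20).astype(int))
sigma = []
for n in windows:
    kernel = np.ones(n) / n                   # simple moving average, window n
    ma = np.convolve(y, kernel, mode="valid")
    resid = y[n - 1:] - ma                    # detrended fluctuations
    sigma.append(np.sqrt(np.mean(resid ** 2)))

H = np.polyfit(np.log(windows), np.log(sigma), 1)[0]
print(f"estimated Hurst exponent: {H:.3f}")   # ~0.5 for Brownian motion
```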

  2. The true bladder dose: on average thrice higher than the ICRU reference

    International Nuclear Information System (INIS)

    Barillot, I.; Horiot, J.C.; Maingon, P.; Bone-Lepinoy, M.C.; D'Hombres, A.; Comte, J.; Delignette, A.; Feutray, S.; Vaillant, D.

    1996-01-01

    The aim of this study is to compare the ICRU dose to doses at the bladder base located from ultrasonography measurements. Since 1990, the dose delivered to the bladder during utero-vaginal brachytherapy was systematically calculated at 3 or 4 points representative of the bladder base determined with ultrasonography. The ICRU Reference Dose (IRD) from films, the Maximum Dose (Dmax), the Mean Dose (Dmean) representative of the dose received by a large area of bladder mucosa, the Reference Dose Rate (RDR) and the Mean Dose Rate (MDR) were recorded. Material: from 1990 to 1994, 198 measurements were performed in 152 patients; 98 patients were treated for cervix carcinomas and 54 for endometrial carcinomas. Methods: bladder complications were classified using the French-Italian Syllabus. The influence of doses and dose rates on complications was tested using a nonparametric t test. Results: on average, IRD is 21 Gy +/- 12 Gy, Dmax is 51 Gy +/- 21 Gy, and Dmean is 40 Gy +/- 16 Gy. On average, Dmax is thrice higher than IRD and Dmean twice higher than IRD. The same results are obtained for cervix and endometrium. Comparisons of dose rates were also performed: MDR is on average twice higher than RDR (RDR 48 cGy/h vs MDR 88 cGy/h). The five observed complications consist of incontinence only (3 G1, 1 G2, 1 G3). They are only statistically correlated with RDR, p=0.01 (46 cGy/h in patients without complications vs 74 cGy/h in patients with complications). However, the full responsibility of RT remains doubtful and should be shared with surgery in all cases. In summary: the bladder mucosa seems to tolerate much higher doses than previously recorded without increased risk of severe sequelae. However, this finding is probably explained by our efforts to spare most of the bladder mucosa by (1) customised external irradiation therapy (4 fields, full bladder) and (2) reproduction of physiologic bladder filling during brachytherapy by intermittent clamping of the Foley catheter.

  3. 76 FR 28947 - Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight, and Public Meeting...

    Science.gov (United States)

    2011-05-19

    ...-0015] RIN 2132-AB01 Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight, and... of proposed rulemaking (NPRM) regarding the calculation of average passenger weights and test vehicle... passenger weights and actual transit vehicle loads. Specifically, FTA proposed to change the average...

  4. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    Science.gov (United States)

    Gao, Peng

    2018-04-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to a fast time. This model can be translated into a multiscale stochastic partial differential equation. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out; the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.
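
    Schematically, the slow-fast structure described above can be written as follows; the notation (generic nonlinearities f and g, a formal linear part L, and a Wiener process W) is illustrative, not the paper's exact equations:

```latex
% Slow-fast system and its averaged limit (schematic notation)
\begin{align*}
  du^{\epsilon} &= \bigl[\, i\,\mathcal{L}\,u^{\epsilon}
      + f(u^{\epsilon}, v^{\epsilon}) \,\bigr]\, dt
      && \text{(slow: higher-order NLS)} \\
  dv^{\epsilon} &= \tfrac{1}{\epsilon}\,\bigl[\, \Delta v^{\epsilon}
      + g(v^{\epsilon}) \,\bigr]\, dt
      + \tfrac{1}{\sqrt{\epsilon}}\, dW_t
      && \text{(fast: stochastic reaction--diffusion)} \\
  d\bar{u} &= \bigl[\, i\,\mathcal{L}\,\bar{u}
      + \bar{f}(\bar{u}) \,\bigr]\, dt,
      \qquad \bar{f}(u) = \int f(u,v)\,\mu(dv)
      && \text{(averaged equation)}
\end{align*}
% Here \mu is the stationary (invariant) measure of the fast process,
% and u^\epsilon \to \bar{u} strongly as \epsilon \to 0.
```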

  6. Gender Gaps in High School GPA and ACT Scores: High School Grade Point Average and ACT Test Score by Subject and Gender. Information Brief 2014-12

    Science.gov (United States)

    ACT, Inc., 2014

    2014-01-01

    Female students who graduated from high school in 2013 averaged higher grades than their male counterparts in all subjects, but male graduates earned higher scores on the math and science sections of the ACT. This information brief looks at high school grade point average and ACT test score by subject and gender.

  7. Control of underactuated driftless systems using higher-order averaging theory

    OpenAIRE

    Vela, Patricio A.; Burdick, Joel W.

    2003-01-01

    This paper applies a recently developed "generalized averaging theory" to construct stabilizing feedback control laws for underactuated driftless systems. These controls exponentially stabilize in the average; the actual system may orbit around the average. Conditions under which the orbit collapses to the averaged trajectory are given. An example validates the theory, demonstrating its utility.

  8. Predicting Freshman Grade Point Average From College Admissions Test Scores and State High School Test Scores

    OpenAIRE

    Koretz, Daniel; Yu, C; Mbekeani, Preeya Pandya; Langi, M.; Dhaliwal, Tasminda Kaur; Braslow, David Arthur

    2016-01-01

    The current focus on assessing “college and career readiness” raises an empirical question: How do high school tests compare with college admissions tests in predicting performance in college? We explored this using data from the City University of New York and public colleges in Kentucky. These two systems differ in the choice of college admissions test, the stakes for students on the high school test, and demographics. We predicted freshman grade point average (FGPA) from high school GPA and both college admissions and high school tests in mathematics and English...

  9. A benchmark test of computer codes for calculating average resonance parameters

    International Nuclear Information System (INIS)

    Ribon, P.; Thompson, A.

    1983-01-01

    A set of resonance parameters has been generated from known, but secret, average values; the parameters have then been adjusted to mimic experimental data by including the effects of Doppler broadening, resolution broadening and statistical fluctuations. Average parameters calculated from the dataset by various computer codes are compared with each other, and also with the true values. The benchmark test is fully described in the report NEANDC160-U (NEA Data Bank Newsletter No. 27 July 1982); the present paper is a summary of this document. (Auth.)

  10. An Extended Quadratic Frobenius Primality Test with Average Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2001-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t iterations... We obtain numeric upper bounds for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.
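
    EQFT itself is intricate, but the Miller-Rabin yardstick the abstract compares against is easy to state. A minimal sketch of that baseline (standard algorithm; the small trial divisions are a convenience, not part of the paper):

```python
# Miller-Rabin probable-prime test: one EQFT iteration costs about 2 of
# these rounds but carries the error-reducing power of roughly 9.
import random

def miller_rabin(n: int, t: int = 2) -> bool:
    """True if n passes t rounds (probable prime), False if composite."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):          # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                       # write n-1 = d * 2^s, d odd
        d //= 2
        s += 1
    for _ in range(t):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                    # witness found: n is composite
    return True

print(miller_rabin(2**127 - 1))             # True: a Mersenne prime
```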

  11. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    Science.gov (United States)

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample ( N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.

  12. Predicting Freshman Grade Point Average From College Admissions Test Scores and State High School Test Scores

    Directory of Open Access Journals (Sweden)

    Daniel Koretz

    2016-09-01

    The current focus on assessing “college and career readiness” raises an empirical question: How do high school tests compare with college admissions tests in predicting performance in college? We explored this using data from the City University of New York and public colleges in Kentucky. These two systems differ in the choice of college admissions test, the stakes for students on the high school test, and demographics. We predicted freshman grade point average (FGPA) from high school GPA and both college admissions and high school tests in mathematics and English. In both systems, the choice of tests had only trivial effects on the aggregate prediction of FGPA. Adding either test to an equation that included the other had only trivial effects on prediction. Although the findings suggest that the choice of test might advantage or disadvantage different students, it had no substantial effect on the over- and underprediction of FGPA for students classified by race-ethnicity or poverty.
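
    A hedged sketch of the kind of comparison described: ordinary least squares predicting FGPA from high school GPA plus one test at a time, comparing R². The data below are synthetic placeholders, not the CUNY or Kentucky records:

```python
# Compare predictive value of two test scores on top of high school GPA.
import numpy as np

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])       # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 1000
hsgpa = rng.normal(3.0, 0.5, n)
admissions = 0.8 * hsgpa + rng.normal(0, 0.5, n)    # admissions test score
state_test = 0.8 * hsgpa + rng.normal(0, 0.5, n)    # state high school test
fgpa = 0.6 * hsgpa + 0.1 * admissions + rng.normal(0, 0.4, n)

print("HSGPA + admissions:", r_squared(np.column_stack([hsgpa, admissions]), fgpa))
print("HSGPA + state test:", r_squared(np.column_stack([hsgpa, state_test]), fgpa))
```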

  13. The weighted average cost of capital over the lifecycle of the firm: Is the overinvestment problem of mature firms intensified by a higher WACC?

    Directory of Open Access Journals (Sweden)

    Carlos S. Garcia

    2016-08-01

    Firm lifecycle theory predicts that the Weighted Average Cost of Capital (WACC) will tend to fall over the lifecycle of the firm (Mueller, 2003, pp. 80-81). However, given that previous research finds that corporate governance deteriorates as firms get older (Mueller and Yun, 1998; Saravia, 2014), there is good reason to suspect that the opposite could be the case, that is, that the WACC is higher for older firms. Since our literature review indicates that no direct tests to clarify this question have been carried out until now, this paper aims to fill the gap by testing this prediction empirically. Our findings support the proposition that the WACC of younger firms is higher than that of mature firms. Thus, we find that the mature firm overinvestment problem is not intensified by a higher cost of capital; on the contrary, our results suggest that mature firms manage to invest in negative net present value projects even though they have access to cheaper capital. This finding sheds new light on the magnitude of the corporate governance problems found in mature firms.
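
    For reference, the standard textbook WACC formula the title refers to is WACC = (E/V)·Re + (D/V)·Rd·(1 − Tc); a worked example with made-up inputs:

```python
# Weighted average cost of capital with illustrative (invented) inputs.
equity, debt = 600.0, 400.0          # market values of equity and debt
r_e, r_d = 0.12, 0.06                # cost of equity / pre-tax cost of debt
tax = 0.30                           # corporate tax rate

v = equity + debt
wacc = (equity / v) * r_e + (debt / v) * r_d * (1 - tax)
print(f"WACC = {wacc:.2%}")          # 8.88%
```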

  14. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity-induced illusion

  15. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...
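
    A minimal sketch of the sampling idea on synthetic data: keep a window of the signal each time the reference condition (here, a local maximum above a threshold) is met, then average the aligned windows. The threshold and window length are arbitrary illustrative choices:

```python
# Conditional averaging: average signal segments aligned on a condition.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(200_000)          # stand-in "potential fluctuations"

cond = 2.0                                # condition: local max above 2 sigma
half = 50                                 # samples kept on each side
hits = np.flatnonzero(
    (x[1:-1] > cond) & (x[1:-1] >= x[:-2]) & (x[1:-1] > x[2:])
) + 1
hits = hits[(hits > half) & (hits < len(x) - half)]

segments = np.stack([x[i - half:i + half + 1] for i in hits])
cond_avg = segments.mean(axis=0)          # conditionally averaged waveform
print(len(hits), "events; peak of average =", cond_avg[half].round(2))
```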

  16. Changes in Student Populations and Average Test Scores of Dutch Primary Schools

    Science.gov (United States)

    Luyten, Hans; de Wolf, Inge

    2011-01-01

    This article focuses on the relation between student population characteristics and average test scores per school in the final grade of primary education from a dynamic perspective. Aggregated data of over 5,000 Dutch primary schools covering a 6-year period were used to study the relation between changes in school populations and shifts in mean…

  17. Acceleration test with mixed higher harmonics in HIMAC

    International Nuclear Information System (INIS)

    Kanazawa, M.; Sugiura, A.; Misu, T.

    2004-01-01

    In the HIMAC synchrotron, beam tests with a magnetic-alloy-loaded cavity have been performed. This cavity has a very low Q-value of about 0.5, so higher harmonics can be added to the fundamental acceleration frequency. In the tested system, the wave form of a DDS (Direct Digital Synthesizer) can be rewritten, and an arbitrary wave form can be used for beam acceleration. In the beam test, second and third harmonic waves were added to the fundamental acceleration frequency, and increases in the accelerated beam intensity were achieved. In this paper, the results of the beam test and the acceleration system are presented. (author)
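
    A sketch of the waveform idea: a rewritable DDS table amounts to synthesizing the fundamental plus weighted second and third harmonics. The amplitudes, phases, and frequency below are arbitrary illustrations, not HIMAC settings:

```python
# Fundamental plus weighted second and third harmonics for an RF program.
import numpy as np

def rf_waveform(t, f0, a2=0.3, a3=0.15):
    w = 2 * np.pi * f0
    return (np.sin(w * t)
            + a2 * np.sin(2 * w * t)      # second harmonic
            + a3 * np.sin(3 * w * t))     # third harmonic

t = np.linspace(0, 2e-6, 1000)            # two periods at 1 MHz
v = rf_waveform(t, f0=1e6)                # samples to load into a DDS table
print(v.min().round(3), v.max().round(3))
```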

  18. An Extended Quadratic Frobenius Primality Test with Average and Worst Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t iterations... We obtain numeric upper bounds for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.

  19. An Extended Quadratic Frobenius Primality Test with Average- and Worst-Case Error Estimate

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2006-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t iterations... We obtain numeric upper bounds for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.

  20. Average gluon and quark jet multiplicities at higher orders

    Energy Technology Data Exchange (ETDEWEB)

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute for Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

    We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS-bar factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain alpha_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS-bar scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q^2 terms by the renormalization group, in excellent agreement with the present world average.

  1. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  2. Contemporary and prospective fuel cycles for WWER-440 based on new assemblies with higher uranium capacity and higher average fuel enrichment

    International Nuclear Information System (INIS)

    Gagarinskiy, A.A.; Saprykin, V.V.

    2009-01-01

    RRC 'Kurchatov Institute' has performed an extensive cycle of calculations intended to validate the opportunities for improving different fuel cycles for WWER-440 reactors. Work was performed to upgrade and improve WWER-440 fuel cycles on the basis of second-generation fuel assemblies, allowing core thermal power to be uprated to 107-108% of its nominal value (1375 MW) while maintaining the same fuel operation lifetime. Currently, intensive work is underway to develop fuel cycles based on second-generation assemblies with higher fuel capacity and average fuel enrichment per assembly increased up to 4.87% U-235. The fuel capacity of second-generation assemblies was increased by eliminating the central apertures of the fuel pellets and extending the pellet diameter, made possible by reduced fuel cladding thickness. This paper summarizes the results of work performed in the field of WWER-440 fuel cycle modernization, and presents as-yet unemployed opportunities and prospects for further improvement of WWER-440 neutronic and operating parameters by means of additional optimization of the fuel assembly designs and fuel element arrangements applied. (Authors)

  3. Raven's Test Performance of Sub-Saharan Africans: Average Performance, Psychometric Properties, and the Flynn Effect

    Science.gov (United States)

    Wicherts, Jelte M.; Dolan, Conor V.; Carlson, Jerry S.; van der Maas, Han L. J.

    2010-01-01

    This paper presents a systematic review of published data on the performance of sub-Saharan Africans on Raven's Progressive Matrices. The specific goals were to estimate the average level of performance, to study the Flynn Effect in African samples, and to examine the psychometric meaning of Raven's test scores as measures of general intelligence.…

  4. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil)]; Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 (Canada)]; Devi, N. Chandrachani [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City (Mexico)]

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
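
    A common shortcut for this kind of model comparison (not necessarily the machinery used in the paper) approximates the Bayes factor from best-fit chi-squared values via the Schwarz/BIC criterion; all numbers below are placeholders:

```python
# Bayes factor approximated via BIC: Delta BIC ~ -2 ln(Bayes factor).
import numpy as np

def bic(chi2_min, k_params, n_data):
    return chi2_min + k_params * np.log(n_data)

n = 740 + 11                        # e.g. SNe Ia + BAO points (illustrative)
bic_lcdm = bic(chi2_min=700.0, k_params=2, n_data=n)
bic_avg  = bic(chi2_min=680.0, k_params=4, n_data=n)

# Negative Delta BIC favors the averaged (backreaction) model.
print("Delta BIC (averaged - LCDM):", round(bic_avg - bic_lcdm, 2))
```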

  5. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  6. The Use of Tests in Admissions to Higher Education.

    Science.gov (United States)

    Fruen, Mary

    1978-01-01

    There are both strengths and weaknesses of using standardized test scores as a criterion for admission to institutions of higher education. The relative importance of scores is dependent on the institution's degree of selectivity. In general, decision processes and admissions criteria are not well defined. Advantages of test scores include: use of…

  7. Average male and female virtual dummy model (BioRID and EvaRID) simulations with two seat concepts in the Euro NCAP low severity rear impact test configuration.

    Science.gov (United States)

    Linder, Astrid; Holmqvist, Kristian; Svensson, Mats Y

    2018-05-01

    Soft tissue neck injuries, also referred to as whiplash injuries, which can lead to long-term suffering, account for more than 60% of the cost of all injuries leading to permanent medical impairment for the insurance companies, with respect to injuries sustained in vehicle crashes. These injuries are sustained in all impact directions; however, they are most common in rear impacts. Injury statistics have since the mid-1960s consistently shown that females are subject to a higher risk of sustaining this type of injury than males, on average twice the risk. Furthermore, some recently developed anti-whiplash systems have been revealed to provide less protection for females than males. The protection of both males and females should be addressed equally when designing and evaluating vehicle safety systems to ensure maximum safety for everyone. This is currently not the case. The norm for crash test dummies representing humans in crash test laboratories is an average male. The female part of the population is not represented in tests performed by consumer information organisations such as NCAP or in regulatory tests, due to the absence of a physical dummy representing an average female. Recently, the world's first virtual model of an average female crash test dummy was developed. In this study, simulations were run with both this model and an average male dummy model, seated in a simplified model of a vehicle seat. The results of the simulations were compared to earlier published results from simulations run in the same test set-up with a vehicle concept seat. The three crash pulse severities of the Euro NCAP low severity rear impact test were applied. The motions of the neck, head and upper torso were analysed in addition to the accelerations and the Neck Injury Criterion (NIC). Furthermore, the response of the virtual models was compared to that of volunteers and, for the average male model, to that of a physical dummy model. Simulations…

  8. Beauty is in the ease of the beholding: A neurophysiological test of the averageness theory of facial attractiveness

    Science.gov (United States)

    Trujillo, Logan T.; Jankowitsch, Jessica M.; Langlois, Judith H.

    2014-01-01

    Multiple studies show that people prefer attractive over unattractive faces. But what is an attractive face and why is it preferred? Averageness theory claims that faces are perceived as attractive when their facial configuration approximates the mathematical average facial configuration of the population. Conversely, faces that deviate from this average configuration are perceived as unattractive. The theory predicts that both attractive and mathematically averaged faces should be processed more fluently than unattractive faces, whereas the averaged faces should be processed marginally more fluently than the attractive faces. We compared neurocognitive and behavioral responses to attractive, unattractive, and averaged human faces to test these predictions. We recorded event-related potentials (ERPs) and reaction times (RTs) from 48 adults while they discriminated between human and chimpanzee faces. Participants categorized averaged and high attractive faces as “human” faster than low attractive faces. The posterior N170 (150 – 225 ms) face-evoked ERP component was smaller in response to high attractive and averaged faces versus low attractive faces. Single-trial EEG analysis indicated that this reduced ERP response arose from the engagement of fewer neural resources and not from a change in the temporal consistency of how those resources were engaged. These findings provide novel evidence that faces are perceived as attractive when they approximate a facial configuration close to the population average and suggest that processing fluency underlies preferences for attractive faces. PMID:24326966

  9. Testing VRIN framework: Resource value and rareness as sources of competitive advantage and above average performance

    OpenAIRE

    Talaja, Anita

    2012-01-01

    In this study, a structural equation model that analyzes the impact of resource and capability characteristics, more specifically value and rareness, on sustainable competitive advantage and above-average performance is developed and empirically tested. According to the VRIN framework, if a company possesses and exploits valuable, rare, inimitable and non-substitutable resources and capabilities, it will achieve sustainable competitive advantage. Although the above-mentioned statement is widely...

  10. FY-2016 Methyl Iodide Higher NOx Adsorption Test Report

    Energy Technology Data Exchange (ETDEWEB)

    Soelberg, Nicholas Ray [Idaho National Lab. (INL), Idaho Falls, ID (United States); Watson, Tony Leroy [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    Deep-bed methyl iodide adsorption testing has continued in Fiscal Year 2016 under the Department of Energy (DOE) Fuel Cycle Technology (FCT) Program Offgas Sigma Team to further research and advance the technical maturity of solid sorbents for capturing iodine-129 in off-gas streams during used nuclear fuel reprocessing. Adsorption testing with higher levels of NO (approximately 3,300 ppm) and NO2 (up to about 10,000 ppm) indicates that high efficiency iodine capture by silver aerogel remains possible. Maximum iodine decontamination factors (DFs, or the ratio of iodine flowrate in the sorbent bed inlet gas compared to the iodine flowrate in the outlet gas) exceeded 3,000 until bed breakthrough rapidly decreased the DF levels to as low as about 2, when the adsorption capability was near depletion. After breakthrough, nearly all of the uncaptured iodine that remains in the bed outlet gas stream is no longer in the form of the original methyl iodide. The methyl iodide molecules are cleaved in the sorbent bed, even after iodine adsorption is no longer efficient, so that uncaptured iodine is in the form of iodine species soluble in caustic scrubber solutions, and detected and reported here as diatomic I2. The mass transfer zone depths were estimated at 8 inches, somewhat deeper than the 2-5 inch range estimated for both silver aerogels and silver zeolites in prior deep-bed tests, which had lower NOx levels. The maximum iodine adsorption capacity and silver utilization for these higher NOx tests, at about 5-15% of the original sorbent mass and about 12-35% of the total silver, respectively, were lower than trends from prior silver aerogel and silver zeolite tests with lower NOx levels. Additional deep-bed testing and analyses are recommended to expand the database for organic iodide adsorption and increase the technical maturity of iodine adsorption processes.

  11. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage, during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  12. Vouchers, Tests, Loans, Privatization: Will They Help Tackle Corruption in Russian Higher Education?

    Science.gov (United States)

    Osipian, Ararat L.

    2009-01-01

    Higher education in Russia is currently being reformed. A standardized computer-graded test and educational vouchers were introduced to make higher education more accessible, fund it more effectively, and reduce corruption in admissions to public colleges. The voucher project failed and the test faces furious opposition. This paper considers…

  13. 76 FR 13580 - Bus Testing; Calculation of Average Passenger Weight and Test Vehicle Weight

    Science.gov (United States)

    2011-03-14

    ... averages (see Advisory Circular 120-27E, "Aircraft Weight and Balance Control," June 10, 2005) and the...' needs, or they may choose to upgrade individual components, such as chassis, wheels, tires, brakes, or...

  14. Increasing average period lengths by switching of robust chaos maps in finite precision

    Science.gov (United States)

    Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.

    2008-12-01

    Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits compared to simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
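
    A toy version of the switching experiment: iterate maps rounded to fixed precision, with a chaotic driver choosing which map acts next, and measure the eventual cycle length over the joint state. The maps, precision, and seeds are illustrative, not the paper's exact construction:

```python
# Cycle length in finite precision, with and without chaotic switching.
P = 5                                        # keep 5 decimal digits

def logistic(x):
    return round(4.0 * x * (1.0 - x), P)

def tent(x):
    return round(2.0 * min(x, 1.0 - x), P)

def cycle_length(step, state, max_iter=1_000_000):
    seen = {}
    i = 0
    while state not in seen and i < max_iter:
        seen[state] = i
        state = step(state)
        i += 1
    return i - seen[state] if state in seen else 0   # 0: budget exhausted

def single(s):                               # no switching: logistic only
    x, d = s
    return (logistic(x), d)

def switched(s):                             # chaotic driver picks the map
    x, d = s
    d = logistic(d)
    x = logistic(x) if d < 0.5 else tent(x)
    return (x, d)

print("single map :", cycle_length(single, (0.12345, 0.65432)))
print("switched   :", cycle_length(switched, (0.12345, 0.65432)))
```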

  15. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables

  16. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  17. Introducing Vouchers and Standardized Tests for Higher Education in Russia: Expectations and Measurements

    OpenAIRE

    Osipian, Ararat

    2008-01-01

    The reform of higher education in Russia, based on standardized tests and educational vouchers, was intended to reduce inequalities in access to higher education. The initiative with the vouchers has failed and by now is already forgotten while the national test is planned to be introduced nationwide in 2009. The national test called to replace the present corrupt system of entry examinations has experienced numerous problems so far and will likely have even more problems in the future. This ...

  18. Higher emotional intelligence is related to lower test anxiety among students

    Science.gov (United States)

    Ahmadpanah, Mohammad; Keshavarz, Mohammadreza; Haghighi, Mohammad; Jahangard, Leila; Bajoghli, Hafez; Sadeghi Bahmani, Dena; Holsboer-Trachsler, Edith; Brand, Serge

    2016-01-01

    Background For students attending university courses, experiencing test anxiety (TA) dramatically impairs cognitive performance and success at exams. Whereas TA is a specific case of social phobia, emotional intelligence (EI) is an umbrella term covering interpersonal and intrapersonal skills, along with positive stress management, adaptability, and mood. In the present study, we tested the hypothesis that higher EI and lower TA are associated. Further, sex differences were explored. Method During an exam week, a total of 200 university students completed questionnaires covering sociodemographic information, TA, and EI. Results Higher scores on EI traits were associated with lower TA scores. Relative to male participants, female participants reported higher TA scores, but not EI scores. Intrapersonal and interpersonal skills and mood predicted low TA, while sex, stress management, and adaptability were excluded from the equation. Conclusion The pattern of results suggests that efforts to improve intrapersonal and interpersonal skills, and mood might benefit students with high TA. Specifically, social commitment might counteract TA. PMID:26834474

  19. The effects of sweep numbers per average and protocol type on the accuracy of the P300-based concealed information test.

    Science.gov (United States)

    Dietrich, Ariana B; Hu, Xiaoqing; Rosenfeld, J Peter

    2014-03-01

    In the first of two experiments, we compared the accuracy of the P300 concealed information test protocol as a function of numbers of trials experienced by subjects and ERP averages analyzed by investigators. Contrary to Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), we found no evidence that 100 trial based averages are more accurate than 66 or 33 trial based averages (all numbers led to accuracies of 84-94 %). There was actually a trend favoring the lowest trial numbers. The second study compared numbers of irrelevant stimuli recalled and recognized in the 3-stimulus protocol versus the complex trial protocol (Rosenfeld in Memory detection: theory and application of the concealed information test, Cambridge University Press, New York, pp 63-89, 2011). Again, in contrast to expectations from Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), there were no differences between protocols, although there were more irrelevant stimuli recognized than recalled, and irrelevant 4-digit number group stimuli were neither recalled nor recognized as well as irrelevant city name stimuli. We therefore conclude that stimulus processing in the P300-based complex trial protocol-with no more than 33 sweep averages-is adequate to allow accurate detection of concealed information.
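
    The sweep-number question comes down to how averaging suppresses background EEG noise; a purely synthetic sketch (the signal shape, noise level, and sampling rate are invented, not the study's data) showing the roughly sqrt(N) growth in signal-to-noise ratio:

```python
# Average N noisy sweeps sharing a fixed P300-like deflection; SNR ~ sqrt(N).
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 0.8, 0.002)                       # 800 ms at 500 Hz
p300 = 5.0 * np.exp(-((t - 0.4) ** 2) / 0.005)     # 5 uV bump near 400 ms

def averaged_snr(n_sweeps, noise_uv=20.0):
    sweeps = p300 + rng.normal(0, noise_uv, (n_sweeps, t.size))
    erp = sweeps.mean(axis=0)                      # the averaged ERP
    return erp.max() / (noise_uv / np.sqrt(n_sweeps))

for n in (33, 66, 100):                            # the trial counts compared
    print(f"{n:3d} sweeps: SNR ~ {averaged_snr(n):.1f}")
```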

  20. Glycogen with short average chain length enhances bacterial durability

    Science.gov (United States)

    Wang, Liang; Wise, Michael J.

    2011-09-01

    Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability; that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call this a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology is assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.

  1. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    ...of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers' average travel speed over selected sections of the road and is normally called average speed control... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.

  2. Testes mass, but not sperm length, increases with higher levels of polyandry in an ancient sex model.

    Directory of Open Access Journals (Sweden)

    David E Vrech

    There is strong evidence that polyandrous taxa have evolved relatively larger testes than monogamous relatives. Sperm size may either increase or decrease across species with the risk or intensity of sperm competition. Scorpions represent an ancient mode of spermatophore-mediated sperm transfer and are particularly well suited for studies of sperm competition. This work aims to analyze for the first time the variables affecting testes mass, ejaculate volume and sperm length, according to their levels of polyandry, in species belonging to the Neotropical family Bothriuridae. Variables influencing testes mass and sperm length were obtained by model selection analysis using the corrected Akaike Information Criterion. Testes mass varied greatly among the seven species analyzed, ranging from 1.6 ± 1.1 mg in Timogenes dorbignyi to 16.3 ± 4.5 mg in Brachistosternus pentheri, with an average of 8.4 ± 5.0 mg across all species. The relationship between testes mass and body mass was not significant. Body allocation to testes mass, taken as the Gonadosomatic Index, was high in Bothriurus cordubensis and Brachistosternus ferrugineus and low in Timogenes species. The best-fitting model for testes mass considered only polyandry as a predictor, with a positive influence. Model selection showed that body mass influenced sperm length negatively, but after correcting for body mass, none of the variables analyzed explained sperm length. Both body mass and testes mass influenced spermatophore volume positively. There was a strong phylogenetic effect on the model containing testes mass. As predicted by sperm competition theory, and in line with what happens in other arthropods, testes mass increased in species with higher levels of sperm competition and positively influenced spermatophore volume, but the data were not conclusive for sperm length.

  3. Warfarin maintenance dose in older patients: higher average dose and wider dose frequency distribution in patients of African ancestry than those of European ancestry.

    Science.gov (United States)

    Garwood, Candice L; Clemente, Jennifer L; Ibe, George N; Kandula, Vijay A; Curtis, Kristy D; Whittaker, Peter

    2010-06-15

    Studies report that warfarin doses required to maintain therapeutic anticoagulation decrease with age; however, these studies almost exclusively enrolled patients of European ancestry. Consequently, universal application of dosing paradigms based on such evidence may be confounded because ethnicity also influences dose. Therefore, we determined if warfarin dose decreased with age in Americans of African ancestry, if older African and European ancestry patients required different doses, and if their daily dose frequency distributions differed. Our chart review examined 170 patients of African ancestry and 49 patients of European ancestry cared for in our anticoagulation clinic. We calculated the average weekly dose required for each stable, anticoagulated patient to maintain an international normalized ratio of 2.0 to 3.0, determined dose averages for each age group, including patients >80 years of age, and plotted dose as a function of age. The maintenance dose in patients of African ancestry decreased with age. Patients of African ancestry required higher average weekly doses than patients of European ancestry: 33% higher in the 70- to 79-year-old group (38.2±1.9 vs. 28.8±1.7 mg; P=0.006) and 52% higher in the >80-year-old group (33.2±1.7 vs. 21.8±3.8 mg; P=0.011). Therefore, 43% of older patients of African ancestry required daily doses >5 mg and hence would have been under-dosed using current starting-dose guidelines. The dose frequency distribution was also wider for older patients of African ancestry than for those of European ancestry. These findings indicate that strategies for initiating warfarin therapy based on studies of patients of European ancestry could result in insufficient anticoagulation in older patients of African ancestry and thereby potentially increase their thromboembolism risk. Copyright 2010 Elsevier Inc. All rights reserved.

  4. Higher mind-brain development in successful leaders: testing a unified theory of performance.

    Science.gov (United States)

    Harung, Harald S; Travis, Frederick

    2012-05-01

    This study explored mind-brain characteristics of successful leaders as reflected in scores on the Brain Integration Scale, Gibbs's Socio-moral Reasoning questionnaire, and an inventory of peak experiences. These variables, which in previous studies distinguished world-class athletes and professional classical musicians from average-performing controls, were recorded in 20 Norwegian top-level managers and in 20 low-level managers, matched for age, gender, education, and type of organization (private or public). Top-level managers were characterized by higher Brain Integration Scale scores, higher levels of moral reasoning, and more frequent peak experiences. These multilevel measures could be useful tools in the selection and recruiting of potential managers and in assessing leadership education and development programs. Future longitudinal research could further investigate the relationship between leadership success and these and other multilevel variables.

  5. Time-averaged molluscan death assemblages: Palimpsests of richness, snapshots of abundance

    Science.gov (United States)

    Kidwell, Susan M.

    2002-09-01

    Field tests that compare living communities to associated dead remains are the primary means of estimating the reliability of biological information in the fossil record; such tests also provide insights into the dynamics of skeletal accumulation. Contrary to expectations, molluscan death assemblages capture a strong signal of living species' rank-order abundances. This finding, combined with independent evidence for exponential postmortem destruction of dead cohorts, argues that, although the species richness of a death assemblage may be a time-averaged palimpsest of the habitat (molluscan death assemblages contain, on average, ~25% more species than any single census of the local live community, after sample-size standardization), species' relative-abundance data from the same assemblage probably constitute a much higher acuity record dominated by the most recent dead cohorts (e.g., from the past few hundred years or so, rather than the several thousand years recorded by the total assemblage and usually taken as the acuity of species-richness information). The pervasive excess species richness of molluscan death assemblages requires further analysis and modeling to discriminate among possible sources. However, time averaging alone cannot be responsible unless rare species (species with low rates of dead-shell production) are collectively more durable (have longer taphonomic half-lives) than abundant species. Species richness and abundance data thus appear to present fundamentally different taphonomic qualities for paleobiological analysis. Relative-abundance information is more snapshot-like and thus taphonomically more straightforward than expected, especially compared to the complex origins of dead-species richness.
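
    A toy model of the cohort argument: with exponential shell loss, the standing death assemblage is a survival-weighted sum over past cohorts, so its rank-abundance structure tracks the most recent community. The half-life, time spans, and abundances below are invented for illustration:

```python
# Survival-weighted accumulation of dead cohorts from two community states.
import numpy as np

def rank_corr(a, b):
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(4)
n_sp, half_life = 30, 50.0                  # species count, shell half-life (yr)
recent = rng.lognormal(0, 1, n_sp)          # live community, last 200 yr
older = rng.permutation(recent)             # different community before that

years = np.arange(5000)                     # 5,000 yr of accumulation
surv = 0.5 ** (years / half_life)           # exponential cohort survival

# standing death assemblage: every cohort's input, discounted by survival
inputs = np.where(years[None, :] < 200, recent[:, None], older[:, None])
dead = (inputs * surv[None, :]).sum(axis=1)

print("rank corr vs recent community:", round(rank_corr(dead, recent), 3))
print("rank corr vs older community :", round(rank_corr(dead, older), 3))
```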

  6. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...

  7. The Generalized Higher Criticism for Testing SNP-Set Effects in Genetic Association Studies

    Science.gov (United States)

    Barnett, Ian; Mukherjee, Rajarshi; Lin, Xihong

    2017-01-01

    It is of substantial interest to study the effects of genes, genetic pathways, and networks on the risk of complex diseases. These genetic constructs each contain multiple SNPs, which are often correlated and function jointly, and might be large in number. However, only a sparse subset of SNPs in a genetic construct is generally associated with the disease of interest. In this article, we propose the generalized higher criticism (GHC) to test for the association between an SNP set and a disease outcome. The higher criticism is a test traditionally used in high-dimensional signal detection settings when marginal test statistics are independent and the number of parameters is very large. However, these assumptions do not always hold in genetic association studies, due to linkage disequilibrium among SNPs and the finite number of SNPs in an SNP set in each genetic construct. The proposed GHC overcomes the limitations of the higher criticism by allowing for arbitrary correlation structures among the SNPs in an SNP set, while performing accurate analytic p-value calculations for any finite number of SNPs in the SNP set. We obtain the detection boundary of the GHC test. Using simulations, we empirically compared the power of the GHC method with that of existing SNP-set tests over a range of genetic regions with varied correlation structures and signal sparsity. We apply the proposed methods to analyze the CGEM breast cancer genome-wide association study. Supplementary materials for this article are available online. PMID:28736464
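
    For orientation, the classical higher criticism statistic that the GHC generalizes can be sketched in a few lines; the GHC's actual contribution (handling correlated SNPs with analytic p-values) is not reproduced here:

```python
# Classical higher criticism on a vector of marginal p-values.
import numpy as np

def higher_criticism(pvals):
    p = np.sort(np.asarray(pvals))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return np.nanmax(hc[: n // 2])          # conventional restriction

rng = np.random.default_rng(5)
null_p = rng.uniform(size=1000)             # global null: uniform p-values
sparse = null_p.copy()
sparse[:5] = rng.uniform(0, 1e-4, 5)        # a few strong sparse signals

print("null HC  :", round(higher_criticism(null_p), 2))
print("signal HC:", round(higher_criticism(sparse), 2))
```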

  8. Higher emotional intelligence is related to lower test anxiety among students

    Directory of Open Access Journals (Sweden)

    Ahmadpanah M

    2016-01-01

    Background: For students attending university courses, experiencing test anxiety (TA) dramatically impairs cognitive performance and success at exams. Whereas TA is a specific case of social phobia, emotional intelligence (EI) is an umbrella term covering interpersonal and intrapersonal skills, along with positive stress management, adaptability, and mood. In the present study, we tested the hypothesis that higher EI and lower TA are associated. Further, sex differences were explored. Method: During an exam week, a total of 200 university students completed questionnaires covering sociodemographic information, TA, and EI. Results: Higher scores on EI traits were associated with lower TA scores. Relative to male participants, female participants reported higher TA scores, but not EI scores. Intrapersonal and interpersonal skills and mood predicted low TA, while sex, stress management, and adaptability were excluded from the equation. Conclusion: The pattern of results suggests that efforts to improve intrapersonal and interpersonal skills, and mood might benefit students with high TA. Specifically, social commitment might counteract TA. Keywords: test anxiety, emotional intelligence, students, interpersonal skills, intrapersonal skills

  9. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates, and has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-averaged Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions with the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: multi-model ensemble, ensemble analysis, ERF, regional climate modeling
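
    The core of the ERF recipe is a plain arithmetic mean over the GCMs' forcing fields; a schematic sketch with arrays standing in for the real lat/lon/time IBC files:

```python
# Average the boundary-condition fields of several GCMs into one forcing
# set, then drive a single RCM run with it.
import numpy as np

def ensemble_reconstructed_forcing(gcm_fields):
    """gcm_fields: list of arrays shaped (time, lat, lon), one per GCM."""
    return np.mean(np.stack(gcm_fields, axis=0), axis=0)

rng = np.random.default_rng(6)
gcms = [rng.normal(300, 5, size=(8, 90, 180)) for _ in range(6)]  # 6 GCMs
ibc = ensemble_reconstructed_forcing(gcms)    # single averaged IBC set
print(ibc.shape)    # feed this one forcing set to one RCM run
```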

  10. The use of averages and other summation quantities in the testing of evaluated fission product yield and decay data. Applications to ENDF/B(IV)

    International Nuclear Information System (INIS)

    Walker, W.H.

    1976-01-01

    Averages of some fission product properties can be obtained by multiplying the fission product yield for each fission product by the value of the property (e.g. mass, atomic number, mass defect) for that fission product and summing all significant contributions. These averages can be used to test the reliability of the yield set or provide useful data for reactor calculations. The report gives the derivation of these averages and discusses their application using the ENDF/B(IV) fission product library. The following quantities are treated here: the number of fission products per fission, ΣY_i; the average mass number and the average number of neutrons per fission; the average atomic number of the stable fission products and the average number of β-decays per fission; the average mass defect of the stable fission products and the total energy release per fission; the average decay energy per fission (beta, gamma and anti-neutrino); the average β-decay energy per fission; individual and group-averaged delayed neutron emission; the total yield for each fission product element. Wherever it is meaningful to do so, a sum is subdivided into its light and heavy mass components. The most significant differences between calculated values based on ENDF/B(IV) and measurements are the β and γ decay energies for 235U thermal fission and delayed neutron yields for other fissile nuclides, most notably 238U. (author)
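
    The summation quantities this record describes reduce to yield-weighted sums; a minimal Python sketch (toy yields and properties, not ENDF/B data):

        # Yield-weighted average of a fission product property:
        # multiply each product's yield Y_i by the property p_i and sum.
        def yield_weighted_sum(yields, prop):
            return sum(y * p for y, p in zip(yields, prop))

        yields = [0.062, 0.058, 0.041]   # fractional yields Y_i (invented)
        mass_numbers = [95, 97, 99]      # property p_i, here the mass number

        total_yield = sum(yields)                              # sum of Y_i
        avg_mass = yield_weighted_sum(yields, mass_numbers) / total_yield
        print(total_yield, avg_mass)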

  11. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  12. The Changing Faces of Corruption in Georgian Higher Education: Access through Times and Tests

    Science.gov (United States)

    Orkodashvili, Mariam

    2012-01-01

    This article presents a comparative-historical analysis of access to higher education in Georgia. It describes the workings of corrupt channels during the Soviet and early post-Soviet periods and the role of standardized tests in fighting corruption in higher education admission processes after introduction of the Unified National Entrance…

  13. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  14. Pigeons exhibit higher accuracy for chosen memory tests than for forced memory tests in duration matching-to-sample.

    Science.gov (United States)

    Adams, Allison; Santi, Angelo

    2011-03-01

    Following training to match 2- and 8-sec durations of feederlight to red and green comparisons with a 0-sec baseline delay, pigeons were allowed to choose to take a memory test or to escape the memory test. The effects of sample omission, increases in retention interval, and variation in trial spacing on selection of the escape option and accuracy were studied. During initial testing, escaping the test did not increase as the task became more difficult, and there was no difference in accuracy between chosen and forced memory tests. However, with extended training, accuracy for chosen tests was significantly greater than for forced tests. In addition, two pigeons exhibited higher accuracy on chosen tests than on forced tests at the short retention interval and greater escape rates at the long retention interval. These results have not been obtained in previous studies with pigeons when the choice to take the test or to escape the test is given before test stimuli are presented. It appears that task-specific methodological factors may determine whether a particular species will exhibit the two behavioral effects that were initially proposed as potentially indicative of metacognition.

  15. Standard Test Method for Measuring Neutron Fluence and Average Energy from 3H(d,n)4He Neutron Generators by Radioactivation Techniques 1

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers a general procedure for the measurement of the fast-neutron fluence rate produced by neutron generators utilizing the 3H(d,n)4He reaction. Neutrons so produced are usually referred to as 14-MeV neutrons, but range in energy depending on a number of factors. This test method does not adequately cover fusion sources where the velocity of the plasma may be an important consideration. 1.2 This test method uses threshold activation reactions to determine the average energy of the neutrons and the neutron fluence at that energy. At least three activities, chosen from an appropriate set of dosimetry reactions, are required to characterize the average energy and fluence. The required activities are typically measured by gamma ray spectroscopy. 1.3 The measurement of reaction products in their metastable states is not covered. If the metastable state decays to the ground state, the ground state reaction may be used. 1.4 The values stated in SI units are to be regarded as standard. No oth...

  16. Averaged differential expression for the discovery of biomarkers in the blood of patients with prostate cancer.

    Directory of Open Access Journals (Sweden)

    V Uma Bai

    Full Text Available The identification of a blood-based diagnostic marker is a goal in many areas of medicine, including the early diagnosis of prostate cancer. We describe the use of averaged differential display as an efficient mechanism for biomarker discovery in whole blood RNA. The process of averaging reduces the problem of clinical heterogeneity while simultaneously minimizing sample handling. RNA was isolated from the blood of prostate cancer patients and healthy controls. Samples were pooled and subjected to the averaged differential display process. Transcripts present at different levels between patients and controls were purified and sequenced for identification. Transcript levels in the blood of prostate cancer patients and controls were verified by quantitative RT-PCR. Means were compared using a t-test and a receiver-operating curve was generated. The Ring finger protein 19A (RNF19A) transcript was identified as having higher levels in prostate cancer patients compared to healthy men through the averaged differential display process. Quantitative RT-PCR analysis confirmed a more than 2-fold higher level of RNF19A mRNA in the blood of patients with prostate cancer than in healthy controls (p = 0.0066). The accuracy of distinguishing cancer patients from healthy men using RNF19A mRNA levels in blood, as determined by the area under the receiver-operating curve, was 0.727. Averaged differential display offers a simplified approach for the comprehensive screening of body fluids, such as blood, to identify biomarkers in patients with prostate cancer. Furthermore, this proof-of-concept study warrants further analysis of RNF19A as a clinically relevant biomarker for prostate cancer detection.

  17. Geometrical optics in general relativity: A study of the higher order corrections

    International Nuclear Information System (INIS)

    Anile, A.M.

    1976-01-01

    The higher order corrections to geometrical optics are studied in general relativity for an electromagnetic test wave. An explicit expression is found for the average energy-momentum tensor which takes into account the first-order corrections. Finally, the first-order corrections to the well-known area-intensity law of geometrical optics are derived.

  18. Gaps in Incorporating Germline Genetic Testing Into Treatment Decision-Making for Early-Stage Breast Cancer.

    Science.gov (United States)

    Kurian, Allison W; Li, Yun; Hamilton, Ann S; Ward, Kevin C; Hawley, Sarah T; Morrow, Monica; McLeod, M Chandler; Jagsi, Reshma; Katz, Steven J

    2017-07-10

    Purpose: Genetic testing for breast cancer risk is evolving rapidly, with growing use of multiple-gene panels that can yield uncertain results. However, little is known about the context of such testing or its impact on treatment. Methods: A population-based sample of patients with breast cancer diagnosed in 2014 to 2015 and identified by two SEER registries (Georgia and Los Angeles) was surveyed about genetic testing experiences (N = 3,672; response rate, 68%). Responses were merged with SEER data. A patient subgroup at higher pretest risk of pathogenic mutation carriage was defined according to genetic testing guidelines. Patients' attending surgeons were surveyed about genetic testing and results management. We examined patterns and correlates of genetic counseling and testing and the impact of results on bilateral mastectomy (BLM) use. Results: Six hundred sixty-six patients reported genetic testing. Although two thirds of patients were tested before surgical treatment, patients without private insurance more often experienced delays. Approximately half of patients (57% at higher pretest risk, 42% at average risk) discussed results with a genetic counselor. Patients with pathogenic mutations in BRCA1/2 or another gene had the highest rates of BLM (higher risk, 80%; average risk, 85%); however, BLM was also common among patients with genetic variants of uncertain significance (VUS; higher risk, 43%; average risk, 51%). Surgeons' confidence in discussing testing increased with the volume of patients with breast cancer, but many surgeons (higher volume, 24%; lower volume, 50%) managed patients with BRCA1/2 VUS the same as patients with BRCA1/2 pathogenic mutations. Conclusion: Many patients with breast cancer are tested without ever seeing a genetic counselor. Half of average-risk patients with VUS undergo BLM, suggesting a limited understanding of results that some surgeons share. These findings emphasize the need to address challenges in personalized communication

  19. Average Skin-Friction Drag Coefficients from Tank Tests of a Parabolic Body of Revolution (NACA RM-10)

    Science.gov (United States)

    Mottard, Elmo J; Loposer, J Dan

    1954-01-01

    Average skin-friction drag coefficients were obtained from boundary-layer total-pressure measurements on a parabolic body of revolution (NACA RM-10, basic fineness ratio 15) in water at Reynolds numbers from 4.4 × 10^6 to 70 × 10^6. The tests were made in the Langley tank no. 1 with the body sting-mounted at a depth of two maximum body diameters. The arithmetic mean of three drag measurements taken around the body was in good agreement with flat-plate results, but, apparently because of the slight surface wave caused by the body, the distribution of the boundary layer around the body was not uniform over part of the Reynolds number range.

  20. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
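
    The entropy lower bound quoted above is easy to compute; a minimal Python sketch in the record's notation (outcome probabilities p_i, k-valued attributes):

        import math

        # Lower bound on the minimum average depth of a decision tree:
        # H(p) / log2(k) for a problem over a k-valued information system.
        def entropy_lower_bound(probs, k=2):
            h = -sum(p * math.log2(p) for p in probs if p > 0)
            return h / math.log2(k)

        # Toy diagnostic problem: four equally likely outcomes, binary attributes
        print(entropy_lower_bound([0.25] * 4, k=2))  # 2.0 questions on average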

  1. Cognitive Capitalism: Economic Freedom Moderates the Effects of Intellectual and Average Classes on Economic Productivity.

    Science.gov (United States)

    Coyle, Thomas R; Rindermann, Heiner; Hancock, Dale

    2016-10-01

    Cognitive ability stimulates economic productivity. However, the effects of cognitive ability may be stronger in free and open economies, where competition rewards merit and achievement. To test this hypothesis, ability levels of intellectual classes (top 5%) and average classes (country averages) were estimated using international student assessments (Programme for International Student Assessment; Trends in International Mathematics and Science Study; and Progress in International Reading Literacy Study) (N = 99 countries). The ability levels were correlated with indicators of economic freedom (Fraser Institute), scientific achievement (patent rates), innovation (Global Innovation Index), competitiveness (Global Competitiveness Index), and wealth (gross domestic product). Ability levels of intellectual and average classes strongly predicted all economic criteria. In addition, economic freedom moderated the effects of cognitive ability (for both classes), with stronger effects at higher levels of freedom. Effects were particularly robust for scientific achievements when the full range of freedom was analyzed. The results support cognitive capitalism theory: cognitive ability stimulates economic productivity, and its effects are enhanced by economic freedom. © The Author(s) 2016.

  2. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging, a property established only in the macroscopic limit. By enumerating the states and energies of compact 18-, 27-, and 36-mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations.

  3. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
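
    As a hedged sketch of the Granger-Ramanathan averaging favored above (commonly described as a least-squares regression of observations on the ensemble members' simulations; published variants differ in constraints and intercept handling), with synthetic Python data:

        import numpy as np

        rng = np.random.default_rng(1)
        obs = rng.gamma(2.0, 5.0, size=200)                    # observed flows
        members = np.column_stack([obs + rng.normal(0, s, 200)
                                   for s in (2.0, 4.0, 6.0)])  # 3 model runs

        # GRA-style weights: unconstrained least squares, no intercept here
        weights, *_ = np.linalg.lstsq(members, obs, rcond=None)
        gra = members @ weights
        sam = members.mean(axis=1)                             # simple arithmetic mean

        def nse(sim, obs):
            # Nash-Sutcliffe efficiency, one of the criteria cited above
            return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        print(nse(sam, obs), nse(gra, obs))  # GRA typically scores higher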

  4. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  5. Testing for Bias against Female Test Takers of the Graduate Management Admissions Test and Potential Impact on Admissions to Graduate Programs in Business.

    Science.gov (United States)

    Wright, Robert E.; Bachrach, Daniel G.

    2003-01-01

    Graduate Management Admission Test (GMAT) scores and grade point average in graduate core courses were compared for 190 male and 144 female business administration students. No significant differences in course performance were found, but males had been admitted with significantly higher GMAT scores, suggesting a bias against women. (Contains 27…

  6. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  7. Effects on fatigue life of gate valves due to higher torque switch settings during operability testing

    International Nuclear Information System (INIS)

    Richins, W.D.; Snow, S.D.; Miller, G.K.; Russell, M.J.; Ware, A.G.

    1995-12-01

    Some motor operated valves now have higher torque switch settings due to regulatory requirements to ensure valve operability with appropriate margins at design basis conditions. Verifying operability with these settings imposes higher stem loads during periodic inservice testing. These higher test loads increase stresses in the various valve internal parts, which may in turn increase the fatigue usage factors. This increased fatigue is judged to be a concern primarily in the valve disks, seats, yokes, stems, and stem nuts. Although the motor operators may also have significantly increased loading, they are being evaluated by the manufacturers and are beyond the scope of this study. Two gate valves representative of both relatively weak and strong valves commonly used in commercial nuclear applications were selected for fatigue analyses. Detailed dimensional and test data were available for both valves from previous studies at the Idaho National Engineering Laboratory. Finite element models were developed to estimate maximum stresses in the internal parts of the valves and to identify the critical areas within the valves where fatigue may be a concern. Loads were estimated using industry standard equations for calculating torque switch settings prior and subsequent to the testing requirements of USNRC Generic Letter 89-10. Test data were used to determine both (1) the overshoot load between torque switch trip and final seating of the disk during valve closing and (2) the stem thrust required to open the valves. The ranges of peak stresses thus determined were then used to estimate the increase in the fatigue usage factors due to the higher stem thrust loads. The usages that would be accumulated by 100 base cycles plus one or eight test cycles per year over 40 and 60 years of operation were calculated.

  8. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
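
    The averaging bias at issue can be illustrated numerically: for a non-linear (logarithmic) retrieval, averaging noisy signals before the transform differs from averaging the transformed values. A toy Python sketch under invented numbers, not MERLIN's actual processing:

        import numpy as np

        rng = np.random.default_rng(2)
        true_ratio = 0.8                   # toy "on"/"off" signal ratio per shot
        shots = true_ratio * (1 + 0.1 * rng.normal(size=10000))  # noisy shots

        avg_then_log = -np.log(shots.mean())   # average the signals first
        log_then_avg = -np.log(shots).mean()   # retrieve shot by shot, then average

        print(avg_then_log, log_then_avg)      # the gap is the averaging bias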

  9. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    Matsukawa, Isamu

    2008-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  10. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak (Ū_P), the average (Ū), the effective (U_eff) or the maximum peak (U_P) tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average (Ū) or the average peak (Ū_P) voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak (k_PPV,kVp) and the average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated (according to the proposed method) PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
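
    The conversion itself is a simple multiplication; a minimal Python sketch of its structure (the coefficient values are placeholders, not the paper's fitted regression coefficients):

        # PPV from a kV-meter reading of the average (or average peak)
        # voltage, via a calibration coefficient and a ripple-dependent
        # conversion factor k_PPV, per the method described above.
        def ppv_from_reading(reading_kv, calib_coeff, k_ppv):
            return calib_coeff * k_ppv * reading_kv

        # Toy numbers only: an 81 kV average-peak reading
        print(ppv_from_reading(reading_kv=81.0, calib_coeff=1.01, k_ppv=0.985))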

  11. P1-25: Filling-in the Blind Spot with the Average Direction

    Directory of Open Access Journals (Sweden)

    Sang-Ah Yoo

    2012-10-01

    Full Text Available Previous studies have shown that the visual system integrates local motions and perceives the average direction (Watamaniuk & Duchon, 1992, Vision Research, 32, 931–941). We investigated whether the surface of the blind spot is filled in with the average direction of the surrounding local motions. To test this, we varied the direction of a random-dot kinematogram (RDK) both in adaptation and test. Motion aftereffects (MAE) were defined as the difference in motion coherence thresholds with and without adaptation. The participants were initially adapted to an annular RDK surrounding the blind spot for 30 s in their dominant eyes. The direction of each dot in this RDK was selected equally and randomly from either a normal distribution with a mean of 15° clockwise from vertical, a normal distribution with a mean of 15° counterclockwise from vertical, or a mixture of the two. Immediately after the adaptation, a disk-shaped test RDK was presented for 1 s to the corresponding blind-spot location in the opposite eye. This RDK moved either 15° clockwise, 15° counterclockwise, or vertically (the average of the two directions). The participants' task was to discriminate the direction of the test RDK across different coherence levels. We found significant MAE when the test RDK had the same directions as the adaptor. More importantly, equally strong MAE was observed even when the direction of the test RDK was vertical, which was not physically present during adaptation. The result demonstrates that the visual system uses the average direction of the local surrounding motions to fill in the blind spot.

  12. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
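
    A hedged Python sketch of the idea: since slower-migrating analytes produce broader, lower-frequency peaks, the smoothing window is made to grow with migration time. The window-growth law below is an assumption for illustration, not the published algorithm:

        import numpy as np

        def adaptive_moving_average(signal, base_window=3, growth=0.002):
            out = np.empty(len(signal), dtype=float)
            for i in range(len(signal)):
                w = int(base_window + growth * i)   # window widens with time
                lo, hi = max(0, i - w), min(len(signal), i + w + 1)
                out[i] = signal[lo:hi].mean()
            return out

        # Toy electropherogram: a sharp early peak and a broad late peak
        t = np.arange(5000)
        trace = np.exp(-((t - 1000) / 30.0) ** 2) + np.exp(-((t - 4000) / 120.0) ** 2)
        noisy = trace + np.random.default_rng(3).normal(0, 0.05, t.size)
        smoothed = adaptive_moving_average(noisy)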

  13. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    Full Text Available The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.

  14. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  15. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  16. Neuropsychological factors related to returning to work in patients with higher brain dysfunction.

    Science.gov (United States)

    Kai, Akiko; Hashimoto, Manabu; Okazaki, Tetsuya; Hachisuka, Kenji

    2008-12-01

    We conducted neuropsychological tests of patients with higher brain dysfunction to examine the characteristics of barriers to employment. We tested 92 patients with higher brain dysfunction (average age 36.3 ± 13.8 years, range 16-63 years, average post-injury period 35.6 ± 67.8 months) who were hospitalized at the university hospital between February 2002 and June 2007 for further neuropsychological evaluation, administering the Wechsler Adult Intelligence Scale-Revised (WAIS-R), Wechsler Memory Scale-Revised (WMS-R), the Rivermead Behavioral Memory Test (RBMT), Frontal Assessment Battery (FAB) and Behavioral Assessment of Dysexecutive Syndrome (BADS). The outcomes after discharge were classified as competitive employment, sheltered employment, or non-employment, and the three groups were compared using one-way analysis of variance and the Scheffe test. The WAIS-R subtests were mutually compared based on the standard values of significant differences described in the WAIS-R manual. Verbal, performance, and full-scale Intelligence Quotient (IQ) scores on the WAIS-R were 87.7 ± 15.6 (mean ± standard deviation), 78.5 ± 18.1 and 81.0 ± 17.2, respectively, and verbal memory, visual memory, general memory, attention/concentration and delayed recall were 74.6 ± 20.0, 76.6 ± 21.4, 72.0 ± 20.4, 89.0 ± 16.5 and 65.2 ± 20.8, respectively. The competitive employment group showed significantly higher scores in performance IQ and full IQ on the WAIS-R and verbal memory, visual memory, general memory and delayed recall on the WMS-R and RBMT than the non-employment group. The sheltered employment group showed a significantly higher score in delayed recall than the non-employment group. No difference was observed in the FAB or BADS between the three groups. In the subtests of the WAIS-R, the score for Digit Symbol-Coding was significantly lower than almost all the other subtests. For patients with higher brain dysfunction, IQ (full

  17. Comparison of depth-averaged concentration and bed load flux sediment transport models of dam-break flow

    Directory of Open Access Journals (Sweden)

    Jia-heng Zhao

    2017-10-01

    Full Text Available This paper presents numerical simulations of dam-break flow over a movable bed. Two different mathematical models were compared: a fully coupled formulation of shallow water equations with erosion and deposition terms (a depth-averaged concentration flux model), and shallow water equations with a fully coupled Exner equation (a bed load flux model). Both models were discretized using the cell-centered finite volume method, and a second-order Godunov-type scheme was used to solve the equations. The numerical flux was calculated using a Harten, Lax, and van Leer approximate Riemann solver with the contact wave restored (HLLC). A novel slope source term treatment that considers the density change was introduced to the depth-averaged concentration flux model to obtain higher-order accuracy. A source term that accounts for the sediment flux was added to the bed load flux model to reflect the influence of sediment movement on the momentum of the water. In a one-dimensional test case, a sensitivity study on different model parameters was carried out. For the depth-averaged concentration flux model, Manning's coefficient and sediment porosity values showed an almost linear relationship with the bottom change, and for the bed load flux model, the sediment porosity was identified as the most sensitive parameter. The capabilities and limitations of both model concepts are demonstrated in a benchmark experimental test case dealing with dam-break flow over variable bed topography.

  18. Virginia tech freshman class becoming more competitive; Rise in grades and test scores noted

    OpenAIRE

    Virginia Tech News

    2004-01-01

    Admission to Virginia Tech continues to become more competitive as applicants report higher grade point averages and test scores than in previous years. The incoming class of 4,975 students has an average grade point average (GPA) of 3.68 and an average SAT score of 1203, up from a 3.60 GPA and a 1197 SAT in 2003.

  19. PSA testing for men at average risk of prostate cancer

    Directory of Open Access Journals (Sweden)

    Bruce K Armstrong

    2017-07-01

    Full Text Available Prostate-specific antigen (PSA testing of men at normal risk of prostate cancer is one of the most contested issues in cancer screening. There is no formal screening program, but testing is common – arguably a practice that ran ahead of the evidence. Public and professional communication about PSA screening has been highly varied and potentially confusing for practitioners and patients alike. There has been much research and policy activity relating to PSA testing in recent years. Landmark randomised controlled trials have been reported; authorities – including the 2013 Prostate Cancer World Congress, the Prostate Cancer Foundation of Australia, Cancer Council Australia, and the National Health and Medical Research Council – have made or endorsed public statements and/or issued clinical practice guidelines; and the US Preventive Services Task Force is revising its recommendations. But disagreement continues. The contention is partly over what the new evidence means. It is also a result of different valuing and prioritisation of outcomes that are hard to compare: prostate cancer deaths prevented (a small and disputed number; prevention of metastatic disease (somewhat more common; and side-effects of treatment such as incontinence, impotence and bowel trouble (more common again. A sizeable proportion of men diagnosed through PSA testing (somewhere between 20% and 50% would never have had prostate cancer symptoms sufficient to prompt investigation; many of these men are older, with competing comorbidities. It is a complex picture. Below are four viewpoints from expert participants in the evolving debate, commissioned for this cancer screening themed issue of Public Health Research & Practice. We asked the authors to respond to the challenge of PSA testing of asymptomatic, normal-risk men. They raise important considerations: uncertainty, harms, the trustworthiness and interpretation of the evidence, cost (e.g. of using multiparametric

  20. Results from tests of the Delphi TPC prototype

    International Nuclear Information System (INIS)

    Vilanova, D.

    1985-01-01

    Results from beam tests of a half-scale sector of the Delphi TPC are presented. The spatial resolution is slightly higher than predicted by Monte Carlo simulations, corresponding to an average value of about 300 μm. (orig.)

  1. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is an MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability of finding a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than a single PG run, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
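
    The Maxwell constraint counting used above as the global rigidity check is plain arithmetic; a minimal Python sketch for a body-bar network:

        # MCC lower bound for a body-bar network: each body carries 6 DOF,
        # 6 of the total are global rigid-body motions, and each bar
        # removes at most one DOF, giving a rigorous lower bound.
        def mcc_internal_dof(n_bodies, n_bars):
            return max(0, 6 * n_bodies - 6 - n_bars)

        print(mcc_internal_dof(n_bodies=100, n_bars=550))  # 44 -> under-constrained
        print(mcc_internal_dof(n_bodies=100, n_bars=700))  # 0  -> possibly rigid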

  2. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu⋯u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  3. Space reactor fuel element testing in upgraded TREAT

    International Nuclear Information System (INIS)

    Todosow, M.; Bezler, P.; Ludewig, H.; Kato, W.Y.

    1993-01-01

    The testing of candidate fuel elements at prototypic operating conditions with respect to temperature, power density, hydrogen coolant flow rate, etc., is a crucial component in the development and qualification of nuclear rocket engines based on the Particle Bed Reactor (PBR), NERVA-derivative, and other concepts. Such testing may be performed at existing reactors or at new facilities. A scoping study has been performed to assess the feasibility of testing PBR-based fuel elements at the TREAT reactor. Initial results suggest that full-scale PBR elements could be tested at an average energy deposition of ~60-80 MW·s/L in the current TREAT reactor. If the TREAT reactor were upgraded to include fuel elements with a higher temperature limit, an average energy deposition of ~100 MW·s/L may be achievable.

  4. Space reactor fuel element testing in upgraded TREAT

    Science.gov (United States)

    Todosow, Michael; Bezler, Paul; Ludewig, Hans; Kato, Walter Y.

    1993-01-01

    The testing of candidate fuel elements at prototypic operating conditions with respect to temperature, power density, hydrogen coolant flow rate, etc., is a crucial component in the development and qualification of nuclear rocket engines based on the Particle Bed Reactor (PBR), NERVA-derivative, and other concepts. Such testing may be performed at existing reactors, or at new facilities. A scoping study has been performed to assess the feasibility of testing PBR-based fuel elements at the TREAT reactor. Initial results suggest that full-scale PBR elements could be tested at an average energy deposition of ~60-80 MW·s/L in the current TREAT reactor. If the TREAT reactor was upgraded to include fuel elements with a higher temperature limit, an average energy deposition of ~100 MW·s/L may be achievable.

  5. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold and amount to approximations to the Riemannian metric, and the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
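
    The barycenter estimate criticized in this record is easy to state; a minimal Python sketch (componentwise averaging of unit quaternions with renormalization, which ignores the curved geometry of rotations):

        import numpy as np

        def quaternion_barycenter(quats):
            # quats: (n, 4) array of unit quaternions
            q = np.array(quats, dtype=float)
            # flip signs so all quaternions share a hemisphere with q[0]
            q[np.sum(q * q[0], axis=1) < 0] *= -1
            mean = q.mean(axis=0)
            return mean / np.linalg.norm(mean)   # project back to the unit sphere

        quats = [[1.0, 0.0, 0.0, 0.0],
                 [0.9962, 0.0872, 0.0, 0.0],     # ~10 deg about x
                 [0.9848, 0.1736, 0.0, 0.0]]     # ~20 deg about x
        print(quaternion_barycenter(quats))      # ~10 deg about x, as expected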

  6. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul [Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, Korea 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Department of Radiation Oncology, Asan Medical Center, Seoul, 138-736 (Korea, Republic of); Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Radiation Physics Laboratory, Sydney Medical School, University of Sydney, 2006 (Australia)

    2011-07-15

    Conclusions: The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric accuracy and dosimetric accuracy of the moving average algorithm were between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the γ-test compared with no compensation.

  7. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    Science.gov (United States)

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric accuracy and dosimetric accuracy of the moving average algorithm were between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
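
    A hedged Python sketch of the moving-average compensation studied in these two records: the MLC follows a smoothed (averaged) target position rather than the raw real-time position, trading geometric fidelity for delivery efficiency. The trace and window size are invented:

        import numpy as np

        def moving_average_trace(positions, window=20):
            kernel = np.ones(window) / window
            return np.convolve(positions, kernel, mode="same")

        t = np.linspace(0, 20, 1000)                # 20 s trace, positions in mm
        target = 10 * np.sin(2 * np.pi * t / 4.0)   # 4 s breathing period
        smoothed = moving_average_trace(target)
        print(np.abs(target - smoothed).max())      # residual tracking error (mm)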

  8. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  9. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  10. Are average and symmetric faces attractive to infants? Discrimination and looking preferences.

    Science.gov (United States)

    Rhodes, Gillian; Geddes, Keren; Jeffery, Linda; Dziurawiec, Suzanne; Clark, Alison

    2002-01-01

    Young infants prefer to look at faces that adults find attractive, suggesting a biological basis for some face preferences. However, the basis for infant preferences is not known. Adults find average and symmetric faces attractive. We examined whether 5-8-month-old infants discriminate between different levels of averageness and symmetry in faces, and whether they prefer to look at faces with higher levels of these traits. Each infant saw 24 pairs of female faces. Each pair consisted of two versions of the same face differing either in averageness (12 pairs) or symmetry (12 pairs). Data from the mothers confirmed that adults preferred the more average and more symmetric versions in each pair. The infants were sensitive to differences in both averageness and symmetry, but showed no looking preference for the more average or more symmetric versions. On the contrary, longest looks were significantly longer for the less average versions, and both longest looks and first looks were marginally longer for the less symmetric versions. Mean looking times were also longer for the less average and less symmetric versions, but those differences were not significant. We suggest that the infant looking behaviour may reflect a novelty preference rather than an aesthetic preference.

  11. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  12. Size and emotion averaging: costs of dividing attention after all.

    Science.gov (United States)

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  13. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Population Aging at Cross-Roads: Diverging Secular Trends in Average Cognitive Functioning and Physical Health in the Older Population of Germany

    Science.gov (United States)

    Steiber, Nadia

    2015-01-01

    This paper uses individual-level data from the German Socio-Economic Panel to model trends in population health in terms of cognition, physical fitness, and mental health between 2006 and 2012. The focus is on the population aged 50–90. We use a repeated population-based cross-sectional design. As outcome measures, we use SF-12 measures of physical and mental health and the Symbol-Digit Test (SDT) that captures cognitive processing speed. In line with previous research we find a highly significant Flynn effect on cognition; i.e., SDT scores are higher among those who were tested more recently (at the same age). This result holds for men and women, all age groups, and across all levels of education. While we observe a secular improvement in terms of cognitive functioning, at the same time, average physical and mental health has declined. The decline in average physical health is shown to be stronger for men than for women and found to be strongest for low-educated, young-old men aged 50–64: the decline over the 6-year interval in average physical health is estimated to amount to about 0.37 SD, whereas average fluid cognition improved by about 0.29 SD. This pattern of results at the population-level (trends in average population health) stands in interesting contrast to the positive association of physical health and cognitive functioning at the individual-level. The findings underscore the multi-dimensionality of health and the aging process. PMID:26323093

  15. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 2)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

    The article proves that the so-called “Simpson's paradox” is a special case of the manifestation of the properties of the weighted average. It always comes down to a comparison of two weighted averages, in which the average of the variables with larger values is smaller than that of the variables with smaller values. The article demonstrates one method for analyzing the relative change of magnitudes of the type S = Σ_{i=1}^{k} x_i y_i, which answers the question: what is the reason for the weighted average of a few variables with higher values to ...

  16. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 1)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

    The article proves that the so-called “Simpson's paradox” is a special case of the manifestation of the properties of the weighted average. It always comes down to a comparison of two weighted averages, in which the average of the variables with larger values is smaller than that of the variables with smaller values. The article demonstrates one method for analyzing the relative change of magnitudes of the type S = Σ_{i=1}^{k} x_i y_i, which answers the question: what is the reason for the weighted average of a few variables with higher values to be...
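
    Parts 1 and 2 describe the same weighted-average mechanism; a minimal numeric sketch (group rates and weights invented for illustration) shows how the weights alone can flip the comparison of the overall averages:

        # Simpson's paradox as a property of weighted averages (illustrative numbers).
        rates_a = {"group1": 0.93, "group2": 0.73}   # per-group success rates, option A
        rates_b = {"group1": 0.87, "group2": 0.69}   # lower than A in *every* group
        weights_a = {"group1": 87, "group2": 263}    # group sizes (weights) under A
        weights_b = {"group1": 270, "group2": 80}    # B's weights favor the strong group

        def weighted_avg(rates, weights):
            total = sum(weights.values())
            return sum(rates[g] * weights[g] for g in rates) / total

        print(weighted_avg(rates_a, weights_a))  # ~0.78
        print(weighted_avg(rates_b, weights_b))  # ~0.83 -- higher overall despite lower per-group rates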

  17. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
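
    The paper develops its models in a spreadsheet; the following is a rough Python sketch of the core idea under our own assumptions (one pair of results per patient, a rolling mean over the last N deltas, and a step bias injected halfway through the series):

        import numpy as np

        rng = np.random.default_rng(1)
        n_patients, N = 400, 10                # N = number of sequential deltas averaged
        true_values = rng.normal(100, 15, n_patients)        # between-subject variation
        first = true_values + rng.normal(0, 3, n_patients)   # within-subject + analytical noise
        bias = np.where(np.arange(n_patients) >= 200, 5.0, 0.0)  # assay bias added midway
        second = true_values + rng.normal(0, 3, n_patients) + bias

        deltas = second - first                              # delta check per patient
        avg_of_delta = np.convolve(deltas, np.ones(N) / N, mode="valid")  # rolling mean

        # The rolling average sits near 0 before patient 200 and near +5 afterwards,
        # flagging the added bias once enough post-shift deltas enter the window.
        print(avg_of_delta[:5].round(2), avg_of_delta[-5:].round(2))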

  18. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    Science.gov (United States)

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

    This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectance theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  19. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
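
    As a toy illustration of the 'face-average' representation (not the phone's actual enrolment pipeline; the file names are hypothetical, the photos are assumed to be pre-aligned, same-size grayscale images, and Pillow is required):

        import numpy as np
        from PIL import Image

        # Hypothetical pre-aligned grayscale enrolment photos of one user
        paths = ["user_01.png", "user_02.png", "user_03.png"]
        stack = np.stack([np.asarray(Image.open(p).convert("L"), dtype=float) for p in paths])

        face_average = stack.mean(axis=0)   # pixel-wise mean image: the 'face-average'
        Image.fromarray(face_average.astype(np.uint8)).save("user_average.png")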

  20. Continuous and high-intensity interval training: which promotes higher pleasure?

    Directory of Open Access Journals (Sweden)

    Bruno R R Oliveira

    OBJECTIVES: To compare the psychological responses to continuous training (CT) and high-intensity interval training (HIT) sessions. METHODS: Fifteen men attended one CT session and one HIT session. During the first visit, the maximum heart rate, VO2Peak and respiratory compensation point (RCP) were determined through a maximal cardiopulmonary exercise test. The HIT stimulus intensity corresponded to 100% of VO2Peak, and the average intensity of both sessions was maintained at 15% below the RCP. The order of the sessions was randomized. Psychological and physiological variables were recorded before, during and after each session. RESULTS: There were no significant differences between the average percentages of VO2 during the two exercise sessions (HIT: 73.3% vs. CT: 71.8%; p = 0.779). Lower responses on the feeling scale (p ≤ 0.01) and higher responses on the felt arousal scale (p ≤ 0.001) and the rating of perceived exertion were obtained during the HIT session. Despite the more negative feeling scale responses observed during HIT and a greater feeling of fatigue (measured by the Profile of Mood States) afterwards (p < 0.01), the physical activity enjoyment scale was not significantly different between the two conditions (p = 0.779). CONCLUSION: Despite the same average intensity for both conditions, similar psychological responses under HIT and CT conditions were not observed, suggesting that the higher dependence on anaerobic metabolism during HIT negatively influenced the feeling scale responses.

  1. Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    BACKGROUND: The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE: The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors’ experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS: Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS: The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average, in terms of concentration, work pace, efficiency and perception.

  2. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
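
    The RESULTS statement can be checked numerically. In the notation below (ours, not the paper's), with weighting functions w and v the averages satisfy Ā_w − Ā_v = Cov_v(x, w/v) / Ē_v(w/v), where the covariance and the mean of the ratio are computed with weights v:

        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.normal(50, 10, 1000)       # the variable being averaged
        w = rng.uniform(0.5, 2.0, 1000)    # first weighting function
        v = rng.uniform(0.5, 2.0, 1000)    # alternative weighting function

        avg_w = np.average(x, weights=w)
        avg_v = np.average(x, weights=v)

        r = w / v                                   # ratio of the weighting functions
        mean_r = np.average(r, weights=v)
        cov_v = np.average((x - avg_v) * (r - mean_r), weights=v)  # v-weighted covariance

        print(avg_w - avg_v, cov_v / mean_r)        # the two numbers agree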

  3. Nonlinearity management in higher dimensions

    International Nuclear Information System (INIS)

    Kevrekidis, P G; Pelinovsky, D E; Stefanov, A

    2006-01-01

    In the present paper, we revisit nonlinearity management of the time-periodic nonlinear Schroedinger equation and the related averaging procedure. By means of rigorous estimates, we show that the averaged nonlinear Schroedinger equation does not blow up in the higher dimensional case so long as the corresponding solution remains smooth. In particular, we show that the H^1 norm remains bounded, in contrast with the usual blow-up mechanism for the focusing Schroedinger equation. This conclusion agrees with earlier works in the case of strong nonlinearity management but contradicts those in the case of weak nonlinearity management. The apparent discrepancy is explained by the divergence of the averaging procedure in the limit of weak nonlinearity management

  4. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but the optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have a lion's share in multimedia communication, efficient image compression techniques have become a basic need for the optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using a moving average histogram, followed by the correction of color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low resolution images for testing, but the proposed method has been tested on various image resolutions to allow a clear assessment of the technique. The proposed method has been tested on 35 images with varying resolution and has been compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio) and computational complexity. The outcome shows that the proposed methodology is a better trade-off in terms of compression ratio, PSNR (which determines the quality of the image) and computational complexity. (author)

  5. Impression formation of tests: retrospective judgments of performance are higher when easier questions come first.

    Science.gov (United States)

    Jackson, Abigail; Greene, Robert L

    2014-11-01

    Four experiments are reported on the importance of retrospective judgments of performance (postdictions) on tests. Participants answered general knowledge questions and estimated how many questions they answered correctly. They gave higher postdictions when easy questions preceded difficult questions. This was true when time to answer each question was equalized and constrained, when participants were instructed not to write answers, and when questions were presented in a multiple-choice format. Results are consistent with the notion that first impressions predominate in overall perception of test difficulty.

  6. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest. Accelerating the particles to full energy results in distinct phase-energy correlations, or chirps, on each bunch train, independently controlled by the choice of phase offset. The earlier trains are more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M₅₆, which are selected to compress all three bunch trains at the FEL, with higher order terms managed.

  7. Test One to Test Many: A Unified Approach to Quantum Benchmarks

    Science.gov (United States)

    Bai, Ge; Chiribella, Giulio

    2018-04-01

    Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.

  8. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  9. 77 FR 74452 - Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight

    Science.gov (United States)

    2012-12-14

    ... require FTA to work with bus manufacturers and transit agencies to establish a new pass/ fail standard for... buses from the current value of 150 pounds to a new value of 175 pounds. This increase was proposed to... new pass/fail standards that require a more comprehensive review of its overall bus testing program...

  10. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Full Text Available The definition of wage in Poland not before 1998 includes any value of social security contribution. Changed definition creates higher level of reported wages, but was expected not to influence the take home pay. Nevertheless, the trend of average wages, after a short period, has returned to its previous line. Such effect is explained in the term of money illusion.

  11. Results of faecal immunochemical test for colorectal cancer screening, in average risk population, in a cohort of 1389 subjects.

    Science.gov (United States)

    Miuţescu, Bogdan; Sporea, Ioan; Popescu, Alina; Bota, Simona; Iovănescu, Dana; Burlea, Amelia; Mos, Liana; Miuţescu, Eftimie

    2013-01-01

    The aim of this study is to evaluate the usefulness of the fecal immunochemical test (FIT) in colorectal cancer screening and in the detection of precancerous lesions and early colorectal cancer. The study evaluated asymptomatic patients at average risk (no personal or family history of polyps or colorectal cancer), aged between 50 and 74 years. The presence of occult haemorrhage was tested with the immunochemical faecal test Hem Check 1 (Veda Lab, France). The subjects were not asked to observe any dietary or drug restrictions. Colonoscopy was recommended to all subjects who tested positive. In our study, we had a total of 1389 participants who met the inclusion criteria, with a mean age of 61.2 ± 12.8 years; 565 (40.7%) were men and 824 (59.3%) women. FIT was positive in 87 individuals (6.3%). In 57/87 subjects (65.5%) with positive FIT, colonoscopy was performed, while the rest of the subjects refused or delayed the investigation. Five (8.8%) patients were not able to have a complete colonoscopy, due to neoplastic stenosis. Relative to all study participants, the colonoscopies revealed cancer in 10 cases (0.7%), advanced adenomas in 29 cases (2.1%) and non-advanced adenomas in 15 cases (1.1%). The colonoscopies performed revealed a greater percentage of advanced adenomas in the left colon compared to the right colon, 74.1% vs. 28.6% (p<0.001). In our study, FIT had a positivity rate of 6.3%. The detection rate for advanced neoplasia was 2.8% (0.7% for cancer, 2.1% for advanced adenomas) in our study group. Adherence to colonoscopy for FIT-positive subjects was 65.5%.

  12. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    Full Text Available The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although k-dependence Bayesian (KDB classifier can construct at arbitrary points (values of k along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of conditional probability distribution while reducing computational complexity. The final classifier, averaged k-dependence Bayesian (AKDB classifiers, will average the output of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB, tree augmented naive Bayes (TAN, Averaged one-dependence estimators (AODE, and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  13. Comparison of mass transport using average and transient rainfall boundary conditions

    International Nuclear Information System (INIS)

    Duguid, J.O.; Reeves, M.

    1976-01-01

    A general two-dimensional model for simulation of saturated-unsaturated transport of radionuclides in ground water has been developed and is currently being tested. The model is being applied to study the transport of radionuclides from a waste-disposal site where field investigations are currently under way to obtain the necessary model parameters. A comparison of the amount of tritium transported is made using both average and transient rainfall boundary conditions. The simulations indicate that there is no substantial difference in the transport for the two conditions tested. However, the values of dispersivity used in the unsaturated zone caused more transport above the water table than has been observed under actual conditions. This deficiency should be corrected and further comparisons should be made before average rainfall boundary conditions are used for long-term transport simulations

  14. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  15. Resident characterization of better-than- and worse-than-average clinical teaching.

    Science.gov (United States)

    Haydar, Bishr; Charnin, Jonathan; Voepel-Lewis, Terri; Baker, Keith

    2014-01-01

    Clinical teachers and trainees share a common view of what constitutes excellent clinical teaching, but associations between these behaviors and high teaching scores have not been established. This study used residents' written feedback to their clinical teachers to identify themes associated with above- or below-average teaching scores. All resident evaluations of their clinical supervisors in a single department were collected from January 1, 2007 until December 31, 2008. A mean teaching score assigned by each resident was calculated. Evaluations that were 20% higher or 15% lower than the resident's mean score were used. A subset of these evaluations was reviewed, generating a list of 28 themes for further study. Two researchers then independently coded the presence or absence of these themes in each evaluation. Interrater reliability of the themes and logistic regression were used to evaluate the predictive associations of the themes with above- or below-average evaluations. Five hundred twenty-seven above-average and 285 below-average evaluations were assessed for the presence or absence of 15 positive themes and 13 negative themes, which were divided into four categories: teaching, supervision, interpersonal, and feedback. Thirteen of the 15 positive themes correlated with above-average evaluations, and nine had high interrater reliability (Intraclass Correlation Coefficient >0.6). Twelve of the 13 negative themes correlated with below-average evaluations, and all had high interrater reliability. On the basis of these findings, the authors developed 13 recommendations for clinical teachers, drawing on the themes identified from the above- and below-average clinical teaching evaluations submitted by anesthesia residents.

  16. Exceptional F(4) higher-spin theory in AdS₆ at one-loop and other tests of duality

    Energy Technology Data Exchange (ETDEWEB)

    Günaydin, Murat [Institute for Gravitation and the Cosmos Physics Department, Pennsylvania State University, University Park, PA 16802 (United States); Skvortsov, Evgeny [Arnold Sommerfeld Center for Theoretical Physics, Ludwig-Maximilians University Munich, Theresienstr. 37, D-80333 Munich (Germany); Lebedev Institute of Physics, Leninsky ave. 53, 119991 Moscow (Russian Federation); Tran, Tung [Department of Physics, Brown University, Providence, Rhode Island 02912 (United States)

    2016-11-28

    We study the higher-spin gauge theory in six-dimensional anti-de Sitter space AdS₆ that is based on the exceptional Lie superalgebra F(4). The relevant higher-spin algebra was constructed in http://arxiv.org/abs/1409.2185. We determine the spectrum of the theory and show that it contains the physical fields of the Romans F(4) gauged supergravity. The full spectrum consists of an infinite tower of unitary supermultiplets of F(4) which extend the Romans multiplet to higher spins, plus a single short supermultiplet. Motivated by applications to this novel supersymmetric higher-spin theory as well as to other theories, we extend the known one-loop tests of AdS/CFT duality in various directions. The spectral zeta-function is derived for the most general case of fermionic and mixed-symmetry fields, which allows one to test the Type-A and B theories and supersymmetric extensions thereof in any dimension. We also study higher-spin doubletons and partially-massless fields. While most of the tests are successfully passed, the Type-B theory in all even-dimensional anti-de Sitter spacetimes presents an interesting puzzle: the free energy as computed from the bulk is not equal to that of the free fermion on the CFT side, though there is some systematics to the discrepancy.

  17. Recent developments in high average power driver technology

    International Nuclear Information System (INIS)

    Prestwich, K.R.; Buttram, M.T.; Rohwein, G.J.

    1979-01-01

    Inertial confinement fusion (ICF) reactors will require driver systems operating with tens to hundreds of megawatts of average power. The pulse power technology that will be required to build such drivers is in a primitive state of development. Recent developments in repetitive pulse power are discussed. A high-voltage transformer has been developed and operated at 3 MV in a single pulse experiment and is being tested at 1.5 MV, 5 kJ and 10 pps. A low-loss, 1 MV, 10 kJ, 10 pps Marx generator is being tested. Test results from gas-dynamic spark gaps that operate both in the 100 kV and 700 kV range are reported. A 250 kV, 1.5 kA/cm², 30 ns electron beam diode has operated stably for 1.6 × 10⁵ pulses

  18. An Experimental Study Related to Planning Abilities of Gifted and Average Students

    Directory of Open Access Journals (Sweden)

    Marilena Z. Leana-Taşcılar

    2016-02-01

    Gifted students differ from their average peers in psychological, social, emotional and cognitive development. One of these differences in the cognitive domain is related to executive functions. One of the most important executive functions is planning and organization ability. The aim of this study was to compare the planning abilities of gifted students with those of their average peers and to test the effectiveness of a training program on the planning abilities of gifted and average students. First, the students' intelligence and planning abilities were measured, and the students were then assigned to either the experimental or the control group. The groups were matched by intelligence and planning ability (experimental: 13 gifted and 8 average; control: 14 gifted and 8 average). In total, 182 students (79 gifted and 103 average) participated in the study. Then, a training program was implemented in the experimental group to find out if it improved students' planning ability. Results showed that boys had better planning abilities than girls did, and gifted students had better planning abilities than their average peers did. Significant results were obtained in favor of the experimental group in the posttest scores.

  19. Higher-Order Asymptotics and Its Application to Testing the Equality of the Examinee Ability Over Two Sets of Items.

    Science.gov (United States)

    Sinharay, Sandip; Jensen, Jens Ledet

    2018-06-27

    In educational and psychological measurement, researchers and/or practitioners are often interested in examining whether the ability of an examinee is the same over two sets of items. Such problems can arise in measurement of change, detection of cheating on unproctored tests, erasure analysis, detection of item preknowledge, etc. Traditional frequentist approaches that are used in such problems include the Wald test, the likelihood ratio test, and the score test (e.g., Fischer, Appl Psychol Meas 27:3-26, 2003; Finkelman, Weiss, & Kim-Kang, Appl Psychol Meas 34:238-254, 2010; Glas & Dagohoy, Psychometrika 72:159-180, 2007; Guo & Drasgow, Int J Sel Assess 18:351-364, 2010; Klauer & Rettig, Br J Math Stat Psychol 43:193-206, 1990; Sinharay, J Educ Behav Stat 42:46-68, 2017). This paper shows that approaches based on higher-order asymptotics (e.g., Barndorff-Nielsen & Cox, Inference and asymptotics. Springer, London, 1994; Ghosh, Higher order asymptotics. Institute of Mathematical Statistics, Hayward, 1994) can also be used to test for the equality of the examinee ability over two sets of items. The modified signed likelihood ratio test (e.g., Barndorff-Nielsen, Biometrika 73:307-322, 1986) and the Lugannani-Rice approximation (Lugannani & Rice, Adv Appl Prob 12:475-490, 1980), both of which are based on higher-order asymptotics, are shown to provide some improvement over the traditional frequentist approaches in three simulations. Two real data examples are also provided.

  20. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  1. Fractional averaging of repetitive waveforms induced by self-imaging effects

    Science.gov (United States)

    Romero Cortés, Luis; Maram, Reza; Azaña, José

    2015-10-01

    We report the theoretical prediction and experimental observation of averaging of stochastic events with an equivalent result of calculating the arithmetic mean (or sum) of a rational number of realizations of the process under test, not necessarily limited to an integer record of realizations, as discrete statistical theory dictates. This concept is enabled by a passive amplification process, induced by self-imaging (Talbot) effects. In the specific implementation reported here, a combined spectral-temporal Talbot operation is shown to achieve undistorted, lossless repetition-rate division of a periodic train of noisy waveforms by a rational factor, leading to local amplification, and the associated averaging process, by the fractional rate-division factor.

  2. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Calculations of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, are prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a priori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found as to which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
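
    A toy sketch of the bias mechanism with synthetic numbers (not the paper's system simulator): exponentiating the mean of retrieved log-abundances estimates a geometric mean, which sits below the arithmetic mean of the abundances themselves.

        import numpy as np

        rng = np.random.default_rng(3)
        true_vmr = rng.lognormal(mean=np.log(300.0), sigma=0.5, size=10_000)  # variable atmosphere
        noise = rng.normal(0, 0.2, size=true_vmr.size)    # retrieval noise in log space
        retrieved_log = np.log(true_vmr) + noise

        linear_mean = np.mean(np.exp(retrieved_log))      # average the abundances
        log_mean = np.exp(np.mean(retrieved_log))         # average the logs, then exponentiate

        print(f"true mean:   {true_vmr.mean():8.2f}")
        print(f"linear mean: {linear_mean:8.2f}")  # near the true mean, slightly inflated by noise
        print(f"log mean:    {log_mean:8.2f}")     # tracks the geometric mean, biased low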

  3. Are MFT-B Results Biased Because of Students Who Do Not Take the Test?

    Science.gov (United States)

    Valero, Magali; Kocher, Claudia

    2014-01-01

    The authors study the characteristics of students who take the Major Field Test in Business (MFT-B) versus those who do not. The authors find that students with higher cumulative grade point averages (GPAs) are more likely to take the test. Additionally, students are more likely to take the test if it is offered late in the semester. Further…

  4. Correlation between Grade Point Averages and Student Evaluation of Teaching Scores: Taking a Closer Look

    Science.gov (United States)

    Griffin, Tyler J.; Hilton, John, III.; Plummer, Kenneth; Barret, Devynne

    2014-01-01

    One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations…

  5. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solutions depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor is situated higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes that remain finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  6. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    Science.gov (United States)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-07-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamline and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of flow coefficient with the values of Reynolds number. With a maximum deviation of only  ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics.

  7. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states, invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  8. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was launched, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs; understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs; and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  9. MCQ testing in higher education: Yes, there are bad items and invalid scores—A case study identifying solutions

    OpenAIRE

    Brown, Gavin

    2017-01-01

    This is a lecture given at Umea University, Sweden in September 2017. It is based on the published study: Brown, G. T. L., & Abdulnabi, H. (2017). Evaluating the quality of higher education instructor-constructed multiple-choice tests: Impact on student grades. Frontiers in Education: Assessment, Testing, & Applied Measurement, 2(24). doi:10.3389/feduc.2017.00024

  10. Application of the Value Averaging Investment Method on the US Stock Market

    Directory of Open Access Journals (Sweden)

    Martin Širůček

    2015-01-01

    The paper focuses on empirical testing and the use of regular investment, particularly the value averaging investment method, on real data from the US stock market in the years 1990–2013. The 23-year period was chosen because of a consistently interesting situation in the market, so that this regular investment method could be tested in both a bull (expansion) period and a bear (recession) period. The analysis focuses on the results obtained by using this investment method from the viewpoint of return and risk on selected investment horizons (short-term 1 year, medium-term 5 years and long-term 10 years). The selected aim is reached by using the ratio between profit and risk. The revenue-risk profile is the ratio of the average annual profit rate, measured for each investment by the internal rate of return, to the average annual risk, expressed by the sample standard deviation. The obtained results show that regular investment is suitable for a long investment horizon: the longer the investment horizon, the better the revenue-risk ratio (Sharpe ratio). Based on the results obtained, specific investment recommendations are presented in the conclusion, e.g., whether this investment method is suitable for a long investment period, and whether it is better to use value averaging in a growing, sinking or sluggish market.
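
    A minimal sketch of the value averaging rule itself, under our own simplified assumptions (a linear target path for portfolio value, a monthly trade of exactly the shortfall or surplus, synthetic prices, no transaction costs):

        import numpy as np

        rng = np.random.default_rng(4)
        months = 120
        prices = 100 * np.cumprod(1 + rng.normal(0.005, 0.04, months))  # synthetic price path

        target_step = 1000.0              # target portfolio value grows by 1000 per month
        shares, invested = 0.0, []
        for t, p in enumerate(prices, start=1):
            target = target_step * t
            trade_value = target - shares * p   # buy if below target, sell if above
            shares += trade_value / p
            invested.append(trade_value)

        # By construction the final value equals months * target_step; the return comes
        # from the gap between that value and the net cash actually invested.
        print(f"final value {shares * prices[-1]:,.0f} vs net invested {sum(invested):,.0f}")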

  11. Trend analysis of δ18O composition of precipitation in Germany: Combining Mann-Kendall trend test and ARIMA models to correct for higher order serial correlation

    Science.gov (United States)

    Klaus, Julian; Pan Chun, Kwok; Stumpp, Christine

    2015-04-01

    Spatio-temporal dynamics of stable oxygen (18O) and hydrogen (2H) isotopes in precipitation can be used as proxies for changing hydro-meteorological conditions and for regional and global climate patterns. While spatial patterns and distributions have gained much attention in recent years, temporal trends in stable isotope time series are rarely investigated and our understanding of them is still limited. This might be a result of a lack of proper trend detection tools and of effort devoted to exploring trend processes. Here we make use of an extensive data set of stable isotopes in German precipitation. In this study we investigate temporal trends of δ18O in precipitation at 17 observation stations in Germany between 1978 and 2009. For that we test different approaches for proper trend detection, accounting for first and higher order serial correlation. We test whether significant trends in the isotope time series can be observed based on different models. We apply the Mann-Kendall trend test on the isotope series, using general multiplicative seasonal autoregressive integrated moving average (ARIMA) models which account for first and higher order serial correlations. With this approach we can also account for the effects of temperature and precipitation amount on the trend. Further, we investigate the role of geographic parameters on isotope trends. To benchmark our proposed approach, the ARIMA results are compared to a trend-free prewhitening (TFPW) procedure, the state-of-the-art method for removing first order autocorrelation in environmental trend studies. Moreover, we explore whether higher order serial correlation in isotope series affects our trend results. The results show that three out of the 17 stations have significant changes when higher order autocorrelations are adjusted for, and four stations show a significant trend when temperature and precipitation effects are considered. Significant trends in the isotope time series are generally observed at low elevation stations (≤315 m a
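
    For reference, a self-contained sketch of the classical Mann-Kendall test that the paper builds on (ties and the serial-correlation corrections studied by the authors are deliberately omitted):

        import numpy as np
        from math import erf, sqrt

        def mann_kendall(x):
            """Classical Mann-Kendall trend test; no tie or autocorrelation correction."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            s = 0.0
            for i in range(n - 1):
                s += np.sign(x[i + 1:] - x[i]).sum()   # concordant minus discordant pairs
            var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S under the null
            z = 0.0 if s == 0 else (s - np.sign(s)) / sqrt(var_s)  # continuity-corrected
            p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))        # two-sided normal p-value
            return s, z, p

        rng = np.random.default_rng(5)
        series = 0.01 * np.arange(240) + rng.normal(0, 1, 240)  # weak trend plus noise
        print(mann_kendall(series))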

  12. Identification of Large-Scale Structure Fluctuations in IC Engines using POD-Based Conditional Averaging

    Directory of Open Access Journals (Sweden)

    Buhl Stefan

    2016-01-01

    Cycle-to-cycle variations (CCV) in IC engines are a well-known phenomenon, and their definition and quantification are well-established for global quantities such as the mean pressure. On the other hand, the definition of CCV for local quantities, e.g. the velocity or the mixture distribution, is less straightforward. This paper proposes a new method to identify and calculate cyclic variations of the flow field in IC engines, emphasizing the different contributions from large-scale energetic (coherent) structures, identified by a combination of Proper Orthogonal Decomposition (POD) and conditional averaging, and small-scale fluctuations. Suitable subsets required for the conditional averaging are derived from combinations of the POD coefficients of the second and third modes. Within each subset, the velocity is averaged, and these averages are compared to the ensemble-averaged velocity field, which is based on all cycles. The resulting difference between the subset average and the global average is identified as a cyclic fluctuation of the coherent structures. Then, within each subset, the remaining fluctuations are obtained from the difference between the instantaneous fields and the corresponding subset average. The proposed methodology is tested on two data sets obtained from scale-resolving engine simulations. For the first test case, the numerical database consists of 208 independent samples of a simplified engine geometry. For the second case, 120 cycles of the well-established Transparent Combustion Chamber (TCC) benchmark engine are considered. For both applications, the suitability of the method to identify the two contributions to CCV is discussed and the results are directly linked to the observed flow field structures.
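
    A generic sketch of the snapshot-POD and conditional-averaging steps (our own formulation via the SVD; random stand-in arrays replace the engine snapshots):

        import numpy as np

        rng = np.random.default_rng(6)
        n_cycles, n_points = 120, 2000                     # cycles x flow-field samples
        snapshots = rng.normal(size=(n_cycles, n_points))  # stand-in velocity snapshots

        mean_field = snapshots.mean(axis=0)                # ensemble average over all cycles
        fluct = snapshots - mean_field                     # per-cycle fluctuations

        # Snapshot POD: rows of Vt are spatial modes, coeffs are per-cycle amplitudes
        U, S, Vt = np.linalg.svd(fluct, full_matrices=False)
        coeffs = U * S                                     # shape (n_cycles, n_modes)

        # Conditional averaging: subset cycles by the signs of the 2nd and 3rd coefficients
        subset = (coeffs[:, 1] > 0) & (coeffs[:, 2] > 0)
        subset_avg = snapshots[subset].mean(axis=0)
        coherent_ccv = subset_avg - mean_field             # large-scale (coherent) contribution
        small_scale = snapshots[subset] - subset_avg       # remaining small-scale fluctuations
        print(subset.sum(), coherent_ccv.shape, small_scale.shape)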

  13. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a nonlinear manifold. It is shown that the common approaches are approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
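
    A minimal sketch of the barycenter approach discussed here, for unit quaternions (sign alignment plus renormalization; a reasonable approximation only when the rotations are clustered):

        import numpy as np

        def average_rotations(quats):
            """Barycenter-style average of unit quaternions [x, y, z, w]."""
            quats = np.asarray(quats, dtype=float)
            ref = quats[0]
            # q and -q represent the same rotation, so align signs first
            quats = np.where((quats @ ref)[:, None] < 0, -quats, quats)
            mean = quats.mean(axis=0)
            return mean / np.linalg.norm(mean)   # renormalize back onto the unit sphere

        # Rotations of 10, 20 and 30 degrees about the z-axis
        half = np.deg2rad([10.0, 20.0, 30.0]) / 2
        qs = [[0.0, 0.0, np.sin(a), np.cos(a)] for a in half]
        print(average_rotations(qs))   # close to the 20-degree rotation, as expected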

  14. Analysis of litter size and average litter weight in pigs using a recursive model

    DEFF Research Database (Denmark)

    Varona, Luis; Sorensen, Daniel; Thompson, Robin

    2007-01-01

    An analysis of litter size and average piglet weight at birth in Landrace and Yorkshire using a standard two-trait mixed model (SMM) and a recursive mixed model (RMM) is presented. The RMM establishes a one-way link from litter size to average piglet weight. It is shown that there is a one-to-one correspondence between the parameters of SMM and RMM and that they generate equivalent likelihoods. As parameterized in this work, the RMM tests for the presence of a recursive relationship between additive genetic values, permanent environmental effects, and specific environmental effects of litter size on average piglet weight. The equivalent standard mixed model tests whether or not the covariance matrices of the random effects have a diagonal structure. In Landrace, posterior predictive model checking supports a model without any form of recursion or, alternatively, a SMM with diagonal covariance

  15. Deblurring of class-averaged images in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Park, Wooram; Chirikjian, Gregory S; Madden, Dean R; Rockmore, Daniel N

    2010-01-01

    This paper proposes a method for the deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. An inaccurate class average due to alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate of the underlying clear image from a blurred class-averaged image, using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion via the Laguerre–Fourier expansions, and both the Hermite and Laguerre–Fourier expansions retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method

  16. Will Higher Education Pass "A Test of Leadership"? An Interview with Spellings Commission Chairman Charles Miller

    Science.gov (United States)

    Callan, Pat

    2007-01-01

    Charles Miller, former chairman of the University of Texas System's Board of Regents, chaired the recent Commission on the Future of Higher Education created by Secretary of Education Margaret Spellings. Here he is interviewed regarding the panel's widely discussed report, "A Test of Leadership," by Pat Callan, president of the National…

  17. Biodegradation testing of chemicals with high Henry’s constants – separating mass and effective concentration reveals higher rate constants

    DEFF Research Database (Denmark)

    Birch, Heidi; Andersen, Henrik Rasmus; Comber, Mike

    Headspace solid-phase microextraction (HS-SPME) was applied directly to the test systems to measure substrate depletion by biodegradation relative to abiotic controls. HS-SPME was also applied to determine air-to-water partitioning ratios. Water phase biodegradation rate constants, kwater, were up to 72 times higher than test system

  18. Distribution, congruence, and hotspots of higher plants in China.

    Science.gov (United States)

    Zhao, Lina; Li, Jinya; Liu, Huiyuan; Qin, Haining

    2016-01-11

    Identifying biodiversity hotspots has become a central issue in setting up priority protection areas, especially as financial resources for biological diversity conservation are limited. Taking China's Higher Plants Red List (CHPRL), including Bryophytes, Ferns, Gymnosperms, Angiosperms, as the data source, we analyzed the geographic patterns of species richness, endemism, and endangerment via data processing at a fine grid-scale with an average edge length of 30 km based on three aspects of richness information: species richness, endemic species richness, and threatened species richness. We sought to test the accuracy of hotspots used in identifying conservation priorities with regard to higher plants. Next, we tested the congruence of the three aspects and made a comparison of the similarities and differences between the hotspots described in this paper and those in previous studies. We found that over 90% of threatened species in China are concentrated. While a high spatial congruence is observed among the three measures, there is a low congruence between two different sets of hotspots. Our results suggest that biodiversity information should be considered when identifying biological hotspots. Other factors, such as scales, should be included as well to develop biodiversity conservation plans in accordance with the region's specific conditions.

  19. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  20. Change of direction ability test differentiates higher level and lower level soccer referees

    Science.gov (United States)

    Los, Arcos A; Grande, I; Casajús, JA

    2016-01-01

    This report examines the agility and level of acceleration capacity of Spanish soccer referees and investigates the possible differences between field referees of different categories. The speed test consisted of 3 maximum acceleration stretches of 15 metres. The change of direction ability (CODA) test used in this study was a modification of the Modified Agility Test (MAT). The study included a sample of 41 Spanish soccer field referees from the Navarre Committee of Soccer Referees divided into two groups: i) the higher level group (G1, n = 20): 2ndA, 2ndB and 3rd division referees from the Spanish National Soccer League (28.43 ± 1.39 years); and ii) the lower level group (G2, n = 21): Navarre Provincial League soccer referees (29.54 ± 1.87 years). Significant differences were found with respect to the CODA between G1 (5.72 ± 0.13 s) and G2 (6.06 ± 0.30 s), while no differences were encountered between groups in acceleration ability. No significant correlations were obtained in G1 between agility and the capacity to accelerate. Significant correlations were found between sprint and agility times in the G2 and in the total group. The results of this study showed that agility can be used as a discriminating factor for differentiating between national and regional field referees; however, no observable differences were found over the 5 and 15 m sprint tests. PMID:27274111

  1. Pedestrian headform testing: inferring performance at impact speeds and for headform masses not tested, and estimating average performance in a range of real-world conditions.

    Science.gov (United States)

    Hutchinson, T Paul; Anderson, Robert W G; Searson, Daniel J

    2012-01-01

    Tests are routinely conducted where instrumented headforms are projected at the fronts of cars to assess pedestrian safety. Better information would be obtained by accounting for performance over the range of expected impact conditions in the field. Moreover, methods will be required to integrate the assessment of secondary safety performance with primary safety systems that reduce the speeds of impacts. Thus, we discuss how to estimate performance over a range of impact conditions from performance in one test and how this information can be combined with information on the probability of different impact speeds to provide a balanced assessment of pedestrian safety. Theoretical consideration is given to two distinct aspects of impact safety performance: the test impact severity (measured by the head injury criterion, HIC) at a speed at which a structure does not bottom out, and the speed at which bottoming out occurs. Further considerations are given to an injury risk function, the distribution of impact speeds likely in the field, and the effect of primary safety systems on impact speeds. These are used to calculate curves that estimate injuriousness for combinations of test HIC, bottoming-out speed, and alternative distributions of impact speeds. The injuriousness of a structure that may be struck by the head of a pedestrian depends not only on the result of the impact test but also on the bottoming-out speed and the distribution of impact speeds. Example calculations indicate that the relationship between the test HIC and injuriousness extends over a larger range than is presently used by the European New Car Assessment Programme (Euro NCAP), that bottoming out at speeds only slightly higher than the test speed can significantly increase the injuriousness of an impact location, and that effective primary safety systems that reduce impact speeds significantly modify the relationship between the test HIC and injuriousness. Present testing regimes do not take these factors fully into account.
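
    The expected-injuriousness calculation described above can be sketched in a few lines. This is a minimal illustration only, not the paper's model: the logistic risk curve, the HIC-speed scaling exponent, the bottoming-out penalty, and the impact-speed distribution are all assumed values.

    ```python
    import numpy as np

    # Hypothetical logistic risk curve: probability of serious head injury vs. HIC.
    def injury_risk(hic):
        return 1.0 / (1.0 + np.exp(-(hic - 1500.0) / 300.0))

    # Assumed scaling of HIC with impact speed below bottoming out (HIC ~ v^n),
    # with a crude penalty factor applied once the structure bottoms out.
    def hic_at_speed(v, hic_test, v_test=11.1, n=2.5, v_bottom=np.inf, penalty=3.0):
        hic = hic_test * (v / v_test) ** n
        return np.where(v > v_bottom, hic * penalty, hic)

    # Assumed (roughly normal) distribution of real-world impact speeds, in m/s.
    speeds = np.linspace(2.0, 20.0, 200)
    pdf = np.exp(-0.5 * ((speeds - 9.0) / 3.0) ** 2)
    pdf /= np.trapz(pdf, speeds)

    # Expected injuriousness: risk integrated against the speed distribution.
    risk = injury_risk(hic_at_speed(speeds, hic_test=1000.0, v_bottom=12.0))
    print("expected injuriousness:", np.trapz(risk * pdf, speeds))
    ```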

  2. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  3. The Chicken Soup Effect: The Role of Recreation and Intramural Participation in Boosting Freshman Grade Point Average

    Science.gov (United States)

    Gibbison, Godfrey A.; Henry, Tracyann L.; Perkins-Brown, Jayne

    2011-01-01

    Freshman grade point average, in particular first-semester grade point average, is an important predictor of survival and eventual student success in college. As many institutions of higher learning are searching for ways to improve student success, one would hope that policies geared towards the success of freshmen have long-term benefits…

  4. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. To address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined using nine averaging methods: the simple arithmetic mean (SAM), Akaike information criterion average (AICA), Bates-Granger average (BGA), Bayes information criterion average (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe efficiency was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging with these four methods was superior to the best individual member in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
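
    In its usual formulation, Granger-Ramanathan variant A reduces to an unconstrained least-squares fit of the observed hydrograph on the member hydrographs, with no intercept. A minimal sketch on synthetic data, assuming numpy; the Nash-Sutcliffe scoring follows the study's metric:

    ```python
    import numpy as np

    def gra_weights(sims, obs):
        """Granger-Ramanathan variant A: unconstrained least-squares weights
        (no intercept) fitted on the calibration period."""
        w, *_ = np.linalg.lstsq(sims, obs, rcond=None)
        return w

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # sims: (n_days, n_members) matrix of member hydrographs; obs: observed flows.
    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 5.0, size=365)                       # synthetic "observed" flows
    sims = obs[:, None] + rng.normal(0, 2.0, size=(365, 12))  # 12 noisy members

    w = gra_weights(sims, obs)
    print("NSE of averaged hydrograph:", nse(sims @ w, obs))
    print("best single member NSE:", max(nse(sims[:, j], obs) for j in range(12)))
    ```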

  5. Database of average-power damage thresholds at 1064 nm

    International Nuclear Information System (INIS)

    Rainer, F.; Hildum, E.A.; Milam, D.

    1987-01-01

    We have completed a database of average-power, laser-induced damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from 2 J/cm² for some metals to >46 J/cm² for a bare polished glass substrate. 4 refs., 7 figs., 1 tab

  6. Online formative tests linked to microlectures improving academic achievement.

    Science.gov (United States)

    Bouwmeester, Rianne A M; de Kleijn, Renske A M; Freriksen, Astrid W M; van Emst, Maarten G; Veeneklaas, Rob J; van Hoeij, Maggy J W; Spinder, Matty; Ritzen, Magda J; Ten Cate, Olle Th J; van Rijen, Harold V M

    2013-12-01

    Online formative tests (OFTs) are powerful tools to direct student learning behavior, especially when enriched with specific feedback. In the present study, we have investigated the effect of OFTs enriched with hyperlinks to microlectures on examination scores. OFTs, available one week preceding each midterm and the final exams, could be used voluntarily. The use of OFTs was related to scores on midterm and final exams using ANOVA, with prior academic achievement as a covariate. On average, 74% of all students used the online formative tests (OFT+) while preparing for the summative midterm exam. OFT+ students obtained significantly higher grades than students who did not use them (OFT−), both without and with correction for previous academic achievement. Two out of three final exam scores did not significantly improve. Students using online formative tests linked to microlectures receive higher grades, especially in highly aligned summative tests.

  7. Dose calculation for photon-emitting brachytherapy sources with average energy higher than 50 keV: report of the AAPM and ESTRO.

    Science.gov (United States)

    Perez-Calatayud, Jose; Ballester, Facundo; Das, Rupak K; Dewerd, Larry A; Ibbott, Geoffrey S; Meigooni, Ali S; Ouhib, Zoubir; Rivard, Mark J; Sloboda, Ron S; Williamson, Jeffrey F

    2012-05-01

    Recommendations of the American Association of Physicists in Medicine (AAPM) and the European Society for Radiotherapy and Oncology (ESTRO) on dose calculations for high-energy (average energy higher than 50 keV) photon-emitting brachytherapy sources are presented, including the physical characteristics of specific (192)Ir, (137)Cs, and (60)Co source models. This report has been prepared by the High Energy Brachytherapy Source Dosimetry (HEBD) Working Group. This report includes considerations in the application of the TG-43U1 formalism to high-energy photon-emitting sources with particular attention to phantom size effects, interpolation accuracy dependence on dose calculation grid size, and dosimetry parameter dependence on source active length. Consensus datasets for commercially available high-energy photon sources are provided, along with recommended methods for evaluating these datasets. Recommendations on dosimetry characterization methods, mainly using experimental procedures and Monte Carlo, are established and discussed. Also included are methodological recommendations on detector choice, detector energy response characterization and phantom materials, and measurement specification methodology. Uncertainty analyses are discussed and recommendations for high-energy sources without consensus datasets are given. Recommended consensus datasets for high-energy sources have been derived for sources that were commercially available as of January 2010. Data are presented according to the AAPM TG-43U1 formalism, with modified interpolation and extrapolation techniques of the AAPM TG-43U1S1 report for the 2D anisotropy function and radial dose function.
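
    For orientation, the 1D (point-source) form of the TG-43 formalism that the report builds on is D(r) = S_K · Λ · (r0/r)² · g(r) · φ_an(r). The sketch below uses placeholder values for the dose-rate constant, radial dose function, and anisotropy function; actual consensus data must be taken from the report itself.

    ```python
    import numpy as np

    # Illustrative values only; not the report's consensus datasets.
    S_K = 40800.0        # air-kerma strength, U (example HDR 192Ir source)
    LAMBDA = 1.109       # dose-rate constant, cGy h^-1 U^-1 (placeholder)
    r0 = 1.0             # reference distance, cm

    r_grid = np.array([0.5, 1.0, 2.0, 3.0, 5.0])       # cm
    g_grid = np.array([1.00, 1.00, 0.99, 0.97, 0.91])  # radial dose function (placeholder)
    phi_an = np.array([0.97, 0.97, 0.96, 0.96, 0.95])  # 1D anisotropy factor (placeholder)

    def dose_rate(r):
        """Dose rate (cGy/h) at distance r via the TG-43 1D formalism:
        D(r) = S_K * Lambda * (r0/r)**2 * g(r) * phi_an(r)."""
        g = np.interp(r, r_grid, g_grid)
        phi = np.interp(r, r_grid, phi_an)
        return S_K * LAMBDA * (r0 / r) ** 2 * g * phi

    print(dose_rate(2.0))
    ```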

  8. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  9. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    International Nuclear Information System (INIS)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-01-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape for the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of its pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distributions of streamlines and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of the flow coefficient with the values of the Reynolds number. With a maximum deviation of only ±3%, the results for the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics. (paper)
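
    The working principle reduces to the dynamic-pressure relation v = K·sqrt(2Δp/ρ), with the flow coefficient K determined by calibration against Reynolds number. A minimal sketch, with an assumed K rather than the paper's measured value:

    ```python
    import numpy as np

    def velocity_from_dp(dp_pa, rho, k=0.62):
        """Mean duct velocity from the tube's averaged differential pressure:
        v = K * sqrt(2 * dp / rho). K = 0.62 is a placeholder flow coefficient,
        not the value measured in the paper."""
        return k * np.sqrt(2.0 * dp_pa / rho)

    # Example: 120 Pa differential in air (rho ~ 1.2 kg/m^3) in a 0.2 m duct.
    area = np.pi * (0.200 / 2) ** 2
    v = velocity_from_dp(120.0, 1.2)
    print("velocity:", v, "m/s;  volume flow:", v * area, "m^3/s")
    ```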

  10. Dropouts and Budgets: A Test of a Dropout Reduction Model among Students in Israeli Higher Education

    Science.gov (United States)

    Bar-Am, Ran; Arar, Osama

    2017-01-01

    This article deals with the problem of student dropout during the first year in a higher education institution. To date, no budget-based model for reducing dropout among engineering students has been developed and tested. This case study was conducted among first-year students taking evening classes in two practical engineering colleges in Israel.…

  11. Higher Drop in Speed during a Repeated Sprint Test in Soccer Players Reporting Former Hamstring Strain Injury

    Science.gov (United States)

    Røksund, Ola D.; Kristoffersen, Morten; Bogen, Bård E.; Wisnes, Alexander; Engeseth, Merete S.; Nilsen, Ann-Kristin; Iversen, Vegard V.; Mæland, Silje; Gundersen, Hilde

    2017-01-01

    Aim: Hamstring strain injury is common in soccer. The aim of this study was to evaluate the physical capacity of players who have and have not suffered from hamstring strain injury in a sample of semi-professional and professional Norwegian soccer players, in order to evaluate characteristics and to identify possible indications of insufficient rehabilitation. Method: Seventy-five semi-professional and professional soccer players (19 ± 3 years) playing at the second and third level in the Norwegian league participated in the study. All players answered a questionnaire, including one question about hamstring strain injury (yes/no) during the previous 2 years. They also performed a 40 m maximal sprint test, a repeated sprint test (8 × 20 m), a countermovement jump, a maximal oxygen consumption (VO2max) test, strength tests and flexibility tests. Independent sample t-tests were used to evaluate differences in the physical capacity of the players who had suffered from hamstring strain injury and those who had not. A mixed between-within subjects analysis of variance was used to compare changes in speed during the repeated sprint test between groups. Results: Players who reported hamstring strain injury during the previous two years (16%) had a significantly higher drop in speed (0.07 vs. 0.02 s, p = 0.007) during the repeated sprint test, compared to players reporting no previous hamstring strain injury. In addition, there was a significant interaction (groups × time) (F = 3.22, p = 0.002), showing that speed in the two groups changed differently during the repeated sprint test. There were no significant differences in relation to age, weight, height, body fat, linear speed, countermovement jump height, leg strength, VO2max, or hamstring flexibility between the groups. Conclusion: Soccer players who reported hamstring strain injury during the previous 2 years showed a significantly higher drop in speed during the repeated sprint test compared to players with no hamstring strain injury.

  12. Discriminant analysis of essay, mathematics/science type of essay, college scholastic ability test, and grade point average as predictors of acceptance to a pre-med course at a Korean medical school.

    Science.gov (United States)

    Jeong, Geum-Hee

    2008-01-01

    A discriminant analysis was conducted to investigate how an essay, a mathematics/science type of essay, a college scholastic ability test, and grade point average affect acceptance to a pre-med course at a Korean medical school. Subjects included 122 and 385 applicants for, respectively, early and regular admission to a medical school in Korea. The early admission examination was conducted in October 2007, and the regular admission examination was conducted in January 2008. The analysis of early admission data revealed significant F values for the mathematics/science type of essay (51.64; P<0.0001) and grade point average (10.66; P=0.0014). The analysis of regular admission data revealed the following F values: 28.81 (P<0.0001) for grade point average, 27.47 (P<0.0001) for college scholastic ability test, 10.67 (P=0.0012) for the essay, and 216.74 (P<0.0001) for the mathematics/science type of essay. Since the mathematics/science type of essay had a strong effect on acceptance, an emphasis on this requirement and exclusion of other kinds of essays would be effective in subsequent entrance examinations for this pre-med course.
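
    An analysis of this general kind can be reproduced in outline with scikit-learn. This is a sketch on synthetic data; the study's actual predictor matrix would be the applicants' four scores, and the class labels accepted/rejected.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical applicant matrix: columns stand for essay, math/science essay,
    # scholastic ability test, and GPA; y marks accepted (1) vs. rejected (0).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(122, 4))
    y = (X[:, 1] + 0.3 * X[:, 3] + rng.normal(0, 0.5, 122) > 0).astype(int)

    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)
    print("discriminant coefficients:", lda.coef_)  # relative weight of each predictor
    print("classification accuracy:", lda.score(X, y))
    ```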

  13. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.
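
    For reference, the conventional TDA that FTDA improves upon is simply a reshape-and-average over an integer number of periods. A minimal numpy sketch (synthetic signal, period assumed known and an exact integer, which is precisely the assumption FTDA relaxes):

    ```python
    import numpy as np

    def time_domain_average(signal, period, n_periods=None):
        """Classic TDA: slice the signal into an integer number of periods of
        integer sample length and average them."""
        if n_periods is None:
            n_periods = len(signal) // period
        return signal[:n_periods * period].reshape(n_periods, period).mean(axis=0)

    # A 50-sample periodic component buried in noise, averaged over 200 periods.
    t = np.arange(50 * 200)
    x = np.sin(2 * np.pi * t / 50) + 0.2 * np.sin(6 * np.pi * t / 50)
    noisy = x + np.random.default_rng(2).normal(0, 1.0, t.size)
    avg = time_domain_average(noisy, period=50)
    print("residual RMS after TDA:", np.sqrt(np.mean((avg - x[:50]) ** 2)))
    ```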

  14. Development of Open-Ended Problems for Measuring The Higher-Order-Thinking-Skills of High School Students on Global Warming Phenomenon

    Science.gov (United States)

    Fianti; Najwa, F. L.; Linuwih, S.

    2017-04-01

    Higher-order thinking skills cannot be developed directly; they require training, for example using open-ended problems to measure and develop students' critical, creative, and problem-solving thinking skills. This study is a research-and-development project producing open-ended problems. The purpose of this study is to measure the properness and effectiveness of the developed product and to observe the profile of students' higher-order thinking skills regarding the global warming phenomenon. According to the experts, the properness of the open-ended problems is 92.59% at the first stage and 97.53% at the second stage, so the product can be considered very proper. The effectiveness test shows a correlation coefficient of 0.634 between students' midterm test scores and the open-ended questions, which falls in the strong category. The higher-order thinking skills of SMA Negeri 1 Salatiga students are in the good category, with an average achievement score of 61.28.

  15. Raven’s test performance of sub-Saharan Africans: average performance, psychometric properties, and the Flynn Effect

    NARCIS (Netherlands)

    Wicherts, J.M.; Dolan, C.V.; Carlson, J.S.; van der Maas, H.L.J.

    2010-01-01

    This paper presents a systematic review of published data on the performance of sub-Saharan Africans on Raven's Progressive Matrices. The specific goals were to estimate the average level of performance, to study the Flynn Effect in African samples, and to examine the psychometric meaning of Raven's test scores.

  16. Test of Axel-Brink predictions by a discrete approach to resonance-averaged (n,γ) spectroscopy

    International Nuclear Information System (INIS)

    Raman, S.; Shahal, O.; Slaughter, G.G.

    1981-01-01

    The limitations imposed by Porter-Thomas fluctuations in the study of primary γ rays following neutron capture have been partly overcome by obtaining individual γ-ray spectra from 48 resonances in the 173Yb(n,γ) reaction and summing them after appropriate normalizations. The resulting average radiation widths (and hence the γ-ray strength function) are in good agreement with the Axel-Brink predictions based on a giant dipole resonance model

  17. Reynolds-Averaged Navier-Stokes Simulation of a 2D Circulation Control Wind Tunnel Experiment

    Science.gov (United States)

    Allan, Brian G.; Jones, Greg; Lin, John C.

    2011-01-01

    Numerical simulations are performed using a Reynolds-averaged Navier-Stokes (RANS) flow solver for a circulation control airfoil. 2D and 3D simulation results are compared to a circulation control wind tunnel test conducted at the NASA Langley Basic Aerodynamics Research Tunnel (BART). The RANS simulations are compared to a low blowing case with a jet momentum coefficient, C(sub u), of 0.047 and a higher blowing case of 0.115. Three-dimensional simulations of the model and tunnel walls show wall effects on the lift and airfoil surface pressures. These wall effects include a 4% decrease of the midspan sectional lift for the C(sub u) = 0.115 blowing condition. Simulations comparing the performance of the Spalart-Allmaras (SA) and Shear Stress Transport (SST) turbulence models are also made, showing that the SST model compares best to the experimental data. A Rotational/Curvature Correction (RCC) to the turbulence model is also evaluated, demonstrating an improvement in the CFD predictions.

  18. Preference for Averageness in Faces Does Not Generalize to Non-Human Primates

    Directory of Open Access Journals (Sweden)

    Olivia B. Tomeo

    2017-07-01

    Full Text Available Facial attractiveness is a long-standing topic of active study in both neuroscience and social science, motivated by its positive social consequences. Over the past few decades, it has been established that averageness is a major factor influencing judgments of facial attractiveness in humans. Non-human primates share similar social behaviors as well as neural mechanisms related to face processing with humans. However, it is unknown whether monkeys, like humans, also find particular faces attractive and, if so, which kind of facial traits they prefer. To address these questions, we investigated the effect of averageness on preferences for faces in monkeys. We tested three adult male rhesus macaques using a visual paired comparison (VPC) task, in which they viewed pairs of faces (both individual faces, or one individual face and one average face); viewing time was used as a measure of preference. We did find that monkeys looked longer at certain individual faces than others. However, unlike humans, monkeys did not prefer the average face over individual faces. In fact, the more the individual face differed from the average face, the longer the monkeys looked at it, indicating that the average face likely plays a role in face recognition rather than in judgments of facial attractiveness: in models of face recognition, the average face operates as the norm against which individual faces are compared and recognized. Taken together, our study suggests that the preference for averageness in faces does not generalize to non-human primates.

  19. Spatial analysis based on variance of moving window averages

    OpenAIRE

    Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J

    2006-01-01

    A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
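
    The statistic is straightforward to compute outside a GIS or spreadsheet as well. A minimal numpy sketch for a 2D lattice of counts; the window sizes and the test field are illustrative, not the survey data from the paper:

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def vmwa(counts, window):
        """Variance of moving window averages over a 2D lattice of counts.
        Aggregated (clustered) patterns keep a high VMWA up to roughly the
        cluster size; random patterns decay quickly with window size."""
        means = sliding_window_view(counts, (window, window)).mean(axis=(-1, -2))
        return means.var()

    rng = np.random.default_rng(3)
    random_field = rng.poisson(2.0, size=(60, 60)).astype(float)
    for w in (2, 4, 8):
        print(w, vmwa(random_field, w))
    ```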

  20. Higher-order RANS turbulence models for separated flows

    Data.gov (United States)

    National Aeronautics and Space Administration — Higher-order Reynolds-averaged Navier-Stokes (RANS) models are developed to overcome the shortcomings of second-moment RANS models in predicting separated flows....

  1. Measurements of higher-order mode damping in the PEP-II low-power test cavity

    International Nuclear Information System (INIS)

    Rimmer, R.A.; Goldberg, D.A.

    1993-05-01

    The paper describes the results of measurements of the Higher-Order Mode (HOM) spectrum of the low-power test model of the PEP-II RF cavity and the reduction in the Q's of the modes achieved by the addition of dedicated damping waveguides. All the longitudinal (monopole) and deflecting (dipole) modes below the beam pipe cut-off are identified by comparing their measured frequencies and field distributions with calculations using the URMEL code. Field configurations were determined using a perturbation method with an automated bead positioning system. The loaded Q's agree well with the calculated values reported previously, and the strongest HOMs are damped by more than three orders of magnitude. This is sufficient to reduce the coupled-bunch growth rates to within the capability of a reasonable feedback system. A high power test cavity will now be built to validate the thermal design at the 150 kW nominal operating level, as described elsewhere at this conference

  2. Test Anxiety and Academic Procrastination Among Prelicensure Nursing Students.

    Science.gov (United States)

    Custer, Nicole

    Test anxiety may cause nursing students to cope poorly with academic demands, affecting academic performance and attrition and leading to possible failure on the National Council Licensure Examination for Registered Nurses (NCLEX-RN®). Test-anxious nursing students may engage in academic procrastination as a coping mechanism. The Test Anxiety Inventory and the Procrastination Assessment Scale for Students were administered to 202 prelicensure nursing students from diploma, associate, and baccalaureate nursing programs in southwestern Pennsylvania. Statistically significant correlations between test anxiety and academic procrastination were found. The majority of participants reported procrastinating most on weekly reading assignments. Students with higher grade point averages exhibited less academic procrastination.

  3. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  4. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, so these are not yet commercial processes, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost per photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (pulse repetition frequencies in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  5. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  6. Position-Dependent Dynamics Explain Pore-Averaged Diffusion in Strongly Attractive Adsorptive Systems.

    Science.gov (United States)

    Krekelberg, William P; Siderius, Daniel W; Shen, Vincent K; Truskett, Thomas M; Errington, Jeffrey R

    2017-12-12

    Using molecular simulations, we investigate the relationship between the pore-averaged and position-dependent self-diffusivity of a fluid adsorbed in a strongly attractive pore as a function of loading. Previous work (Krekelberg, W. P.; Siderius, D. W.; Shen, V. K.; Truskett, T. M.; Errington, J. R. Connection between thermodynamics and dynamics of simple fluids in highly attractive pores. Langmuir 2013, 29, 14527-14535, doi: 10.1021/la4037327) established that pore-averaged self-diffusivity in the multilayer adsorption regime, where the fluid exhibits a dense film at the pore surface and a lower density interior pore region, is nearly constant as a function of loading. Here we show that this puzzling behavior can be understood in terms of how loading affects the fraction of particles that reside in the film and interior pore regions as well as their distinct dynamics. Specifically, the insensitivity of pore-averaged diffusivity to loading arises from the approximate cancellation of two factors: an increase in the fraction of particles in the higher diffusivity interior pore region with loading and a corresponding decrease in the particle diffusivity in that region. We also find that the position-dependent self-diffusivities scale with the position-dependent density. We present a model for predicting the pore-average self-diffusivity based on the position-dependent self-diffusivity, which captures the unusual characteristics of pore-averaged self-diffusivity in strongly attractive pores over several orders of magnitude.
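
    The cancellation argument can be made concrete with a two-region weighted average, D_pore = (1 − f)·D_film + f·D_int. The numbers below are invented purely to show the mechanism, not simulation data from the paper:

    ```python
    # As loading grows, the interior fraction f rises while the interior
    # diffusivity falls, so the product f * D_int stays roughly constant.
    loadings = [0.2, 0.4, 0.6, 0.8]
    f_int    = [0.10, 0.25, 0.40, 0.55]   # fraction of particles in the interior
    d_int    = [5.00, 2.00, 1.25, 0.91]   # interior diffusivity (falls with loading)
    d_film   = 0.1                        # slow film diffusivity, taken constant

    for load, f, d in zip(loadings, f_int, d_int):
        d_pore = (1 - f) * d_film + f * d
        print(f"loading={load:.1f}  pore-averaged D = {d_pore:.3f}")
    ```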

  7. Unit testing as a teaching tool in higher education

    Directory of Open Access Journals (Sweden)

    Peláez Canek

    2016-01-01

    Full Text Available Unit testing in the programming world has had a profound impact on the way modern complex systems are developed. Many Open Source and Free Software projects encourage (and in some cases, mandate) the use of unit tests for new code submissions, and many software companies around the world have incorporated unit testing as part of their standard development practices. And although not all software engineers use them, very few (if any) object to their use. However, there is almost no research available pertaining to the use of unit tests as a teaching tool in introductory programming courses. I have been teaching introductory programming courses in the Computer Sciences program at the Sciences Faculty in the National Autonomous University of Mexico for almost ten years, and since 2013 I have been using unit testing as a teaching tool in those courses. The intent of this paper is to discuss the results of this experience.

  8. Impact of connected vehicle guidance information on network-wide average travel time

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2016-12-01

    Full Text Available With the emergence of connected vehicle technologies, the potential positive impact of connected vehicle guidance on mobility has become a research hotspot, enabled by data exchange among vehicles, infrastructure, and mobile devices. This study focuses on micro-modeling and quantitatively evaluating the impact of connected vehicle guidance on network-wide travel time by introducing various affecting factors. To evaluate the benefits of connected vehicle guidance, a simulation architecture based on one engine is proposed to represent the connected vehicle-enabled virtual world, and a connected vehicle route guidance scenario is established through the development of a communication agent and intelligent transportation systems agents using the connected vehicle application programming interface, considering communication properties such as path loss and transmission power. The impact of connected vehicle guidance on network-wide travel time is analyzed by comparison with non-connected vehicle guidance in response to different market penetration rates, following rates, and congestion levels. The simulation results show that average network-wide travel time under connected vehicle guidance is significantly reduced versus non-connected vehicle guidance, improving by 42.23%, while average travel time variability (represented by the coefficient of variation) increases as the travel time increases. Other key findings include that higher penetration rates and following rates generate larger savings in average network-wide travel time. The savings in average network-wide travel time increase from 17% to 38% across congestion levels and, for the same penetration or following rate, are more pronounced under heavier congestion.

  9. A new method for the measurement of two-phase mass flow rate using average bi-directional flow tube

    International Nuclear Information System (INIS)

    Yoon, B. J.; Uh, D. J.; Kang, K. H.; Song, C. H.; Paek, W. P.

    2004-01-01

    An average bi-directional flow tube was proposed for application in air/steam-water flow conditions. Its working principle is similar to that of a Pitot tube; however, it eliminates the cooling system normally needed to prevent flashing in the pressure impulse line of a Pitot tube when it is used under depressurization conditions. The suggested flow tube was tested in a vertical air-water test section with an inner diameter of 80 mm and a length of 10 m. The flow tube was installed at an L/D of 120 from the inlet of the test section. In the test, the pressure drop across the average bi-directional flow tube, the system pressure, and the average void fraction were measured on the measuring plane. Fluid temperature and the injected mass flow rates of the air and water phases were also measured by an RTD and two Coriolis flow meters, respectively. To calculate the phasic mass flow rates from the measured differential pressure and void fraction, the Chexal drift-flux correlation was used. In the test, a new correlation for the momentum exchange factor was suggested. The test results show that the suggested instrumentation, using the measured void fraction and the Chexal drift-flux correlation, can predict the mass flow rates within 10% of the measured data
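
    As a sketch of the measurement principle only: here a homogeneous mixture density stands in for the Chexal drift-flux treatment (both phases assumed to travel at one velocity), and k stands in for the calibrated momentum-exchange factor, so the numbers are illustrative rather than the paper's correlation.

    ```python
    import numpy as np

    def phasic_mass_flows(dp, void_frac, rho_g, rho_l, area, k=1.0):
        """Split the total mass flow into gas and liquid contributions from the
        measured differential pressure and void fraction, under a homogeneous
        (equal-velocity) mixture assumption."""
        rho_m = void_frac * rho_g + (1.0 - void_frac) * rho_l
        velocity = k * np.sqrt(2.0 * dp / rho_m)
        m_gas = rho_g * void_frac * velocity * area
        m_liquid = rho_l * (1.0 - void_frac) * velocity * area
        return m_gas, m_liquid

    area = np.pi * (0.080 / 2) ** 2  # 80 mm test section, as in the paper
    print(phasic_mass_flows(dp=500.0, void_frac=0.3, rho_g=1.2, rho_l=998.0, area=area))
    ```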

  10. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distribution is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
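
    The median driven by the KARMA dynamic structure has a closed form for the standard Kumaraswamy on (0, 1): solving F(m) = 1 − (1 − m^a)^b = 1/2 gives m = (1 − 0.5^(1/b))^(1/a). A small sketch; the logit link is a natural choice for doubly bounded data, not necessarily the paper's:

    ```python
    import numpy as np

    def kumaraswamy_median(a, b):
        """Closed-form median of a Kumaraswamy(a, b) variable on (0, 1)."""
        return (1.0 - 0.5 ** (1.0 / b)) ** (1.0 / a)

    def logit(m):
        """Example link function mapping the (0, 1) median to the real line."""
        return np.log(m / (1.0 - m))

    m = kumaraswamy_median(2.0, 3.0)
    print("median:", m, " link-transformed:", logit(m))
    ```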

  11. HIGHER ORDER THINKING IN TEACHING GRAMMAR

    Directory of Open Access Journals (Sweden)

    Citra Dewi

    2017-04-01

    Full Text Available This paper discusses how teachers can enhance students' higher-order thinking in the teaching of grammar. Grammar teaching is often boring and follows the same routine, such as changing sentence patterns into positive, negative, and interrogative forms, while students need more varied ways to develop their thinking. Students' grammar competence is often insufficient when they take internationally standardized tests such as the Test of English as a Foreign Language (TOEFL) or the International English Language Testing System (IELTS), whose items demand higher-order thinking; teachers should therefore develop students' higher-order thinking in everyday grammar teaching. The method used in this paper was a field study based on the experience of teaching grammar, prompted by students' low TOEFL scores on the structure and written expression section. After the teacher applied treatments to enhance students' higher-order thinking in teaching grammar, students' TOEFL scores on the structure and written expression section became sufficient. It can be concluded that strategies to enhance students' higher-order thinking through grammar teaching can raise their TOEFL scores. Teachers should be creative and innovative, starting from giving students higher-order questions and tests in grammar teaching.

  12. Average concentrations of FSH and LH in seminal plasma as determined by radioimmunoassay

    International Nuclear Information System (INIS)

    Milbradt, R.; Linzbach, P.; Feller, H.

    1979-01-01

    In 322 males, 25 to 50 years of age, LH and FSH levels in seminal plasma were determined by radioimmunoassay. Average values of 0.78 ng/ml for FSH and 3.95 ng/ml for LH were found. Sperm count and motility were not related to FSH levels, but were related to LH levels. A high sperm count corresponded to a high LH concentration, and normal motility was associated with higher LH levels than those seen with asthenozoospermia. With respect to the sperm count of an individual patient or the average patient, it is suggested that the FSH/LH ratio would be more meaningful than the LH level alone. (orig.) [de

  13. Honest signaling in trust interactions: smiles rated as genuine induce trust and signal higher earning opportunities

    OpenAIRE

    Centorrino, S.; Djemai, E.; Hopfensitz, A.; Milinski, M.; Seabright, P.

    2015-01-01

    We test the hypothesis that smiles perceived as honest serve as a signal that has evolved to induce cooperation in situations requiring mutual trust. Potential trustees (84 participants from Toulouse, France) made two video clips averaging around 15 seconds for viewing by potential senders before the latter decided whether to ‘send’ or ‘keep’ a lower stake (4 euros) or higher stake (8 euros). Senders (198 participants from Lyon, France) made trust decisions with respect to the recorded clips....

  14. The effect of wind shielding and pen position on the average daily weight gain and feed conversion rate of grower/finisher pigs

    DEFF Research Database (Denmark)

    Jensen, Dan B.; Toft, Nils; Cornou, Cécile

    2014-01-01

    of the effects of wind shielding, linear mixed models were fitted to describe the average daily weight gain and feed conversion rate of 1271 groups (14 individuals per group) of purebred Duroc, Yorkshire and Danish Landrace boars, as a function of shielding (yes/no), insert season (winter, spring, summer, autumn...... The effect could not be tested for Yorkshire and Danish Landrace due to lack of data on these breeds. For groups of pigs above the average start weight, a clear tendency of higher growth rates at greater distances from the central corridor was observed, with the most significant differences being between... groups placed in the 1st and 4th pen (p=0.0001). A similar effect was not seen on smaller pigs. Pen placement appears to have no effect on feed conversion rate. No interaction effects between shielding and distance to the corridor could be demonstrated. Furthermore, in models including both factors......

  15. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  16. High-Average-Power Diffraction Pulse-Compression Gratings Enabling Next-Generation Ultrafast Laser Systems

    Energy Technology Data Exchange (ETDEWEB)

    Alessi, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-01

    Pulse compressors for ultrafast lasers have been identified as a technology gap in the push towards high peak power systems with high average powers for industrial and scientific applications. Gratings for ultrashort (sub-150 fs) pulse compressors are metallic and can absorb a significant percentage of laser energy, resulting in up to 40% loss as well as thermal issues which degrade on-target performance. We have developed a next-generation gold grating technology which we have scaled to the petawatt size. This resulted in improvements in efficiency, uniformity and processing as compared to previous substrate-etched gratings for high average power. This new design has a deposited dielectric material for the grating ridge rather than etching directly into the glass substrate. It has been observed that average powers as low as 1 W in a compressor can cause distortions in the on-target beam. We have developed and tested a method of actively cooling diffraction gratings which, in the case of gold gratings, can support a petawatt peak power laser with up to 600 W average power. We demonstrated thermo-mechanical modeling of a grating in its use environment and benchmarked it with experimental measurement. Multilayer dielectric (MLD) gratings are not yet used for these high peak power, ultrashort pulse durations due to their design challenges. We have designed and fabricated broad-bandwidth, low-dispersion MLD gratings suitable for delivering 30 fs pulses at high average power. This new grating design requires the use of a novel Out Of Plane (OOP) compressor, which we have modeled, designed, built and tested. This prototype compressor yielded a transmission of 90% for a pulse with 45 nm bandwidth, free of spatial and angular chirp. In order to evaluate gratings and compressors built in this project we have commissioned a joule-class ultrafast Ti:Sapphire laser system. Combining the grating cooling and MLD technologies developed here could enable petawatt laser systems to operate at high average power.

  17. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
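
    One way to implement the general idea, assuming defects have already been flagged as NaN in each phase map; the rejection thresholds are illustrative, not the authors' tuned run-time parameters:

    ```python
    import numpy as np

    def robust_phase_average(maps, max_bad_frac=0.2, sigma_clip=3.0):
        """Defect-tolerant averaging of phase maps (3D array: map x rows x cols).
        Maps dominated by defects are rejected outright; surviving pixels are
        sigma-clipped per position before the final mean/std are computed."""
        # Reject whole maps whose defective (NaN) area is too large.
        bad_frac = np.isnan(maps).mean(axis=(1, 2))
        kept = maps[bad_frac <= max_bad_frac]
        # Per-pixel sigma clipping against the provisional mean and std.
        mu = np.nanmean(kept, axis=0)
        sd = np.nanstd(kept, axis=0)
        outliers = np.abs(kept - mu) > sigma_clip * np.maximum(sd, 1e-12)
        kept = np.where(outliers, np.nan, kept)
        return np.nanmean(kept, axis=0), np.nanstd(kept, axis=0)

    maps = np.random.default_rng(4).normal(0, 1, (32, 64, 64))
    maps[0, :40, :40] = np.nan                    # one map with a large-area void
    avg, variability = robust_phase_average(maps)
    ```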

  18. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  19. Applications of ordered weighted averaging (OWA) operators in environmental problems

    Directory of Open Access Journals (Sweden)

    Carlos Llopis-Albert

    2017-04-01

    Full Text Available This paper presents an application of a prioritized weighted aggregation operator based on ordered weighted averaging (OWA) to deal with stakeholders' constructive participation in water resources projects. Stakeholders have different degrees of acceptance or preference regarding the measures and policies to be carried out, which lead to different environmental and socio-economic outcomes, and hence to different levels of stakeholder satisfaction. The methodology establishes a prioritization relationship among the stakeholders, whose preferences are aggregated by means of weights that depend on the satisfaction of the higher-priority policy maker. The methodology has been successfully applied to a Public Participation Project (PPP) in watershed management, thus obtaining efficient environmental measures in conflict resolution problems under actors' preference uncertainties.
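
    The OWA building block itself is compact: weights attach to ranks rather than to sources, so aggregation is a sorted dot product. A minimal sketch; the paper's prioritized variant additionally makes the weights depend on higher-priority satisfaction, which is not reproduced here:

    ```python
    import numpy as np

    def owa(values, weights):
        """Ordered weighted averaging: sort descending, then take the
        (normalized) dot product with the rank weights."""
        v = np.sort(np.asarray(values, dtype=float))[::-1]
        w = np.asarray(weights, dtype=float)
        return float(v @ (w / w.sum()))

    satisfaction = [0.9, 0.4, 0.7]        # stakeholder satisfaction with one measure
    print(owa(satisfaction, [1, 1, 1]))   # plain mean
    print(owa(satisfaction, [0, 0, 1]))   # "min" attitude: worst-off stakeholder
    print(owa(satisfaction, [1, 0, 0]))   # "max" attitude: best-off stakeholder
    ```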

  20. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which...
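
    The core idea, a Monte Carlo walk on a harmonic pseudo-energy toward the averaged coordinates, can be sketched compactly. The Metropolis acceptance rule and single-atom moves are assumed implementation details; a real implementation would add the clash and bond-geometry terms that actually remove the artifacts:

    ```python
    import numpy as np

    def mc_refine(start, target, k=10.0, step=0.05, n_steps=20000, temp=1.0, seed=5):
        """Metropolis walk on E = k * sum ||x_i - t_i||^2, moving one atom at a
        time toward the averaged coordinates `target`."""
        rng = np.random.default_rng(seed)
        x = start.copy()
        for _ in range(n_steps):
            i = rng.integers(len(x))
            trial = x[i] + rng.normal(0.0, step, 3)
            d_e = k * (np.sum((trial - target[i]) ** 2) - np.sum((x[i] - target[i]) ** 2))
            if d_e < 0 or rng.random() < np.exp(-d_e / temp):
                x[i] = trial
        return x

    # Synthetic 100-residue C-alpha trace standing in for an averaged structure.
    target = np.random.default_rng(6).normal(0.0, 5.0, (100, 3))
    start = target + np.random.default_rng(7).normal(0.0, 3.0, (100, 3))
    refined = mc_refine(start, target)
    print("RMSD to average:", np.sqrt(((refined - target) ** 2).sum(axis=1).mean()))
    ```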

  1. Effects of Organic Pesticides on Enchytraeids (Oligochaeta in Agroecosystems: Laboratory and Higher-Tier Tests

    Directory of Open Access Journals (Sweden)

    Jörg Römbke

    2017-05-01

    Full Text Available Enchytraeidae (Oligochaeta, Annelida) are often considered to be typical forest-living organisms, but they are regularly found in agroecosystems of the temperate regions of the world. Although less known than their larger relatives, the earthworms, these saprophagous organisms play similar roles in agricultural soils (but at a smaller scale), e.g., influencing soil structure and organic matter dynamics via microbial communities, and having a central place in soil food webs. Their diversity is rarely studied or often underestimated due to difficulties in distinguishing the species. New genetic techniques reveal that even in anthropogenically highly influenced soils, more than 10 species per site can be found. Because of their close contact with the soil pore water, a high ingestion rate and a thin cuticle, they often react very sensitively to a broad range of pesticides. Firstly, we provide a short overview of the diversity and abundance of enchytraeid communities in agroecosystems. Afterwards, we explore the available data on enchytraeid sensitivity toward pesticides at different levels of biological organization, focusing on pesticides used in (mainly European) agroecosystems. Starting with non-standardized studies on the effects of pesticides at the sub-individual level, we compile the results of standard laboratory tests performed following OECD and ISO guidelines as well as those of higher-tier studies (i.e., semi-field and field tests). The number of comparable test data is still limited, because tests with enchytraeids are not a regulatory requirement in the European Union. While focusing on the effects of pesticides, attention is also given to their interactions with environmental stressors (e.g., climate change). In conclusion, we recommend increasing the use of enchytraeids in pesticide risk assessment because of their diversity and functional importance as well as their increasingly simplified use in (mostly standardized) tests at all levels

  2. Mental health care and average happiness: strong effect in developed nations.

    Science.gov (United States)

    Touburg, Giorgio; Veenhoven, Ruut

    2015-07-01

    Mental disorder is a main cause of unhappiness in modern society and investment in mental health care is therefore likely to add to average happiness. This prediction was checked in a comparison of 143 nations around 2005. Absolute investment in mental health care was measured using the per capita number of psychiatrists and psychologists working in mental health care. Relative investment was measured using the share of mental health care in the total health budget. Average happiness in nations was measured with responses to survey questions about life-satisfaction. Average happiness appeared to be higher in countries that invest more in mental health care, both absolutely and relative to investment in somatic medicine. A data split by level of development shows that this difference exists only among developed nations. Among these nations the link between mental health care and happiness is quite strong, both in an absolute sense and compared to other known societal determinants of happiness. The correlation between happiness and share of mental health care in the total health budget is twice as strong as the correlation between happiness and size of the health budget. A causal effect is likely, but cannot be proved in this cross-sectional analysis.

  3. Production characteristics of lettuce Lactuca sativa L. in the frame of the first crop tests in the Higher Plant Chamber integrated into the MELiSSA Pilot Plant

    Science.gov (United States)

    Tikhomirova, Natalia; Lawson, Jamie; Stasiak, Michael; Dixon, Mike; Paille, Christel; Peiro, Enrique; Fossen, Arnaud; Godia, Francesc

    Micro-Ecological Life Support System Alternative (MELiSSA) is an artificial closed ecosystem that is considered a tool for the development of a bioregenerative life support system for manned space missions. One of the five compartments of the MELiSSA loop, the Higher Plant Chamber, was recently integrated into the MELiSSA Pilot Plant facility at the Universitat Autònoma de Barcelona. The main contributions expected from the integration of this photosynthetic compartment are oxygen, water and vegetable food production, and CO2 consumption. The production characteristics of Lactuca sativa L., as a MELiSSA candidate crop, were investigated in this work in the first crop experiments in the MELiSSA Pilot Plant facility. The plants were grown in batch culture, totaling 100 plants in a growing area 5 m long and 1 m wide in a sealed, controlled environment. Several replicates of the experiments were carried out with varying duration. It was shown that after 46 days of lettuce cultivation, dry edible biomass averaged 27.2 g per plant. However, accumulation of oxygen in the chamber, which required purging, and a decrease in the food value of the plants were observed. Reducing the duration of the tests allowed uninterrupted runs without opening the system and also allowed estimation of the crop's carbon balance. Results on productivity, tissue composition, nutrient uptake and canopy photosynthesis of lettuce, regardless of test duration, are discussed in the paper.

  4. Measured emotional intelligence ability and grade point average in nursing students.

    Science.gov (United States)

    Codier, Estelle; Odell, Ellen

    2014-04-01

    For most schools of nursing, grade point average is the most important criterion for admission to nursing school and constitutes the main indicator of success throughout the nursing program. In the general research literature, the relationship between traditional measures of academic success, such as grade point average, and postgraduation job performance is not well established. In both the general population and among practicing nurses, measured emotional intelligence ability correlates with performance and other important professional indicators postgraduation. Little research exists comparing traditional measures of intelligence with measured emotional intelligence prior to graduation, and none in the student nurse population. This exploratory, descriptive, quantitative study was undertaken to explore the relationship between measured emotional intelligence ability and the grade point average of first-year nursing students. The study took place at a school of nursing at a university in the south central region of the United States. Participants included 72 undergraduate student nurse volunteers. Emotional intelligence was measured using the Mayer-Salovey-Caruso Emotional Intelligence Test, version 2, an instrument for quantifying emotional intelligence ability. Pre-admission grade point average was reported by the school records department. The total emotional intelligence score (r = .24) and one subscore, experiential emotional intelligence (r = .25), correlated significantly (p < .05) with grade point average. This exploratory, descriptive study provided evidence for some relationship between GPA and measured emotional intelligence ability, but also demonstrated lower than average range scores on several emotional intelligence measures. The relationship between pre-graduation measures of success and level of performance postgraduation deserves further exploration. The findings of this study suggest that research on the relationship between traditional and nontraditional

  5. Do higher-priced generic medicines enjoy a competitive advantage under reference pricing?

    Science.gov (United States)

    Puig-Junoy, Jaume

    2012-11-01

    In many countries with generic reference pricing, generic producers and distributors compete by means of undisclosed discounts offered to pharmacies in order to reduce acquisition costs and to induce them to dispense their generic to patients in preference to others. The objective of this article is to test the hypothesis that under prevailing reference pricing systems for generic medicines, those medicines sold at a higher consumer price may enjoy a competitive advantage. Real transaction prices for 179 generic medicines acquired by pharmacies in Spain were used to calculate the discount rate on acquisition versus reimbursed costs to pharmacies. Two empirical hypotheses are tested: first, that the discount rate at which pharmacies acquire generic medicines is higher for those pharmaceutical presentations for which there are more generic competitors; and second, that the discount rate is higher for those pharmaceutical forms for which the consumer price has declined less relative to the consumer price of the brand drug before generic entry (higher-priced generic medicines). An average discount rate of 39.3% on acquisition versus reimbursed costs to pharmacies was observed. The magnitude of the discount depends positively on the number of competitors in the market. The higher the ratio of the consumer price of the generic to that of the brand drug prior to generic entry (i.e. the smaller the price reduction of the generic in relation to the brand drug), the larger the discount rate. Under reference pricing there is intense price competition among generic firms in the form of unusually high discounts to pharmacies on official ex-factory prices reimbursed to pharmacies. However, this effect is highly distorting because it favours those medicines with a higher relative price in relation to the brand price before generic entry.
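    The discount rate analyzed above is the gap between what the pharmacy is reimbursed and what it actually pays, expressed as a share of the reimbursed cost. A minimal sketch of that arithmetic (the function and the example prices are illustrative, not the study's data):

```python
def discount_rate(reimbursed_price: float, acquisition_price: float) -> float:
    """Pharmacy discount as a fraction of the reimbursed cost."""
    return (reimbursed_price - acquisition_price) / reimbursed_price

# A generic reimbursed at 10.00 but acquired at 6.07 reproduces the 39.3%
# average discount reported in the study.
print(f"{discount_rate(10.00, 6.07):.1%}")  # -> 39.3%
```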

  6. Monthly streamflow forecasting with auto-regressive integrated moving average

    Science.gov (United States)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique that adds a step where clustering is performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model were then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
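    A minimal sketch of the baseline workflow described above (9:1 split, ARIMA fit, RMSE/MAE comparison), written in Python with statsmodels rather than the paper's R; the (p, d, q) order is an illustrative placeholder and the SSA pre-processing step is omitted:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def arima_forecast(flow: np.ndarray, order=(1, 1, 1)):
    split = int(len(flow) * 0.9)              # 9:1 train/test ratio
    train, test = flow[:split], flow[split:]
    result = ARIMA(train, order=order).fit()  # fit on the training period
    forecast = result.forecast(steps=len(test))
    rmse = np.sqrt(np.mean((forecast - test) ** 2))  # root-mean-square error
    mae = np.mean(np.abs(forecast - test))           # mean absolute error
    return forecast, rmse, mae
```

The SSA-ARIMA and Clustered SSA-ARIMA variants would decompose and reconstruct the training series before fitting; that pre-processing is the paper's contribution and is not reproduced here.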

  7. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment, Vol. 16 (2010-07-01). Environmental Protection Agency (Continued), Air Programs (Continued), Acid Rain Nitrogen Oxides Emission Reduction Program. § 76.11 Emissions averaging. (a) General...

  8. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we incorporate aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and can achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we can achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  9. TRIGA research reactors with higher power density

    International Nuclear Information System (INIS)

    Whittemore, W.L.

    1994-01-01

    The recent trend in new or upgraded research reactors is to higher power densities (hence higher neutron flux levels) but not necessarily to higher power levels. The TRIGA LEU fuel with burnable poison is available in small-diameter fuel rods capable of high power per rod (≅48 kW/rod) with acceptable peak fuel temperatures. The performance of a 10-MW research reactor with a compact core of hexagonal TRIGA fuel clusters has been calculated in detail. With its light water coolant and beryllium and D₂O reflector regions, this reactor can provide in-core experiments with thermal fluxes in excess of 3 × 10¹⁴ n/cm²·s and fast fluxes (>0.1 MeV) of 2 × 10¹⁴ n/cm²·s. The core centerline thermal neutron flux in the D₂O reflector is about 2 × 10¹⁴ n/cm²·s and the average core power density is about 230 kW/liter. Using other TRIGA fuel developed for 25-MW test reactors but arranged in hexagonal arrays, power densities in excess of 300 kW/liter are readily available. A core with TRIGA fuel operating at 15 MW and generating such a power density is capable of producing thermal neutron fluxes in a D₂O reflector of 3 × 10¹⁴ n/cm²·s. A beryllium-filled central region of the core can further enhance the core leakage and hence the neutron flux in the reflector. (author)

  10. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  11. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    Title 7, Agriculture, Vol. 2 (2010-01-01). Section 51.2561 ... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts. § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  12. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on trapping efficiency.

  13. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  14. Artificial neural network optimisation for monthly average daily global solar radiation prediction

    International Nuclear Information System (INIS)

    Alsina, Emanuel Federico; Bortolini, Marco; Gamberi, Mauro; Regattieri, Alberto

    2016-01-01

    Highlights: • Prediction of the monthly average daily global solar radiation over Italy. • Multi-location Artificial Neural Network (ANN) model: 45 locations considered. • Optimal ANN configuration with 7 input climatologic/geographical parameters. • Statistical indicators: MAPE, NRMSE, MPBE. - Abstract: The availability of reliable climatologic data is essential for multiple purposes in a wide set of anthropic activities and operative sectors. Direct measurements frequently have spatial and temporal gaps, so predictive approaches become of interest. This paper focuses on the prediction of the Monthly Average Daily Global Solar Radiation (MADGSR) over Italy using Artificial Neural Networks (ANNs). Data from 45 locations compose the multi-location ANN training and testing sets. For each location, 13 input parameters are considered, including the geographical coordinates and the monthly values of the most frequently adopted climatologic parameters. A subset of 17 locations is used for ANN training, while the testing step is against data from the remaining 28 locations. Furthermore, the Automatic Relevance Determination (ARD) method is used to point out the most relevant inputs for accurate MADGSR prediction. The best ANN configuration includes only 7 parameters: Top of Atmosphere (TOA) radiation, day length, number of rainy days and average rainfall, latitude and altitude. The correlation performance, expressed through statistical indicators such as the Mean Absolute Percentage Error (MAPE), ranges between 1.67% and 4.25%, depending on the number and type of the chosen inputs, representing a good solution compared to the current standards.
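    The three error indicators named in the highlights admit short definitions; a sketch under common textbook conventions (the paper may normalize NRMSE or sign MPBE differently):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in %."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def nrmse(y_true, y_pred):
    """Root-mean-square error normalized by the mean observed value."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true)

def mpbe(y_true, y_pred):
    """Mean percentage bias error, in %; the sign shows over/under-prediction."""
    return 100.0 * np.mean((y_pred - y_true) / y_true)
```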

  15. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails, one of small left-wing values and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  16. Local and average structure of Mn- and La-substituted BiFeO3

    Science.gov (United States)

    Jiang, Bo; Selbach, Sverre M.

    2017-06-01

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions.

  17. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
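    A minimal sketch contrasting the two standard hourly-value types discussed above, built from 1-min data; the spot sample is taken here at the start of each hour, and the series length is assumed to be a multiple of 60:

```python
import numpy as np

def hourly_values(minute_data: np.ndarray):
    hours = minute_data.reshape(-1, 60)  # one row of 60 one-minute values per hour
    spot = hours[:, 0]                   # instantaneous "spot" measurement
    boxcar = hours.mean(axis=1)          # simple 1-h "boxcar" average
    return spot, boxcar
```

The spot series preserves the amplitude range of the variation but aliases high frequencies, while the boxcar series trades aliasing against amplitude distortion, which is the trade-off the analysis quantifies.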

  18. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  19. Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment.

    Science.gov (United States)

    Boevé, Anja J; Meijer, Rob R; Albers, Casper J; Beetsma, Yta; Bosker, Roel J

    2015-01-01

    The introduction of computer-based testing in high-stakes examining in higher education is developing rather slowly due to institutional barriers (the need for extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams yield results similar to paper-based exams and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and a paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance and change in acceptance of computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and get familiar with this new mode of test administration.

  20. An averaging battery model for a lead-acid battery operating in an electric car

    Science.gov (United States)

    Bozek, J. M.

    1979-01-01

    A battery model is developed based on time-averaging the current or power, and is shown to be an effective means of predicting the performance of a lead-acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.
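    A toy sketch of the time-averaging idea: replace a varying discharge profile with its mean current and track the charge drawn, counting regeneration (negative current) one-to-one as the model assumes. This is illustrative only; the original model's empirical capacity behavior is not reproduced:

```python
import numpy as np

def remaining_charge(current_a: np.ndarray, dt_s: float, capacity_ah: float) -> float:
    """Charge left after a driving cycle, using the time-averaged current."""
    i_avg = current_a.mean()  # average current (A); negative samples = regeneration
    drawn_ah = i_avg * len(current_a) * dt_s / 3600.0
    return capacity_ah - drawn_ah
```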

  1. Estimate of average glandular dose (AGD) in national clinics of mammography

    International Nuclear Information System (INIS)

    Mora, Patricia; Segura, Helena

    2004-01-01

    Breast cancer represents the second leading cause of death by cancer in the female population of our country. The number of specialized units for obtaining mammographic images grows every day, and their use increases daily. The quality of the radiographic study is linked to the ionizing radiation dose received by this intrinsically radiosensitive tissue. The present work constitutes the first national study quantifying average glandular doses and relating them to diagnostic quality and to international recommendations. (Author) [es

  2. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created for two ethnic groups, European and Japanese, and for children with three genetic disorders, Williams syndrome, achondroplasia and Sotos syndrome, as well as for a normal control group. The method consisted of averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques, there was no warping or filling-in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
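    The averaging step described above is a pointwise mean of depth coordinates; a minimal sketch, assuming the scans have already been registered and resampled onto a common (x, y) grid:

```python
import numpy as np

def average_face(depth_maps: np.ndarray) -> np.ndarray:
    """Pointwise mean of z-coordinates over faces.

    depth_maps: shape (n_faces, height, width), one depth (z) map per scan.
    """
    return depth_maps.mean(axis=0)
```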

  3. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  4. Cast Stone Formulation At Higher Sodium Concentrations

    International Nuclear Information System (INIS)

    Fox, K. M.; Edwards, T. A.; Roberts, K. B.

    2013-01-01

    A low temperature waste form known as Cast Stone is being considered to provide supplemental Low Activity Waste (LAW) immobilization capacity for the Hanford site. Formulation of Cast Stone at high sodium concentrations is of interest since a significant reduction in the necessary volume of Cast Stone and subsequent disposal costs could be achieved if an acceptable waste form can be produced with a high sodium molarity salt solution combined with a high water to premix (or dry blend) ratio. The objectives of this study were to evaluate the factors involved with increasing the sodium concentration in Cast Stone, including production and performance properties and the retention and release of specific components of interest. Three factors were identified for the experimental matrix: the concentration of sodium in the simulated salt solution, the water to premix ratio, and the blast furnace slag portion of the premix. The salt solution simulants used in this study were formulated to represent the overall average waste composition. The cement, blast furnace slag, and fly ash were sourced from a supplier in the Hanford area in order to be representative. The test mixes were prepared in the laboratory and fresh properties were measured. Fresh density increased with increasing sodium molarity and with decreasing water to premix ratio, as expected given the individual densities of these components. Rheology measurements showed that all of the test mixes produced very fluid slurries. The fresh density and rheology data are of potential value in designing a future Cast Stone production facility. Standing water and density gradient testing showed that settling is not of particular concern for the high sodium compositions studied. Heat of hydration measurements may provide some insight into the reactions that occur within the test mixes, which may in turn be related to the properties and performance of the waste form. These measurements showed that increased sodium

  5. Cast Stone Formulation At Higher Sodium Concentrations

    International Nuclear Information System (INIS)

    Fox, K. M.; Roberts, K. A.; Edwards, T. B.

    2014-01-01

    A low temperature waste form known as Cast Stone is being considered to provide supplemental Low Activity Waste (LAW) immobilization capacity for the Hanford site. Formulation of Cast Stone at high sodium concentrations is of interest since a significant reduction in the necessary volume of Cast Stone and subsequent disposal costs could be achieved if an acceptable waste form can be produced with a high sodium molarity salt solution combined with a high water to premix (or dry blend) ratio. The objectives of this study were to evaluate the factors involved with increasing the sodium concentration in Cast Stone, including production and performance properties and the retention and release of specific components of interest. Three factors were identified for the experimental matrix: the concentration of sodium in the simulated salt solution, the water to premix ratio, and the blast furnace slag portion of the premix. The salt solution simulants used in this study were formulated to represent the overall average waste composition. The cement, blast furnace slag, and fly ash were sourced from a supplier in the Hanford area in order to be representative. The test mixes were prepared in the laboratory and fresh properties were measured. Fresh density increased with increasing sodium molarity and with decreasing water to premix ratio, as expected given the individual densities of these components. Rheology measurements showed that all of the test mixes produced very fluid slurries. The fresh density and rheology data are of potential value in designing a future Cast Stone production facility. Standing water and density gradient testing showed that settling is not of particular concern for the high sodium compositions studied. Heat of hydration measurements may provide some insight into the reactions that occur within the test mixes, which may in turn be related to the properties and performance of the waste form. These measurements showed that increased sodium

  6. Cast Stone Formulation At Higher Sodium Concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Roberts, K. A. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2014-02-28

    A low temperature waste form known as Cast Stone is being considered to provide supplemental Low Activity Waste (LAW) immobilization capacity for the Hanford site. Formulation of Cast Stone at high sodium concentrations is of interest since a significant reduction in the necessary volume of Cast Stone and subsequent disposal costs could be achieved if an acceptable waste form can be produced with a high sodium molarity salt solution combined with a high water to premix (or dry blend) ratio. The objectives of this study were to evaluate the factors involved with increasing the sodium concentration in Cast Stone, including production and performance properties and the retention and release of specific components of interest. Three factors were identified for the experimental matrix: the concentration of sodium in the simulated salt solution, the water to premix ratio, and the blast furnace slag portion of the premix. The salt solution simulants used in this study were formulated to represent the overall average waste composition. The cement, blast furnace slag, and fly ash were sourced from a supplier in the Hanford area in order to be representative. The test mixes were prepared in the laboratory and fresh properties were measured. Fresh density increased with increasing sodium molarity and with decreasing water to premix ratio, as expected given the individual densities of these components. Rheology measurements showed that all of the test mixes produced very fluid slurries. The fresh density and rheology data are of potential value in designing a future Cast Stone production facility. Standing water and density gradient testing showed that settling is not of particular concern for the high sodium compositions studied. Heat of hydration measurements may provide some insight into the reactions that occur within the test mixes, which may in turn be related to the properties and performance of the waste form. These measurements showed that increased sodium

  7. Cast Stone Formulation At Higher Sodium Concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Fox, K. M.; Edwards, T. A.; Roberts, K. B.

    2013-10-02

    A low temperature waste form known as Cast Stone is being considered to provide supplemental Low Activity Waste (LAW) immobilization capacity for the Hanford site. Formulation of Cast Stone at high sodium concentrations is of interest since a significant reduction in the necessary volume of Cast Stone and subsequent disposal costs could be achieved if an acceptable waste form can be produced with a high sodium molarity salt solution combined with a high water to premix (or dry blend) ratio. The objectives of this study were to evaluate the factors involved with increasing the sodium concentration in Cast Stone, including production and performance properties and the retention and release of specific components of interest. Three factors were identified for the experimental matrix: the concentration of sodium in the simulated salt solution, the water to premix ratio, and the blast furnace slag portion of the premix. The salt solution simulants used in this study were formulated to represent the overall average waste composition. The cement, blast furnace slag, and fly ash were sourced from a supplier in the Hanford area in order to be representative. The test mixes were prepared in the laboratory and fresh properties were measured. Fresh density increased with increasing sodium molarity and with decreasing water to premix ratio, as expected given the individual densities of these components. Rheology measurements showed that all of the test mixes produced very fluid slurries. The fresh density and rheology data are of potential value in designing a future Cast Stone production facility. Standing water and density gradient testing showed that settling is not of particular concern for the high sodium compositions studied. Heat of hydration measurements may provide some insight into the reactions that occur within the test mixes, which may in turn be related to the properties and performance of the waste form. These measurements showed that increased sodium

  8. Cast Stone Formulation At Higher Sodium Concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Fox, K. M.; Roberts, K. A.; Edwards, T. B.

    2013-09-17

    A low temperature waste form known as Cast Stone is being considered to provide supplemental Low Activity Waste (LAW) immobilization capacity for the Hanford site. Formulation of Cast Stone at high sodium concentrations is of interest since a significant reduction in the necessary volume of Cast Stone and subsequent disposal costs could be achieved if an acceptable waste form can be produced with a high sodium molarity salt solution combined with a high water to premix (or dry blend) ratio. The objectives of this study were to evaluate the factors involved with increasing the sodium concentration in Cast Stone, including production and performance properties and the retention and release of specific components of interest. Three factors were identified for the experimental matrix: the concentration of sodium in the simulated salt solution, the water to premix ratio, and the blast furnace slag portion of the premix. The salt solution simulants used in this study were formulated to represent the overall average waste composition. The cement, blast furnace slag, and fly ash were sourced from a supplier in the Hanford area in order to be representative. The test mixes were prepared in the laboratory and fresh properties were measured. Fresh density increased with increasing sodium molarity and with decreasing water to premix ratio, as expected given the individual densities of these components. Rheology measurements showed that all of the test mixes produced very fluid slurries. The fresh density and rheology data are of potential value in designing a future Cast Stone production facility. Standing water and density gradient testing showed that settling is not of particular concern for the high sodium compositions studied. Heat of hydration measurements may provide some insight into the reactions that occur within the test mixes, which may in turn be related to the properties and performance of the waste form. These measurements showed that increased sodium

  9. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω⁰_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.

  10. The average crossing number of equilateral random polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A

    2003-01-01

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n − n₀)ln(n − n₀) + b(n − n₀) + c, where a, b and c are constants depending on K and n₀ is the minimal number of segments required to form K. The ⟨ACN⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length n_e(K) at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type K') does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.

  11. Predicting Different Grades in Different Ways for Selective Admission: Disentangling the First-Year Grade Point Average

    Science.gov (United States)

    Steenman, Sebastiaan C.; Bakker, Wieger E.; van Tartwijk, Jan W. F.

    2016-01-01

    The first-year grade point average (FYGPA) is the predominant measure of student success in most studies on university admission. Previous cognitive achievements measured with high school grades or standardized tests have been found to be the strongest predictors of FYGPA. For this reason, standardized tests measuring cognitive achievement are…

  12. Criticality evaluation of BWR MOX fuel transport packages using average Pu content

    International Nuclear Information System (INIS)

    Mattera, C.; Martinotti, B.

    2004-01-01

    Currently in France, criticality studies in transport configurations for Boiling Water Reactor Mixed Oxide fuel assemblies are based on the conservative hypothesis that all rods (Mixed Oxide (Uranium and Plutonium), Uranium Oxide, and Uranium and Gadolinium Oxide rods) are Mixed Oxide rods with the same Plutonium content, corresponding to the maximum value. In that way, the real heterogeneous mapping of the assembly is masked and covered by an assembly of homogeneous Plutonium content, enriched at the maximum value. As this calculation hypothesis is extremely conservative, COGEMA LOGISTICS has studied a new calculation method based on the average Plutonium content in the criticality studies. The use of the average Plutonium content instead of the real Plutonium content profiles yields a bounding reactivity value, which makes it globally conservative. This method can be applied to all Boiling Water Reactor Mixed Oxide complete fuel assemblies of type 8 x 8, 9 x 9 and 10 x 10 whose Plutonium content by mass does not exceed 15%; the advantages it provides are discussed in our approach. With this new method, for the same package reactivity, the Pu content allowed in the package design approval can be higher. COGEMA LOGISTICS' new method allows the basket, materials or geometry to be optimised at the design stage for higher payload while keeping the same reactivity.

  13. Comparison of Shock Response Spectrum for Different Gun Tests

    Directory of Open Access Journals (Sweden)

    J.A. Cordes

    2013-01-01

    The Soft Catch Gun at Picatinny Arsenal is regularly used for component testing. Most shots include accelerometers that record acceleration as a function of time. Statistics of accelerometer data indicate that the muzzle-exit accelerations are, on average, higher than in tactical firings. For that reason, Soft Catch Gun tests with unusually high accelerations may not be scored for Lot Acceptance Tests (LAT) by some customers. The 95/50 Normal Tolerance Limit (NTL) is proposed as a means of determining which test results should be scored. This paper presents comparisons of Shock Response Spectra (SRS) used for the 95/50 scoring criteria. The paper also provides a Discussion Section outlining some concerns with scoring LAT results based on test results outside of the proposed 95/50 criteria.
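    A sketch of a one-sided 95/50 normal tolerance limit of the kind proposed above: a level that 95% of the population falls below, asserted with 50% confidence, computed with the standard noncentral-t tolerance factor. In practice the limit would be evaluated per frequency line of the SRS, and shock amplitudes are often treated as log-normal (i.e. the limit is applied to the log of the data); both details are assumptions here:

```python
import numpy as np
from scipy.stats import nct, norm

def ntl_95_50(samples: np.ndarray) -> float:
    """One-sided normal tolerance limit: 95% content, 50% confidence."""
    n = len(samples)
    delta = norm.ppf(0.95) * np.sqrt(n)                  # noncentrality for 95% content
    k = nct.ppf(0.50, df=n - 1, nc=delta) / np.sqrt(n)   # tolerance factor at 50% confidence
    return samples.mean() + k * samples.std(ddof=1)
```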

  14. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  15. Field test analysis of concentrator photovoltaic system focusing on average photon energy and temperature

    Science.gov (United States)

    Husna, Husyira Al; Ota, Yasuyuki; Minemoto, Takashi; Nishioka, Kensuke

    2015-08-01

    The concentrator photovoltaic (CPV) system is unique and different from the common flat-plate PV system. It uses a multi-junction solar cell and a Fresnel lens to concentrate direct solar radiation onto the cell while tracking the sun throughout the day. The cell efficiency can reach over 40% under a high concentration ratio. In this study, we analyzed one year of environmental condition data from the University of Miyazaki, Japan, where the CPV system was installed. The performance ratio (PR) was used to describe the system's performance, while the average photon energy (APE) was used to describe the spectral distribution at the installation site. A circuit network simulator was used to simulate the CPV system's electrical characteristics under various environmental conditions. We found that the PR of the CPV system depends on the APE level rather than on the cell temperature.

  16. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  17. Incidence Rates of Clinical Mastitis among Canadian Holsteins Classified as High, Average, or Low Immune Responders

    Science.gov (United States)

    Miglior, Filippo; Mallard, Bonnie A.

    2013-01-01

    The objective of this study was to compare the incidence rate of clinical mastitis (IRCM) between cows classified as high, average, or low for antibody-mediated immune responses (AMIR) and cell-mediated immune responses (CMIR). In collaboration with the Canadian Bovine Mastitis Research Network, 458 lactating Holsteins from 41 herds were immunized with a type 1 and a type 2 test antigen to stimulate adaptive immune responses. A delayed-type hypersensitivity test to the type 1 test antigen was used as an indicator of CMIR, and serum antibody of the IgG1 isotype to the type 2 test antigen was used for AMIR determination. By using estimated breeding values for these traits, cows were classified as high, average, or low responders. The IRCM was calculated as the number of cases of mastitis experienced over the total time at risk throughout the 2-year study period. High-AMIR cows had an IRCM of 17.1 cases per 100 cow-years, which was significantly lower than average and low responders, with 27.9 and 30.7 cases per 100 cow-years, respectively. Low-AMIR cows tended to have the most severe mastitis. No differences in the IRCM were noted when cows were classified based on CMIR, likely due to the extracellular nature of mastitis-causing pathogens. The results of this study demonstrate the desirability of breeding dairy cattle for enhanced immune responses to decrease the incidence and severity of mastitis in the Canadian dairy industry. PMID:23175290
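    The incidence rate above is simply cases per 100 cow-years at risk; a one-line illustration (the case and exposure counts are invented to reproduce the reported high-AMIR rate, not the study's raw data):

```python
def ircm(cases: int, cow_years_at_risk: float) -> float:
    """Incidence rate of clinical mastitis per 100 cow-years."""
    return 100.0 * cases / cow_years_at_risk

print(round(ircm(31, 181.3), 1))  # -> 17.1, as reported for high-AMIR cows
```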

  18. Averaged head phantoms from magnetic resonance images of Korean children and young adults

    Science.gov (United States)

    Han, Miran; Lee, Ae-Kyoung; Choi, Hyung-Do; Jung, Yong Wook; Park, Jin Seo

    2018-02-01

    Increased use of mobile phones raises concerns about the health risks of electromagnetic radiation. Phantom heads are routinely used for radiofrequency dosimetry simulations, and the purpose of this study was to construct averaged phantom heads for children and young adults. Using magnetic resonance images (MRI), sectioned cadaver images, and a hybrid approach, we initially built template phantoms representing 6-, 9-, 12-, 15-year-old children and young adults. Our subsequent approach revised the template phantoms using 29 averaged items that were identified by averaging the MRI data from 500 children and young adults. In females, the brain size and cranium thickness peaked in the early teens and then decreased. This is contrary to what was observed in males, where brain size and cranium thicknesses either plateaued or grew continuously. The overall shape of brains was spherical in children and became ellipsoidal by adulthood. In this study, we devised a method to build averaged phantom heads by constructing surface and voxel models. The surface model could be used for phantom manipulation, whereas the voxel model could be used for compliance test of specific absorption rate (SAR) for users of mobile phones or other electronic devices.

  19. Introduction to the method of average magnitude analysis and application to natural convection in cavities

    International Nuclear Information System (INIS)

    Lykoudis, P.S.

    1995-01-01

    The method of Average Magnitude Analysis is a mixture of the Integral Method and the Order of Magnitude Analysis. The paper shows how the differential equations of conservation for steady-state, laminar, boundary layer flows are converted to a system of algebraic equations, where the result is a sum of the order of magnitude of each term multiplied by a weight coefficient. These coefficients are determined from integrals containing the assumed velocity and temperature profiles. The method is illustrated by applying it to the case of drag and heat transfer over an infinite flat plate. It is then applied to the case of natural convection over an infinite flat plate with and without the presence of a horizontal magnetic field, and subsequently to enclosures of aspect ratios of one or higher. The final correlation in this instance yields the Nusselt number as a function of the aspect ratio and the Rayleigh and Prandtl numbers. This correlation is tested against a wide range of small and large values of these parameters. 19 refs., 4 figs

  20. Colorectal cancer screening for average-risk North Americans: an economic evaluation.

    Directory of Open Access Journals (Sweden)

    Steven J Heitman

    BACKGROUND: Colorectal cancer (CRC) fulfills the World Health Organization criteria for mass screening, but screening uptake is low in most countries. CRC screening is resource intensive, and it is unclear if an optimal strategy exists. The objective of this study was to perform an economic evaluation of CRC screening in average-risk North American individuals considering all relevant screening modalities and current CRC treatment costs. METHODS AND FINDINGS: An incremental cost-utility analysis using a Markov model was performed comparing guaiac-based fecal occult blood test (FOBT) or fecal immunochemical test (FIT) annually, fecal DNA every 3 years, flexible sigmoidoscopy or computed tomographic colonography every 5 years, and colonoscopy every 10 years. All strategies were also compared to a no-screening natural history arm. Given that different FIT assays and collection methods have been previously tested, three distinct FIT testing strategies were considered, on the basis of studies that have reported "low," "mid," and "high" test performance characteristics for detecting adenomas and CRC. Adenoma and CRC prevalence rates were based on a recent systematic review whereas screening adherence, test performance, and CRC treatment costs were based on publicly available data. The outcome measures included lifetime costs, number of cancers, cancer-related deaths, quality-adjusted life-years gained, and incremental cost-utility ratios. Sensitivity and scenario analyses were performed. Annual FIT, assuming mid-range testing characteristics, was more effective and less costly compared to all strategies (including no screening) except FIT-high. Among the lifetimes of 100,000 average-risk patients, the number of cancers could be reduced from 4,857 to 1,393 [corrected] and the number of CRC deaths from 1,782 [corrected] to 457, while saving CAN$68 per person. Although screening patients with FIT became more expensive than a strategy of no screening when the
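    A toy sketch of the Markov cohort machinery behind a cost-utility analysis like this one; the states, transition probabilities, costs, utilities and discount rate below are invented placeholders, not the study's inputs:

```python
import numpy as np

# States: 0 = well, 1 = CRC, 2 = dead; annual transition probabilities.
P = np.array([[0.995, 0.004, 0.001],
              [0.000, 0.900, 0.100],
              [0.000, 0.000, 1.000]])
cost = np.array([10.0, 25000.0, 0.0])  # annual cost per state (CAN$)
qaly = np.array([1.0, 0.7, 0.0])       # annual utility per state

def run_cohort(P, years=50, discount=0.03):
    state = np.array([1.0, 0.0, 0.0])  # whole cohort starts in "well"
    total_cost = total_qaly = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t  # discount factor for year t
        total_cost += d * (state @ cost)
        total_qaly += d * (state @ qaly)
        state = state @ P                # advance the cohort one year
    return total_cost, total_qaly

# Incremental cost-utility ratio of strategy B over A:
# ICER = (cost_B - cost_A) / (qaly_B - qaly_A)
```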

  1. A new mathematical process for the calculation of average forms of teeth.

    Science.gov (United States)

    Mehl, A; Blanz, V; Hickel, R

    2005-12-01

    Qualitative visual inspections and linear metric measurements have been predominant methods for describing the morphology of teeth. No quantitative formulation exists for the description of dental features. The aim of this study was to determine and validate a mathematical process for calculation of the average form of first maxillary molars, including the general occlusal features. Stone replicas of 174 caries-free first maxillary molar crowns from young patients ranging from 6 to 9 years of age were measured 3-dimensionally with a laser scanning system at a resolution of approximately 100,000 points. Then, the average tooth was computed, which captured the common features of the molar's surface quantitatively. This new method adapts algorithms both from computer science and neuroscience to detect and associate the same features and same surface points (correspondences) between 1 reference tooth and all other teeth. In this study, the method was tested for 7 different reference teeth. The algorithm does not involve any prior knowledge about teeth and their features. Irrespective of the reference tooth used, the procedure yielded average teeth that showed nearly no differences (less than ±30 µm). This approach provides a valid quantitative process for calculating 3-dimensional (3D) averages of occlusal surfaces of teeth even in the event of a high number of digitized surface points. Additionally, because this process detects and assigns point-wise feature correspondences between all library teeth, it may also serve as a basis for a more substantiated principal component analysis evaluating the main natural shape deviations from the 3D average.

  2. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  3. Evaluation report on CCTF Core-II reflood test second shakedown test, C2-SH2 (Run 54)

    International Nuclear Information System (INIS)

    Iguchi, Tadashi; Sugimoto, Jun; Akimoto, Hajime; Okubo, Tsutomu; Murao, Yoshio

    1985-03-01

    A low power test (initial averaged linear power density = 1.18 kW/m) and the base case test (1.4 kW/m) were performed with the Cylindrical Core Test Facility (CCTF) at the Japan Atomic Energy Research Institute, in order to study the effect of power on reflood phenomena. The former linear power density corresponds approximately to the scaled linear power density based on the current safety evaluation criteria. During the early period of the reflood (< 200 s), the heat transfer coefficient was higher and consequently the quench front advanced faster in the low power test. The core flooding rate was nearly identical in both tests, independent of the different power. The insensitivity of the core flooding rate to power was also observed in FLECHT-SET, performed in the USA. A significant, large differential pressure oscillation at the ECC ports was experienced in the low power test; it may be important for long-term core cooling, although it has not been noted in previous studies. (author)

  4. Average cross sections calculated in various neutron fields

    International Nuclear Information System (INIS)

    Shibata, Keiichi

    2002-01-01

    Average cross sections have been calculated for the reactions contained in the dosimetry files JENDL/D-99, IRDF-90V2, and RRDF-98 in order to select the best data for the new library IRDF-2002. The neutron spectra used in the calculations are as follows: 1) ²⁵²Cf spontaneous fission spectrum (NBS evaluation), 2) ²³⁵U thermal fission spectrum (NBS evaluation), 3) Intermediate-energy Standard Neutron Field (ISNF), 4) Coupled Fast Reactivity Measurement Facility (CFRMF), 5) Coupled thermal/fast uranium and boron carbide spherical assembly (ΣΣ), 6) Fast neutron source reactor (YAYOI), 7) Experimental fast reactor (JOYO), 8) Japan Material Testing Reactor (JMTR), 9) d-Li neutron spectrum with a 2-MeV deuteron beam. Items 3)-7) represent fast neutron spectra, while JMTR is a light water reactor. The Q-value for the d-Li reaction mentioned above is 15.02 MeV; therefore, neutrons with energies up to 17 MeV can be produced in the d-Li reaction. The calculated average cross sections were compared with the measurements. Figures 1-9 show the ratios of the calculated values to the experimental data. It is found from these figures that the ⁵⁸Fe(n, γ) cross section in JENDL/D-99 reproduces the measurements in the thermal and fast reactor spectra better than that in IRDF-90V2. (author)

  5. Effect of Higher Order Thinking Laboratory on the Improvement of Critical and Creative Thinking Skills

    Science.gov (United States)

    Setiawan, A.; Malik, A.; Suhandi, A.; Permanasari, A.

    2018-02-01

    This research was motivated by the need to improve students' critical and creative thinking skills in the 21st century. In this research, we implemented the HOT-Lab model for the topic of force. The model is characterized by problem solving and higher-order thinking development through real laboratory activities. This research used a quasi-experimental method with a pre-test post-test control group design. The sample comprised 60 students of the Physics Education Program of a Teacher Education Institution in Bandung, divided into two classes: an experiment class (HOT-Lab model) and a control class (verification lab model). The research instruments were essay tests measuring creative and critical thinking skills. The results revealed that both models improved students' creative and critical thinking skills. However, the improvement in the experiment class was significantly higher than in the control class, as indicated by the average normalized gains (N-gain) for critical thinking skills of 60.18 and 29.30 and for creative thinking skills of 70.71 and 29.40, for the experimental class and the control class respectively. In addition, there was no significant correlation between the improvement of critical thinking skills and creative thinking skills in either class.
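    The N-gain reported above is conventionally Hake's average normalized gain; assuming the paper follows that definition, with class-average scores expressed as percentages:

```latex
\langle g \rangle \;=\; \frac{\langle S_{\mathrm{post}} \rangle - \langle S_{\mathrm{pre}} \rangle}{100\% - \langle S_{\mathrm{pre}} \rangle}
```

On this reading, the reported values (e.g. 60.18 vs. 29.30 for critical thinking) are percentages of the maximum possible gain achieved by each class.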

  6. Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment

    Science.gov (United States)

    Boevé, Anja J.; Meijer, Rob R.; Albers, Casper J.; Beetsma, Yta; Bosker, Roel J.

    2015-01-01

    The introduction of computer-based testing in high-stakes examining in higher education is developing rather slowly due to institutional barriers (the need for extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams yield results similar to paper-based exams and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and a paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance and change in acceptance of computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and get familiar with this new mode of test administration. PMID:26641632

  7. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
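
    The averaging step described above can be sketched in a few lines: each sampled lift-off force is normalized to a common basis before the group average is compared against the minimum prestress requirement. The normalization rule below is a simple placeholder ratio, not the correction factor the paper derives, and all numbers are illustrative.

        # Hedged sketch: group-average tendon force vs. design minimum.
        # The correction applied here (scaling each measurement to the group-mean
        # prediction) is a placeholder assumption, not the paper's derived factor.
        def corrected_group_average(lift_off, predicted):
            mean_predicted = sum(predicted) / len(predicted)
            corrected = [f * mean_predicted / p for f, p in zip(lift_off, predicted)]
            return sum(corrected) / len(corrected)

        lift_off  = [1385.0, 1410.0, 1362.0]  # measured forces, kips (illustrative)
        predicted = [1400.0, 1430.0, 1380.0]  # tendon-specific predictions (illustrative)
        required  = 1350.0                    # minimum average force from design

        avg = corrected_group_average(lift_off, predicted)
        status = "meets" if avg >= required else "fails"
        print(f"corrected group average: {avg:.1f} kips ({status} the {required} kips minimum)")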

  8. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
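
    The iterative idea can be sketched as follows: each flow is offered a weighted share of the link, flows whose arrival rate is below their share are capped at that rate, and the surplus is redistributed among the remaining flows. This is a generic water-filling reading of the abstract, not the paper's exact model.

        # Hedged sketch of iterative average-bandwidth assignment under WFQ.
        def wfq_average_bandwidth(link_speed, weights, arrival_rates):
            assignment = [0.0] * len(weights)
            active = set(range(len(weights)))
            capacity = link_speed
            while active:
                total_w = sum(weights[i] for i in active)
                share = {i: capacity * weights[i] / total_w for i in active}
                satisfied = {i for i in active if arrival_rates[i] <= share[i]}
                if not satisfied:                     # every remaining flow is
                    for i in active:                  # bottlenecked: hand out the
                        assignment[i] = share[i]      # weighted fair shares
                    break
                for i in satisfied:                   # cap satisfied flows at their
                    assignment[i] = arrival_rates[i]  # arrival rate, free the rest
                    capacity -= arrival_rates[i]
                active -= satisfied
            return assignment

        # 100 Mbit/s link, three flows with weights 1:2:3
        print(wfq_average_bandwidth(100.0, [1, 2, 3], [10.0, 50.0, 80.0]))
        # -> [10.0, 36.0, 54.0]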

  9. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias'.

  10. A test of the thermal melanism hypothesis in the wingless grasshopper Phaulacridium vittatum.

    Science.gov (United States)

    Harris, Rebecca M; McQuillan, Peter; Hughes, Lesley

    2013-01-01

    Altitudinal clines in melanism are generally assumed to reflect the fitness benefits resulting from thermal differences between colour morphs, yet differences in thermal quality are not always discernible. The intra-specific application of the thermal melanism hypothesis was tested in the wingless grasshopper Phaulacridium vittatum (Sjöstedt) (Orthoptera: Acrididae) first by measuring the thermal properties of the different colour morphs in the laboratory, and second by testing for differences in average reflectance and spectral characteristics of populations along 14 altitudinal gradients. Correlations between reflectance, body size, and climatic variables were also tested to investigate the underlying causes of clines in melanism. Melanism in P. vittatum represents a gradation in colour rather than distinct colour morphs, with reflectance ranging from 2.49 to 5.65%. In unstriped grasshoppers, darker morphs warmed more rapidly than lighter morphs and reached a higher maximum temperature (lower temperature excess). In contrast, significant differences in thermal quality were not found between the colour morphs of striped grasshoppers. In support of the thermal melanism hypothesis, grasshoppers were, on average, darker at higher altitudes, there were differences in the spectral properties of brightness and chroma between high and low altitudes, and temperature variables were significant influences on the average reflectance of female grasshoppers. However, altitudinal gradients do not represent predictable variation in temperature, and the relationship between melanism and altitude was not consistent across all gradients. Grasshoppers generally became darker at altitudes above 800 m a.s.l., but on several gradients reflectance declined with altitude and then increased at the highest altitude.

  11. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs

  12. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...
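
    The harmonic mean targeted by this averaging process is, for two quadrature variances:

        # Harmonic mean of two quadrature variances.
        def harmonic_mean(v1: float, v2: float) -> float:
            return 2.0 / (1.0 / v1 + 1.0 / v2)

        print(harmonic_mean(0.5, 2.0))  # -> 0.8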

  13. Comparing a recursive digital filter with the moving-average and sequential probability-ratio detection methods for SNM portal monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.

    1993-01-01

    The author compared a recursive digital filter, proposed as a detection method for French special nuclear material monitors, with the author's detection methods, which employ a moving-average scaler or a sequential probability-ratio test. Nine test subjects each repeatedly carried a test source through a walk-through portal monitor that had the same nuisance-alarm rate with each method. He found that the average detection probability for the test source is also the same for each method. However, the recursive digital filter may have one drawback: its exponentially decreasing response to past radiation intensity prolongs the impact of any interference from radiation sources or radiation-producing machinery. He also examined the influence of each test subject on the monitor's operation by measuring individual attenuation factors for background and source radiation, then ranked the subjects' attenuation factors against their individual probabilities for detecting the test source. The one inconsistent ranking was probably caused by that subject's unusually long stride when passing through the portal.
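
    The drawback noted above is easy to visualize: a moving-average scaler forgets a passing radiation transient completely once its window has moved on, while a recursive (exponentially weighted) filter lets the interference decay gradually. The count data below are made up for illustration.

        from collections import deque

        def moving_average(counts, window=4):
            buf, out = deque(maxlen=window), []
            for c in counts:
                buf.append(c)
                out.append(sum(buf) / len(buf))
            return out

        def recursive_filter(counts, alpha=0.25):
            est, out = counts[0], []
            for c in counts:
                est = alpha * c + (1 - alpha) * est  # exponentially decaying memory
                out.append(est)
            return out

        counts = [10, 11, 9, 10, 120, 10, 9, 11, 10, 10]  # one interference spike
        print([round(x, 1) for x in moving_average(counts)])
        print([round(x, 1) for x in recursive_filter(counts)])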

  14. Grade Point Average System of Assessment: the Implementation Peculiarities in Russia

    Directory of Open Access Journals (Sweden)

    B. A. Sazonov

    2012-01-01

    Full Text Available The paper analyzes the specificity, as well as the flaws and faults, of implementing the Grade Point Average (GPA) system of students' personal assessment in Russian higher schools. Nowadays, the above system is regarded as the basic functional element of educational process organization at the world's leading universities. The author summarizes the foreign experience and demonstrates the advantages of the GPA system in comparison with the traditional domestic scale of assessment: full recording of students' assessments, objectivity, activation of responsibility for the results achieved, and self-control motivation. The standard GPA model is demonstrated, its application systemizing both the Russian and European requirements to higher school graduates. The author suggests his own version of the assessment scale for estimating and comparing the quality of education in Russian universities and worldwide. The research findings can be of interest to specialists in the sphere of quality measurement and educational management.
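
    The standard GPA model referred to above is a credit-weighted average of grade points; a minimal sketch on the common 4-point scale (the scale mapping is an assumption for illustration):

        GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

        def gpa(records):
            """records: list of (letter_grade, credit_hours) pairs."""
            points = sum(GRADE_POINTS[g] * h for g, h in records)
            hours = sum(h for _, h in records)
            return points / hours

        print(round(gpa([("A", 3), ("B", 4), ("C", 2)]), 2))  # -> 3.11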

  15. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.

  16. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  17. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  18. The calculation of average error probability in a digital fibre optical communication system

    Science.gov (United States)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity

  19. Higher surgical training opportunities in the general hospital setting; getting the balance right.

    Science.gov (United States)

    Robertson, I; Traynor, O; Khan, W; Waldron, R; Barry, K

    2013-12-01

    The general hospital can play an important role in the training of higher surgical trainees (HSTs) in Ireland and abroad. Training opportunities in such a setting have not been closely analysed to date. The aim of this study was to quantify operative exposure for HSTs over a 5-year period in a single institution. Analysis of electronic training logbooks (covering the 5-year period 2007-2012) was performed for general surgery trainees on the higher surgical training programme in Ireland. The most commonly performed adult and paediatric procedures per trainee, per year were analysed. Standard general surgery operations such as hernia repair (average 58, range 32-86) and cholecystectomy (average 60, range 49-72) ranked highly in each logbook. The most frequently performed emergency operations were appendicectomy (average 45, range 33-53) and laparotomy for acute abdomen (average 48, range 10-79). Paediatric surgical experience included appendicectomy, circumcision, orchidopexy and hernia/hydrocoele repair. Overall, the procedure most commonly performed in the adult setting was endoscopy, with each trainee recording an average of 116 (range 98-132) oesophagogastroduodenoscopies and 284 (range 227-354) colonoscopies. General hospitals continue to play a major role in the training of higher surgical trainees. Analysis of the electronic logbooks over a 5-year period reveals the high volume of procedures available to trainees in a non-specialist centre. Such training opportunities are invaluable in the context of changing work practices and limited resources.

  20. Raising test scores vs. teaching higher order thinking (HOT): senior science teachers' views on how several concurrent policies affect classroom practices

    Science.gov (United States)

    Zohar, Anat; Alboher Agmon, Vered

    2018-04-01

    This study investigates how senior science teachers viewed the effects of a Raising Test Scores policy and its implementation on instruction of higher order thinking (HOT), and on teaching thinking to students with low academic achievements.

  1. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  2. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    International Nuclear Information System (INIS)

    Cooling, M P; Humphrey, V F; Wilkens, V

    2011-01-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.

  3. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    Science.gov (United States)

    Cooling, M. P.; Humphrey, V. F.; Wilkens, V.

    2011-02-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.

  4. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations for other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not intended to derive a set of specific criteria but to demonstrate the need to discriminate among the various processes in studies of plume dispersion
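
    Turner's power-law formula mentioned above relates the concentration at averaging time t to the 15-min average via C(t) = C_15min · (15/t)^p; the exponent used below is an assumption (values near 0.17-0.2 are commonly quoted), since the abstract only discusses the formula's range of validity.

        def concentration_at(t_minutes: float, c_15min: float, p: float = 0.17) -> float:
            """Scale a 15-min average concentration to another averaging time."""
            return c_15min * (15.0 / t_minutes) ** p

        # 1-h average estimated from a 15-min average of 100 (arbitrary units)
        print(round(concentration_at(60.0, 100.0), 1))  # ~79.0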

  5. Higher BMI Is Associated with Reduced Cognitive Performance in Division I Athletes

    Directory of Open Access Journals (Sweden)

    Andrew Fedor

    2013-04-01

    Full Text Available Objective: Poor cardiovascular fitness has been implicated as a possible mechanism for obesity-related cognitive decline, though no study has examined whether BMI is associated with poorer cognitive function in persons with excellent fitness levels. The current study examined the relationship between BMI and cognitive function as measured by the Immediate Post Concussion and Cognitive Test (ImPACT) in Division I collegiate athletes. Methods: Participants had an average age of 20.14 ± 1.78 years, were 31.3% female, and 53.9% football players. BMI ranged from 19.04 to 41.14 and averaged 26.72 ± 4.62. Results: Regression analyses revealed that BMI incrementally predicted performance on visual memory (R² change = 0.015, p = 0.026) beyond control variables. Follow-up partial correlation analyses revealed small but significant negative correlations between BMI and verbal memory (r = -0.17), visual memory (r = -0.16), and visual motor speed (r = -0.12). Conclusions: These results suggest that higher BMI is associated with reduced cognitive function, even in a sample expected to have excellent levels of cardiovascular fitness. Further work is needed to better understand mechanisms for these associations.

  6. Implementation of learning outcome attainment measurement system in aviation engineering higher education

    Science.gov (United States)

    Salleh, I. Mohd; Mat Rani, M.

    2017-12-01

    This paper aims to discuss the effectiveness of the Learning Outcome Attainment Measurement System in assisting Outcome Based Education (OBE) for Aviation Engineering Higher Education in Malaysia. Direct assessments are discussed to show the implementation processes that play a key role in a successful outcome measurement system. The case study presented in this paper investigates the implementation of the system in the Aircraft Structure course of the Bachelor in Aircraft Engineering Technology program in UniKL-MIAT. The data were collected over five semesters, from July 2014 until July 2016. The study instruments include the reports generated in the Learning Outcome Attainment Measurement System (LOAMS), which contain information on course learning outcome (CLO) individual and course-average performance. The reports derived from LOAMS were analyzed, and the analysis revealed that there is a positive significant correlation between the individual and average performance reports. Analysis of variance further revealed that there is a significant difference in OBE grade scores among the reports. Independent samples F-test results, on the other hand, indicate that the variances of the two populations are unequal.

  7. Higher BMI is associated with reduced cognitive performance in division I athletes.

    Science.gov (United States)

    Fedor, Andrew; Gunstad, John

    2013-01-01

    Poor cardiovascular fitness has been implicated as a possible mechanism for obesity-related cognitive decline, though no study has examined whether BMI is associated with poorer cognitive function in persons with excellent fitness levels. The current study examined the relationship between BMI and cognitive function by the Immediate Post Concussion and Cognitive Test (ImPACT) in Division I collegiate athletes. Participants had an average age of 20.14 ± 1.78 years, were 31.3% female, and 53.9% football players. BMI ranged from 19.04 to 41.14 and averaged 26.72 ± 4.62. Regression analyses revealed that BMI incrementally predicted performance on visual memory (R(2) change = 0.015, p = 0.026) beyond control variables. Follow-up partial correlation analyses revealed small but significant negative correlations between BMI and verbal memory (r = -0.17), visual memory (r = -0.16), and visual motor speed (r = -0.12). These results suggest that higher BMI is associated with reduced cognitive function, even in a sample expected to have excellent levels of cardiovascular fitness. Further work is needed to better understand mechanisms for these associations. Copyright © 2013 S. Karger GmbH, Freiburg.

  8. Assessing the Resolution Adaptability of the Zhang-McFarlane Cumulus Parameterization With Spatial and Temporal Averaging: RESOLUTION ADAPTABILITY OF ZM SCHEME

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Yuxing [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing China; Fan, Jiwen [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xiao, Heng [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Zhang, Guang J. [Scripps Institution of Oceanography, University of California, San Diego CA USA; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xu, Kuan-Man [NASA Langley Research Center, Hampton VA USA; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Gustafson, William I. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA

    2017-11-01

    Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic since it requires cumulus schemes to adapt to higher resolutions than they were originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.

  9. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...

  10. An effective approach using blended learning to assist the average students to catch up with the talented ones

    Directory of Open Access Journals (Sweden)

    Baijie Yang

    2013-03-01

    Full Text Available Because average students make up the prevailing part of the student population, it is important but difficult for educators to help them by improving their learning efficiency and learning outcomes in school tests. We conducted a quasi-experiment with two English classes taught by one teacher in the second term of the first year of a junior high school. The experimental class was composed of average students (N=37), while the control class comprised talented students (N=34); accordingly, the two classes performed differently in the English subject, with a mean difference of 13.48 that is statistically significant based on an independent-samples t-test. We tailored the web-based intelligent English instruction system called Computer Simulation in Educational Communication (CSIEC), featuring instant feedback, to the learning content of the experiment term, and the experimental class used it one school hour per week throughout the term. This blended learning setting, focused on vocabulary and dialogue acquisition, helped the students in the experimental class improve their learning performance gradually. The mean difference on the final test between the two classes decreased to 3.78, while the mean difference on the test designed for the specially drilled vocabulary knowledge decreased to 2.38 and was statistically not significant. The student interview and survey also demonstrated the students' favorable view of the blended learning system. We conclude that the long-term integration of this content-oriented blended learning system featured with instant feedback into ordinary classes is an effective approach to assist the average students to catch up with the talented ones.

  11. Retrospective cost adaptive Reynolds-averaged Navier-Stokes k-ω model for data-driven unsteady turbulent simulations

    Science.gov (United States)

    Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.

    2018-03-01

    This paper presents a data-driven computational model for simulating unsteady turbulent flows, where sparse measurement data is available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations at which measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from 2 test cases: steady pipe flow, and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean stream-wise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at 2 locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.

  12. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  13. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Henriques, A.

    2006-01-01

    Different ranges of interface and eddy sizes are involved in multiphase flow phenomena, and classical formalisms focus on a specific range of sizes. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to separate Large Interface (LI) simulation from Small Interface (SI) modelization. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms draws on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop a LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibited regime transitions between different scales of interface, from LI to SI and from PI to LI. (author) [fr]

  14. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by

  15. Relationships between feeding behavior and average daily gain in cattle

    Directory of Open Access Journals (Sweden)

    Bruno Fagundes Cunha Lage

    2013-12-01

    Full Text Available Several studies have reported a relationship between eating behavior and performance in feedlot cattle. The evaluation of behavioral traits demands a high degree of work and trained manpower; therefore, in recent years an automated feed intake measurement system (GrowSafe System®) that identifies and records individual feeding patterns has been used. The aim of this study was to evaluate the relationship between feeding behavior traits and average daily gain (ADG) in Nellore calves undergoing a feed efficiency test. Data from 85 Nellore males were recorded during the feed efficiency test performed in 2012 at Centro APTA Bovinos de Corte, Instituto de Zootecnia, São Paulo State. The behavioral traits analyzed were: time at feeder (TF), head down duration (HD), representing the time when the animal is actually eating, frequency of visits (FV), and feed rate (FR), calculated as the amount of dry matter (DM) consumed per unit time at the feeder (g.min-1). The ADG was calculated by linear regression of individual weights on days in test. ADG classes were obtained considering the average ADG and standard deviation (SD), being: high ADG (> mean + 1.0 SD), medium ADG (within ± 1.0 SD of the mean), and low ADG (< mean - 1.0 SD), with days in test as a covariate. Low gain animals spent 21.8% less time with the head down than medium or high gain animals (P<0.05). Significant effects of ADG class on FR were observed (P<0.01): high ADG animals consumed more feed per unit time (g.min-1) than the low and medium ADG animals. No differences were observed (P>0.05) among ADG classes for FV, indicating that these traits are not related to each other. These results show that ADG is related to how quickly animals eat, not to the time spent at the bunk or the number of visits in a 24-hour period.
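
    The two derived quantities are simple to compute: ADG is the slope of weight regressed on days in test, and FR is dry matter intake divided by time at the feeder. The numbers below are illustrative, not study data.

        def slope(xs, ys):
            """Least-squares slope of ys on xs (here: kg gained per day)."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sxx = sum((x - mx) ** 2 for x in xs)
            return sxy / sxx

        days    = [0, 14, 28, 42, 56, 70]
        weights = [260.0, 277.5, 295.0, 310.5, 330.0, 345.5]  # kg, illustrative
        adg = slope(days, weights)

        dm_intake_g, minutes_at_feeder = 9500.0, 95.0
        feed_rate = dm_intake_g / minutes_at_feeder  # g per minute at feeder

        print(f"ADG: {adg:.2f} kg/day, FR: {feed_rate:.1f} g/min")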

  16. The measurement of power losses at high magnetic field densities or at small cross-section of test specimen using the averaging

    CERN Document Server

    Gorican, V; Hamler, A; Nakata, T

    2000-01-01

    It is difficult to achieve sufficient accuracy of power loss measurement at high magnetic field densities, where the magnetic field strength becomes more and more distorted, or in cases where the influence of noise increases (small specimen cross-section). The influence of averaging on the accuracy of power loss measurement was studied on the cast amorphous magnetic material Metglas 2605-TCA. The results show that the accuracy of power loss measurements can be improved by averaging the data acquisition points.
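
    The averaging in question is point-by-point over repeated acquisition periods: uncorrelated noise shrinks roughly as 1/sqrt(N) with N averaged cycles, while the periodic loss signal is preserved. A minimal sketch with synthetic data:

        import random

        def average_cycles(cycles):
            """Point-by-point mean over repeated, equally long sample lists."""
            n = len(cycles)
            return [sum(samples) / n for samples in zip(*cycles)]

        true_wave = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7]
        cycles = [[v + random.gauss(0, 0.1) for v in true_wave] for _ in range(64)]
        print([round(v, 2) for v in average_cycles(cycles)])  # close to true_wave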

  17. Calm water resistance prediction of a bulk carrier using Reynolds averaged Navier-Stokes based solver

    Science.gov (United States)

    Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan

    2017-12-01

    Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model, JBC, in calm water using computational fluid dynamics solvers named SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the Finite Volume Method (FVM). This paper compares the numerical results of calm water tests for the JBC model with available experimental results. The calm water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for pressure distribution on the hull surface and the free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy utilizing minimum computational resources.

  18. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  19. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  20. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  1. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  2. Average bond energies between boron and elements of the fourth, fifth, sixth, and seventh groups of the periodic table

    Science.gov (United States)

    Altshuller, Aubrey P

    1955-01-01

    The average bond energies D(gm)(B-Z) for boron-containing molecules have been calculated by the Pauling geometric-mean equation. These calculated bond energies are compared with the average bond energies D(exp)(B-Z) obtained from experimental data. The higher values of D(exp)(B-Z) in comparison with D(gm)(B-Z) when Z is an element in the fifth, sixth, or seventh periodic group may be attributed to resonance stabilization or double-bond character.
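
    The Pauling geometric-mean equation used above estimates a heteronuclear bond energy from the two homonuclear ones, D(B-Z) = sqrt(D(B-B) · D(Z-Z)); the inputs below are placeholders for illustration, not the paper's tabulated values.

        from math import sqrt

        def geometric_mean_bond_energy(d_bb: float, d_zz: float) -> float:
            return sqrt(d_bb * d_zz)

        # hypothetical homonuclear bond energies, kcal/mol
        print(round(geometric_mean_bond_energy(79.0, 58.0), 1))  # ~67.7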

  3. Experimental study of average void fraction in low-flow subcooled boiling

    International Nuclear Information System (INIS)

    Sun Qi; Wang Xiaojun; Xi Zhao; Zhao Hua; Yang Ruichang

    2005-01-01

    Low-flow subcooled boiling void fraction at medium pressure was investigated using a high-temperature, high-pressure single-sensor optical probe. The average void fraction was then obtained through integration of the local void fraction over the cross-section. The experimental data were compared with the previously proposed void fraction model. The results show that the predictions of this model agree with the data quite well. Comparisons of the Saha and Levy models with the low-flow subcooled data show that the Saha model distinctly overestimates the experimental data, and the Levy model also yields relatively high predictions, although it is better than the Saha model. (author)
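
    The integration step reads, for a circular pipe, <alpha> = (1/A) ∫ alpha dA = (2/R²) ∫ alpha(r) r dr; a trapezoidal-rule sketch with an illustrative radial profile (not the study's measurements):

        def area_averaged_void_fraction(r, alpha):
            """r: radial positions from 0 to R (m); alpha: local void fractions."""
            R = r[-1]
            integral = 0.0
            for i in range(len(r) - 1):
                f0, f1 = alpha[i] * r[i], alpha[i + 1] * r[i + 1]
                integral += 0.5 * (f0 + f1) * (r[i + 1] - r[i])
            return 2.0 * integral / R ** 2

        radii = [0.0, 0.01, 0.02, 0.03, 0.04]   # m
        local = [0.30, 0.28, 0.24, 0.18, 0.05]  # illustrative profile
        print(round(area_averaged_void_fraction(radii, local), 3))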

  4. Specification of optical components for a high average-power laser environment

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.; Chow, R.; Rinmdahl, K.A.; Willis, J.B.; Wong, J.N.

    1997-06-25

    Optical component specifications for the high-average-power lasers and transport system used in the Atomic Vapor Laser Isotope Separation (AVLIS) plant must address demanding system performance requirements. The need for high performance optics has to be balanced against the practical desire to reduce the supply risks of cost and schedule. This is addressed in optical system design, careful planning with the optical industry, demonstration of plant quality parts, qualification of optical suppliers and processes, comprehensive procedures for evaluation and test, and a plan for corrective action.

  5. Socio-demographic predictors and average annual rates of caesarean section in Bangladesh between 2004 and 2014.

    Directory of Open Access Journals (Sweden)

    Md Nuruzzaman Khan

    Full Text Available Globally the rates of caesarean section (CS) have steadily increased in recent decades. This rise is not fully accounted for by increases in clinical factors which indicate the need for CS. We investigated the socio-demographic predictors of CS and the average annual rates of CS in Bangladesh between 2004 and 2014. Data were derived from four waves of nationally representative Bangladesh Demographic and Health Survey (BDHS) conducted between 2004 and 2014. Rate of change analysis was used to calculate the average annual rate of increase in CS from 2004 to 2014, by socio-demographic categories. Multi-level logistic regression was used to identify the socio-demographic predictors of CS in a cross-sectional analysis of the 2014 BDHS data. CS rates increased from 3.5% in 2004 to 23% in 2014. The average annual rate of increase in CS was higher among women of advanced maternal age (≥35 years), urban areas, and relatively high socio-economic status; with higher education, and who regularly accessed antenatal services. The multi-level logistic regression model indicated that lower (≤19) and advanced maternal age (≥35), urban location, relatively high socio-economic status, higher education, birth of few children (≤2), antenatal healthcare visits, overweight or obese were the key factors associated with increased utilization of CS. Underweight was a protective factor for CS. The use of CS has increased considerably in Bangladesh over the survey years. This rising trend and the risk of having CS vary significantly across regions and socio-economic status. Very high use of CS among women of relatively high socio-economic status and substantial urban-rural difference call for public awareness and practice guideline enforcement aimed at optimizing the use of CS.

  6. Socio-demographic predictors and average annual rates of caesarean section in Bangladesh between 2004 and 2014.

    Science.gov (United States)

    Khan, Md Nuruzzaman; Islam, M Mofizul; Shariff, Asma Ahmad; Alam, Md Mahmudul; Rahman, Md Mostafizur

    2017-01-01

    Globally the rates of caesarean section (CS) have steadily increased in recent decades. This rise is not fully accounted for by increases in clinical factors which indicate the need for CS. We investigated the socio-demographic predictors of CS and the average annual rates of CS in Bangladesh between 2004 and 2014. Data were derived from four waves of nationally representative Bangladesh Demographic and Health Survey (BDHS) conducted between 2004 and 2014. Rate of change analysis was used to calculate the average annual rate of increase in CS from 2004 to 2014, by socio-demographic categories. Multi-level logistic regression was used to identify the socio-demographic predictors of CS in a cross-sectional analysis of the 2014 BDHS data. CS rates increased from 3.5% in 2004 to 23% in 2014. The average annual rate of increase in CS was higher among women of advanced maternal age (≥35 years), urban areas, and relatively high socio-economic status; with higher education, and who regularly accessed antenatal services. The multi-level logistic regression model indicated that lower (≤19) and advanced maternal age (≥35), urban location, relatively high socio-economic status, higher education, birth of few children (≤2), antenatal healthcare visits, overweight or obese were the key factors associated with increased utilization of CS. Underweight was a protective factor for CS. The use of CS has increased considerably in Bangladesh over the survey years. This rising trend and the risk of having CS vary significantly across regions and socio-economic status. Very high use of CS among women of relatively high socio-economic status and substantial urban-rural difference call for public awareness and practice guideline enforcement aimed at optimizing the use of CS.
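
    Taking "rate of change analysis" as a compound (geometric) average annual rate, an assumption since the abstract does not spell out its formula, the headline numbers reproduce as follows:

        def average_annual_rate(p_start: float, p_end: float, years: int) -> float:
            """Compound average annual growth rate between two prevalence values."""
            return (p_end / p_start) ** (1.0 / years) - 1.0

        # CS prevalence rose from 3.5% (2004) to 23% (2014)
        print(f"{100 * average_annual_rate(3.5, 23.0, 10):.1f}% per year")  # ~20.7%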

  7. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

    Full Text Available Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, 2Eliassen Group, New London, CT, 3Outcomes Research Consultant, New York, NY, 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain) and the highest probability was 76.4% (on cold/heat for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with the pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain
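
    The ridit-style probability reported above, the chance that a randomly chosen subject from one severity group has a more favorable (lower) item response than one from a comparator group, with ties split evenly, can be sketched as follows; the ordinal scores are illustrative, not study data.

        def ridit_probability(group_a, group_b):
            """P(random A subject scores lower than random B subject); ties count half."""
            wins = ties = 0
            for a in group_a:
                for b in group_b:
                    if a < b:
                        wins += 1
                    elif a == b:
                        ties += 1
            return (wins + 0.5 * ties) / (len(group_a) * len(group_b))

        mild     = [0, 1, 1, 2, 2, 3]
        moderate = [1, 2, 3, 3, 4, 4]
        print(f"{100 * ridit_probability(mild, moderate):.1f}%")  # ~80.6%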

  8. Higher order mode damping in a five-cell superconducting rf cavity with a photonic band gap coupler cell

    Science.gov (United States)

    Arsenyev, Sergey A.; Temkin, Richard J.; Shchegolkov, Dmitry Yu.; Simakov, Evgenya I.; Boulware, Chase H.; Grimm, Terry L.; Rogacki, Adam R.

    2016-08-01

    We present a study of higher order mode (HOM) damping in the first multicell superconducting radio-frequency (SRF) cavity with a photonic band gap (PBG) coupler cell. Achieving higher average beam currents is particularly desirable for future light sources and particle colliders based on SRF energy-recovery linacs (ERLs). Beam current in ERLs is limited by the beam breakup instability, caused by parasitic HOMs interacting with the beam in accelerating cavities. A PBG cell incorporated in an accelerating cavity can reduce the negative effect of HOMs by providing a frequency selective damping mechanism, thus allowing significantly higher beam currents. The five-cell cavity with a PBG cell was designed and optimized for HOM damping. Monopole and dipole HOMs were simulated. The SRF cavity was fabricated and tuned. External quality factors for some HOMs were measured in a cold test. The measurements agreed well with the simulations.

  9. Higher order mode damping in a five-cell superconducting rf cavity with a photonic band gap coupler cell

    Directory of Open Access Journals (Sweden)

    Sergey A. Arsenyev

    2016-08-01

    Full Text Available We present a study of higher order mode (HOM) damping in the first multicell superconducting radio-frequency (SRF) cavity with a photonic band gap (PBG) coupler cell. Achieving higher average beam currents is particularly desirable for future light sources and particle colliders based on SRF energy-recovery linacs (ERLs). Beam current in ERLs is limited by the beam breakup instability, caused by parasitic HOMs interacting with the beam in accelerating cavities. A PBG cell incorporated in an accelerating cavity can reduce the negative effect of HOMs by providing a frequency selective damping mechanism, thus allowing significantly higher beam currents. The five-cell cavity with a PBG cell was designed and optimized for HOM damping. Monopole and dipole HOMs were simulated. The SRF cavity was fabricated and tuned. External quality factors for some HOMs were measured in a cold test. The measurements agreed well with the simulations.

  10. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...

  11. Two-Dimensional Depth-Averaged Beach Evolution Modeling: Case Study of the Kizilirmak River Mouth, Turkey

    DEFF Research Database (Denmark)

    Baykal, Cüneyt; Ergin, Ayşen; Güler, Işikhan

    2014-01-01

    This study presents an application of a two-dimensional beach evolution model to a shoreline change problem at the Kizilirmak River mouth, which has been facing severe coastal erosion problems for more than 20 years. The shoreline changes at the Kizilirmak River mouth have thus far been investigated by satellite images, physical model tests, and one-dimensional numerical models. The current study uses a two-dimensional depth-averaged numerical beach evolution model, developed based on existing methodologies. This model is mainly composed of four main submodels: a phase-averaged spectral wave transformation model, a two-dimensional depth-averaged numerical wave-induced circulation model, a sediment transport model, and a bottom evolution model. To validate and verify the numerical model, it is applied to several cases of laboratory experiments. Later, the model is applied to a shoreline change problem...

  12. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  13. Human-experienced temperature changes exceed global average climate changes for all income groups

    Science.gov (United States)

    Hsiang, S. M.; Parshall, L.

    2009-12-01

    Global climate change alters local climates everywhere. Many climate change impacts, such as those affecting health, agriculture and labor productivity, depend on these local climatic changes, not global mean change. Traditional, spatially averaged climate change estimates are strongly influenced by the response of icecaps and oceans, providing limited information on human-experienced climatic changes. If used improperly by decision-makers, these estimates distort estimated costs of climate change. We overlay the IPCC’s 20 GCM simulations on the global population distribution to estimate local climatic changes experienced by the world population in the 21st century. The A1B scenario leads to a well-known rise in global average surface temperature of +2.0°C between the periods 2011-2030 and 2080-2099. Projected on the global population distribution in 2000, the median human will experience an annual average rise of +2.3°C (4.1°F) and the average human will experience a rise of +2.4°C (4.3°F). Less than 1% of the population will experience changes smaller than +1.0°C (1.8°F), while 25% and 10% of the population will experience changes greater than +2.9°C (5.2°F) and +3.5°C (6.2°F) respectively. 67% of the world population experiences temperature changes greater than the area-weighted average change of +2.0°C (3.6°F). Using two approaches to characterize the spatial distribution of income, we show that the wealthiest, middle and poorest thirds of the global population experience similar changes, with no group dominating the global average. Calculations for precipitation indicate that there is little change in average precipitation, but redistributions of precipitation occur in all income groups. These results suggest that economists and policy-makers using spatially averaged estimates of climate change to approximate local changes will systematically and significantly underestimate the impacts of climate change on the 21st century population.
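
    The distinction the abstract draws, between an area-weighted global mean and a population-weighted (experienced) mean, comes down to which weights are applied to the same gridded temperature-change field. A minimal numpy sketch, with synthetic stand-ins for the GCM output and the gridded population map (all numbers here are illustrative, not the paper's data):

    ```python
    import numpy as np

    # Hypothetical 1-degree grid: per-cell temperature change (degC) and population.
    rng = np.random.default_rng(0)
    lat = np.linspace(-89.5, 89.5, 180)
    # Crude polar amplification: larger warming toward the poles.
    dT = 2.0 + 1.5 * np.abs(np.sin(np.deg2rad(lat)))[:, None] * np.ones((180, 360))
    pop = rng.lognormal(mean=8.0, sigma=2.0, size=(180, 360))

    # Area weights proportional to cos(latitude) on a regular lat-lon grid.
    area_w = np.cos(np.deg2rad(lat))[:, None] * np.ones((180, 360))

    area_avg = np.average(dT, weights=area_w)   # "global average" change
    pop_avg = np.average(dT, weights=pop)       # change experienced by the average person

    # Median human-experienced change: sort cells by dT, take the 50th population percentile.
    order = np.argsort(dT, axis=None)
    cum_pop = np.cumsum(pop.ravel()[order]) / pop.sum()
    pop_median = dT.ravel()[order][np.searchsorted(cum_pop, 0.5)]

    print(f"area-weighted: {area_avg:.2f} C, population-weighted: {pop_avg:.2f} C, "
          f"median human: {pop_median:.2f} C")
    ```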

  14. Measurement of the single and two phase flow using newly developed average bidirectional flow tube

    International Nuclear Information System (INIS)

    Yun, Byong Jo; Euh, Dong Jin; Kang, Kyung Ho; Song, Chul Hwa; Baek, Won Pil

    2005-01-01

    A new instrument, an average BDFT (Bidirectional Flow Tube), was proposed to measure the flow rate in single and two phase flows. Its working principle is similar to that of the pitot tube, wherein the dynamic pressure is measured. In an average BDFT, the pressure measured at the front of the flow tube is equal to the total pressure, while that measured at the rear tube is slightly less than the static pressure of the flow field due to the suction effect downstream. The proposed instrument was tested in air/water vertical and horizontal test sections with an inner diameter of 0.08 m. The tests were performed primarily in single phase water and air flow conditions to obtain the amplification factor (k) of the flow tube in the vertical and horizontal test sections. Tests were also performed in air/water vertical two phase flow conditions in which the flow regimes were bubbly, slug, and churn turbulent flows. In order to calculate the phasic mass flow rates from the measured differential pressure, the Chexal drift-flux correlation and a momentum exchange factor between the two phases were introduced. The test results show that the proposed instrument with a combination of the measured void fraction, Chexal drift-flux correlation, and Bosio and Malnes' momentum exchange model could predict the phasic mass flow rates within a 15% error. A new momentum exchange model was also proposed from the present data and its implementation provides a 5% improvement to the measured mass flow rate when compared to that with the Bosio and Malnes' model.
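
    The pitot-like working principle described above reduces, in single-phase flow, to recovering velocity from the front/rear differential pressure through a calibration factor. A minimal sketch of that relation; the functional form v = sqrt(2*dP/(k*rho)) and all numbers are assumptions for illustration, not values taken from the paper:

    ```python
    import math

    def bdft_velocity(dp_pa: float, rho: float, k: float) -> float:
        """Single-phase velocity from the front/rear differential pressure.

        The average BDFT reads slightly more than the true dynamic pressure
        because of the suction effect behind the rear tube; the calibration
        factor k (from the single-phase tests) absorbs that amplification.
        """
        return math.sqrt(2.0 * dp_pa / (k * rho))

    def mass_flow(dp_pa: float, rho: float, k: float, diameter_m: float) -> float:
        area = math.pi * diameter_m ** 2 / 4.0
        return rho * bdft_velocity(dp_pa, rho, k) * area

    # Illustrative numbers only (k and dp are made up, not from the paper).
    print(mass_flow(dp_pa=450.0, rho=998.0, k=1.2, diameter_m=0.08))
    ```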

  15. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  16. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
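
    The element-wise trimmed average mentioned above can be illustrated in a few lines: discard the extreme values per element before averaging, so gross outliers stop dominating the estimate. This sketch shows only the robust-averaging building block, not the full Trimmed Grassmann Average subspace estimator:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    images = rng.normal(0.5, 0.05, size=(100, 64, 64))  # 100 clean "images"
    images[:5] = 1.0                                    # 5 grossly corrupted images

    plain_mean = images.mean(axis=0)
    # Drop the top and bottom 10% of values per pixel before averaging.
    trimmed = stats.trim_mean(images, proportiontocut=0.1, axis=0)

    # The trimmed estimate stays near the true value 0.5 despite the outliers.
    print(abs(plain_mean - 0.5).max(), abs(trimmed - 0.5).max())
    ```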

  17. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    Science.gov (United States)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacement are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by 〈δ²̄〉 ∼ 2D_ν t^β Δ^(ν-β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements, 〈x²〉 ∼ t^ν, while β ≥ -1 marks the growth or decline of the kinetic energy, 〈v²〉 ∼ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of 〈δ²̄〉 and 〈x²〉 are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
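
    The time-averaged mean-squared displacement used above is computed from a single trajectory by sweeping a lag window over it. A minimal sketch, with ordinary Brownian motion as a sanity check (there, both time and ensemble averages scale linearly with the lag):

    ```python
    import numpy as np

    def time_averaged_msd(x: np.ndarray, lags: np.ndarray) -> np.ndarray:
        """Time-averaged mean-squared displacement of one trajectory x[t]."""
        return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

    rng = np.random.default_rng(2)
    x = np.cumsum(rng.normal(size=100_000))   # ordinary Brownian trajectory
    lags = np.array([10, 100, 1000])
    print(time_averaged_msd(x, lags))         # roughly proportional to the lag
    ```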

  18. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
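
    The excerpt defines the batch variables (Vi, Si, n) but the formula itself is lost in the extraction; the definitions imply the standard volume-weighted mean. A sketch of that computation, to be checked against the full regulatory text of 40 CFR 80.205:

    ```python
    def annual_average_sulfur(volumes, sulfur_ppm):
        """Volume-weighted annual average sulfur level over all batches.

        Implements the volume-weighted mean that the batch definitions
        (Vi, Si, n) imply: sum(Vi * Si) / sum(Vi).
        """
        assert len(volumes) == len(sulfur_ppm)
        return sum(v * s for v, s in zip(volumes, sulfur_ppm)) / sum(volumes)

    # Three hypothetical batches: volumes and sulfur contents (ppm).
    print(annual_average_sulfur([1000, 2500, 800], [25.0, 30.0, 18.0]))
    ```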

  19. Dielectronic recombination of P5+ and Cl7+ in configuration-average, LS-coupling, and intermediate-coupling approximations

    International Nuclear Information System (INIS)

    Badnell, N.R.; Pindzola, M.S.

    1989-01-01

    We have calculated dielectronic recombination cross sections and rate coefficients for the Ne-like ions P5+ and Cl7+ in configuration-average, LS-coupling, and intermediate-coupling approximations. Autoionization into excited states reduces the cross sections and rate coefficients by substantial amounts in all three methods. There is only rough agreement between the configuration-average cross-section results and the corresponding intermediate-coupling results. There is good agreement, however, between the LS-coupling cross-section results and the corresponding intermediate-coupling results. The LS-coupling and intermediate-coupling rate coefficients agree to better than 5%, while the configuration-average rate coefficients are about 30% higher than the other two coupling methods. External electric field effects, as calculated in the configuration-average approximation, are found to be relatively small for the cross sections and completely negligible for the rate coefficients. Finally, the general formula of Burgess was found to overestimate the rate coefficients by roughly a factor of 5, mainly due to the neglect of autoionization into excited states.

  20. Science Library of Test Items. Volume Ten. Mastery Testing Programme. [Mastery Tests Series 2.] Tests M14-M26.

    Science.gov (United States)

    New South Wales Dept. of Education, Sydney (Australia).

    As part of a series of tests to measure mastery of specific skills in the natural sciences, copies of tests 14 through 26 include: (14) calculating an average; (15) identifying parts of the scientific method; (16) reading a geological map; (17) identifying elements, mixtures and compounds; (18) using Ohm's law in calculation; (19) interpreting…

  1. Overview of Commercial Building Partnerships in Higher Education

    Energy Technology Data Exchange (ETDEWEB)

    Schatz, Glenn [Energy Efficiency and Renewable Energy (EERE), Washington, DC (United States)

    2013-03-01

    Higher education uses less energy per square foot than most commercial building sectors. However, higher education campuses house energy-intensive laboratories and data centers that may use more energy than this average; laboratories, in particular, are disproportionately represented in the higher education sector. The Commercial Building Partnership (CBP), a public/private, cost-shared program sponsored by the U.S. Department of Energy (DOE), paired selected commercial building owners and operators with representatives of DOE, its national laboratories, and private-sector technical experts. These teams explored energy-saving measures across building systems, including some considered too costly or technologically challenging, and used advanced energy modeling to achieve peak whole-building performance. Modeling results were then included in new construction or retrofit designs to achieve significant energy reductions.

  2. Occupant kinematics of the Hybrid III, THOR-M, and postmortem human surrogates under various restraint conditions in full-scale frontal sled tests.

    Science.gov (United States)

    Albert, Devon L; Beeman, Stephanie M; Kemper, Andrew R

    2018-02-28

    The objective of this research was to compare the occupant kinematics of the Hybrid III (HIII), THOR-M, and postmortem human surrogates (PMHS) during full-scale frontal sled tests under 3 safety restraint conditions: knee bolster (KB), knee bolster and steering wheel airbag (KB/SWAB), and knee bolster airbag and steering wheel airbag (KBAB/SWAB). A total of 20 frontal sled tests were performed with at least 2 tests performed per restraint condition per surrogate. The tests were designed to match the 2012 Toyota Camry New Car Assessment Program (NCAP) full-scale crash test. Rigid polyurethane foam surrogates with compressive strength ratings of 65 and 19 psi were used to simulate the KB and KBAB, respectively. The excursions of the head, shoulders, hips, knees, and ankles were collected using motion capture. Linear acceleration and angular velocity data were also collected from the head, thorax, and pelvis of each surrogate. Time histories were compared between surrogates and restraint conditions using ISO/TS 18571. All surrogates showed some degree of sensitivity to changes in restraint condition. For example, the use of a KBAB decreased the pelvis accelerations and the forward excursions of the knees and hips for all surrogates. However, these trends were not observed for the thorax, shoulders, and head, which showed more sensitivity to the presence of a SWAB. The average scores computed using ISO/TS 18571 for the HIII/PMHS and THOR-M/PMHS comparisons were 0.527 and 0.518, respectively. The HIII had slightly higher scores than the THOR-M for the excursions (HIII average = 0.574; THOR average = 0.520). However, the THOR-M had slightly higher scores for the accelerations and angular rates (HIII average = 0.471; THOR average = 0.516). The data from the current study showed that both KBABs and SWABs affected the kinematics of all surrogates during frontal sled tests. The results of the objective rating analysis indicated that the HIII and THOR-M had comparable

  3. Preoptometry and optometry school grade point average and optometry admissions test scores as predictors of performance on the national board of examiners in optometry part I (basic science) examination.

    Science.gov (United States)

    Bailey, J E; Yackle, K A; Yuen, M T; Voorhees, L I

    2000-04-01

    To evaluate preoptometry and optometry school grade point averages and Optometry Admission Test (OAT) scores as predictors of performance on the National Board of Examiners in Optometry (NBEO) Part I (Basic Science) (NBEOPI) examination. Simple and multiple correlation coefficients were computed from data obtained from a sample of three consecutive classes of optometry students (1995-1997; n = 278) at Southern California College of Optometry. The GPA after year two of optometry school had the highest correlation (r = 0.75) among all predictor variables; the average of all scores on the OAT had the highest correlation among preoptometry predictor variables (r = 0.46). Stepwise regression analysis indicated a combination of the optometry GPA, the OAT Academic Average, and the GPA in certain optometry curricular tracks resulted in an improved correlation (multiple r = 0.81). Predicted NBEOPI scores were computed from the regression equation and then analyzed by receiver operating characteristic (ROC) and statistic of agreement (kappa) methods. From this analysis, we identified the predicted score that maximized identification of true and false NBEOPI failures (71% and 10%, respectively). Cross validation of this result on a separate class of optometry students resulted in a slightly lower correlation between actual and predicted NBEOPI scores (r = 0.77) but showed the criterion-predicted score to be somewhat lax. The optometry school GPA after 2 years is a reasonably good predictor of performance on the full NBEOPI examination, but the prediction is enhanced by adding the Academic Average OAT score. However, predicting performance in certain subject areas of the NBEOPI examination, for example Psychology and Ocular/Visual Biology, was rather insubstantial. Nevertheless, predicting NBEOPI performance from the best combination of year two optometry GPAs and preoptometry variables is better than has been shown in previous studies predicting optometry GPA from the best

  4. Demonstration of two-phase Direct Numerical Simulation (DNS) methods potentiality to give information to averaged models: application to bubbles column

    International Nuclear Information System (INIS)

    Magdeleine, S.

    2009-11-01

    This work is a part of a long term project that aims at using two-phase Direct Numerical Simulation (DNS) in order to give information to averaged models. For now, it is limited to isothermal bubbly flows with no phase change. It can be subdivided into two parts. Firstly, theoretical developments are made in order to build an equivalent of Large Eddy Simulation (LES) for two phase flows, called Interfaces and Sub-grid Scales (ISS). After the implementation of the ISS model in our code, called Trio U, a set of various cases is used to validate this model. Then, specific tests are performed in order to optimize the model for our particular bubbly flows. Thus we showed the capacity of the ISS model to produce a pertinent solution at low cost. Secondly, we use the ISS model to perform simulations of bubbly flows in column. Results of these simulations are averaged to obtain quantities that appear in mass, momentum and interfacial area density balances. Thus, we proceeded to an a priori test of a complete one dimensional averaged model. We showed that this model predicts the simplest flows (laminar and monodisperse) well. Moreover, the hypothesis of one pressure, which is often made in averaged models like CATHARE, NEPTUNE and RELAP5, is satisfied in such flows. By contrast, without a polydisperse model, the drag is over-predicted and the uncorrelated Ai flux needs a closure law. Finally, we showed that in turbulent flows, fluctuations of velocity and pressure in the liquid phase are not represented by the tested averaged model. (author)

  5. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale, by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
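
    The difference between the two averages discussed above appears as soon as the concentration fluctuates across realizations. A toy numpy sketch, with made-up concentration and velocity fields, showing the phasic (plain ensemble) average and the mass-weighted average diverging:

    ```python
    import numpy as np

    # N realizations of a control volume: c[i] is the solid concentration and
    # u[i] the grain velocity, deliberately correlated with c.
    rng = np.random.default_rng(3)
    c = rng.uniform(0.1, 0.5, size=1000)
    u = 2.0 + 1.0 * c + rng.normal(0.0, 0.1, size=1000)

    phasic_avg = u.mean()                        # plain ensemble (phasic) average
    mass_weighted_avg = (c * u).sum() / c.sum()  # mass-weighted average

    # They coincide only if c is constant across realizations
    # (the infinite-particle, kinetic-theory limit).
    print(phasic_avg, mass_weighted_avg)
    ```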

  6. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  7. Discriminant Analysis of Essay, Mathematics/Science Type of Essay, College Scholastic Ability Test, and Grade Point Average as Predictors of Acceptance to a Pre-med Course at a Korean Medical School

    OpenAIRE

    Geum-Hee Jeong

    2008-01-01

    A discriminant analysis was conducted to investigate how an essay, a mathematics/science type of essay, a college scholastic ability test, and grade point average affect acceptance to a pre-med course at a Korean medical school. Subjects included 122 and 385 applicants for, respectively, early and regular admission to a medical school in Korea. The early admission examination was conducted in October 2007, and the regular admission examination was conducted in January 2008. The analysis of ea...

  8. [Development of a microenvironment test chamber for airborne microbe research].

    Science.gov (United States)

    Zhan, Ningbo; Chen, Feng; Du, Yaohua; Cheng, Zhi; Li, Chenyu; Wu, Jinlong; Wu, Taihu

    2017-10-01

    One of the most important environmental cleanliness indicators is the airborne microbe count. However, the particularity of clean operating environments and controlled experimental environments often limits airborne microbe research. This paper describes the design and implementation of a microenvironment test chamber for airborne microbe research under normal test conditions. Numerical simulation by Fluent showed that airborne microbes were evenly dispersed in the upper part of the test chamber, and had a bottom-up concentration growth distribution. According to the simulation results, a verification experiment was carried out by selecting 5 sampling points at different spatial positions in the test chamber. Experimental results showed that average particle concentrations at all sampling points reached 10^7 counts/m^3 after 5 minutes of dispersing Staphylococcus aureus, and all sampling points showed a consistent mapping of the concentration distribution. The concentration of airborne microbes in the upper chamber was slightly higher than that in the middle chamber, and that was also slightly higher than that in the bottom chamber. This is consistent with the results of the numerical simulation, and it proves that the system can be well used for airborne microbe research.

  9. 40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.

    Science.gov (United States)

    2010-07-01

    ... efficiency credits earned according to the provisions of § 86.1866-12(c); (iii) Off-cycle technology credits... change Selective Enforcement Auditing or in-use testing failures from a failure to a non-failure. The.... (l) Maintenance of records and submittal of information relevant to compliance with fleet average CO...

  10. Using Aptitude Testing to Diversify Higher Education Intake--An Australian Case Study

    Science.gov (United States)

    Edwards, Daniel; Coates, Hamish; Friedman, Tim

    2013-01-01

    Australian higher education is currently entering a new phase of growth. Within the remit of this expansion is an express commitment to widen participation in higher education among under-represented groups--in particular those from low socioeconomic backgrounds. This paper argues that one key mechanism for achieving this goal should be the…

  11. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization

  12. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  13. Interventions to Educate Family Physicians to Change Test Ordering

    Directory of Open Access Journals (Sweden)

    Roger Edmund Thomas MD, PhD, CCFP, MRCGP

    2016-03-01

    Full Text Available The purpose is to systematically review randomised controlled trials (RCTs) to change family physicians' laboratory test-ordering. We searched 15 electronic databases (no language/date limitations). We identified 29 RCTs (4,111 physicians, 175,563 patients). Six studies specifically focused on reducing unnecessary tests, 23 on increasing screening tests. Using Cochrane methodology, 48.5% of studies were at low risk of bias for randomisation, 7% for concealment of randomisation, 17% for blinding of participants/personnel, 21% for blinding of outcome assessors, 27.5% for attrition, and 93% for selective reporting. Only six studies were low risk for both randomisation and attrition. Twelve studies performed a power computation, three an intention-to-treat analysis, and 13 statistically controlled for clustering. Unweighted averages were computed to compare intervention/control groups for tests assessed by >5 studies. The results: fourteen studies assessed lipids (average 10% more tests than control), 14 diabetes (average 8% > control), 5 cervical smears, 2 INR, and one each thyroid, fecal occult-blood, cotinine, throat-swabs, testing after prescribing, and urine-cultures. Six studies aimed to decrease test groups (average decrease 18%), and two to increase test groups. Intervention strategies: one study used education (no change); two feedback (one 5% increase, one 27% desired decrease); eight education + feedback (average increase in desired direction >control 4.9%); ten system change (average increase 14.9%); one system change + feedback (increases 5-44%); three education + system change (average increase 6%); three education + system change + feedback (average 7.7% increase); one delayed testing. The conclusions are that only six RCTs were assessed at low risk of bias from both randomisation and attrition. Nevertheless, despite methodological shortcomings, studies that found large changes (e.g. >20%) probably obtained real change.

  14. Practicing the Test Produces Strength Equivalent to Higher Volume Training.

    Science.gov (United States)

    Mattocks, Kevin T; Buckner, Samuel L; Jessee, Matthew B; Dankel, Scott J; Mouser, J Grant; Loenneke, Jeremy P

    2017-09-01

    To determine if muscle growth is important for increasing muscle strength or if changes in strength can be entirely explained from practicing the strength test. Thirty-eight untrained individuals performed knee extension and chest press exercise for 8 wk. Individuals were randomly assigned to either a high-volume training group (HYPER) or a group just performing the one repetition maximum (1RM) strength test (TEST). The HYPER group performed four sets to volitional failure (~8RM-12RM), whereas the TEST group performed up to five attempts to lift as much weight as possible one time each visit. Data are presented as mean (90% confidence interval). The change in muscle size was greater in the HYPER group for both the upper and lower bodies at most but not all sites. The change in 1RM strength for both the upper body (difference of -1.1 [-4.8, 2.4] kg) and lower body (difference of 1.0 [-0.7, 2.8] kg for dominant leg) was not different between groups (similar for nondominant). Changes in isometric and isokinetic torque were not different between groups. The HYPER group observed a greater change in muscular endurance (difference of 2 [1,4] repetitions) only in the dominant leg. There were no differences in the change between groups in upper body endurance. There were between-group differences for exercise volume (mean [95% confidence interval]) of the dominant (difference of 11,049.3 [9254.6-12,844.0] kg) leg (similar for nondominant) and chest press with the HYPER group completing significantly more total volume (difference of 13259.9 [9632.0-16,887.8] kg). These findings suggest that neither exercise volume nor the change in muscle size from training contributed to greater strength gains compared with just practicing the test.

  15. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    Science.gov (United States)

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

    Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is “volume-averaging”, which refers to lidars not sampling in a single, distinct point but along their entire beam length. Especially in regions with large velocity gradients, such as the rotor wake, ignoring it can be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow-field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Even with very few points discretising the lidar beam, volume-averaging is captured accurately. The difference between a lidar measurement and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
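
    Volume-averaging can be mimicked by weighting the point velocities along the beam and integrating, as the algorithm described above does. A minimal sketch using a Lorentzian weighting function, a common model for continuous-wave lidars; the beam geometry, FWHM, and wake profile below are illustrative assumptions, not the paper's setup:

    ```python
    import numpy as np

    def lidar_sample(u_point: np.ndarray, s: np.ndarray,
                     focus: float, fwhm: float) -> float:
        """Weight point velocities u_point(s) along the beam coordinate s.

        Uses a Lorentzian weighting function centred at the focus distance
        (pulsed lidars would use a different, pulse-shaped weighting).
        """
        gamma = fwhm / 2.0
        w = gamma / (np.pi * ((s - focus) ** 2 + gamma ** 2))
        w /= np.trapz(w, s)              # renormalise over the truncated beam
        return np.trapz(w * u_point, s)

    s = np.linspace(-100.0, 100.0, 2001)        # beam coordinate about the focus [m]
    u = 8.0 - 4.0 * np.exp(-(s / 20.0) ** 2)    # synthetic wake-like velocity deficit
    print(u[1000], lidar_sample(u, s, focus=0.0, fwhm=30.0))  # point vs volume-averaged
    ```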

  16. The Gap in Noise test in 11 and 12-year-old children.

    Science.gov (United States)

    Perez, Ana Paula; Pereira, Liliane Desgualdo

    2010-01-01

    Gap detection in 11 and 12-year-old children: to investigate temporal resolution through the Gap in Noise test in children of 11 and 12 years in order to establish criteria of normal development. Participants were 92 children, aged 11 and 12 years, enrolled in elementary school, with no evidence of otologic, neurologic, or cognitive disorders, and with no history of learning difficulties or school failure. In addition, participants' hearing thresholds were within normal limits and their verbal recognition in the dichotic test of digits was equal or superior to 95% of hits. All were submitted to the Gap in Noise test. The statistical analysis was performed by non-parametric tests with a significance level of 0.05 (5%). The average of the gap thresholds was 5.05 ms, and the average percentage of correct answers was 71.70%. There was no statistically significant difference between the responses by age (eleven and twelve years), by ear (right and left), or by gender (male and female). However, when comparing the tests, it was observed that the 1st test showed a higher percentage of gap identifications, statistically significant relative to the 2nd test. In 78.27% of the population of this study, the gap thresholds were up to 5 ms, a response recommended as the normality reference for the age group investigated.

  17. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  18. Predicting different grades in different ways for selective admission : Disentangling the first-year grade point average

    NARCIS (Netherlands)

    Steenman, Sebastiaan C.; Bakker, Wieger E.; van Tartwijk, Jan W F

    2016-01-01

    The first-year grade point average (FYGPA) is the predominant measure of student success in most studies on university admission. Previous cognitive achievements measured with high school grades or standardized tests have been found to be the strongest predictors of FYGPA. For this reason,

  19. Design of a high average-power FEL driven by an existing 20 MV electrostatic-accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Kimel, I.; Elias, L.R. [Univ. of Central Florida, Orlando, FL (United States)

    1995-12-31

    There are some important applications where high average-power radiation is required. Two examples are industrial machining and space power-beaming. Unfortunately, to date no FEL has been able to show more than 10 Watts of average power. To remedy this situation we started a program geared towards the development of high average-power FELs. As a first step we are building, in our CREOL laboratory, a compact FEL which will generate close to 1 kW in CW operation. As the next step we are also engaged in the design of a much higher average-power system based on a 20 MV electrostatic accelerator. This FEL will be capable of operating CW with a power output of 60 kW. The idea is to perform a high-power demonstration using the existing 20 MV electrostatic accelerator at the Tandar facility in Buenos Aires. This machine has been dedicated to accelerating heavy ions for experiments and applications in nuclear and atomic physics. The necessary adaptations required to utilize the machine to accelerate electrons will be described. An important aspect of the design of the 20 MV system is the electron beam optics through almost 30 meters of accelerating and decelerating tubes as well as the undulator. Of equal importance is a careful design of the long resonator, with mirrors able to withstand high power loading with proper heat dissipation features.

  20. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  1. Emissions characteristics of higher alcohol/gasoline blends

    International Nuclear Information System (INIS)

    Gautam, M.; Martin, D.W.; Carder, D.

    2000-01-01

    An experimental investigation was conducted to determine the emissions characteristics of higher alcohol and gasoline (UTG96) blends. While lower alcohols (methanol and ethanol) have been used in blends with gasoline, very little work has been done or reported on higher alcohols (propanol, butanol and pentanol). Comparisons of emissions and fuel characteristics between higher alcohol/gasoline blends and neat gasoline were made to determine the advantages and disadvantages of blending higher alcohols with gasoline. All tests were conducted on a single-cylinder Waukesha Cooperative Fuel Research engine operating at steady-state conditions and stoichiometric air-fuel (A/F) ratio. Emissions tests were conducted at the optimum spark timing-knock limiting compression ratio combination for the particular blend being tested. The cycle emissions [mass per unit time (g/h)] of CO, CO2 and organic matter hydrocarbon equivalent (OMHCE) from the higher alcohol/gasoline blends were very similar to those from neat gasoline. Cycle emissions of NOx from the blends were higher than those from neat gasoline. However, for all the emissions species considered, the brake specific emissions (g/kW h) were significantly lower for the higher alcohol/gasoline blends than for neat gasoline. This was because the blends had greater resistance to knock and allowed higher compression ratios, which increased engine power output. The contribution of alcohols and aldehydes to the overall OMHCE emissions was found to be minimal. Cycle fuel consumption (g/h) of higher alcohol/gasoline blends was slightly higher than with neat gasoline due to the lower stoichiometric A/F ratios required by the blends. However, the brake specific fuel consumption (g/kW h) for the blends was significantly lower than that for neat gasoline. (Author)

  2. Analysis and Design of Improved Weighted Average Current Control Strategy for LCL-Type Grid-Connected Inverters

    DEFF Research Database (Denmark)

    Han, Yang; Li, Zipeng; Yang, Ping

    2017-01-01

    The LCL grid-connected inverter has the ability to attenuate the high-frequency current harmonics. However, the inherent resonance of the LCL filter affects the system stability significantly. To damp the resonance effect, the dual-loop current control can be used to stabilize the system. The grid current plus capacitor current feedback system is widely used for its better transient response and high robustness against the grid impedance variations. The weighted average current (WAC) feedback scheme is capable of providing a wider bandwidth at higher frequencies but shows poor stability...

  3. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
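
    The essence of the method, recovering phase from a single fringe pattern via the analytic signal, can be sketched in a few lines with scipy. The synthetic fringe and the ideal background subtraction below are simplifying assumptions; real interferograms need filtering before this step, and the recovered phase degrades near the edges of the record:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # One fringe profile I(x) = a + b*cos(phi(x)); the Hilbert transform gives
    # the analytic signal, whose angle approximates phi without phase stepping.
    x = np.linspace(0.0, 1.0, 1024)
    phi_true = 40 * np.pi * x ** 2                 # synthetic vibration-induced phase
    fringe = 1.0 + 0.8 * np.cos(phi_true)

    analytic = hilbert(fringe - fringe.mean())     # remove the DC background first
    phi = np.unwrap(np.angle(analytic))

    offset = np.median(phi - phi_true)             # phase is recovered up to a constant
    err = np.abs(phi - phi_true - offset)
    print(np.median(err), err.max())               # errors concentrate at the edges
    ```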

  4. Structure of two-phase air-water flows. Study of average void fraction and flow patterns

    International Nuclear Information System (INIS)

    Roumy, R.

    1969-01-01

    This report deals with experimental work on a two-phase air-water mixture in vertical tubes of different diameters. The average void fraction was measured in a 2 metre long test section by means of quick-closing valves. Using resistive probes and photographic techniques, we have determined the flow patterns and developed diagrams to indicate the boundaries between the various patterns: independent bubbles, agglomerated bubbles, slugs, semi-annular, annular. In the case of bubble flow and slug flow, it is shown that the relationship between the average void fraction ⟨α⟩ and the superficial velocities of the phases is given by: V_sg = f(⟨α⟩) · g(V_sl). The function g(V_sl) for the case of independent bubbles has been found to be: g(V_sl) = V_sl + 20. For semi-annular and annular flow conditions, it appears that the average void fraction depends, to a first approximation, only on the ratio V_sg/V_sl. (author) [fr]

  5. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  6. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    International Nuclear Information System (INIS)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-01-01

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
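
    The multiplicative layer described above contracts network-predicted scalar coefficients with a fixed invariant tensor basis, so the output anisotropy inherits Galilean invariance by construction. A simplified numpy sketch of just that output layer, with two basis tensors instead of the full ten-tensor Pope basis and stand-in coefficients in place of a trained network:

    ```python
    import numpy as np

    def anisotropy_from_basis(g: np.ndarray, T: np.ndarray) -> np.ndarray:
        """Multiplicative output layer of a tensor-basis network (simplified).

        g: (n_basis,) scalar coefficients, normally predicted from invariants.
        T: (n_basis, 3, 3) invariant tensor basis built from the mean strain-
           and rotation-rate tensors. b = sum_n g_n * T_n is invariant by
           construction, which is the point of the architecture.
        """
        return np.einsum('n,nij->ij', g, T)

    # Tiny example: a strain-rate tensor and its deviatoric square as the basis.
    S = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
    T = np.stack([S, S @ S - np.trace(S @ S) / 3.0 * np.eye(3)])
    g = np.array([0.1, -0.05])                 # stand-ins for network outputs
    print(anisotropy_from_basis(g, T))
    ```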

  7. Nodal O(h4)-superconvergence in 3D by averaging piecewise linear, bilinear, and trilinear FE approximations

    Czech Academy of Sciences Publication Activity Database

    Hannukainen, A.; Korotov, S.; Křížek, Michal

    2010-01-01

    Roč. 28, č. 1 (2010), s. 1-10 ISSN 0254-9409 R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : higher order error estimates * tetrahedral and prismatic elements * superconvergence * averaging operators Subject RIV: BA - General Mathematics Impact factor: 0.760, year: 2010 http://www.jstor.org/stable/43693564

  8. Analysis of nonlinear systems using ARMA [autoregressive moving average] models

    International Nuclear Information System (INIS)

    Hunter, N.F. Jr.

    1990-01-01

    While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and model testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models, and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs
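
    As a point of reference for the linear case, a standard ARMA fit to a synthetic vibration-like response can be done with statsmodels; nonlinear ARMA variants extend the same lagged-input/output structure with nonlinear terms, which the library does not provide directly. A minimal sketch:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Discretised damped oscillator driven by noise: an AR(2) process.
    rng = np.random.default_rng(4)
    n = 2000
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = 1.8 * y[t - 1] - 0.9 * y[t - 2] + rng.normal(scale=0.1)

    # Fit a linear ARMA(2,1) model and use it for short-horizon prediction.
    model = ARIMA(y, order=(2, 0, 1)).fit()
    print(model.arparams, model.maparams)   # AR coefficients ~ (1.8, -0.9), MA ~ 0
    print(model.forecast(steps=5))          # predicted response, next 5 samples
    ```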

  9. Visibility-Based Hypothesis Testing Using Higher-Order Optical Interference

    Science.gov (United States)

    Jachura, Michał; Jarzyna, Marcin; Lipka, Michał; Wasilewski, Wojciech; Banaszek, Konrad

    2018-03-01

    Many quantum information protocols rely on optical interference to compare data sets with efficiency or security unattainable by classical means. Standard implementations exploit first-order coherence between signals whose preparation requires a shared phase reference. Here, we analyze and experimentally demonstrate the binary discrimination of visibility hypotheses based on higher-order interference for optical signals with a random relative phase. This provides a robust protocol implementation primitive when a phase lock is unavailable or impractical. With the primitive cost quantified by the total detected optical energy, optimal operation is typically reached in the few-photon regime.

  10. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both those used by official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research comprises laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as materials of scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. In the process of conducting the research, the following methods were used: analytical, statistical, computational-mathematical and graphical. The main result of the research is an option for supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions in a wide range of organizations, where an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, it is common for the average salary at an enterprise to be difficult to assess objectively, because it is built up from multiple pay rates per staff member. In other words, the average salary of

  11. 7 CFR 1437.11 - Average market price and payment factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average market price and payment factors. 1437.11... ASSISTANCE PROGRAM General Provisions § 1437.11 Average market price and payment factors. (a) An average... average market price by the applicable payment factor (i.e., harvested, unharvested, or prevented planting...

  12. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases

  13. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
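
    The core computation, correlating one index against a trailing moving average of another, is simple to reproduce. A sketch on synthetic series, constructed so the correlation is high by design (the paper of course fits real literary and economic indices):

    ```python
    import numpy as np

    def trailing_mean(x: np.ndarray, window: int) -> np.ndarray:
        """Moving average over the previous `window` values (current inclusive)."""
        return np.convolve(x, np.ones(window) / window, mode='valid')

    rng = np.random.default_rng(7)
    econ = rng.normal(size=120)                    # yearly economic misery (synthetic)
    # "Books average the past decade": literary index = 11-year average + noise.
    lit = trailing_mean(econ, 11) + 0.2 * rng.normal(size=110)

    r = np.corrcoef(lit, trailing_mean(econ, 11))[0, 1]
    print(round(r, 2))                             # high by construction
    ```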

  14. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
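
    The first variant's difficulty can be contrasted with the classical pairwise scheme, in which each exchange conserves the sum of the values and therefore converges to the true average. A minimal simulation of that baseline (the reinforcement-learning correction for the asynchronous case is beyond this sketch):

    ```python
    import numpy as np

    # Classical pairwise gossip: two random nodes replace their values with the
    # midpoint. Each exchange conserves the sum, so all values converge to the
    # average. Naive asynchronous updates can break this conservation, which is
    # the difficulty the abstract highlights.
    rng = np.random.default_rng(5)
    x = rng.uniform(0.0, 10.0, size=50)
    target = x.mean()

    for _ in range(5000):
        i, j = rng.choice(len(x), size=2, replace=False)
        x[i] = x[j] = 0.5 * (x[i] + x[j])

    print(target, x.min(), x.max())   # values cluster tightly around the mean
    ```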

  15. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged descriptions of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  16. A microNewton thrust stand for average thrust measurement of pulsed microthruster.

    Science.gov (United States)

    Zhou, Wei-Jing; Hong, Yan-Ji; Chang, Hao

    2013-12-01

    A torsional thrust stand has been developed for the study of the average thrust for microNewton pulsed thrusters. The main body of the thrust stand mainly consists of a torsional balance, a pair of flexural pivots, a capacitive displacement sensor, a calibration assembly, and an eddy current damper. The behavior of the stand was thoroughly studied. The principle of thrust measurement was analyzed. The average thrust is determined as a function of the average equilibrium angle displacement of the balance and the spring stiffness. The thrust stand has a load capacity up to 10 kg, and it can theoretically measure the force up to 609.6 μN with a resolution of 24.4 nN. The static calibrations were performed based on the calibration assembly composed of the multiturn coil and the permanent magnet. The calibration results demonstrated good repeatability (less than 0.68% FSO) and good linearity (less than 0.88% FSO). The assembly of the multiturn coil and the permanent magnet was also used as an exciter to simulate the microthruster to further research the performance of the thrust stand. Three sets of force pulses at 17, 33.5, and 55 Hz with the same amplitude and pulse width were tested. The repeatability error at each frequency was 7.04%, 1.78%, and 5.08%, respectively.

  17. OrthoANI: An improved algorithm and software for calculating average nucleotide identity.

    Science.gov (United States)

    Lee, Imchang; Ouk Kim, Yeong; Park, Sang-Cheol; Chun, Jongsik

    2016-02-01

    Species demarcation in Bacteria and Archaea is mainly based on overall genome relatedness, which serves as a framework for modern microbiology. Current practice for obtaining these measures between two strains is shifting from experimentally determined similarity obtained by DNA-DNA hybridization (DDH) to genome-sequence-based similarity. Average nucleotide identity (ANI) is a simple algorithm that mimics DDH. Like DDH, ANI values between two genome sequences may differ from each other when reciprocal calculations are compared. We compared 63,690 pairs of genome sequences and found that the differences in reciprocal ANI values are significantly high, exceeding 1% in some cases. To resolve this asymmetry, a new algorithm, named OrthoANI, was developed to accommodate the concept of orthology, in which both genome sequences are fragmented and only orthologous fragment pairs are taken into consideration for calculating nucleotide identities. OrthoANI is highly correlated with ANI (using BLASTn), and the former shows approximately 0.1% higher values than the latter. In conclusion, OrthoANI provides a more robust and faster means of calculating average nucleotide identity for taxonomic purposes. The standalone software tools are freely available at http://www.ezbiocloud.net/sw/oat.

  18. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  19. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  20. An Enhanced Method to Estimate Heart Rate from Seismocardiography via Ensemble Averaging of Body Movements at Six Degrees of Freedom

    Directory of Open Access Journals (Sweden)

    Hyunwoo Lee

    2018-01-01

    Full Text Available Continuous cardiac monitoring has been developed to evaluate cardiac activity outside of clinical environments due to the advancement of novel instruments. Seismocardiography (SCG) is one of the vital components that could develop such a monitoring system. Although SCG has been presented with a lower accuracy, this novel cardiac indicator has been steadily proposed over traditional methods such as electrocardiography (ECG). Thus, it is necessary to develop an enhanced method by combining the significant cardiac indicators. In this study, the six-axis signals of an accelerometer and gyroscope were measured and integrated by the L2 normalization and multi-dimensional kineticardiography (MKCG) approaches, respectively. The waveforms of the accelerometer and gyroscope were standardized and combined via ensemble averaging, and the heart rate was calculated from the dominant frequency. Thirty participants (15 females) were asked to stand or sit in relaxed and aroused conditions. Their SCG was measured during the task. As a result, the proposed method showed higher accuracy than traditional SCG methods in all measurement conditions. The three main contributions are as follows: (1) the ensemble averaging enhanced heart rate estimation with the benefits of the six-axis signals; (2) the proposed method was compared with the previous SCG method that employs fewer axes; and (3) the method was tested in various measurement conditions for a more practical application.
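
    A stripped-down version of the pipeline described above, standardize each axis, combine via ensemble averaging, and read the heart rate off the dominant spectral peak, fits in a short function. The synthetic six-axis signal, sampling rate, and cardiac frequency band below are assumptions for illustration:

    ```python
    import numpy as np

    def heart_rate_bpm(signals: np.ndarray, fs: float) -> float:
        """Estimate heart rate from multi-axis motion signals (simplified sketch).

        signals: (n_axes, n_samples), e.g. 3 accelerometer + 3 gyroscope axes.
        Each axis is standardised, the axes are combined by ensemble averaging,
        and the dominant spectral peak in a plausible cardiac band (0.7-3 Hz,
        i.e. 42-180 bpm) is converted to beats per minute.
        """
        z = (signals - signals.mean(axis=1, keepdims=True)) / signals.std(axis=1, keepdims=True)
        fused = z.mean(axis=0)                    # ensemble average across axes
        spec = np.abs(np.fft.rfft(fused))
        freqs = np.fft.rfftfreq(fused.size, d=1.0 / fs)
        band = (freqs >= 0.7) & (freqs <= 3.0)
        return 60.0 * freqs[band][np.argmax(spec[band])]

    # Synthetic 30 s recording at 100 Hz: six phase-shifted 1.2 Hz components + noise.
    fs = 100.0
    t = np.arange(0.0, 30.0, 1.0 / fs)
    rng = np.random.default_rng(6)
    axes = np.stack([np.sin(2 * np.pi * 1.2 * t + p) + 0.3 * rng.normal(size=t.size)
                     for p in np.linspace(0.0, np.pi, 6)])
    print(heart_rate_bpm(axes, fs))               # ~72 bpm (1.2 Hz)
    ```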

  1. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
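
    The evaporative-demand step lends itself to a short worked sketch. The Hargreaves model named here is conventionally the Hargreaves-Samani form; the coefficient 0.0023 and the convention of expressing the exoatmospheric radiation as its water-equivalent in mm/day are standard, but the exact variant the authors used is an assumption, as are the function names and example numbers below.

```python
def hargreaves_et0(tmax_c, tmin_c, ra_mm_day):
    """Hargreaves-Samani reference evapotranspiration (mm/day).

    tmax_c, tmin_c: monthly average max/min air temperature (deg C)
    ra_mm_day: potential exoatmospheric solar radiation under clear sky,
               expressed as its water-equivalent in mm/day.
    """
    tmean = (tmax_c + tmin_c) / 2.0
    return 0.0023 * ra_mm_day * (tmean + 17.8) * (tmax_c - tmin_c) ** 0.5

def monthly_water_balance(precip_mm, tmax_c, tmin_c, ra_mm_day, days=30):
    """Climatic water balance for one month and one grid cell: P - AED."""
    return precip_mm - days * hargreaves_et0(tmax_c, tmin_c, ra_mm_day)

# Example cell: 80 mm rain, Tmax 24 C, Tmin 10 C, Ra ~ 12 mm/day.
print(round(monthly_water_balance(80.0, 24.0, 10.0, 12.0), 1))  # -27.8 => deficit
```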

  2. The relationship between limit of Dysphagia and average volume per swallow in patients with Parkinson's disease.

    Science.gov (United States)

    Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes

    2014-08-01

    The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesize there is a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing 100 ml of water by the number of swallows needed to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significantly moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.

  3. Higher cost of implementing Xpert(®) MTB/RIF in Ugandan peripheral settings: implications for cost-effectiveness.

    Science.gov (United States)

    Hsiang, E; Little, K M; Haguma, P; Hanrahan, C F; Katamba, A; Cattamanchi, A; Davis, J L; Vassall, A; Dowdy, D

    2016-09-01

    Initial cost-effectiveness evaluations of Xpert(®) MTB/RIF for tuberculosis (TB) diagnosis have not fully accounted for the realities of implementation in peripheral settings. To evaluate costs and diagnostic outcomes of Xpert testing implemented at various health care levels in Uganda. We collected empirical cost data from five health centers utilizing Xpert for TB diagnosis, using an ingredients approach. We reviewed laboratory and patient records to assess outcomes at these sites and 10 sites without Xpert. We also estimated incremental cost-effectiveness of Xpert testing; our primary outcome was the incremental cost of Xpert testing per newly detected TB case. The mean unit cost of an Xpert test was US$21 based on a mean monthly volume of 54 tests per site, although unit cost varied widely (US$16-58) and was primarily determined by testing volume. Total diagnostic costs were 2.4-fold higher in Xpert clinics than in non-Xpert clinics; however, Xpert only increased diagnoses by 12%. The diagnostic costs of Xpert averaged US$119 per newly detected TB case, but were as high as US$885 at the center with the lowest volume of tests. Xpert testing can detect TB cases at reasonable cost, but may double diagnostic budgets for relatively small gains, with cost-effectiveness deteriorating with lower testing volumes.
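
    The volume dependence of the unit cost reported here falls out of simple ingredients-approach arithmetic: fixed monthly costs (equipment amortization, staff, overheads) are spread over the month's tests, while cartridge costs stay per-test. The sketch below uses invented fixed and variable costs and hypothetical function names; it only illustrates the mechanism, not the study's actual cost model.

```python
def xpert_unit_cost(monthly_volume,
                    fixed_monthly=700.0,      # equipment, staff, overhead (assumed)
                    variable_per_test=10.0):  # cartridge and consumables (assumed)
    """Ingredients-style unit cost: fixed costs spread over volume."""
    return fixed_monthly / monthly_volume + variable_per_test

def cost_per_new_case(monthly_volume, extra_cases_per_month):
    """Incremental diagnostic cost per newly detected TB case."""
    total = monthly_volume * xpert_unit_cost(monthly_volume)
    return total / extra_cases_per_month

for v in (10, 54, 150):   # unit cost falls steeply with testing volume
    print(v, round(xpert_unit_cost(v), 2))
```

    With these made-up inputs the unit cost falls from $80 at 10 tests/month toward the per-cartridge floor at high volume, mirroring the US$16-58 spread the study attributes to testing volume.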

  4. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  5. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  6. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated by the averaged statistics

  7. Two Questions about Critical-Thinking Tests in Higher Education

    Science.gov (United States)

    Benjamin, Roger

    2014-01-01

    In this article, the author argues first, that critical-thinking skills do exist independent of disciplinary thinking skills and are not compromised by interaction effects with the major; and second, that standardized tests (e.g., the Collegiate Learning Assessment, or CLA, which is his example throughout the article) are the best way to measure…

  8. Local and average structure of Mn- and La-substituted BiFeO3

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Bo; Selbach, Sverre M., E-mail: selbach@ntnu.no

    2017-06-15

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average-structure space group models or DFT calculations with artificial long-range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions. - Graphical abstract: The experimental and simulated partial pair distribution functions (PDF) for BiFeO3, BiFe0.875Mn0.125O3, BiFe0.75Mn0.25O3 and Bi0.9La0.1FeO3.

  9. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  10. Centers for Disease Control and Prevention Funding for HIV Testing Associated With Higher State Percentage of Persons Tested.

    Science.gov (United States)

    Hayek, Samah; Dietz, Patricia M; Van Handel, Michelle; Zhang, Jun; Shrestha, Ram K; Huang, Ya-Lin A; Wan, Choi; Mermin, Jonathan

    2015-01-01

    To assess the association between state per capita allocations of Centers for Disease Control and Prevention (CDC) funding for HIV testing and the percentage of persons tested for HIV. We examined data from 2 sources: 2011 Behavioral Risk Factor Surveillance System and 2010-2011 State HIV Budget Allocations Reports. Behavioral Risk Factor Surveillance System data were used to estimate the percentage of persons aged 18 to 64 years who had reported testing for HIV in the last 2 years in the United States by state. State HIV Budget Allocations Reports were used to calculate the state mean annual per capita allocations for CDC-funded HIV testing reported by state and local health departments in the United States. The association between the state fixed-effect per capita allocations for CDC-funded HIV testing and self-reported HIV testing in the last 2 years among persons aged 18 to 64 years was assessed with a hierarchical logistic regression model adjusting for individual-level characteristics. The percentage of persons tested for HIV in the last 2 years. In 2011, 18.7% (95% confidence interval = 18.4-19.0) of persons reported being tested for HIV in last 2 years (state range, 9.7%-28.2%). During 2010-2011, the state mean annual per capita allocation for CDC-funded HIV testing was $0.34 (state range, $0.04-$1.04). A $0.30 increase in per capita allocation for CDC-funded HIV testing was associated with an increase of 2.4 percentage points (14.0% vs 16.4%) in the percentage of persons tested for HIV per state. Providing HIV testing resources to health departments was associated with an increased percentage of state residents tested for HIV.

  11. Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.

    Science.gov (United States)

    Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D

    2018-04-19

    The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013-June 2016. The ANN was constructed using temporal (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018 ± 0.002; validation r = 0.8899 ± 0.005; testing r = 0.8940 ± 0.006). We were able to successfully predict trauma and emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10) and a mean ISS score of 15.99 (actual: 13.12). This may prove useful for predicting trauma needs across the system and hospital administration when allocating limited resources. Level III. Study type: Prognostic/Epidemiological.
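
    A rough reconstruction of the described model is possible with scikit-learn, though with caveats: the abstract's Levenberg-Marquardt backpropagation (MATLAB trainlm-style) has no scikit-learn equivalent, so L-BFGS stands in, and the feature matrix and target below are synthetic placeholders for the TRACS/NOAA data, not the study's inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_days = 1096

# Hypothetical feature matrix: day of week, day of year, daily high,
# active-precipitation flag (the abstract's temporal and weather inputs).
X = np.column_stack([
    rng.integers(0, 7, n_days),       # day of week
    rng.integers(0, 365, n_days),     # day of year
    rng.normal(20, 8, n_days),        # daily high (deg C)
    rng.integers(0, 2, n_days),       # precipitation yes/no
])
# Hypothetical target: daily trauma count (one of the four described outputs).
y = 8 + 0.1 * X[:, 2] + 1.5 * X[:, 3] + rng.normal(0, 2, n_days)

# Two-layer feed-forward net with 10 sigmoid hidden neurons, as described.
# scikit-learn has no Levenberg-Marquardt trainer, so L-BFGS stands in.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                 solver="lbfgs", max_iter=2000, random_state=0),
)
scores = cross_val_score(model, X, y, cv=10, scoring="r2")  # k-fold validation
print(f"mean cross-validated R^2: {scores.mean():.3f}")
```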

  12. The Average IQ of Sub-Saharan Africans: Comments on Wicherts, Dolan, and van der Maas

    Science.gov (United States)

    Lynn, Richard; Meisenberg, Gerhard

    2010-01-01

    Wicherts, Dolan, and van der Maas (2009) contend that the average IQ of sub-Saharan Africans is about 80. A critical evaluation of the studies presented by WDM shows that many of these are based on unrepresentative elite samples. We show that studies of 29 acceptably representative samples on tests other than the Progressive Matrices give a…

  13. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by ± 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  14. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
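
    The lag analysis described here, correlating a literary index against a trailing moving average of the economic index and scanning for the best window, can be sketched as below. The series are synthetic and the function names invented; the toy data are constructed so that an 11-year window should win, echoing the paper's peak.

```python
import numpy as np

def trailing_moving_average(series, window):
    """Mean of each value and its (window - 1) predecessors."""
    return np.convolve(series, np.ones(window) / window, mode="valid")

def lag_correlation(econ, lit, window, start=20):
    ma = trailing_moving_average(econ, window)  # ma[i] averages years i..i+window-1
    years = range(start, len(econ))
    x = [ma[j - window + 1] for j in years]     # moving average ending at year j
    y = [lit[j] for j in years]
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(1)
econ = rng.normal(10, 3, 100)                   # inflation + unemployment, per year
lit = np.array([econ[max(0, j - 10): j + 1].mean() for j in range(100)])
lit += rng.normal(0, 0.3, 100)                  # literary misery: noisy 11-year memory

best = max(range(2, 21), key=lambda w: lag_correlation(econ, lit, w))
print("best window (years):", best)             # expect ~11
```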

  15. Experimental results of some cluster tests in NSRR

    International Nuclear Information System (INIS)

    Kobayashi, Shinsho; Ohnishi, Nobuaki; Yoshimura, Tomio; Lussie, W.G.

    1978-01-01

    The NSRR programme is in progress in JAERI using a pulsed reactor to evaluate the behavior of reactor fuels under reactivity accident conditions. This report briefly describes the experimental results and preliminary analysis of two cluster tests. In the five-rod cluster configuration, the power distribution in the outer fuel rods is not symmetric due to neutron absorption in the central fuel rod. The cladding temperature on the exterior boundaries of the cluster is higher than that in the interior. Good agreement was obtained between the calculated and measured cladding temperature histories. In the 3.8$ excess reactivity test, with a cluster-averaged energy deposition of 237 cal/g UO 2 , cladding melting and deformation were limited to the portions of the fuel rods that were on the exterior boundaries of the cluster. (auth.)

  16. Test of the local form of higher-spin equations via AdS/CFT

    Directory of Open Access Journals (Sweden)

    V.E. Didenko

    2017-12-01

    Full Text Available The local form of higher-spin equations found recently to the second order [1] is shown to properly reproduce the anticipated AdS/CFT correlators for appropriate boundary conditions. It is argued that consistent AdS/CFT holography for the parity-broken boundary models needs a nontrivial modification of the bosonic truncation of the original higher-spin theory with the doubled number of fields, as well as a nonlinear deformation of the boundary conditions in the higher orders.

  17. Effects of Video-Based and Applied Problems on the Procedural Math Skills of Average- and Low-Achieving Adolescents.

    Science.gov (United States)

    Bottge, Brian A.; Heinrichs, Mary; Chan, Shih-Yi; Mehta, Zara Dee; Watson, Elizabeth

    2003-01-01

    This study examined effects of video-based, anchored instruction and applied problems on the ability of 11 low-achieving (LA) and 26 average-achieving (AA) eighth graders to solve computation and word problems. Performance for both groups was higher during anchored instruction than during baseline, but no differences were found between instruction…

  18. High average daily intake of PCDD/Fs and serum levels in residents living near a deserted factory producing pentachlorophenol (PCP) in Taiwan: Influence of contaminated fish consumption

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C.C. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Research Center of Environmental Trace Toxic Substances, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Lin, W.T. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Liao, P.C. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Research Center of Environmental Trace Toxic Substances, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Su, H.J. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Research Center of Environmental Trace Toxic Substances, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Chen, H.L. [Department of Industrial Safety and Health, Hung Kuang University, Taichung, 34 Chung Chie Rd. Sha Lu, Taichung 433, Taiwan (China)]. E-mail: hsiulin@sunrise.hk.edu.tw

    2006-05-15

    An abandoned pentachlorophenol plant and the nearby area in southern Taiwan were heavily contaminated by dioxins, impurities formed in the PCP production process. The investigation showed that the average serum PCDD/F level of residents living in the nearby area (62.5 pg WHO-TEQ/g lipid) was higher than that of residents living in non-polluted areas (22.5 and 18.2 pg WHO-TEQ/g lipid) (P < 0.05). In biota samples, the average PCDD/F level of milkfish in the sea reservoir (28.3 pg WHO-TEQ/g) was higher than that in the nearby fish farm (0.15 pg WHO-TEQ/g), and Tilapia and shrimp showed a similar trend. The average daily PCDD/F intake of 38% of participants was higher than the 4 pg WHO-TEQ/kg/day suggested by the World Health Organization. Serum PCDD/F was positively associated with average daily intake (ADI) after adjustment for age, sex, BMI, and smoking status. In addition, a prospective cohort study is suggested to determine the long-term health effects on the people living near the factory. - Inhabitants living near a deserted PCP factory are exposed to high PCDD/F levels.

  19. The north–south divide in the Italian higher education system

    DEFF Research Database (Denmark)

    Abramo, Giovanni; D’Angelo, Ciriaco Andrea; Rosati, Francesco

    2016-01-01

    This work examines whether the macroeconomic divide between northern and southern Italy is also present at the level of higher education. The analysis confirms that the research performance in the sciences of the professors in the south is on average less than that of the professors in the north...

  20. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
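
    The core computation of Bayesian model averaging is one line: posterior-weight each model's prediction, with weights proportional to model evidence times prior. A minimal numeric sketch follows; all numbers are illustrative only, not taken from the paper.

```python
import numpy as np

# Bayesian model averaging: weight each model's prediction by its posterior
# probability, proportional to evidence x prior. Numbers are illustrative.

log_evidence = np.array([-10.2, -11.5, -14.0])   # log p(data | model m)
prior = np.array([1 / 3, 1 / 3, 1 / 3])          # p(model m)

w = np.exp(log_evidence - log_evidence.max()) * prior
w /= w.sum()                                     # posterior p(m | data)

predictions = np.array([0.8, 0.4, 0.1])          # p(outcome | m, data) per model
bma_prediction = w @ predictions                 # model-averaged prediction
print(w.round(3), round(float(bma_prediction), 3))
```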

  1. [Algorithm for taking into account the average annual background of air pollution in the assessment of health risks].

    Science.gov (United States)

    Fokin, M V

    2013-01-01

    Assessing health risks from air pollution caused by industrial emissions without accounting for the average annual background level of air pollution does not comply with sanitary legislation. However, the Russian Federal Service for Hydrometeorology and Environmental Monitoring issues official certificates only for the limited number of areas covered by full-program observations at stationary monitoring points. The paper considers how to account for the average annual background of air pollution when evaluating health risks from exposure to emissions from industrial facilities.

  2. Exploring Modeling Options and Conversion of Average Response to Appropriate Vibration Envelopes for a Typical Cylindrical Vehicle Panel with Rib-stiffened Design

    Science.gov (United States)

    Harrison, Phil; LaVerde, Bruce; Teague, David

    2009-01-01

    Although applications of Statistical Energy Analysis (SEA) techniques are widely used in the aerospace industry today, opportunities to anchor response predictions with measured data from a flight-like launch vehicle structure are still quite valuable. Response and excitation data from a ground acoustic test at the Marshall Space Flight Center permitted the authors to compare and evaluate several modeling techniques available in the SEA module of the commercial code VA One. This paper provides an example of vibration response estimates developed using different modeling approaches to both approximate and bound the response of a flight-like vehicle panel. Since both vibration response and acoustic levels near the panel were available from the ground test, the evaluation provided an opportunity to learn how well the different modeling options can match band-averaged spectra developed from the test data. Additional work was performed to understand the spatial averaging of the measurements across the panel. Finally, two approaches for converting the statistical average response output by an SEA analysis into a more useful envelope of response spectra, appropriate for specifying design and test vibration levels for a new vehicle, are evaluated and compared.

  3. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...

  4. A 35-year comparison of children labelled as gifted, unlabelled as gifted and average-ability

    Directory of Open Access Journals (Sweden)

    Joan Freeman

    2014-09-01

    Full Text Available http://dx.doi.org/10.5902/1984686X14273 Why are some children seen as gifted while others of identical ability are not? To find out why and what the consequences might be, in 1974 I began in England with 70 children labelled as gifted. Each one was matched for age, sex and socio-economic level with two comparison children in the same school class. The first comparison child had an identical gift, and the second was taken at random. Investigation was by a battery of tests and deep questioning of pupils, teachers and parents in their schools and homes, which went on for 35 years. A major significant difference was that those labelled gifted had significantly more emotional problems than either the unlabelled but identically gifted or the random controls. The vital aspects of success for the entire sample, whether gifted or not, have been hard work, emotional support and a positive personal outlook. But in general, the higher the individual's intelligence the better their chances in life.

  5. Internet-based cohort study of HIV testing over 1 year among men who have sex with men living in England and exposed to a social marketing intervention promoting testing.

    Science.gov (United States)

    Hickson, Ford; Tomlin, Keith; Hargreaves, James; Bonell, Chris; Reid, David; Weatherburn, Peter

    2015-02-01

    Increasing HIV testing among men who have sex with men (MSM) is a major policy goal in the UK. Social marketing is a common intervention to increase testing uptake. We used an online panel of MSM to examine rates of HIV testing behaviour and the impact of a social marketing intervention on them. MSM in England were recruited to a longitudinal internet panel through community websites and a previous survey. Following an enrolment survey, respondents were invited to self-complete 13 surveys at monthly intervals throughout 2011. A unique alphanumeric code linked surveys for individuals. Rates of HIV testing were compared relative to prompted recognition of a multi-part media campaign aiming to normalise HIV testing. Of 3386 unique enrolments, 2047 respondents were included in the analysis, between them submitting 15,353 monthly surveys (equivalent to 1279 years of follow-up), and recording 1517 HIV tests taken, giving an annual rate of tests per participant of 1.19 (95% CI 1.13 to 1.25). Tests were highly clustered in individuals (61% reported no test during the study). Testing rates were higher in London, single men and those aged 25-34 years. Only 7.6% recognised the intervention when prompted. After controlling for sociodemographic characteristics and exposure to other health promotion campaigns, intervention recognition was not associated with increased likelihood of testing. Higher rates of testing were strongly associated with higher number of casual sexual partners and how recently men had HIV tested before study enrolment. This social marketing intervention was not associated with increased rates of HIV testing. More effective promotion of HIV testing is needed among MSM in England to reduce the average duration of undiagnosed infection.

  6. Significance of acceleration period in a dynamic strength testing study.

    Science.gov (United States)

    Chen, W L; Su, F C; Chou, Y L

    1994-06-01

    The acceleration period that occurs during isokinetic tests may provide valuable information regarding neuromuscular readiness to produce maximal contraction. The purpose of this study was to collect normative data on acceleration time during isokinetic knee testing, to calculate the acceleration work (Wacc), and to determine the errors (ERexp, ERwork, ERpower) due to ignoring Wacc in explosiveness, total work, and average power measurements. Seven male and 13 female subjects performed the tests using the Cybex 325 system and an electronic stroboscope at 10 testing speeds (30-300 degrees/sec). A three-way ANOVA was used to assess the effects of gender, direction, and speed on acceleration time, Wacc, and errors. The results indicated that acceleration time was significantly affected by speed and direction; Wacc by speed, direction, and gender; ERexp by speed, direction, and gender; and ERwork and ERpower by speed and gender. The errors appeared to increase when testing the female subjects, during the knee flexion test, or when speed increased. To increase validity in clinical testing, it is important to consider the acceleration phase effect, especially in higher velocity isokinetic testing or for weaker muscle groups.

  7. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  8. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  9. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ~50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  10. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  11. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  12. The association between estimated average glucose levels and fasting plasma glucose levels

    Directory of Open Access Journals (Sweden)

    Giray Bozkaya

    2010-01-01

    Full Text Available OBJECTIVE: The level of hemoglobin A1c (HbA1c), also known as glycated hemoglobin, determines how well a patient's blood glucose level has been controlled over the previous 8-12 weeks. HbA1c levels help patients and doctors understand whether a particular diabetes treatment is working and whether adjustments need to be made to the treatment. Because the HbA1c level is a marker of blood glucose for the previous 120 days, average blood glucose levels can be estimated using HbA1c levels. Our aim in the present study was to investigate the relationship between estimated average glucose levels, as calculated by HbA1c levels, and fasting plasma glucose levels. METHODS: The fasting plasma glucose levels of 3891 diabetic patient samples (1497 male, 2394 female) were obtained from the laboratory information system used for HbA1c testing by the Department of Internal Medicine at the Izmir Bozyaka Training and Research Hospital in Turkey. These samples were selected from patient samples that had hemoglobin levels between 12 and 16 g/dL. The estimated glucose levels were calculated using the following formula: 28.7 x HbA1c - 46.7. Glucose and HbA1c levels were determined using hexokinase and high performance liquid chromatography (HPLC) methods, respectively. RESULTS: A strong positive correlation between fasting plasma glucose levels and estimated average blood glucose levels (r=0.757, p<0.05) was observed. The difference was statistically significant. CONCLUSION: Reporting the estimated average glucose level together with the HbA1c level is believed to assist patients and doctors in determining the effectiveness of blood glucose control measures.
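
    The conversion used in the study is the linear regression quoted in the abstract (the ADAG formula), which is trivial to apply; the function name below is invented for illustration.

```python
def estimated_average_glucose_mgdl(hba1c_percent: float) -> float:
    """Formula used in the study: eAG (mg/dL) = 28.7 x HbA1c - 46.7."""
    return 28.7 * hba1c_percent - 46.7

# Example: an HbA1c of 7.0% corresponds to an eAG of about 154 mg/dL.
print(round(estimated_average_glucose_mgdl(7.0), 1))  # 154.2
```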

  13. Suggestion of an average bidirectional flow tube for the measurement of single and two phase flow rate

    International Nuclear Information System (INIS)

    Yun, B.J.; Kang, K.H.; Euh, D.J.; Song, C.H.; Baek, W.P.

    2005-01-01

    Full text of publication follows: A new type of instrumentation, the average bidirectional flow tube, was suggested for application to single- and two-phase flow conditions. Its working principle is similar to that of the Pitot tube. The pressure measured at the front of the flow tube is equal to the total pressure, while that measured at the rear tube is slightly less than the static pressure of the flow field due to the suction effect at the downstream side. This amplifies the pressure difference measured at the flow tube. The proposed instrumentation is applicable to low-flow conditions and can measure bidirectional flow. It was tested in air-water vertical and horizontal test sections of 0.08 m inner diameter. The pressure difference across the average bidirectional flow tube, system pressure, average void fraction and injected phasic mass flow rates were measured on the measuring plane. Tests were performed first under single-phase water and air flow conditions to obtain the amplification factor k of the flow tube. Tests were also performed under air-water two-phase flow conditions; the covered flow regimes were bubbly, slug and churn-turbulent flow in the vertical pipe and stratified flow in the horizontal pipe. In order to calculate the phasic and total mass flow rates from the measured differential pressure, the Chexal drift-flux correlation and a momentum exchange factor between the two phases were introduced. The test results show that the suggested instrumentation, combined with the measured void fraction, the Chexal drift-flux correlation and Bosio and Malnes' momentum exchange model, can predict the phasic mass flow rates within 15% error of the true values. A new momentum exchange model was also suggested; it improves the measured mass flow rate by up to 5% compared with the Bosio and Malnes model. (authors)
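
    For the single-phase case, the Pitot-like working principle reduces to textbook arithmetic: the measured differential pressure is an amplified dynamic pressure, so velocity follows from a square root and the sign of the reading gives the flow direction. The sketch below is a simplification under that assumption; the amplification factor value, fluid properties and function name are invented, and the two-phase path (void fraction, Chexal drift-flux correlation, momentum exchange factor) is not shown.

```python
import math

def single_phase_mass_flow(dp_pa, rho, area_m2, k=1.8):
    """Pitot-style conversion of the flow-tube differential pressure to a
    single-phase mass flow rate. k is the empirically determined
    amplification factor of the tube (value assumed here). The sign of
    dp_pa gives the flow direction, hence 'bidirectional'."""
    v = math.copysign(math.sqrt(2.0 * abs(dp_pa) / (k * rho)), dp_pa)
    return rho * v * area_m2

# 0.08 m pipe, water, 500 Pa measured across the tube:
area = math.pi * 0.04 ** 2
print(round(single_phase_mass_flow(500.0, 998.0, area), 3), "kg/s")
```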

  14. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers.

  15. Accounting for Institutional Variation in Expected Returns to Higher Education

    Science.gov (United States)

    Dorius, Shawn F.; Tandberg, David A.; Cram, Bridgette

    2017-01-01

    This study leverages human capital theory to identify the correlates of expected returns on investment in higher education at the level of institutions. We leverage estimates of average ROI in post-secondary education among more than 400 baccalaureate degree conferring colleges and universities to understand the correlates of a relatively new…

  16. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  17. Utilization of diabetes medication and cost of testing supplies in Saskatchewan, 2001.

    Science.gov (United States)

    Johnson, Jeffrey A; Pohar, Sheri L; Secnik, Kristina; Yurgin, Nicole; Hirji, Zeenat

    2006-12-12

    The purpose of this study was to describe the patterns of antidiabetic medication use and the cost of testing supplies in Canada using information collected by Saskatchewan's Drug Plan (DP) in 2001. The diabetes cohort (n = 41,630) included individuals who met the National Diabetes Surveillance System (NDSS) case definition. An algorithm was then used to identify subjects as having type 1 or type 2 diabetes. Among those identified as having type 2 diabetes (n = 37,625), 38% did not have records for antidiabetic medication in 2001. One-third of patients with type 2 diabetes received monotherapy. Metformin, alone or in combination with other medications, was the most commonly prescribed antidiabetic medication. Just over one-half of all patients with diabetes had DP records for diabetes testing supplies. For individuals with type 1 diabetes (n = 4,005), 79% had a DP record for supplies, with an average annual cost of $472 ± $560. For type 2 diabetes, 50% had records for testing supplies, with an average annual cost of $122 ± $233. Those individuals with type 2 diabetes who used insulin had higher testing supply costs than those on oral antidiabetic medication alone ($359 vs $131; p < 0.001).

  18. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  19. Serpent-COREDAX analysis of CANDU-6 time-average model

    Energy Technology Data Exchange (ETDEWEB)

    Motalab, M.A.; Cho, B.; Kim, W.; Cho, N.Z.; Kim, Y., E-mail: yongheekim@kaist.ac.kr [Korea Advanced Inst. of Science and Technology (KAIST), Dept. of Nuclear and Quantum Engineering Daejeon (Korea, Republic of)

    2015-07-01

    COREDAX-2 is a nuclear core analysis nodal code that has adopted the Analytic Function Expansion Nodal (AFEN) methodology, which was developed in Korea. The AFEN method outperforms other conventional nodal methods in terms of accuracy. To evaluate the possibility of CANDU-type core analysis using COREDAX-2, a time-average analysis code system was developed. The two-group homogenized cross-sections were calculated using the Monte Carlo code Serpent2. A stand-alone time-average module was developed to determine the time-average burnup distribution in the core for a given fuel management strategy. The coupled Serpent-COREDAX-2 calculation converges to an equilibrium time-average model for the CANDU-6 core. (author)

  20. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a minimum sampling interval of 2.5 μs and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
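
    'Stable averaging' here plausibly refers to the normalized running mean, in which the stored array equals the true average of all sweeps acquired so far after every sweep, which is what keeps the CRT display calibrated throughout accumulation; that reading, and the function name below, are assumptions. A sketch:

```python
import random

def stable_average(sweeps):
    """Running ("stable") average: after every sweep the stored array equals
    the calibrated mean of all sweeps so far, so it can be displayed at any
    time during accumulation."""
    avg = None
    for n, sweep in enumerate(sweeps, start=1):
        if avg is None:
            avg = [float(s) for s in sweep]
        else:
            for i, s in enumerate(sweep):
                avg[i] += (s - avg[i]) / n   # incremental mean update
        yield avg

signal = [10.0] * 256                              # true amplitude per channel
sweeps = ([s + random.gauss(0, 5) for s in signal] for _ in range(4096))
for avg in stable_average(sweeps):
    pass                                           # display 'avg' each sweep here
print(round(sum(avg) / len(avg), 2))               # ~10.0 after 2**12 sweeps
```

    Note that 4096 = 2¹² sweeps gives a sqrt(4096) = 64-fold amplitude S/N gain, i.e. about 36 dB, matching the figure quoted above.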

  1. Genetics educational needs in China: physicians' experience and knowledge of genetic testing.

    Science.gov (United States)

    Li, Jing; Xu, Tengda; Yashar, Beverly M

    2015-09-01

    The aims of this study were to explore the relationship between physicians' knowledge and utilization of genetic testing and to explore genetics educational needs in China. An anonymous survey about experience, attitudes, and knowledge of genetic testing was conducted among physicians affiliated with Peking Union Medical College Hospital during their annual health evaluation. A personal genetics knowledge score was developed and predictors of this score were evaluated. Sixty-four physicians (33% male) completed the survey. Fifty-eight percent of them had used genetic testing in their clinical practice. Using a 4-point scale, mean knowledge scores of six common genetic testing techniques ranged from 1.7 ± 0.9 to 2.4 ± 1.0, and the average personal genetics knowledge score was 2.1 ± 0.8. In regression analysis, significant predictors of a higher personal genetics knowledge score were ordering of genetic testing, utilization of pedigrees, higher medical degree, and recent genetics training (P < 0.05). This study demonstrated a sizable gap between Chinese physicians' knowledge and utilization of genetic testing. Participants had high self-perceived genetics educational needs. Development of genetics educational platforms is both warranted and desired in China. Genet Med 17(9), 757-760.

  2. Multiple-Choice Exams: An Obstacle for Higher-Level Thinking in Introductory Science Classes

    Science.gov (United States)

    Stanger-Hall, Kathrin F.

    2012-01-01

    Learning science requires higher-level (critical) thinking skills that need to be practiced in science classes. This study tested the effect of exam format on critical-thinking skills. Multiple-choice (MC) testing is common in introductory science courses, and students in these classes tend to associate memorization with MC questions and may not see the need to modify their study strategies for critical thinking, because the MC exam format has not changed. To test the effect of exam format, I used two sections of an introductory biology class. One section was assessed with exams in the traditional MC format, the other section was assessed with both MC and constructed-response (CR) questions. The mixed exam format was correlated with significantly more cognitively active study behaviors and a significantly better performance on the cumulative final exam (after accounting for grade point average and gender). There was also less gender-bias in the CR answers. This suggests that the MC-only exam format indeed hinders critical thinking in introductory science classes. Introducing CR questions encouraged students to learn more and to be better critical thinkers and reduced gender bias. However, student resistance increased as students adjusted their perceptions of their own critical-thinking abilities. PMID:22949426

  3. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (average B_z = 3.γ) than near midnight (average B_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  4. Coping Strategies Applied to Comprehend Multistep Arithmetic Word Problems by Students with Above-Average Numeracy Skills and Below-Average Reading Skills

    Science.gov (United States)

    Nortvedt, Guri A.

    2011-01-01

    This article discusses how 13-year-old students with above-average numeracy skills and below-average reading skills cope with comprehending word problems. Compared to other students who are proficient in numeracy and are skilled readers, these students are more disadvantaged when solving single-step and multistep arithmetic word problems. The…

  5. Simplifying consent for HIV testing is associated with an increase in HIV testing and case detection in highest risk groups, San Francisco January 2003-June 2007.

    Directory of Open Access Journals (Sweden)

    Nicola M Zetola

    2008-07-01

    Full Text Available Populations at highest risk for HIV infection face multiple barriers to HIV testing. To facilitate HIV testing procedures, the San Francisco General Hospital Medical Center eliminated required written patient consent for HIV testing in its medical settings in May 2006. To describe the change in HIV testing rates in different hospital settings and populations after the change in HIV testing policy, we performed an observational study using interrupted time series analysis. Data from all patients aged 18 years and older seen from January 2003 through June 2007 at the San Francisco Department of Public Health (SFDPH) medical care system were included in the analysis. The monthly HIV testing rate per 1000 patient-visits was calculated for the overall population and stratified by hospital setting, age, sex, race/ethnicity, homelessness status, insurance status and primary language. By June 2007, the average monthly rate of HIV tests per 1000 patient-visits increased by 4.38 (CI 2.17-6.60, p<0.001) over the number predicted if the policy change had not occurred (representing a 44% increase). The monthly average number of new positive HIV tests increased from 8.9 (CI 6.3-11.5) to 14.9 (CI 10.6-19.2, p<0.001), representing a 67% increase. Although increases in HIV testing were seen in all populations, populations at highest risk for HIV infection, particularly men, the homeless, and the uninsured, experienced the highest increases in monthly HIV testing rates after the policy change. The elimination of the requirement for written consent in May 2006 was associated with a significant and sustained increase in HIV testing rates and HIV case detection in the SFDPH medical center. Populations facing the highest barriers to HIV testing had the highest increases in HIV testing rates and case detection in response to the policy change.

  6. Less Physician Practice Competition Is Associated With Higher Prices Paid For Common Procedures.

    Science.gov (United States)

    Austin, Daniel R; Baker, Laurence C

    2015-10-01

    Concentration among physician groups has been steadily increasing, which may affect prices for physician services. We assessed the relationship in 2010 between physician competition and prices paid by private preferred provider organizations for fifteen common, high-cost procedures to understand whether higher concentration of physician practices and accompanying increased market power were associated with higher prices for services. Using county-level measures of the concentration of physician practices and county average prices, and statistically controlling for a range of other regional characteristics, we found that physician practice concentration and prices were significantly associated for twelve of the fifteen procedures we studied. For these procedures, counties with the highest average physician concentrations had prices 8-26 percent higher than prices in the lowest counties. We concluded that physician competition is frequently associated with prices. Policies that would influence physician practice organization should take this into consideration.

  7. TEST OF AN ANIMAL DRAWN FIELD IMPLEMENT CART

    Directory of Open Access Journals (Sweden)

    Paolo Spugnoli

    2008-06-01

    Full Text Available The field performance of a horse-drawn hitch cart equipped with a PTO system powered by the two cart ground wheels has been investigated. For this purpose, field tests on clay and turf soil, with varying ballast and PTO torque, were carried out with the cart pulled by a tractor. Preliminary tests were aimed at assessing the traction capability of the horse breed. These tests showed that the mean draught force given by two of these horses was 173 daN and the average working speed was about 1 m/s, so the mean draught power developed by each horse was about 0.86 kW. The PTO cart system tests showed that the torque did not exceed 2.4 daN·m, the maximum draught or PTO power was 1.15 kW, and the rotation speed was just above 400 min⁻¹, with a mean efficiency of about 50%. These values are consistent with horse performance and with the requirements of small haymaking, fertilizing, seeding and chemical application machines.

  8. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  9. Extended averaging phase-shift schemes for Fizeau interferometry on high-numerical-aperture spherical surfaces

    Science.gov (United States)

    Burke, Jan

    2010-08-01

    Phase-shifting Fizeau interferometry on spherical surfaces is impaired by phase-shift errors increasing with the numerical aperture, unless a custom optical set-up or wavelength shifting is used. This poses a problem especially for larger numerical apertures, and requires good error tolerance of the phase-shift method used; but it also constitutes a useful testing facility for phase-shift formulae, because a vast range of phase-shift intervals can be tested in a single measurement. In this paper I show how the "characteristic polynomials" method can be used to generate a phase-shifting method for the actual numerical aperture, and analyse residual cyclical phase errors by comparing a phase map from an interferogram with a few fringes to a phase map from a nulled fringe. Unrelated to the phase-shift miscalibration, third-harmonic error fringes are found. These can be dealt with by changing the nominal phase shift from 90°/step to 60°/step and re-tailoring the evaluation formula for third-harmonic rejection. The residual error has the same frequency as the phase-shift signal itself, and can be removed by averaging measurements. Some interesting features of the characteristic polynomials for the averaged formulae emerge, which also shed some light on the mechanism that generates cyclical phase errors.
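
    One standard way to "generate a phase-shifting method for the actual numerical aperture" is a least-squares evaluation that accepts whatever (known) phase increments the cavity actually produces. The sketch below illustrates that generic idea, not the paper's characteristic-polynomials construction; the function name and the demo numbers (including the 5% miscalibration) are invented.

```python
import numpy as np

def phase_from_frames(frames, shifts):
    """Least-squares phase retrieval from N frames with known phase shifts.
    Per-pixel model: I_k = a + b*cos(phi + delta_k). Solving the linear
    system in (a, b*cos(phi), -b*sin(phi)) tolerates arbitrary, uneven
    shifts, e.g. tailored to the shift actually produced at a given NA."""
    d = np.asarray(shifts)
    A = np.column_stack([np.ones_like(d), np.cos(d), np.sin(d)])
    I = np.stack([f.ravel() for f in frames])       # (N, n_pixels)
    coef, *_ = np.linalg.lstsq(A, I, rcond=None)    # solve A @ x = I per pixel
    a, c, s = coef
    return np.arctan2(-s, c).reshape(frames[0].shape)

# Demo: 6 frames at nominally 60 deg/step with a known 5% miscalibration.
phi = np.linspace(0, 4 * np.pi, 128).reshape(1, -1) * np.ones((128, 1))
shifts = 1.05 * np.arange(6) * np.pi / 3
frames = [5 + 2 * np.cos(phi + d) for d in shifts]
err = np.angle(np.exp(1j * (phase_from_frames(frames, shifts) - phi)))
print(np.abs(err).max())   # near machine precision when true shifts are supplied
```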

  10. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).

  11. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  12. The ugliness-in-averageness effect: Tempering the warm glow of familiarity.

    Science.gov (United States)

    Carr, Evan W; Huber, David E; Pecher, Diane; Zeelenberg, Rene; Halberstadt, Jamin; Winkielman, Piotr

    2017-06-01

    Mere exposure (i.e., stimulus repetition) and blending (i.e., stimulus averaging) are classic ways to increase social preferences, including facial attractiveness. In both effects, increases in preference involve enhanced familiarity. Prominent memory theories assume that familiarity depends on a match between the target and similar items in memory. These theories predict that when individual items are weakly learned, their blends (morphs) should be relatively familiar, and thus liked: a beauty-in-averageness effect (BiA). However, when individual items are strongly learned, they are also more distinguishable. This "differentiation" hypothesis predicts that with strongly encoded items, familiarity (and thus, preference) for the blend will be relatively lower than individual items: an ugliness-in-averageness effect (UiA). We tested this novel theoretical prediction in 5 experiments. Experiment 1 showed that with weak learning, facial morphs were more attractive than contributing individuals (BiA effect). Experiments 2A and 2B demonstrated that when participants first strongly learned a subset of individual faces (either in a face-name memory task or perceptual-tracking task), morphs of trained individuals were less attractive than the trained individuals (UiA effect). Experiment 3 showed that changes in familiarity for the trained morph (rather than interstimulus conflict) drove the UiA effect. Using a within-subjects design, Experiment 4 mapped out the transition from BiA to UiA solely as a function of memory training. Finally, computational modeling using a well-known memory framework (REM) illustrated the familiarity transition observed in Experiment 4. Overall, these results highlight how memory processes illuminate classic and modern social preference phenomena. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
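
    The fitting problem described here can be sketched as generalized least squares with a known covariance matrix. The snippet below is a hedged illustration in the spirit of WLS-ICE, not the authors' published software; the helper gls_fit, the exponential error-correlation model, and the example data are assumptions for demonstration:

```python
import numpy as np

def gls_fit(t, y, cov, basis):
    """Least-squares fit with correlated errors, weighting by the
    inverse covariance matrix of the averaged data.

    t     : sample times
    y     : time-dependent ensemble average at each time
    cov   : estimated covariance matrix of y (correlations included)
    basis : callable t -> list of basis-function values, e.g. lambda t: [t]
    """
    X = np.array([basis(ti) for ti in t], dtype=float)
    W = np.linalg.inv(cov)                     # weight = inverse covariance
    A = X.T @ W @ X
    params = np.linalg.solve(A, X.T @ W @ y)   # generalized least squares
    return params, np.linalg.inv(A)            # estimates and their covariance

# Fit <x^2(t)> = 2*D*t to synthetic data with correlated errors.
t = np.arange(1.0, 11.0)
cov = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]))   # assumed correlations
rng = np.random.default_rng(1)
y = 2 * 0.5 * t + rng.multivariate_normal(np.zeros(t.size), cov)
(slope,), _ = gls_fit(t, y, cov, lambda ti: [ti])        # D ~ slope / 2
```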

  14. Experimental study on the natural gas dual fuel engine test and the higher the mixture ratio of hydrogen to natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Kim, B.S.; Lee, Y.S.; Park, C.K. [Cheonnam University, Kwangju (Korea); Masahiro, S. [Kyoto University, Kyoto (Japan)

    1999-05-28

    One of the unsolved problems of the natural gas dual fuel engine is the excessive exhaust of total hydrocarbons (THC) at a low equivalence ratio. To address this, natural gas mixed with hydrogen was used in engine tests. The results showed that the higher the mixture ratio of hydrogen to natural gas, the higher the combustion efficiency. The combustion efficiency was also promoted when the amount of intake air reached 90% of WOT. However, as when the injection timing is advanced, the equivalence ratio at the knocking limit decreases and the production of NOx increases. 5 refs., 9 figs., 1 tab.

  15. Design of a randomized trial of diabetes genetic risk testing to motivate behavior change: the Genetic Counseling/lifestyle Change (GC/LC) Study for Diabetes Prevention.

    Science.gov (United States)

    Grant, Richard W; Meigs, James B; Florez, Jose C; Park, Elyse R; Green, Robert C; Waxler, Jessica L; Delahanty, Linda M; O'Brien, Kelsey E

    2011-10-01

    The efficacy of diabetes genetic risk testing to motivate behavior change for diabetes prevention is currently unknown. This paper presents key issues in the design and implementation of one of the first randomized trials (The Genetic Counseling/Lifestyle Change (GC/LC) Study for Diabetes Prevention) to test whether knowledge of diabetes genetic risk can motivate patients to adopt healthier behaviors. Because individuals may react differently to receiving 'higher' vs 'lower' genetic risk results, we designed a 3-arm parallel group study to separately test the hypotheses that: (1) patients receiving 'higher' diabetes genetic risk results will increase healthy behaviors compared to untested controls, and (2) patients receiving 'lower' diabetes genetic risk results will decrease healthy behaviors compared to untested controls. In this paper we describe several challenges to implementing this study, including: (1) the application of a novel diabetes risk score derived from genetic epidemiology studies to a clinical population, (2) the use of the principle of Mendelian randomization to efficiently exclude 'average' diabetes genetic risk patients from the intervention, and (3) the development of a diabetes genetic risk counseling intervention that maintained the ethical need to motivate behavior change in both 'higher' and 'lower' diabetes genetic risk result recipients. Diabetes genetic risk scores were developed by aggregating the results of 36 diabetes-associated single nucleotide polymorphisms. Relative risk for type 2 diabetes was calculated using Framingham Offspring Study outcomes, grouped by quartiles into 'higher', 'average' (middle two quartiles) and 'lower' genetic risk. From these relative risks, revised absolute risks were estimated using the overall absolute risk for the study group. For study efficiency, we excluded all patients receiving 'average' diabetes risk results from the subsequent intervention. This post-randomization allocation strategy was
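
    The scoring-and-grouping step described in this record can be sketched as follows. The snippet is a hypothetical illustration of aggregating per-SNP risk-allele counts and splitting subjects into 'lower', 'average', and 'higher' quartile groups; the function name, the unweighted default, and the quartile boundary conventions are assumptions, not the study's published score:

```python
import numpy as np

def genetic_risk_groups(risk_allele_counts, weights=None):
    """Aggregate a per-subject genetic risk score and split subjects into
    'lower' (bottom quartile), 'average' (middle two quartiles), and
    'higher' (top quartile) groups.

    risk_allele_counts : array (n_subjects, n_snps) of 0/1/2 counts
    weights            : optional per-SNP weights (e.g. log odds ratios)
    """
    counts = np.asarray(risk_allele_counts, dtype=float)
    w = np.ones(counts.shape[1]) if weights is None else np.asarray(weights)
    score = counts @ w
    q1, q3 = np.quantile(score, [0.25, 0.75])
    labels = np.where(score < q1, "lower",
                      np.where(score >= q3, "higher", "average"))
    return score, labels

# Example with random genotypes for 8 subjects and 36 SNPs.
rng = np.random.default_rng(2)
score, labels = genetic_risk_groups(rng.integers(0, 3, size=(8, 36)))
```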

  16. Paternity tests in Mexico: Results obtained in 3005 cases.

    Science.gov (United States)

    García-Aceves, M E; Romero Rentería, O; Díaz-Navarro, X X; Rangel-Villalobos, H

    2018-04-01

    National and international reports on paternity testing activity scarcely include information from Mexico and other Latin American countries. We therefore report results from the analysis of 3005 paternity cases examined over a period of five years in a Mexican paternity testing laboratory. Motherless tests were the most frequent (77.27%), followed by trio cases (20.70%); the remaining 2.04% comprised different cases of kinship reconstruction. The paternity exclusion rate was 29.58%, higher than, but within the range of, the rate reported by the American Association of Blood Banks (average 24.12%). We detected 65 mutations, most of them one-step (93.8%); the remainder were two-step mutations (6.2%). We were thus able to estimate the paternal mutation rate for 17 different STR loci: 0.0018 (95% CI 0.0005-0.0047). Five triallelic patterns and 12 suspected null alleles were detected during this period; however, re-amplification of these samples with a different Human Identification (HID) kit confirmed the homozygous genotypes, which suggests that most of these exclusions actually are one-step mutations. HID kits with ≥20 STRs detected more exclusions, diminishing the rate of inconclusive results with isolated exclusions. The Powerplex 21 kit (20 STRs) and Powerplex Fusion kit (22 STRs) offered similar PI (p = 0.379) and average number of exclusions (PE) (p = 0.339) when a daughter was involved in motherless tests. In brief, besides reporting forensic parameters from paternity tests in Mexico, the results describe improvements in solving motherless paternity tests using HID kits with ≥20 STRs instead of one including 15 STRs. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  17. A group's physical attractiveness is greater than the average attractiveness of its members : The group attractiveness effect

    NARCIS (Netherlands)

    van Osch, Y.M.J.; Blanken, Irene; Meijs, Maartje H. J.; van Wolferen, Job

    2015-01-01

    We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of

  18. Test-retest reproducibility of accommodative facility measures in primary school children.

    Science.gov (United States)

    Adler, Paul; Scally, Andrew J; Barrett, Brendan T

    2018-05-08

    To determine the test-retest reproducibility of accommodative facility (AF) measures in an unselected sample of UK primary school children. Using ±2.00 DS flippers and a viewing distance of 40 cm, AF was measured in 136 children (range 4-12 years, average 8.1 ± 2.1) by five testers on three occasions (average interval between successive tests: eight days, range 1-21 days). On each occasion, AF was measured monocularly and binocularly, for two minutes. Full datasets were obtained in 111 children (81.6 per cent). Intra-individual variation in AF was large (standard deviation [SD] = 3.8 cycles per minute [cpm]) and there was variation due to the identity of the tester (SD = 1.6 cpm). On average, AF was greater: (i) in monocular compared to binocular testing (by 1.4 cpm, p < […]); (ii) […]; (iii) in children younger than 10 years (AF was […] cpm lower than in children ≥ 10 years old, p = 0.009); and (iv) on subsequent testing occasions (for example, visit-2 AF was 2.0 cpm higher than visit-1 AF, p < […]). Initially, […] per cent of children passed (≥ […] cpm monocularly and ≥ 8 cpm binocularly), but this rose to 83.8 per cent after the third test. Using less stringent pass criteria (≥ 6 cpm monocularly and ≥ 3 cpm binocularly), the equivalent figures were 82.9 and 96.4 per cent, respectively. Reduced AF did not co-exist with abnormal near point of accommodation or reduced visual acuity. The results reveal considerable intra-individual variability in raw AF measures in children. When the results are considered as pass/fail, children who initially exhibit normal AF continued to do so on repeat testing. Conversely, the vast majority of children with initially reduced AF exhibit normal performance on repeat testing. Using established pass/fail criteria, the prevalence of persistently reduced AF in this sample is 3.6 per cent. © 2018 Optometry Australia.

  19. Improved PFB operations - 400-hour turbine test results

    Science.gov (United States)

    Rollbuhler, R. J.; Benford, S. M.; Zellars, G. R.

    1980-04-01

    The paper deals with a 400-hr small turbine test in the effluent of a pressurized fluidized bed (PFB) at an average temperature of 770 C, an average relative gas velocity of 300 m/sec, and average solid loadings of 200 ppm. Consideration is given to combustion parameters and operating procedure as well as to the turbine system and turbine test operating procedures. Emphasis is placed on erosion/corrosion results.

  20. Raising Test Scores vs. Teaching Higher Order Thinking (HOT): Senior Science Teachers' Views on How Several Concurrent Policies Affect Classroom Practices

    Science.gov (United States)

    Zohar, Anat; Alboher Agmon, Vered

    2018-01-01

    Purpose: This study investigates how senior science teachers viewed the effects of a Raising Test Scores policy and its implementation on instruction of higher order thinking (HOT), and on teaching thinking to students with low academic achievements. Background: The study was conducted in the context of three concurrent policies advocating: (a)…

  1. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
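
    The normalization step described in this record amounts to a per-pixel Beer-Lambert calculation. The sketch below assumes flat-field-corrected transmission images and omits the beam-hardening and geometric corrections the authors applied; the function name and the clipping convention are illustrative:

```python
import numpy as np

def relative_saturation(img_wet, img_dry, img_sat):
    """Per-pixel relative saturation from neutron transmission images.

    Beer-Lambert: I = I0 * exp(-mu_w * x_w), so the water path length is
    proportional to -ln(I / I_dry); dividing by the value for the fully
    saturated column gives a relative saturation in [0, 1].
    """
    thickness = -np.log(img_wet / img_dry)       # proportional to water path
    thickness_sat = -np.log(img_sat / img_dry)   # same pixel at saturation
    return np.clip(thickness / thickness_sat, 0.0, 1.0)
```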

  2. Measurement of average radon gas concentration at workplaces

    International Nuclear Information System (INIS)

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

    In this paper, results of measurements of average radon gas concentration at workplaces (schools, kindergartens, and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailing pond). Consequently, at workplaces where considerable seasonal changes of radon concentration are expected, measurements should last 12 months. If that is not possible, the chosen six-month period should contain summer and winter months as well. The average radon concentration during working hours can differ considerably from the average over the whole time in cases of frequent opening of doors and windows or use of artificial ventilation. (authors)

  3. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  4. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  5. The role of chronotype, gender, test anxiety, and conscientiousness in academic achievement of high school students.

    Science.gov (United States)

    Rahafar, Arash; Maghsudloo, Mahdis; Farhangnia, Sajedeh; Vollmer, Christian; Randler, Christoph

    2016-01-01

    Previous findings have demonstrated that chronotype (morningness/intermediate/eveningness) is correlated with cognitive functions, that is, people show higher mental performance when they do a test at their preferred time of day. Empirical studies found a relationship between morningness and higher learning achievement at school and university. However, only a few of them controlled for other moderating and mediating variables. In this study, we included chronotype, gender, conscientiousness and test anxiety in a structural equation model (SEM) with grade point average (GPA) as academic achievement outcome. Participants were 158 high school students and results revealed that boys and girls differed in GPA and test anxiety significantly, with girls reporting better grades and higher test anxiety. Moreover, there was a positive correlation between conscientiousness and GPA (r = 0.17) and morningness (r = 0.29), respectively, and a negative correlation between conscientiousness and test anxiety (r = -0.22). The SEM demonstrated that gender was the strongest predictor of academic achievement. Lower test anxiety predicted higher GPA in girls but not in boys. Additionally, chronotype as moderator revealed a significant association between gender and GPA for evening types and intermediate types, while intermediate types showed a significant relationship between test anxiety and GPA. Our results suggest that gender is an essential predictor of academic achievement even stronger than low or absent test anxiety. Future studies are needed to explore how gender and chronotype act together in a longitudinal panel design and how chronotype is mediated by conscientiousness in the prediction of academic achievement.

  6. The effects of undergraduate nursing student-faculty interaction outside the classroom on college grade point average.

    Science.gov (United States)

    Al-Hussami, Mahmoud; Saleh, Mohammad Y N; Hayajneh, Ferial; Abdalkader, Raghed Hussein; Mahadeen, Alia I

    2011-09-01

    The effects of student-faculty interactions in higher education have received considerable empirical attention. However, no empirical study has examined the relation between student-faculty interaction and college grade point average. This study aimed to identify the effect of nursing student-faculty interaction outside the classroom on students' semester college grade point average at a public university in Jordan. The research was a cross-sectional study of the effect of student-faculty interaction outside the classroom on the semester college grade point average of participating juniors and seniors. Total interaction was highly significant (t = 16.2, df = 271, P ≤ 0.001) in relation to students' academic scores, comparing students with academic scores ≥70 against those with scores <70. However, gender differences between students and other variables were significant neither for students' academic scores nor for students' interaction. This study provides some evidence that student-faculty interactions outside classrooms are significantly associated with students' academic achievement. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Conceptualizing and Assessing Higher-Order Thinking in Reading

    Science.gov (United States)

    Afflerbach, Peter; Cho, Byeong-Young; Kim, Jong-Yun

    2015-01-01

    Students engage in higher-order thinking as they read complex texts and perform complex reading-related tasks. However, the most consequential assessments, high-stakes tests, are currently limited in providing information about students' higher-order thinking. In this article, we describe higher-order thinking in relation to reading. We provide a…

  8. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their height-to-radius (H/R) ratio, as well as of the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6×10^-2 m3 as the central hole diameter of the ribs is changed. It has been shown that increasing the H/R ratio of tanks with smooth inner walls up to the limiting values significantly increases the average tank productivity and reduces the filling time. Increasing the H/R ratio of a 1.0 m3 tank to the limiting values (in comparison with the standard tank having H/R equal to 3.49) raises tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that the maximum average productivity and minimum filling time are reached for the 6×10^-2 m3 tank when the central hole diameter of the horizontal ribs is 6.4×10^-2 m.

  9. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and afterwards the average is performed. This scheme is more economical, from the standpoint of time and algebraic calculations, than the usual procedure of Bogolyubov's method. (Author)

  10. Influence of coma aberration on aperture averaged scintillations in oceanic turbulence

    Science.gov (United States)

    Luo, Yujuan; Ji, Xiaoling; Yu, Hong

    2018-01-01

    The influence of coma aberration on aperture averaged scintillations in oceanic turbulence is studied in detail by using the numerical simulation method. In general, in weak oceanic turbulence, the aperture averaged scintillation can be effectively suppressed by means of the coma aberration, and the aperture averaged scintillation decreases as the coma aberration coefficient increases. However, in moderate and strong oceanic turbulence the influence of coma aberration on aperture averaged scintillations can be ignored. In addition, the aperture averaged scintillation dominated by salinity-induced turbulence is larger than that dominated by temperature-induced turbulence. In particular, it is shown that for coma-aberrated Gaussian beams, the behavior of aperture averaged scintillation index is quite different from the behavior of point scintillation index, and the aperture averaged scintillation index is more suitable for characterizing scintillations in practice.

  11. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
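
    The final integration step implied by this record, recovering a free energy profile from the average force along the selected coordinate, can be sketched as follows (a minimal trapezoidal-integration illustration; the grid, inputs, and zero-point convention are assumptions, not the authors' implementation):

```python
import numpy as np

def free_energy_profile(xi, mean_force):
    """Free energy along a coordinate from the average force:
    dF/dxi = -<f_xi>, so F(xi) = -integral of <f_xi> d(xi).

    xi         : grid of coordinate values
    mean_force : average instantaneous force at each grid point
    """
    # Cumulative trapezoidal integration of -<f> along the grid.
    segments = 0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi)
    F = -np.concatenate(([0.0], np.cumsum(segments)))
    return F - F.min()   # shift so the minimum of the profile is zero
```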

  12. A Front End for Multipetawatt Lasers Based on a High-Energy, High-Average-Power Optical Parametric Chirped-Pulse Amplifier

    International Nuclear Information System (INIS)

    Bagnoud, V.

    2004-01-01

    We report on a high-energy, high-average-power optical parametric chirped-pulse amplifier developed as the front end for the OMEGA EP laser. The amplifier provides a gain larger than 10^9 in two stages, leading to a total energy of 400 mJ with a pump-to-signal conversion efficiency higher than 25%.

  13. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  14. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  15. The association between estimated average glucose levels and fasting plasma glucose levels in a rural tertiary care centre

    Directory of Open Access Journals (Sweden)

    Raja Reddy P

    2013-01-01

    Full Text Available The level of hemoglobin A1c (HbA1c), also known as glycated hemoglobin, determines how well a patient's blood glucose level has been controlled over the previous 8-12 weeks. HbA1c levels help patients and doctors understand whether a particular diabetes treatment is working and whether adjustments need to be made to the treatment. Because the HbA1c level is a marker of blood glucose for the previous 60-90 days, average blood glucose levels can be estimated using HbA1c levels. The aim of the present study was to investigate the relationship between estimated average glucose levels, as calculated by HbA1c levels, and fasting plasma glucose levels. Methods: Type 2 diabetes patients attending the medicine outpatient department of RL Jalappa hospital, Kolar between March 2010 and July 2012 were included. The estimated glucose levels (mg/dl) were calculated using the following formula: 28.7 x HbA1c - 46.7. Glucose levels were determined using the hexokinase method. HbA1c levels were determined using an HPLC method. Correlation and the independent t-test were the tests of significance for quantitative data. Results: A strong positive correlation between fasting plasma glucose level and estimated average blood glucose levels (r=0.54, p=0.0001) was observed. The difference was statistically significant. Conclusion: Reporting the estimated average glucose level together with the HbA1c level is believed to assist patients and doctors determine the effectiveness of blood glucose control measures.
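
    The conversion quoted in this record is a one-line linear formula, sketched below (the function name and example value are illustrative; the coefficients are those given in the abstract):

```python
def estimated_average_glucose(hba1c_percent):
    """Estimated average glucose (mg/dl) from HbA1c (%), using the linear
    formula quoted in the abstract: eAG = 28.7 x HbA1c - 46.7."""
    return 28.7 * hba1c_percent - 46.7

print(estimated_average_glucose(7.0))   # 154.2 mg/dl
```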

  16. Assessing Higher Education Learning Outcomes in Brazil

    Science.gov (United States)

    Pedrosa, Renato H. L.; Amaral, Eliana; Knobel, Marcelo

    2013-01-01

    Brazil has developed an encompassing system for quality assessment of higher education, the National System of Higher Education Evaluation (SINAES), which includes a test for assessing learning outcomes at the undergraduate level, the National Exam of Student Performance (ENADE). The present system has been running since 2004, and also serves as…

  17. Values of average daily gain of swine posted to commercial hybrids on pork in youth phase depending on the type

    Directory of Open Access Journals (Sweden)

    Diana Marin

    2013-10-01

    Full Text Available Values of average daily weight gain are calculated as the ratio of total weight gain to the total number of feeding days. In the case of the four intensively farmed commercial hybrids, the tests applied showed no statistically significant differences in average daily gain among the hybrids, but the lowest values of this index were recorded in hybrid B (with Large White as the terminal boar).
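
    The index used in this record is simple arithmetic, total gain over total feeding days; a minimal sketch with hypothetical numbers:

```python
def average_daily_gain(total_gain_kg, feeding_days):
    """Average daily gain: total weight gain divided by the total
    number of feeding days, as defined in the abstract."""
    return total_gain_kg / feeding_days

print(average_daily_gain(80.0, 100))   # 0.8 kg/day (hypothetical values)
```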

  18. Human Capital Theory and Internal Migration: Do Average Outcomes Distort Our View of Migrant Motives?

    Science.gov (United States)

    Korpi, Martin; Clark, William A W

    2017-05-01

    By modelling the distribution of percentage income gains for movers in Sweden, using multinomial logistic regression, this paper shows that those receiving large pecuniary returns from migration are primarily those moving to the larger metropolitan areas and those with higher education, and that there is much more variability in income gains than what is often assumed in models of average gains to migration. This suggests that human capital models of internal migration often overemphasize the job and income motive for moving, and fail to explore where and when human capital motivated migration occurs.

  19. State Spending on Higher Education Capital Outlays

    Science.gov (United States)

    Delaney, Jennifer A.; Doyle, William R.

    2014-01-01

    This paper explores the role that state spending on higher education capital outlays plays in state budgets by considering the functional form of the relationship between state spending on higher education capital outlays and four types of state expenditures. Three possible functional forms are tested: a linear model, a quadratic model, and the…

  20. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  1. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  2. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...

  3. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With the increase of the final hadron state energy, the leading contribution to the average multiplicity comes from a parton subprocess due to the production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e- annihilation at high energies tends to unity.

  4. Typology of end-of-life priorities in Saudi females: averaging analysis and Q-methodology

    Directory of Open Access Journals (Sweden)

    Hammami MM

    2016-05-01

    Full Text Available Background: Understanding culture- and sex-related end-of-life preferences is essential to provide quality end-of-life care. We have previously explored end-of-life choices in Saudi males and found important culture-related differences and that Q-methodology is useful in identifying intraculture, opinion-based groups. Here, we explore Saudi females’ end-of-life choices. Methods: A volunteer sample of 68 females rank-ordered 47 opinion statements on end-of-life issues into a nine-category symmetrical distribution. The ranking scores of the statements were analyzed by averaging analysis and Q-methodology. Results: The mean age of the females in the sample was 30.3 years (range, 19–55 years. Among them, 51% reported average religiosity, 78% reported very good health, 79% reported very good life quality, and 100% reported high-school education or more. The extreme five overall priorities were to be able to say the statement of faith, be at peace with God, die without having the body exposed, maintain dignity, and resolve all conflicts. The extreme five overall dis-priorities were to die in the hospital, die well dressed, be informed about impending death by family/friends rather than doctor, die at peak of life, and not know if one has a fatal illness. Q-methodology identified five opinion-based groups with qualitatively different characteristics: “physical and emotional privacy concerned, family caring” (younger, lower religiosity), “whole person” (higher religiosity), “pain and informational privacy concerned” (lower life quality), “decisional privacy concerned” (older, higher life quality), and “life quantity concerned, family dependent” (high life quality, low life satisfaction). Out of the

  5. 12 CFR 702.105 - Weighted-average life of investments.

    Science.gov (United States)

    2010-01-01

    ... investment funds. (1) For investments in registered investment companies (e.g., mutual funds) and collective investment funds, the weighted-average life is defined as the maximum weighted-average life disclosed, directly or indirectly, in the prospectus or trust instrument; (2) For investments in money market funds...
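
    For investments with scheduled principal repayments, the weighted-average life is the principal-weighted mean repayment time. The sketch below is a generic WAL calculation with hypothetical cash flows; the regulation itself defers to prospectus or trust-instrument disclosures for fund investments:

```python
def weighted_average_life(times_years, principal_payments):
    """Weighted-average life: each repayment time weighted by the share
    of total principal repaid at that time."""
    total = sum(principal_payments)
    return sum(t * p for t, p in zip(times_years, principal_payments)) / total

# A hypothetical amortizing investment repaying equal principal over 4 years.
print(weighted_average_life([1, 2, 3, 4], [25, 25, 25, 25]))   # 2.5 years
```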

  6. System for evaluation of the true average input-pulse rate

    International Nuclear Information System (INIS)

    Eichenlaub, D.P.; Garrett, P.

    1977-01-01

    The description is given of a digital radiation monitoring system making use of current digital circuits and a microprocessor for rapidly processing the pulse data coming from remote radiation controllers. This system analyses the pulse rates in order to determine whether a new datum is statistically the same as that previously received, and hence determines the best possible averaging time for itself. So long as the true average pulse rate stays constant, the time used to establish an average can increase until the statistical error is under the desired level, i.e. 1%. When the digital processing of the pulse data indicates a change in the true average pulse rate, the averaging time can be reduced so as to improve the response time of the system at the expense of statistical error. This concept includes a fixed compromise between the statistical error and the response time [fr]

  7. Surveys of radon levels in homes in the United States: A test of the linear-no-threshold dose-response relationship for radiation carcinogenesis

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1987-01-01

    The University of Pittsburgh Radon Project for large-scale measurements of radon concentrations in homes is described. Its principal research goal is to test the linear no-threshold dose-response relationship for radiation carcinogenesis by determining average radon levels in the 25 U.S. counties (within certain population ranges) with the highest and lowest lung cancer rates. The theory predicts that the former should have about 3 times higher average radon levels than the latter, under the assumption that any correlation between exposure to radon and exposure to other causes of lung cancer is weak. The validity of this assumption is tested with data on average radon level vs replies to items on questionnaires; there is little correlation between radon levels in houses and smoking habits, educational attainment, or economic status of the occupants, or with urban vs rural environs, which is an indicator of exposure to air pollution.

  8. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.

  9. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to perform sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
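
    The headline figure in this record is easy to evaluate; the snippet below simply computes 620160/8! and compares it with the information-theoretic lower bound log2(8!):

```python
from math import factorial, log2

# Minimum average depth for sorting 8 elements, per the result above.
print(620160 / factorial(8))   # 15.3809..., comparisons per permutation
print(log2(factorial(8)))      # 15.2992..., information-theoretic lower bound
```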

  10. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    International Nuclear Information System (INIS)

    Craft, David; Papp, Dávid; Unkelbach, Jan

    2014-01-01

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step
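
    The core idea, averaging leaf trajectories so that fluence maps average exactly, can be sketched as a convex combination of synchronized leaf-position arrays. The snippet below is a hedged illustration, not the authors' implementation; the array layout and the assumption that trajectories are already sampled at common control points are illustrative:

```python
import numpy as np

def average_sliding_window_plans(leaf_trajs, weights):
    """Convex combination of sliding-window leaf trajectories; averaging
    the trajectories yields a plan whose fluence map is the matching
    average of the plans' fluence maps.

    leaf_trajs : array (n_plans, n_leaves, n_control_points) of leaf
                 positions, assumed sampled at common control points
    weights    : navigation weights summing to 1
    """
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0), "weights must form a convex combination"
    return np.tensordot(w, np.asarray(leaf_trajs, dtype=float), axes=1)

# Equal-weight average of three plans: 120 leaves, 100 control points each.
rng = np.random.default_rng(3)
plans = np.cumsum(rng.random((3, 120, 100)), axis=2)   # monotone trajectories
avg_plan = average_sliding_window_plans(plans, [1/3, 1/3, 1/3])
```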

  11. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT.

    Science.gov (United States)

    Craft, David; Papp, Dávid; Unkelbach, Jan

    2014-02-01

    To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  12. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Craft, David, E-mail: dcraft@partners.org; Papp, Dávid; Unkelbach, Jan [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2014-02-15

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  13. A Rank Test on Equality of Population Medians

    OpenAIRE

    Pooi Ah Hin

    2012-01-01

    The Kruskal-Wallis test is a non-parametric test for the equality of K population medians. The test statistic involved is a measure of the overall closeness of the K average ranks in the individual samples to the average rank in the combined sample. The resulting acceptance region of the test, however, may not be the smallest region with the required acceptance probability under the null hypothesis. Presently an alternative acceptance region is constructed such that it has the smallest size, ap...
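
    For the standard Kruskal-Wallis test that this record takes as its starting point, a minimal usage sketch (the data are hypothetical; scipy's stats.kruskal implements the classical test, not the alternative acceptance region proposed here):

```python
from scipy import stats

# Classical Kruskal-Wallis test on three hypothetical samples.
group_a = [2.9, 3.0, 2.5, 2.6, 3.2]
group_b = [3.8, 2.7, 4.0, 2.4]
group_c = [2.8, 3.4, 3.7, 2.2, 2.0]
H, p = stats.kruskal(group_a, group_b, group_c)
print(H, p)   # reject equality of medians only if p is small
```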

  14. Personal characteristics of students entering higher medical school

    Directory of Open Access Journals (Sweden)

    Akimova O.V.

    2014-06-01

    Full Text Available The article presents the structure of personal features of students who have decided to devote their lives to the medical profession, and their personal readiness for the profession of doctor. The object of the research was 241 students applying to enter Saratov Medical University in 2013. The research methods included psychological tests on self-assessment of mental state, capacity for empathy, and motivational orientation. Result: It was revealed that the majority of respondents showed a low level of anxiety, a low level of frustration, an average level of aggression, an average level of rigidity, and also high scores on an empathy scale. The personality types in relation to work are emotive and intuitive. The prevalence of the motive of achieving success or the motive of avoiding failure depends directly on the specifics of the situation. Conclusion: Students possess the qualities necessary for doctors in their professional activity, namely high resistance to stress, absence of fear of difficulties, a low level of rigidity, a high level of empathy, and an average level of aggression. Students are motivated towards success in situations where they are fully confident.

  15. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)

  16. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  17. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  18. High-average-power diode-pumped Yb: YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scalable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High-beam-quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.

  19. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...

  20. Typology of end-of-life priorities in Saudi females: averaging analysis and Q-methodology

    Science.gov (United States)

    Hammami, Muhammad M; Hammami, Safa; Amer, Hala A; Khodr, Nesrine A

    2016-01-01

    Background Understanding culture- and sex-related end-of-life preferences is essential to provide quality end-of-life care. We have previously explored end-of-life choices in Saudi males and found important culture-related differences and that Q-methodology is useful in identifying intraculture, opinion-based groups. Here, we explore Saudi females’ end-of-life choices. Methods A volunteer sample of 68 females rank-ordered 47 opinion statements on end-of-life issues into a nine-category symmetrical distribution. The ranking scores of the statements were analyzed by averaging analysis and Q-methodology. Results The mean age of the females in the sample was 30.3 years (range, 19–55 years). Among them, 51% reported average religiosity, 78% reported very good health, 79% reported very good life quality, and 100% reported high-school education or more. The extreme five overall priorities were to be able to say the statement of faith, be at peace with God, die without having the body exposed, maintain dignity, and resolve all conflicts. The extreme five overall dis-priorities were to die in the hospital, die well dressed, be informed about impending death by family/friends rather than doctor, die at peak of life, and not know if one has a fatal illness. Q-methodology identified five opinion-based groups with qualitatively different characteristics: “physical and emotional privacy concerned, family caring” (younger, lower religiosity), “whole person” (higher religiosity), “pain and informational privacy concerned” (lower life quality), “decisional privacy concerned” (older, higher life quality), and “life quantity concerned, family dependent” (high life quality, low life satisfaction). Out of the extreme 14 priorities/dis-priorities for each group, 21%–50% were not represented among the extreme 20 priorities/dis-priorities for the entire sample. Conclusion Consistent with the previously reported findings in Saudi males, transcendence and dying in

  1. Typology of end-of-life priorities in Saudi females: averaging analysis and Q-methodology.

    Science.gov (United States)

    Hammami, Muhammad M; Hammami, Safa; Amer, Hala A; Khodr, Nesrine A

    2016-01-01

    Understanding culture- and sex-related end-of-life preferences is essential to provide quality end-of-life care. We have previously explored end-of-life choices in Saudi males and found important culture-related differences and that Q-methodology is useful in identifying intraculture, opinion-based groups. Here, we explore Saudi females' end-of-life choices. A volunteer sample of 68 females rank-ordered 47 opinion statements on end-of-life issues into a nine-category symmetrical distribution. The ranking scores of the statements were analyzed by averaging analysis and Q-methodology. The mean age of the females in the sample was 30.3 years (range, 19-55 years). Among them, 51% reported average religiosity, 78% reported very good health, 79% reported very good life quality, and 100% reported high-school education or more. The extreme five overall priorities were to be able to say the statement of faith, be at peace with God, die without having the body exposed, maintain dignity, and resolve all conflicts. The extreme five overall dis-priorities were to die in the hospital, die well dressed, be informed about impending death by family/friends rather than doctor, die at peak of life, and not know if one has a fatal illness. Q-methodology identified five opinion-based groups with qualitatively different characteristics: "physical and emotional privacy concerned, family caring" (younger, lower religiosity), "whole person" (higher religiosity), "pain and informational privacy concerned" (lower life quality), "decisional privacy concerned" (older, higher life quality), and "life quantity concerned, family dependent" (high life quality, low life satisfaction). Out of the extreme 14 priorities/dis-priorities for each group, 21%-50% were not represented among the extreme 20 priorities/dis-priorities for the entire sample. Consistent with the previously reported findings in Saudi males, transcendence and dying in the hospital were the extreme end-of-life priority and dis

  2. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  3. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs is graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
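    The simplest member of such a family is the ARMA(1) recursion y ← ψ S y + φ x on a graph shift operator S, which converges to the rational response φ/(1 − ψλ) at each graph frequency λ whenever |ψ|·ρ(S) < 1. A minimal sketch on a random graph, with coefficients chosen only for illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 30
        A = (rng.random((n, n)) < 0.15).astype(float)
        A = np.triu(A, 1); A = A + A.T                  # undirected random graph
        L = np.diag(A.sum(1)) - A                       # combinatorial Laplacian
        S = L / np.linalg.eigvalsh(L).max()             # shift operator scaled into [0, 1]

        psi, phi = 0.5, 1.0                             # illustrative ARMA(1) coefficients
        x = rng.standard_normal(n)                      # input graph signal

        y = np.zeros(n)
        for _ in range(100):                            # distributed recursion: only local exchanges
            y = psi * (S @ y) + phi * x

        # Steady state matches the rational response phi / (1 - psi * lam) per eigenmode.
        lam, U = np.linalg.eigh(S)
        y_exact = U @ ((phi / (1.0 - psi * lam)) * (U.T @ x))
        print("max deviation from rational response:", np.abs(y - y_exact).max())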

  4. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handle the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are discussed.

  5. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
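    The contrast between the measures can be made concrete: period life expectancy integrates this year's survival curve, while CAL sums, over age x, the survivorship each cohort now aged x actually experienced. A toy sketch under steadily improving synthetic mortality (all rates invented; ACLE, which needs future cohort mortality, is omitted):

        import numpy as np

        ages = np.arange(101)

        def mu(age, year):
            # Synthetic Gompertz hazard improving 1% per calendar year.
            return 0.0002 * np.exp(0.09 * age) * 0.99 ** (year - 1900)

        def period_e0(year):
            lx = np.exp(-np.cumsum(mu(ages, year)))       # period survival curve
            return lx.sum()

        def cal(year):
            # Survivorship actually lived through by the cohort aged x in `year`.
            lcx = [np.exp(-sum(mu(a, year - x + a) for a in range(x))) for x in ages]
            return float(np.sum(lcx))

        print("period e0(2000):", round(period_e0(2000), 1))
        print("CAL(2000):     ", round(cal(2000), 1))

    Because the cohorts alive in 2000 passed through the higher mortality of earlier decades, CAL comes out below the period value, illustrating the sensitivity to past death rates discussed above.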

  6. Taphonomic trade-offs in tropical marine death assemblages: Differential time averaging, shell loss, and probable bias in siliciclastic vs. carbonate facies

    Science.gov (United States)

    Kidwell, Susan M.; Best, Mairi M. R.; Kaufman, Darrell S.

    2005-09-01

    Radiocarbon-calibrated amino-acid racemization ages of individually dated bivalve mollusk shells from Caribbean reef, nonreefal carbonate, and siliciclastic sediments in Panama indicate that siliciclastic sands and muds contain significantly older shells (median 375 yr, range up to ˜5400 yr) than nearby carbonate seafloors (median 72 yr, range up to ˜2900 yr; maximum shell ages differ significantly at p < 0.02 using extreme-value statistics). The implied difference in shell loss rates is contrary to physicochemical expectations but is consistent with observed differences in shell condition (greater bioerosion and dissolution in carbonates). Higher rates of shell loss in carbonate sediments should lead to greater compositional bias in surviving skeletal material, resulting in taphonomic trade-offs: less time averaging but probably higher taxonomic bias in pure carbonate sediments, and lower bias but greater time averaging in siliciclastic sediments from humid-weathered accretionary arc terrains, which are a widespread setting of tropical sedimentation.

  7. The Control Based on Internal Average Kinetic Energy in Complex Environment for Multi-robot System

    Science.gov (United States)

    Yang, Mao; Tian, Yantao; Yin, Xianghua

    In this paper, a reference trajectory is designed to minimize the energy consumed by a multi-robot system, for which nonlinear programming and cubic spline interpolation are adopted. The control strategy is composed of two levels: the lower level is simple PD control, and the upper level is based on the internal average kinetic energy of the multi-robot system in a complex environment with velocity damping. Simulation tests verify the effectiveness of this control strategy.
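    The two-level scheme, an energy-motivated spline reference tracked by a simple PD law at the lower level, can be sketched for one point-mass robot. A minimal illustration, with waypoints, gains, and the unit-mass model all invented for the example (scipy's CubicSpline stands in for the interpolation step):

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Waypoints that a minimum-energy nonlinear program might have produced.
        t_way = np.array([0.0, 2.0, 4.0, 6.0])
        x_way = np.array([0.0, 1.0, 3.0, 4.0])
        ref = CubicSpline(t_way, x_way)             # upper level: smooth reference trajectory

        kp, kd, dt = 20.0, 8.0, 0.01                # lower level: illustrative PD gains
        x, v = 0.0, 0.0
        for k in range(600):
            t = k * dt
            u = kp * (ref(t) - x) + kd * (ref(t, 1) - v)   # PD on position/velocity error
            v += u * dt                                    # unit-mass double integrator
            x += v * dt
        print("tracking error at t = 6 s:", abs(float(ref(6.0)) - x))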

  8. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  9. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L^2-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies
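    The generic shape of such a method is Tikhonov regularization: recover f on a grid from noisy window averages b = Af + noise by minimizing ||Af − b||² + α||Df||². A one-dimensional sketch, with the window width, noise level, and α chosen by hand rather than by the parameter-selection strategies the paper compares:

        import numpy as np

        n, w = 200, 10
        x = np.linspace(0.0, 1.0, n)
        f_true = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)

        # A maps f to overlapping local window averages.
        m = n - w + 1
        A = np.zeros((m, n))
        for i in range(m):
            A[i, i:i + w] = 1.0 / w
        b = A @ f_true + 0.01 * np.random.default_rng(2).standard_normal(m)

        # Tikhonov solution with a first-difference smoothness penalty.
        D = np.diff(np.eye(n), axis=0)
        alpha = 1e-3
        f_rec = np.linalg.solve(A.T @ A + alpha * (D.T @ D), A.T @ b)
        print("relative L^2 error:", np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))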

  10. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  11. Strengthened glass for high average power laser applications

    International Nuclear Information System (INIS)

    Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.

    1987-01-01

    Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capabilities for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass' surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications

  12. Average niche breadths of species in lake macrophyte communities respond to ecological gradients variably in four regions on two continents.

    Science.gov (United States)

    Alahuhta, Janne; Virtala, Antti; Hjort, Jan; Ecke, Frauke; Johnson, Lucinda B; Sass, Laura; Heino, Jani

    2017-05-01

    Different species' niche breadths in relation to ecological gradients are infrequently examined within the same study and, moreover, species niche breadths have rarely been averaged to account for variation in entire ecological communities. We investigated how average environmental niche breadths (climate, water quality and climate-water quality niches) in aquatic macrophyte communities are related to ecological gradients (latitude, longitude, altitude, species richness and lake area) among four distinct regions (Finland, Sweden and US states of Minnesota and Wisconsin) on two continents. We found that correlations between the three different measures of average niche breadths and ecological gradients varied considerably among the study regions, with average climate and average water quality niche breadth models often showing opposite trends. However, consistent patterns were also found, such as widening of average climate niche breadths and narrowing of average water quality niche breadths of aquatic macrophytes along increasing latitudinal and altitudinal gradients. This result suggests that macrophyte species are generalists in relation to temperature variations at higher latitudes and altitudes, whereas species in southern, lowland lakes are more specialised. In contrast, aquatic macrophytes growing in more southern nutrient-rich lakes were generalists in relation to water quality, while specialist species are adapted to low-productivity conditions and are found in highland lakes. Our results emphasise that species niche breadths should not be studied using only coarse-scale data of species distributions and corresponding environmental conditions, but that investigations on different kinds of niche breadths (e.g., climate vs. local niches) also require finer resolution data at broad spatial extents.

  13. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust

  14. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  15. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors, which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.

  16. Category structure determines the relative attractiveness of global versus local averages.

    Science.gov (United States)

    Vogel, Tobias; Carr, Evan W; Davis, Tyler; Winkielman, Piotr

    2018-02-01

    Stimuli that capture the central tendency of presented exemplars are often preferred-a phenomenon also known as the classic beauty-in-averageness effect . However, recent studies have shown that this effect can reverse under certain conditions. We propose that a key variable for such ugliness-in-averageness effects is the category structure of the presented exemplars. When exemplars cluster into multiple subcategories, the global average should no longer reflect the underlying stimulus distributions, and will thereby become unattractive. In contrast, the subcategory averages (i.e., local averages) should better reflect the stimulus distributions, and become more attractive. In 3 studies, we presented participants with dot patterns belonging to 2 different subcategories. Importantly, across studies, we also manipulated the distinctiveness of the subcategories. We found that participants preferred the local averages over the global average when they first learned to classify the patterns into 2 different subcategories in a contrastive categorization paradigm (Experiment 1). Moreover, participants still preferred local averages when first classifying patterns into a single category (Experiment 2) or when not classifying patterns at all during incidental learning (Experiment 3), as long as the subcategories were sufficiently distinct. Finally, as a proof-of-concept, we mapped our empirical results onto predictions generated by a well-known computational model of category learning (the Generalized Context Model [GCM]). Overall, our findings emphasize the key role of categorization for understanding the nature of preferences, including any effects that emerge from stimulus averaging. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    Science.gov (United States)

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three-ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for the orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether use of the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing characteristics of low dispersion, an isotropic distribution, and less than 30 degrees second and third angle parameters, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
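    The matrix-based Euclidean (chordal) average the authors recommend is short to write down: average the rotation matrices entrywise and project the result back onto SO(3) with an SVD, instead of averaging Euler angles parameter by parameter. A minimal sketch (scipy's Rotation class is used only for bookkeeping; the reference orientation and noise level are arbitrary):

        import numpy as np
        from scipy.spatial.transform import Rotation as R

        rng = np.random.default_rng(3)
        # A cluster of noisy orientations around a reference rotation.
        base = R.from_euler("xyz", [40.0, 25.0, 10.0], degrees=True)
        rots = [base * R.from_rotvec(0.1 * rng.standard_normal(3)) for _ in range(50)]

        # Conventional (flawed) approach: arithmetic mean of Euler-angle triplets.
        euler_mean = np.mean([r.as_euler("xyz", degrees=True) for r in rots], axis=0)

        # Matrix-based Euclidean mean: average matrices, project onto SO(3) via SVD.
        M = np.mean([r.as_matrix() for r in rots], axis=0)
        U, _, Vt = np.linalg.svd(M)
        proj = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
        matrix_mean = R.from_matrix(proj).as_euler("xyz", degrees=True)

        print("Euler-averaged: ", np.round(euler_mean, 2))
        print("matrix-averaged:", np.round(matrix_mean, 2))

    For this low-dispersion cluster the two agree closely, consistent with the sub-1.1-degree discrepancies reported above; widening the noise or moving near gimbal lock drives them apart.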

  18. Primary HPV testing recommendations of US providers, 2015.

    Science.gov (United States)

    Cooper, Crystale Purvis; Saraiya, Mona

    2017-12-01

    To investigate the HPV testing recommendations of US physicians who perform cervical cancer screening. Data from the 2015 DocStyles survey of U.S. health care providers were analyzed using multivariate logistic regression to identify provider characteristics associated with routine recommendation of primary HPV testing for average-risk, asymptomatic women ≥30 years old. The analysis was limited to primary care physicians and obstetrician-gynecologists who performed cervical cancer screening (N=843). Primary HPV testing for average-risk, asymptomatic women ≥30 years old was recommended by 40.8% of physicians who performed cervical cancer screening, and 90.1% of these providers recommended primary HPV testing for women of all ages. The screening intervals most commonly recommended for primary HPV testing with average-risk, asymptomatic women ≥30 years old were every 3 years (35.5%) and annually (30.2%). Physicians who reported that patient HPV vaccination status influenced their cervical cancer screening practices were almost four times more likely to recommend primary HPV testing for average-risk, asymptomatic women ≥30 years old than other providers (Adj OR=3.96, 95% CI=2.82-5.57). Many US physicians recommended primary HPV testing for women of all ages, contrary to guidelines which limit this screening approach to women ≥25 years old. The association between provider recommendation of primary HPV testing and patient HPV vaccination status may be due to anticipated reductions in the most oncogenic HPV types among vaccinated women. Published by Elsevier Inc.

  19. Higher-Order Finite Element Solutions of Option Prices

    DEFF Research Database (Denmark)

    Raahauge, Peter

    2004-01-01

    Kinks and jumps in the payoff function of option contracts prevent an effective implementation of higher-order numerical approximation methods. Moreover, the derivatives (the greeks) are not easily determined around such singularities, even with standard lower-order methods. This paper suggests...... for prices as well as for first and second order derivatives (delta and gamma). Unlike similar studies, numerical approximation errors are measured both as weighted averages and in the supnorm over a state space including time-to-maturities down to a split second. KEYWORDS: Numerical option pricing, Transformed...

  20. Ice-condenser aerosol tests

    International Nuclear Information System (INIS)

    Ligotke, M.W.; Eschbach, E.J.; Winegardner, W.K.

    1991-09-01

    This report presents the results of an experimental investigation of aerosol particle transport and capture using a full-scale height and reduced-scale cross section test facility based on the design of the ice compartment of a pressurized water reactor (PWR) ice-condenser containment system. Results of 38 tests included thermal-hydraulic as well as aerosol particle data. Particle retention in the test section was greatly influenced by thermal-hydraulic and aerosol test parameters. Test-average decontamination factor (DF) ranged between 1.0 and 36 (retentions between ∼0 and 97.2%). The measured test-average particle retentions for tests without and with ice and steam ranged between DF = 1.0 and 2.2 and DF = 2.4 and 36, respectively. In order of apparent importance, parameters that caused particle retention in the test section in the presence of ice were steam mole fraction (SMF), noncondensible gas flow rate (residence time), particle solubility, and inlet particle size. Ice-basket section noncondensible flows greater than 0.1 m³/s resulted in stable thermal stratification, whereas flows less than 0.1 m³/s resulted in thermal behavior termed meandering, with frequent temperature crossovers between flow channels. 10 refs., 66 figs., 16 tabs

  1. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
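    The quantity in question is a spectrum- and cross-section-weighted mean, Ē = ∫ E φ(E) σ_f(E) dE / ∫ φ(E) σ_f(E) dE. A numerical sketch with a Watt fission spectrum standing in for the flux and an invented, schematic fission cross section in place of evaluated nuclear data:

        import numpy as np

        E = np.linspace(0.01, 15.0, 5000)         # neutron energy grid, MeV
        # Watt spectrum shape with commonly quoted a = 0.988 MeV, b = 2.249 /MeV.
        watt = np.exp(-E / 0.988) * np.sinh(np.sqrt(2.249 * E))
        sigma_f = 1.8 + 0.5 / np.sqrt(E)          # schematic fission cross section (made up)

        w = watt * sigma_f                        # fission-inducing weight per unit energy
        E_avg = np.sum(E * w) / np.sum(w)         # uniform grid, so plain sums suffice
        print(f"average energy of fission-inducing neutrons: {E_avg:.2f} MeV")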

  2. Average glandular dose in digital mammography and breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Olgar, T. [Ankara Univ. (Turkey). Dept. of Engineering Physics; Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Kahn, T.; Gosch, D. [Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie

    2012-10-15

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode, while a good correlation (coefficient of 0.98) was found in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than for 2 D imaging mode for patients examined with the same CBT.

  3. Average glandular dose in digital mammography and breast tomosynthesis

    International Nuclear Information System (INIS)

    Olgar, T.; Universitaetsklinikum Leipzig AoeR; Kahn, T.; Gosch, D.

    2012-01-01

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode, while a good correlation (coefficient of 0.98) was found in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than for 2 D imaging mode for patients examined with the same CBT.

  4. MODEL TESTING OF LOW PRESSURE HYDRAULIC TURBINE WITH HIGHER EFFICIENCY

    Directory of Open Access Journals (Sweden)

    V. K. Nedbalsky

    2007-01-01

    Full Text Available A design of a low-pressure turbine has been developed; it is covered by an invention patent and a utility model patent. Testing of the hydraulic turbine model was carried out with the model installed on a vertical shaft. The efficiency was 76–78 %, which exceeds the efficiency of known low-pressure blade turbines.

  5. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    Science.gov (United States)

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
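    One plausible reading of the estimator, in code: fit a line to the directionally averaged capture probabilities over the equilibrium phase, take its level as P(e), and scale the number of marked individuals. Everything below is synthetic and only illustrates the arithmetic, not the paper's arena data:

        import numpy as np

        marked, true_N = 500, 20000                  # invented mark-recapture numbers
        days = np.arange(10, 31)                     # assumed equilibrium-phase window
        rng = np.random.default_rng(4)
        # Synthetic directionally averaged capture probabilities near equilibrium.
        P = marked / true_N + 0.002 * rng.standard_normal(days.size)

        slope, intercept = np.polyfit(days, P, 1)    # linear fit over the phase
        P_e = intercept + slope * days.mean()        # equilibrium capture probability
        N_hat = marked / P_e                         # population estimate
        print(f"estimated population: {N_hat:.0f} (true value {true_N})")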

  6. Emissions from laboratory combustor tests of manufactured wood products

    Energy Technology Data Exchange (ETDEWEB)

    Wilkening, R.; Evans, M.; Ragland, K. [Univ. of Wisconsin, Madison, WI (United States); Baker, A. [USDA Forest Products Lab., Madison, WI (United States)

    1993-12-31

    Manufactured wood products contain wood, wood fiber, and materials added during manufacture of the product. Manufacturing residues and the used products are burned in a furnace or boiler instead of landfilling. Emissions from combustion of these products contain additional compounds from the combustion of non-wood material which have not been adequately characterized to specify the best combustion conditions, emissions control equipment, and disposal procedures. Total hydrocarbons, formaldehyde, higher aldehydes and carbon monoxide emissions from aspen flakeboard and aspen cubes were measured in a 76 mm i.d. by 1.5 m long fixed bed combustor as a function of excess oxygen and temperature. Emissions of hydrocarbons, aldehydes and CO from flakeboard and from clean aspen were very sensitive to average combustor temperature and excess oxygen. Hydrocarbon and aldehyde emissions below 10 ppM were achieved with 5% excess oxygen and 1,200°C average temperature for aspen flakeboard and 1,100°C for clean aspen at a 0.9 s residence time. When the average temperature decreased below these levels, the emissions increased rapidly. For example, at 950°C and 5% excess oxygen the formaldehyde emissions were over 1,000 ppM. These laboratory tests reinforce the need to carefully control the temperature and excess oxygen in full-scale wood combustors.

  7. Valid Competency Assessment in Higher Education

    Directory of Open Access Journals (Sweden)

    Olga Zlatkin-Troitschanskaia

    2017-01-01

    Full Text Available The aim of the 15 collaborative projects conducted during the new funding phase of the German research program Modeling and Measuring Competencies in Higher Education—Validation and Methodological Innovations (KoKoHs is to make a significant contribution to advancing the field of modeling and valid measurement of competencies acquired in higher education. The KoKoHs research teams assess generic competencies and domain-specific competencies in teacher education, social and economic sciences, and medicine based on findings from and using competency models and assessment instruments developed during the first KoKoHs funding phase. Further, they enhance, validate, and test measurement approaches for use in higher education in Germany. Results and findings are transferred at various levels to national and international research, higher education practice, and education policy.

  8. Asymptotic behaviour of time averages for non-ergodic Gaussian processes

    Science.gov (United States)

    Ślęzak, Jakub

    2017-08-01

    In this work, we study the behaviour of time-averages for stationary (non-ageing), but ergodicity-breaking Gaussian processes using their representation in Fourier space. We provide explicit formulae for various time-averaged quantities, such as mean square displacement, density, and analyse the behaviour of time-averaged characteristic function, which gives insight into rich memory structure of the studied processes. Moreover, we show applications of the ergodic criteria in Fourier space, determining the ergodicity of the generalised Langevin equation's solutions.
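    The workhorse quantity in this setting is the time-averaged mean-square displacement, δ²(Δ) = (1/(T−Δ)) ∫ (x(t+Δ) − x(t))² dt, which for an ergodicity-breaking process stays random from one trajectory to the next. A quick sketch with the simplest such Gaussian process, a single Fourier mode with random amplitudes, rather than the generalised Langevin solutions analysed in the paper:

        import numpy as np

        rng = np.random.default_rng(5)
        t = np.linspace(0.0, 200.0, 4001)

        def trajectory():
            # Stationary but non-ergodic: one random-amplitude Fourier mode.
            a, b = rng.standard_normal(2)
            return a * np.cos(0.3 * t) + b * np.sin(0.3 * t)

        def ta_msd(x, lag):
            # Time-averaged mean-square displacement at a single lag.
            return np.mean((x[lag:] - x[:-lag]) ** 2)

        for i in range(3):
            print(f"trajectory {i}: time-averaged MSD = {ta_msd(trajectory(), 100):.3f}")

    The three values differ markedly even for long trajectories; this scatter is the fingerprint of broken ergodicity that the Fourier-space criteria detect.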

  9. Dynamic stall characterization using modal analysis of phase-averaged pressure distributions

    Science.gov (United States)

    Harms, Tanner; Nikoueeyan, Pourya; Naughton, Jonathan

    2017-11-01

    Dynamic stall characterization by means of surface pressure measurements can simplify the time and cost associated with experimental investigation of unsteady airfoil aerodynamics. A unique test capability has been developed at University of Wyoming over the past few years that allows for time and cost efficient measurement of dynamic stall. A variety of rotorcraft and wind turbine airfoils have been tested under a variety of pitch oscillation conditions resulting in a range of dynamic stall behavior. Formation, development and separation of different flow structures are responsible for the complex aerodynamic loading behavior experienced during dynamic stall. These structures have unique signatures on the pressure distribution over the airfoil. This work investigates the statistical behavior of phase-averaged pressure distribution for different types of dynamic stall by means of modal analysis. The use of different modes to identify specific flow structures is being investigated. The use of these modes for different types of dynamic stall can provide a new approach for understanding and categorizing these flows. This work uses airfoil data acquired under Army contract W911W60160C-0021, DOE Grant DE-SC0001261, and a gift from BP Alternative Energy North America, Inc.
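    Modal analysis of phase-averaged pressure distributions is commonly done as a snapshot POD: stack the distribution at each phase angle as a row, subtract the mean, and take an SVD, so the leading right-singular vectors become spatial pressure modes. A generic sketch on synthetic data (not the measured rig data):

        import numpy as np

        rng = np.random.default_rng(6)
        n_phase, n_taps = 360, 32                     # phase angles x pressure taps
        phase = np.linspace(0, 2 * np.pi, n_phase, endpoint=False)
        taps = np.linspace(0.0, 1.0, n_taps)          # chordwise tap locations

        # Synthetic phase-averaged pressures: two phase-locked structures plus noise.
        P = (np.outer(np.sin(phase), np.exp(-10 * (taps - 0.2) ** 2))
             + 0.5 * np.outer(np.cos(2 * phase), np.exp(-10 * (taps - 0.6) ** 2))
             + 0.02 * rng.standard_normal((n_phase, n_taps)))

        # Snapshot POD via SVD of the mean-subtracted snapshot matrix.
        X = P - P.mean(axis=0)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        energy = s**2 / np.sum(s**2)
        print("energy captured by the first two modes:", energy[:2].round(3))

    Associating each mode's phase coefficients (columns of U scaled by s) with events in the pitch cycle is then the flow-structure identification step described above.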

  10. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  11. Wear of alumina on alumina total hip prosthesis - effect of lubricant on hip simulator test

    Energy Technology Data Exchange (ETDEWEB)

    Ueno, M.; Amino, H. [Kyocera Corp., Fushimi, Kyoto (Japan). Bioceram Div.; Oonishi, H. [Dept. of Orthopaedic Surgery, Artificial Joint Sect. and Biomat. Res. Lab., Osaka Minami National Hospital, Osaka (Japan); Clarke, I.C.; Good, V. [Dept. of Orthopaedic Surgery, Loma Linda Univ. Medical Center, CA (United States)

    2001-07-01

    The complex wear-friction-lubrication behavior of the alumina-on-alumina combination in total hip prostheses (THP) was investigated using a hip joint simulator. The objectives of this study were to evaluate the effect of the ball/cup clearance and of the lubricant conditions. Alumina bearings were categorized into three diametrical clearances, 20-30, 60-70 and 90-100 micrometers, three each, and wear tests were carried out with 90% bovine serum. There was no significant difference between the three groups. Volumetric wear in the run-in phase for all nine tested ceramic liners averaged 0.27 mm³/million cycles and in the steady-state phase averaged 0.0042 mm³/million cycles. In addition to the 90% serum, 27% serum and saline were used as lubricants to evaluate the effect of serum concentration on alumina-on-alumina wear couples. The wear test results showed that in all tested conditions the wear trends of the alumina bearings were bi-phasic and the wear volume could be affected by the serum concentration. Both "run-in" and "steady-state" wear rates in 90% bovine serum were three times higher than those in saline. (orig.)

  12. Procedure for the characterization of radon potential in existing dwellings and to assess the annual average indoor radon concentration

    International Nuclear Information System (INIS)

    Collignan, Bernard; Powaga, Emilie

    2014-01-01

    Risk assessment due to radon exposure indoors is based on the annual average indoor radon activity concentration. To assess the radon exposure in a building, measurement is generally performed during at least two months of the heating period in order to be representative of the annual average value. This is because radon presence indoors can be highly variable over time. This measurement protocol is fairly reliable but may be limiting in radon risk management, particularly during a real estate transaction, due to the duration of the measurement and the limitation of the measurement period. A previous field study defined a rapid methodology to characterize radon entry in dwellings. The objective of this study was, first, to test this methodology in various dwellings to assess its relevance with a daily test and, second, to use a ventilation model to numerically assess the air renewal of a building, the indoor air quality all along the year, and the annual average indoor radon activity concentration, based on local meteorological conditions, some building characteristics, and in-situ characterization of indoor pollutant emission laws. Experimental results obtained on thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of annual average indoor radon activity concentration showed generally good agreement with measured values. These results are encouraging enough to allow a procedure with a short measurement time to be used to characterize the long-term radon potential of dwellings. - Highlights: • Test of a daily procedure to characterize radon potential in dwellings. • Numerical assessment of the annual radon concentration. • Procedure applied to thirteen dwellings; characterization generally satisfactory. • Procedure useful to manage radon risk in dwellings, for real estate transactions.

  13. A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages

    Science.gov (United States)

    Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Global maps of rainfall are of great importance in connection with modeling of the earth's climate. Comparison of the maps of rainfall predicted by computer-generated climate models with observations provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure the area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from shipborne radars during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant on the average only a certain fraction of the observed pixels contain rain. The fraction of area covered by

  14. Average [O II] nebular emission associated with Mg II absorbers: dependence on Fe II absorption

    Science.gov (United States)

    Joshi, Ravi; Srianand, Raghunathan; Petitjean, Patrick; Noterdaeme, Pasquier

    2018-05-01

    We investigate the effect of Fe II equivalent width (W2600) and fibre size on the average luminosity of [O II] λλ3727, 3729 nebular emission associated with Mg II absorbers (at 0.55 ≤ z ≤ 1.3) in the composite spectra of quasars obtained with 3 and 2 arcsec fibres in the Sloan Digital Sky Survey. We confirm the presence of strong correlations between [O II] luminosity (L_{[O II]}) and equivalent width (W2796) and redshift of Mg II absorbers. However, we show L_{[O II]} and average luminosity surface density suffer from fibre size effects. More importantly, for a given fibre size, the average L_{[O II]} strongly depends on the equivalent width of Fe II absorption lines and is found to be higher for Mg II absorbers with R ≡ W2600/W2796 ≥ 0.5. In fact, we show the observed strong correlations of L_{[O II]} with W2796 and z of Mg II absorbers are mainly driven by such systems. Direct [O II] detections also confirm the link between L_{[O II]} and R. Therefore, one has to pay attention to the fibre losses and the dependence of the redshift evolution of Mg II absorbers on W2600 before using them as a luminosity-unbiased probe of the global star formation rate density. We show that the [O II] nebular emission detected in the stacked spectrum is not dominated by a few direct detections (i.e. detections at the ≥3σ significance level). On average, the systems with R ≥ 0.5 and W2796 ≥ 2 Å are more reddened, showing colour excess E(B - V) ˜ 0.02, with respect to the systems with R < 0.5, and most likely trace the high H I column density systems.

  15. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  16. Strength and Pain Threshold Handheld Dynamometry Test Reliability in Patellofemoral Pain.

    Science.gov (United States)

    van der Heijden, R A; Vollebregt, T; Bierma-Zeinstra, S M A; van Middelkoop, M

    2015-12-01

    Patellofemoral pain syndrome (PFPS), characterized by peri- and retropatellar pain, is a common disorder in young, active people. The etiology is unclear; however, quadriceps strength seems to be a contributing factor, and sensitization might play a role. The purpose of this study is to determine the inter-rater reliability of handheld dynamometry to test both quadriceps strength and pressure pain threshold (PPT), a measure for sensitization, in patients with PFPS. This cross-sectional case-control study comprises three quadriceps strength measurements and one PPT measurement performed by 2 independent investigators in 22 PFPS patients and 16 matched controls. Inter-rater reliability was analyzed using intraclass correlation coefficients (ICC) and Bland-Altman plots. Inter-rater reliability of quadriceps strength testing was fair to good in PFPS patients (ICC=0.72) and controls (ICC=0.63). Bland-Altman plots showed an increased difference between assessors when average quadriceps strength values exceeded 250 N. Inter-rater reliability of PPT was excellent in patients (ICC=0.79) and fair to good in controls (ICC=0.52). Handheld dynamometry seems to be a reliable method to test both quadriceps strength and PPT in PFPS patients. Inter-rater reliability was higher in PFPS patients compared to control subjects. With regard to quadriceps testing, a higher variance between assessors occurs when quadriceps strength increases. © Georg Thieme Verlag KG Stuttgart · New York.
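    Both statistics reported here are straightforward to compute: the absolute-agreement ICC from a two-way ANOVA decomposition and the Bland-Altman bias with limits of agreement from paired differences. A sketch on synthetic two-rater strength data (an ICC(2,1)-style formula written out by hand; all numbers invented):

        import numpy as np

        rng = np.random.default_rng(7)
        n = 22                                        # subjects
        truth = rng.normal(250.0, 60.0, n)            # latent quadriceps strength, N
        rater1 = truth + rng.normal(0.0, 15.0, n)
        rater2 = truth + rng.normal(5.0, 15.0, n)     # small systematic offset
        Y = np.column_stack([rater1, rater2])
        k = Y.shape[1]

        # Two-way ANOVA mean squares for ICC(2,1), absolute agreement.
        grand = Y.mean()
        MSR = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)
        MSC = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)
        MSE = np.sum((Y - Y.mean(1, keepdims=True) - Y.mean(0) + grand) ** 2) / ((n - 1) * (k - 1))
        icc = (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

        # Bland-Altman bias and 95% limits of agreement.
        diff = rater1 - rater2
        bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
        print(f"ICC(2,1) = {icc:.2f}; bias = {bias:.1f} N; LoA = ±{loa:.1f} N")

    Plotting diff against the pairwise means would reproduce the Bland-Altman check that flagged growing disagreement above 250 N.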

  17. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, No. 304 (2006), pp. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords: tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  18. Comparison of power pulses from homogeneous and time-average-equivalent models

    International Nuclear Information System (INIS)

    De, T.K.; Rouben, B.

    1995-01-01

    The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three-dimensional power distribution as that generated by a time-average model. However, it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time-dependent calculations, simulations of the power pulse following a hypothetical large-loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show that there is a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed both to the fact that voiding is not complete and to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs

  19. The drug target genes show higher evolutionary conservation than non-target genes.

    Science.gov (United States)

    Lv, Wenhua; Xu, Yongdeng; Guo, Yiying; Yu, Ziqi; Feng, Guanglong; Liu, Panpan; Luan, Meiwei; Zhu, Hongjie; Liu, Guiyou; Zhang, Mingming; Lv, Hongchao; Duan, Lian; Shang, Zhenwei; Li, Jin; Jiang, Yongshuai; Zhang, Ruijie

    2016-01-26

    Although evidence indicates that drug target genes share some common evolutionary features, there have been few studies analyzing the evolutionary features of drug targets at an overall level. Therefore, we conducted an analysis which aimed to investigate the evolutionary characteristics of drug target genes. We compared the evolutionary conservation between human drug target genes and non-target genes by combining both the evolutionary features and network topological properties in the human protein-protein interaction network. The evolution rate, conservation score and the percentage of orthologous genes of 21 species were included in our study. Meanwhile, four topological features including the average shortest path length, betweenness centrality, clustering coefficient and degree were considered for comparison analysis. We obtained four results: compared with non-drug target genes, 1) drug target genes had lower evolutionary rates; 2) drug target genes had higher conservation scores; 3) drug target genes had higher percentages of orthologous genes and 4) drug target genes had a tighter network structure including higher degrees, betweenness centrality, clustering coefficients and lower average shortest path lengths. These results demonstrate that drug target genes are more evolutionarily conserved than non-drug target genes. We hope that our study will provide valuable information for other researchers who are interested in the evolutionary conservation of drug targets.

  20. AVERAGE METALLICITY AND STAR FORMATION RATE OF Lyα EMITTERS PROBED BY A TRIPLE NARROWBAND SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Nakajima, Kimihiko; Shimasaku, Kazuhiro; Ono, Yoshiaki; Okamura, Sadanori [Department of Astronomy, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Ouchi, Masami [Institute for the Physics and Mathematics of the Universe (IPMU), TODIAS, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583 (Japan); Lee, Janice C.; Ly, Chun [Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Foucaud, Sebastien [Department of Earth Sciences, National Taiwan Normal University, No. 88, Tingzhou Road, Sec. 4, Taipei 11677, Taiwan (China); Dale, Daniel A. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY (United States); Salim, Samir [Department of Astronomy, Indiana University, Bloomington, IN (United States); Finn, Rose [Department of Physics, Siena College, Loudonville, NY (United States); Almaini, Omar, E-mail: nakajima@astron.s.u-tokyo.ac.jp [School of Physics and Astronomy, University of Nottingham, Nottingham (United Kingdom)

    2012-01-20

    We present the average metallicity and star formation rate (SFR) of Lyα emitters (LAEs) measured from our large-area survey with three narrowband (NB) filters covering the Lyα, [O II]λ3727, and Hα+[N II] lines of LAEs at z = 2.2. We select 919 z = 2.2 LAEs from Subaru/Suprime-Cam NB data in conjunction with Magellan/IMACS spectroscopy. Of these LAEs, 561 and 105 are observed with KPNO/NEWFIRM near-infrared NB filters whose central wavelengths are matched to redshifted [O II] and Hα nebular lines, respectively. By stacking the near-infrared images of the LAEs, we successfully obtain average nebular-line fluxes of LAEs, the majority of which are too faint to be identified individually by NB imaging or deep spectroscopy. The stacked object has an Hα luminosity of 1.7 × 10^42 erg s^-1 corresponding to an SFR of 14 M☉ yr^-1. We place, for the first time, a firm lower limit to the average metallicity of LAEs of Z ≳ 0.09 Z☉ (2σ) based on the [O II]/(Hα+[N II]) index together with photoionization models and empirical relations. This lower limit of metallicity rules out the hypothesis that LAEs, so far observed at z ∼ 2, are extremely metal-poor (Z < 2 × 10^-2 Z☉) galaxies at the 4σ level. This limit is higher than a simple extrapolation of the observed mass-metallicity relation of z ∼ 2 UV-selected galaxies toward lower masses (5 × 10^8 M☉), but roughly consistent with a recently proposed fundamental mass-metallicity relation when the LAEs' relatively low SFR is taken into account. The Hα and Lyα luminosities of our NB-selected LAEs indicate that the escape fraction of Lyα photons is ∼12%-30%, much higher than the values derived for other galaxy populations at z ∼ 2.

  1. AVERAGE METALLICITY AND STAR FORMATION RATE OF Lyα EMITTERS PROBED BY A TRIPLE NARROWBAND SURVEY

    International Nuclear Information System (INIS)

    Nakajima, Kimihiko; Shimasaku, Kazuhiro; Ono, Yoshiaki; Okamura, Sadanori; Ouchi, Masami; Lee, Janice C.; Ly, Chun; Foucaud, Sebastien; Dale, Daniel A.; Salim, Samir; Finn, Rose; Almaini, Omar

    2012-01-01

    We present the average metallicity and star formation rate (SFR) of Lyα emitters (LAEs) measured from our large-area survey with three narrowband (NB) filters covering the Lyα, [O II]λ3727, and Hα+[N II] lines of LAEs at z = 2.2. We select 919 z = 2.2 LAEs from Subaru/Suprime-Cam NB data in conjunction with Magellan/IMACS spectroscopy. Of these LAEs, 561 and 105 are observed with KPNO/NEWFIRM near-infrared NB filters whose central wavelengths are matched to redshifted [O II] and Hα nebular lines, respectively. By stacking the near-infrared images of the LAEs, we successfully obtain average nebular-line fluxes of LAEs, the majority of which are too faint to be identified individually by NB imaging or deep spectroscopy. The stacked object has an Hα luminosity of 1.7 × 10^42 erg s^-1 corresponding to an SFR of 14 M☉ yr^-1. We place, for the first time, a firm lower limit to the average metallicity of LAEs of Z ≳ 0.09 Z☉ (2σ) based on the [O II]/(Hα+[N II]) index together with photoionization models and empirical relations. This lower limit of metallicity rules out the hypothesis that LAEs, so far observed at z ∼ 2, are extremely metal-poor (Z < 2 × 10^-2 Z☉) galaxies at the 4σ level. This limit is higher than a simple extrapolation of the observed mass-metallicity relation of z ∼ 2 UV-selected galaxies toward lower masses (5 × 10^8 M☉), but roughly consistent with a recently proposed fundamental mass-metallicity relation when the LAEs' relatively low SFR is taken into account. The Hα and Lyα luminosities of our NB-selected LAEs indicate that the escape fraction of Lyα photons is ∼12%-30%, much higher than the values derived for other galaxy populations at z ∼ 2.

  2. Impact of Answer-Switching Behavior on Multiple-Choice Test Scores in Higher Education

    Directory of Open Access Journals (Sweden)

    Ramazan BAŞTÜRK

    2011-06-01

    Full Text Available The multiple-choice format is one of the most popular selected-response item formats used in educational testing. Researchers have shown that multiple-choice tests are a useful vehicle for student assessment in core university subjects that usually have large student numbers. Even though educators, test experts, and various test resources maintain that the first answer should be retained, many researchers have argued that this claim is not supported by empirical findings. The main question of this study is how answer-switching behavior affects multiple-choice test scores. Additionally, gender differences and the relationship between the number of answer switches and item parameters (item difficulty and item discrimination) were investigated. The participants in this study consisted of 207 upper-level College of Education students from mid-sized universities. A midterm exam consisting of 20 multiple-choice questions was used. According to the results of this study, answer-switching behavior statistically significantly increases test scores. On the other hand, there is no significant gender difference in answer-switching behavior. Additionally, there is a significant negative relationship between answer-switching behavior and item difficulty.

  3. Average arterial input function for quantitative dynamic contrast enhanced magnetic resonance imaging of neck nodal metastases

    International Nuclear Information System (INIS)

    Shukla-Dave, Amita; Lee, Nancy; Stambuk, Hilda; Wang, Ya; Huang, Wei; Thaler, Howard T; Patel, Snehal G; Shah, Jatin P; Koutcher, Jason A

    2009-01-01

    The present study determines the feasibility of generating an average arterial input function (Avg-AIF) from a limited population of patients with neck nodal metastases to be used for pharmacokinetic modeling of dynamic contrast-enhanced MRI (DCE-MRI) data in clinical trials of larger populations. Twenty patients (mean age 50 years [range 27–77 years]) with neck nodal metastases underwent pretreatment DCE-MRI studies with a temporal resolution of 3.75 to 7.5 sec on a 1.5T clinical MRI scanner. Eleven individual AIFs (Ind-AIFs) met the criteria of expected enhancement pattern and were used to generate Avg-AIF. Tofts model was used to calculate pharmacokinetic DCE-MRI parameters. Bland-Altman plots and paired Student t-tests were used to describe significant differences between the pharmacokinetic parameters obtained from individual and average AIFs. Ind-AIFs obtained from eleven patients were used to calculate the Avg-AIF. No overall significant difference (bias) was observed for the transfer constant (K^trans) measured with Ind-AIFs compared to Avg-AIF (p = 0.20 for region-of-interest (ROI) analysis and p = 0.18 for histogram median analysis). Similarly, no overall significant difference was observed for interstitial fluid space volume fraction (v_e) measured with Ind-AIFs compared to Avg-AIF (p = 0.48 for ROI analysis and p = 0.93 for histogram median analysis). However, the Bland-Altman plot suggests that as K^trans increases, the Ind-AIF estimates tend to become proportionally higher than the Avg-AIF estimates. We found no statistically significant overall bias in K^trans or v_e estimates derived from Avg-AIF, generated from a limited population, as compared with Ind-AIFs. However, further study is needed to determine whether calibration is needed across the range of K^trans. The Avg-AIF obtained from a limited population may be used for pharmacokinetic modeling of DCE-MRI data in larger population studies with neck nodal metastases. Further validation of
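    The Tofts model referred to here writes the tissue concentration as a convolution of the AIF with an exponential kernel, C_t(t) = K^trans ∫ C_p(τ) e^{−k_ep(t−τ)} dτ with k_ep = K^trans/v_e. A forward-modeling sketch, using an illustrative biexponential-style curve in place of the study's averaged AIF and invented parameter values:

        import numpy as np

        t = np.arange(0.0, 300.0, 3.75)            # s, matching the DCE temporal resolution

        def aif(t):
            # Illustrative arterial input function (not the paper's Avg-AIF).
            return 5.0 * (np.exp(-t / 30.0) - np.exp(-t / 8.0)) * (t > 0)

        def tofts(t, Ktrans, ve):
            kep = Ktrans / ve
            dt = t[1] - t[0]
            kernel = np.exp(-kep * t)
            # Discrete convolution approximating the Tofts integral.
            return Ktrans * np.convolve(aif(t), kernel)[: t.size] * dt

        Ct = tofts(t, Ktrans=0.25 / 60.0, ve=0.3)  # K^trans = 0.25 /min expressed in 1/s
        print("peak modeled tissue concentration:", float(Ct.max()))

    Fitting this forward model to measured uptake curves, with either an individual or the average AIF, is what produces the K^trans and v_e estimates compared above.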

  4. Follow the Money: Strategies for Using Finance to Leverage Change in Higher Education. Complete to Compete Briefing Paper

    Science.gov (United States)

    Conklin, Kristin

    2011-01-01

    The U.S. spends twice as much as the average industrialized country on higher education, but continues to slide relative to other nations in the percentage of young adults with an associate degree or higher. Despite recent reductions in state aid to higher education, state taxpayers continue to be the largest single source of unrestricted funds…

  5. Decreases in average bacterial community rRNA operon copy number during succession.

    Science.gov (United States)

    Nemergut, Diana R; Knelman, Joseph E; Ferrenberg, Scott; Bilinski, Teresa; Melbourne, Brett; Jiang, Lin; Violle, Cyrille; Darcy, John L; Prest, Tiffany; Schmidt, Steven K; Townsend, Alan R

    2016-05-01

    Trait-based studies can help clarify the mechanisms driving patterns of microbial community assembly and coexistence. Here, we use a trait-based approach to explore the importance of rRNA operon copy number in microbial succession, building on prior evidence that organisms with higher copy numbers respond more rapidly to nutrient inputs. We set flasks of heterotrophic media into the environment and examined bacterial community assembly at seven time points. Communities were arrayed along a geographic gradient to introduce stochasticity via dispersal processes and were analyzed using 16S rRNA gene pyrosequencing, and rRNA operon copy number was modeled using ancestral trait reconstruction. We found that taxonomic composition was similar between communities at the beginning of the experiment and then diverged through time; likewise, phylogenetic clustering within communities decreased over time. The average rRNA operon copy number decreased over the experiment, and variance in rRNA operon copy number was lowest both early and late in succession. We then analyzed bacterial community data from other soil and sediment primary and secondary successional sequences from three markedly different ecosystem types. Our results demonstrate that decreases in average copy number are a consistent feature of communities across various drivers of ecological succession. Importantly, our work supports the scaling of the copy number trait over multiple levels of biological organization, ranging from cells to populations and communities, with implications for both microbial ecology and evolution.

  6. The Average Temporal and Spectral Evolution of Gamma-Ray Bursts

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1999-01-01

    We have averaged bright BATSE bursts to uncover the average overall temporal and spectral evolution of gamma-ray bursts (GRBs). We align the temporal structure of each burst by setting its duration to a standard duration, which we call T⟨Dur⟩. The observed average "aligned T⟨Dur⟩" profile for 32 bright bursts with intermediate durations (16-40 s) has a sharp rise (within the first 20% of T⟨Dur⟩) and then a linear decay. Exponentials and power laws do not fit this decay. In particular, the power law seen in the X-ray afterglow (∝T^-1.4) is not observed during the bursts, implying that the X-ray afterglow is not just an extension of the average temporal evolution seen during the gamma-ray phase. The average burst spectrum has a low-energy slope of -1.03, a high-energy slope of -3.31, and a peak in the νFν distribution at 390 keV. We determine the average spectral evolution. Remarkably, it is also a linear function, with the peak of the νFν distribution given by ∼680-600(T/T⟨Dur⟩) keV. Since both the temporal profile and the peak energy are linear functions, on average, the peak energy is linearly proportional to the intensity. This behavior is inconsistent with the external shock model. The observed temporal and spectral evolution is also inconsistent with that expected from variations in just a Lorentz factor. Previously, trends have been reported for GRB evolution, but our results are quantitative relationships that models should attempt to explain. Copyright 1999 The American Astronomical Society.
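
    A rough sketch of the duration-alignment step described above, using synthetic bursts (the fast-rise/linear-decay shape, noise level, and grid sizes are assumptions for illustration): each burst is rescaled to a common normalized time axis, interpolated onto a shared grid, and averaged.

        # Sketch: duration-aligned averaging of burst profiles (synthetic bursts)
        import numpy as np

        rng = np.random.default_rng(1)
        grid = np.linspace(0.0, 1.0, 100)        # normalized time T/T<Dur>
        profiles = []
        for _ in range(32):
            dur = rng.uniform(16, 40)            # burst duration in seconds
            t = np.linspace(0, dur, 256)
            flux = np.interp(t / dur, [0, 0.2, 1.0], [0, 1, 0])  # rise, then linear decay
            flux += 0.05 * rng.standard_normal(t.size)           # counting noise
            profiles.append(np.interp(grid, t / dur, flux))      # rescale to common grid

        avg_profile = np.mean(profiles, axis=0)  # the "aligned" average profile
        print(grid[np.argmax(avg_profile)])      # peak near 0.2, then roughly linear decay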

  7. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  8. Is average daily travel time expenditure constant? In search of explanations for an increase in average travel time.

    NARCIS (Netherlands)

    van Wee, B.; Rietveld, P.; Meurs, H.

    2006-01-01

    Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably

  9. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two year data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and 0.092 standard error of estimate. The data were also analyzed for a seasonal dependence and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed which has 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs
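
    A minimal sketch of fitting such a regression and reporting the two quoted figures of merit, the determination coefficient and the standard error of estimate; the data below are synthetic and the linear form is an assumption, since the paper's actual regressors are not reproduced here.

        # Sketch: daily-average diffuse-radiation regression with R^2 and SEE
        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.uniform(2, 8, 365)                            # daily global radiation (toy units)
        y = 0.9 + 0.25 * x + 0.2 * rng.standard_normal(365)   # daily diffuse radiation (toy)

        b, a = np.polyfit(x, y, 1)                # slope, intercept
        y_hat = a + b * x
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2 = 1 - ss_res / ss_tot                  # coefficient of determination
        see = np.sqrt(ss_res / (len(y) - 2))      # standard error of estimate
        print(f"R^2 = {r2:.2f}, SEE = {see:.3f}")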

  10. Parameterization of Time-Averaged Suspended Sediment Concentration in the Nearshore

    Directory of Open Access Journals (Sweden)

    Hyun-Doug Yoon

    2015-11-01

    Full Text Available To quantify the effect of wave breaking turbulence on sediment transport in the nearshore, the vertical distribution of time-averaged suspended sediment concentration (SSC in the surf zone was parameterized in terms of the turbulent kinetic energy (TKE at different cross-shore locations, including the bar crest, bar trough, and inner surf zone. Using data from a large-scale laboratory experiment, a simple relationship was developed between the time-averaged SSC and the time-averaged TKE. The vertical variation of the time-averaged SSC was fitted to an equation analogous to the turbulent dissipation rate term. At the bar crest, the proposed equation was slightly modified to incorporate the effect of near-bed sediment processes and yielded reasonable agreement. This parameterization yielded the best agreement at the bar trough, with a coefficient of determination R2 ≥ 0.72 above the bottom boundary layer. The time-averaged SSC in the inner surf zone showed good agreement near the bed but poor agreement near the water surface, suggesting that there is a different sedimentation mechanism that controls the SSC in the inner surf zone.

  11. 42 CFR 100.2 - Average cost of a health insurance policy.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Average cost of a health insurance policy. 100.2... VACCINE INJURY COMPENSATION § 100.2 Average cost of a health insurance policy. For purposes of determining..., less certain deductions. One of the deductions is the average cost of a health insurance policy, as...

  12. Annual average equivalent dose of workers form health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    Data from personnel monitoring during 1985 and 1991 of workers in the health area were studied, providing a general overview of changes in the annual average equivalent dose. Two different aspects were examined: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses for the same sectors in different hospitals. (C.G.C.)

  13. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
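
    A small sketch of the first of these observables, the time-averaged MSD, evaluated on a simulated geometric Brownian motion path (the drift, volatility, and series length are arbitrary illustrative values, not fitted to any index):

        # Sketch: time-averaged MSD of a geometric Brownian motion path
        import numpy as np

        rng = np.random.default_rng(3)
        n, dt, mu, sigma = 10000, 1.0, 1e-5, 0.01
        steps = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        x = 100 * np.exp(np.cumsum(steps))       # GBM "price" series

        def tamsd(x, lag):
            """Time-averaged mean squared displacement at a given lag."""
            d = x[lag:] - x[:-lag]
            return np.mean(d ** 2)

        print({lag: tamsd(x, lag) for lag in (1, 10, 100, 1000)})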

  14. A clinical comparison between Technegas SPECT, CT, and pulmonary function tests in patients with emphysema

    International Nuclear Information System (INIS)

    Satoh, Katashi; Nakano, Satoru; Tanabe, Masatada

    1997-01-01

    Pulmonary emphysema can be easily diagnosed by X-ray CT (CT) as low attenuation areas. Recently 99mTc Technegas has been used for ventilation scintigraphy. The present study was undertaken to assess the usefulness of SPECT images using Technegas scintigraphy and CT, as compared with pulmonary function tests in patients with pulmonary emphysema. Fifteen patients were examined. We classified and defined the score according to the findings of Technegas scintigraphy (Technegas) images into four grades, from Score 0 to Score 3, and those of CT into five grades, from Score 0 to Score 4, both from normal to severe. The right lung was divided into nine segments, and the left into eight. To obtain the average of the entire lung, the total score from both lungs was divided by 17. These average scores for SPECT and CT were compared with the results of pulmonary function tests. The average score of Technegas correlated well with % forced expiratory volume in one second (%FEV1.0) (r=0.87), and forced expiratory volume in one second % (FEV1.0%) (r=0.83). These results were better than those provided by CT. The average scores of the upper and lower lung fields were also calculated. The score in the upper lung field was higher than that in the lower field. Technegas can assess ventilation impairment in pulmonary emphysema more easily than CT, especially in the upper lung field. (author)

  15. Analysis of experimental data: The average shape of extreme wave forces on monopile foundations and the NewForce model

    DEFF Research Database (Denmark)

    Schløer, Signe; Bredmose, Henrik; Ghadirian, Amin

    2017-01-01

    Experiments with a stiff pile subjected to extreme wave forces typical of offshore wind farm storm conditions are considered. The exceedance probability curves of the nondimensional force peaks and crest heights are analysed. The average force time histories, normalised with their peak values, are compared across the sea states. It is found that the force shapes show a clear similarity when grouped by the values of the normalised peak force, F/(ρghR²), and normalised depth, h/(gTp²), and presented in a normalised time scale t/Ta. For the largest force events, slamming can be seen as a distinct 'hat'... The NewForce model is compared to the average shapes. For more nonlinear wave shapes, higher-order terms have to be considered in order for the NewForce model to be able to predict the expected shapes.

  16. A more general expression for the average X-ray diffraction intensity of crystals with an incommensurate one-dimensional modulation

    International Nuclear Information System (INIS)

    Lam, E.J.W.; Beurskens, P.T.; Smaalen, S. van

    1994-01-01

    Statistical methods are used to derive an expression for the average X-ray diffraction intensity, as a function of (sinθ)/λ, of crystals with an incommensurate one-dimensional modulation. Displacive and density modulations are considered, as well as a combination of these two. The atomic modulation functions are given by truncated Fourier series that may contain higher-order harmonics. The resulting expression for the average X-ray diffraction intensity is valid for main reflections and low-order satellite reflections. The modulation of individual atoms is taken into account by the introduction of overall modulation amplitudes. The accuracy of this expression for the average X-ray diffraction intensity is illustrated by comparison with model structures. A definition is presented for normalized structure factors of crystals with an incommensurate one-dimensional modulation that can be used in direct-methods procedures for solving the phase problem in X-ray crystallography. A numerical fitting procedure is described that can extract a scale factor, an overall temperature parameter and overall modulation amplitudes from experimental reflection intensities. (orig.)

  17. Can we predict podiatric medical school grade point average using an admission screen?

    Science.gov (United States)

    Shaw, Graham P; Velis, Evelio; Molnar, David

    2012-01-01

    Most medical school admission committees use cognitive and noncognitive measures to inform their final admission decisions. We evaluated using admission data to predict academic success for podiatric medical students using first-semester grade point average (GPA) and cumulative GPA at graduation as outcome measures. In this study, we used linear multiple regression to examine the predictive power of an admission screen. A cross-validation technique was used to assess how the results of the regression model would generalize to an independent data set. Undergraduate GPA and Medical College Admission Test score accounted for only 22% of the variance in cumulative GPA at graduation. Undergraduate GPA, Medical College Admission Test score, and a time trend variable accounted for only 24% of the variance in first-semester GPA. Seventy-five percent of the individual variation in cumulative GPA at graduation and first-semester GPA remains unaccounted for by admission screens that rely on only cognitive measures, such as undergraduate GPA and Medical College Admission Test score. A reevaluation of admission screens is warranted, and medical educators should consider broadening the criteria used to select the podiatric physicians of the future.
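
    A compact sketch of this kind of admission-screen evaluation on synthetic records (the predictor names uGPA and MCAT and all coefficients below are stand-ins, not the study's data); a cross-validated R² near 0.2 would echo the paper's finding that most of the variance remains unexplained.

        # Sketch: predicting GPA from admission data with cross-validated R^2
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        n = 200
        ugpa = rng.uniform(2.5, 4.0, n)                 # undergraduate GPA (toy)
        mcat = rng.uniform(18, 35, n)                   # admission test score (toy)
        gpa = 1.0 + 0.4 * ugpa + 0.03 * mcat + 0.5 * rng.standard_normal(n)

        X = np.column_stack([ugpa, mcat])
        r2_cv = cross_val_score(LinearRegression(), X, gpa, cv=5, scoring="r2")
        print(f"cross-validated R^2 = {r2_cv.mean():.2f}")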

  18. 47 CFR 64.1801 - Geographic rate averaging and rate integration.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Geographic rate averaging and rate integration. 64.1801 Section 64.1801 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Geographic Rate Averaging and...

  19. Potential breeding distributions of U.S. birds predicted with both short-term variability and long-term average climate data.

    Science.gov (United States)

    Bateman, Brooke L; Pidgeon, Anna M; Radeloff, Volker C; Flather, Curtis H; VanDerWal, Jeremy; Akçakaya, H Resit; Thogmartin, Wayne E; Albright, Thomas P; Vavrus, Stephen J; Heglund, Patricia J

    2016-12-01

    Climate conditions, such as temperature or precipitation, averaged over several decades strongly affect species distributions, as evidenced by experimental results and a plethora of models demonstrating statistical relations between species occurrences and long-term climate averages. However, long-term averages can conceal climate changes that have occurred in recent decades and may not capture actual species occurrence well because the distributions of species, especially at the edges of their range, are typically dynamic and may respond strongly to short-term climate variability. Our goal here was to test whether bird occurrence can be predicted by covariates based on either short-term climate variability or long-term climate averages. We parameterized species distribution models (SDMs) based on either short-term variability or long-term average climate covariates for 320 bird species in the conterminous USA and tested whether any life-history trait-based guilds were particularly sensitive to short-term conditions. Models including short-term climate variability performed well based on their cross-validated area-under-the-curve (AUC) score (0.85), as did models based on long-term climate averages (0.84). Similarly, both models performed well compared to independent presence/absence data from the North American Breeding Bird Survey (independent AUC of 0.89 and 0.90, respectively). However, models based on short-term variability covariates more accurately classified true absences for most species (73% of true absences classified within the lowest quarter of environmental suitability vs. 68%). In addition, they have the advantage that they can reveal the dynamic relationship between species and their environment because they capture the spatial fluctuations of species potential breeding distributions. With this information, we can identify which species and guilds are sensitive to climate variability, identify sites of high conservation value where climate

  20. Rapid diagnostic tests for typhoid and paratyphoid (enteric) fever.

    Science.gov (United States)

    Wijedoru, Lalith; Mallett, Sue; Parry, Christopher M

    2017-05-26

    rates in the study populations ranged from 1% to 75% (median prevalence 24%, interquartile range (IQR) 11% to 46%). The included studies evaluated 16 different RDTs, and 16 studies compared two or more different RDTs. Only three studies used the Grade 1 reference standard, and only 11 studies recruited unselected febrile patients. Most included studies were from Asia, with five studies from sub-Saharan Africa. All of the RDTs were designed to detect S. Typhi infection only. Most studies evaluated three RDTs and their variants: TUBEX in 14 studies; Typhidot (Typhidot, Typhidot-M, and TyphiRapid-Tr02) in 22 studies; and the Test-It Typhoid immunochromatographic lateral flow assay, and its earlier prototypes (dipstick, latex agglutination) developed by the Royal Tropical Institute, Amsterdam (KIT), in nine studies. Meta-analyses showed an average sensitivity of 78% (95% confidence interval (CI) 71% to 85%) and specificity of 87% (95% CI 82% to 91%) for TUBEX; and an average sensitivity of 69% (95% CI 59% to 78%) and specificity of 90% (95% CI 78% to 93%) for all Test-It Typhoid and prototype tests (KIT). Across all forms of the Typhidot test, the average sensitivity was 84% (95% CI 73% to 91%) and specificity was 79% (95% CI 70% to 87%). When we based the analysis on the 13 studies of the Typhidot test that either reported indeterminate test results or where the test format means there are no indeterminate results, the average sensitivity was 78% (95% CI 65% to 87%) and specificity was 77% (95% CI 66% to 86%). We did not identify any difference in either sensitivity or specificity between TUBEX, Typhidot, and Test-It Typhoid tests when based on comparison to the 13 Typhidot studies where indeterminate results are either reported or not applicable. If TUBEX and Test-It Typhoid are compared to all Typhidot studies, the sensitivity of Typhidot was higher than that of Test-It Typhoid (difference 15%, 95% CI 2% to 28%), but other comparisons did not show a difference at the 95% level of CIs. In a

  1. Arrange and average algorithm for the retrieval of aerosol parameters from multiwavelength high-spectral-resolution lidar/Raman lidar data.

    Science.gov (United States)

    Chemyakin, Eduard; Müller, Detlef; Burton, Sharon; Kolgotin, Alexei; Hostetler, Chris; Ferrare, Richard

    2014-11-01

    We present the results of a feasibility study in which a simple, automated, and unsupervised algorithm, which we call the arrange and average algorithm, is used to infer microphysical parameters (complex refractive index, effective radius, total number, surface area, and volume concentrations) of atmospheric aerosol particles. The algorithm uses backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm as input information. Testing of the algorithm is based on synthetic optical data that are computed from prescribed monomodal particle size distributions and complex refractive indices that describe spherical, primarily fine mode pollution particles. We tested the performance of the algorithm for the "3 backscatter (β)+2 extinction (α)" configuration of a multiwavelength aerosol high-spectral-resolution lidar (HSRL) or Raman lidar. We investigated the degree to which the microphysical results retrieved by this algorithm depends on the number of input backscatter and extinction coefficients. For example, we tested "3β+1α," "2β+1α," and "3β" lidar configurations. This arrange and average algorithm can be used in two ways. First, it can be applied for quick data processing of experimental data acquired with lidar. Fast automated retrievals of microphysical particle properties are needed in view of the enormous amount of data that can be acquired by the NASA Langley Research Center's airborne "3β+2α" High-Spectral-Resolution Lidar (HSRL-2). It would prove useful for the growing number of ground-based multiwavelength lidar networks, and it would provide an option for analyzing the vast amount of optical data acquired with a future spaceborne multiwavelength lidar. The second potential application is to improve the microphysical particle characterization with our existing inversion algorithm that uses Tikhonov's inversion with regularization. This advanced algorithm has recently undergone development to allow automated and
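
    The paper's implementation details are not reproduced here, so the following is only a toy look-up-table interpretation of an "arrange and average" retrieval: tabulate candidate parameter sets with a stand-in forward model (a real retrieval would use Mie or similar scattering calculations), keep candidates whose "3β+2α" optical data lie within a tolerance of the measurement, and average their microphysical parameters.

        # Sketch: "arrange and average"-style retrieval (toy forward model, not Mie code)
        import numpy as np

        rng = np.random.default_rng(5)

        def forward(r_eff, m_re):
            """Stand-in forward model: returns 3 backscatter + 2 extinction
            coefficients as an arbitrary smooth function of the parameters."""
            base = np.array([1.0, 0.7, 0.3, 2.0, 1.4])
            return base * r_eff * (1 + 0.5 * (m_re - 1.4))

        # Arrange: tabulate candidate parameter sets and their optical data
        r_grid = np.linspace(0.05, 0.5, 50)      # effective radius candidates (µm)
        m_grid = np.linspace(1.3, 1.7, 20)       # real refractive index candidates
        table = [(r, m, forward(r, m)) for r in r_grid for m in m_grid]

        # Simulated "3 beta + 2 alpha" measurement: known truth plus 5% noise
        meas = forward(0.2, 1.5) * (1 + 0.05 * rng.standard_normal(5))

        # Average: keep candidates reproducing the measurement within tolerance
        close = [(r, m) for r, m, opt in table
                 if np.max(np.abs(opt - meas) / meas) < 0.10]
        r_est = np.mean([r for r, _ in close])
        m_est = np.mean([m for _, m in close])
        print(f"retrieved r_eff ≈ {r_est:.3f} µm, m_re ≈ {m_est:.3f}")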

  2. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is solved with complex mathematical models based on a mean square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model, but it does depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. Also, the solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
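
    A minimal sketch of the classical b-dot control law whose damping effect the technique exploits; the gain, sampling interval, and field samples below are placeholders, not flight values.

        # Sketch: classical b-dot control law (gain and field values are placeholders)
        import numpy as np

        def bdot_dipole(b_now, b_prev, dt, k=5e4):
            """Magnetic dipole command m = -k * dB/dt (body frame)."""
            b_dot = (b_now - b_prev) / dt
            return -k * b_dot      # A*m^2; saturate to torquer limits in practice

        b_prev = np.array([18e-6, -4e-6, 31e-6])   # T, sampled body-frame field
        b_now = np.array([17e-6, -3e-6, 32e-6])
        print(bdot_dipole(b_now, b_prev, dt=0.5))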

  3. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t≥0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
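
    Schematically (a rendering of the general idea, not the paper's exact statement), the equality in question is

        \lim_{T\to\infty} \frac{1}{T}\int_0^T f(X(t))\,dt \;=\; \int_S f(x)\,dF(x),

    where F denotes the long-run frequency distribution of the sample path {X(t), t≥0}.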

  4. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; 
Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  5. A study of statistical tests for near-real-time materials accountancy using field test data of Tokai reprocessing plant

    International Nuclear Information System (INIS)

    Ihara, Hitoshi; Nishimura, Hideo; Ikawa, Koji; Miura, Nobuyuki; Iwanaga, Masayuki; Kusano, Toshitsugu.

    1988-03-01

    A Near-Real-Time Materials Accountancy (NRTA) system has been developed as an advanced safeguards measure for the PNC Tokai Reprocessing Plant; a minicomputer system for NRTA data processing was designed and constructed. A full-scale field test was carried out as a JASPAS (Japan Support Program for Agency Safeguards) project with the Agency's participation, and the NRTA data processing system was used. Using this field test data, an investigation of the detection power of statistical tests under real circumstances was carried out for five statistical tests, i.e., a significance test of MUF, the CUMUF test, the average loss test, the MUF residual test, and Page's test on MUF residuals. The results show that the CUMUF test, the average loss test, the MUF residual test, and Page's test on MUF residuals are useful for detecting a significant loss or diversion. An unmeasured inventory estimation model for the PNC reprocessing plant was developed in this study. Using this model, the field test data from the C-1 to 85-2 campaigns were re-analyzed. (author)
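
    As an illustration of the last of these tests, a minimal sketch of a one-sided Page (CUSUM) test applied to standardized MUF residuals; the reference value k and decision threshold h are placeholders, not the values used in the field test.

        # Sketch: Page's (CUSUM) test on MUF residuals (k and h are placeholders)
        import numpy as np

        def page_test(residuals, k=0.5, h=4.0):
            """One-sided CUSUM: S_n = max(0, S_{n-1} + (x_n - k)); alarm when S_n > h.
            Residuals are assumed standardized (zero mean, unit variance under H0)."""
            s, alarms = 0.0, []
            for n, x in enumerate(residuals):
                s = max(0.0, s + (x - k))
                if s > h:
                    alarms.append(n)   # possible protracted loss detected
                    s = 0.0            # restart after an alarm
            return alarms

        rng = np.random.default_rng(6)
        muf_res = rng.standard_normal(30)
        muf_res[15:] += 0.8            # simulated small protracted loss
        print(page_test(muf_res))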

  6. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees

  7. Small female rib cage fracture in frontal sled tests.

    Science.gov (United States)

    Shaw, Greg; Lessley, David; Ash, Joseph; Poplin, Jerry; McMurry, Tim; Sochor, Mark; Crandall, Jeff

    2017-01-02

    The 2 objectives of this study are to (1) examine the rib and sternal fractures sustained by small stature elderly females in simulated frontal crashes and (2) determine how the findings are characterized by prior knowledge and field data. A test series was conducted to evaluate the response of 5 elderly (average age 76 years) female postmortem human subjects (PMHS), similar in mass and size to a 5th percentile female, in 30 km/h frontal sled tests. The subjects were restrained on a rigid planar seat by bilateral rigid knee bolsters, pelvic blocks, and a custom force-limited 3-point shoulder and lap belt. Posttest subject injury assessment included identifying rib cage fractures by means of a radiologist read of a posttest computed tomography (CT) and an autopsy. The data from a motion capture camera system were processed to provide chest deflection, defined as the movement of the sternum relative to the spine at the level of T8.  A complementary field data investigation involved querying the NASS-CDS database over the years 1997-2012. The targeted cases involved belted front seat small female passenger vehicle occupants over 40 years old who were injured in 25 to 35 km/h delta-V frontal crashes (11 to 1 o'clock). Peak upper shoulder belt tension averaged 1,970 N (SD = 140 N) in the sled tests. For all subjects, the peak x-axis deflection was recorded at the sternum with an average of -44.5 mm or 25% of chest depth. The thoracic injury severity based on the number and distribution of rib fractures yielded 4 subjects coded as Abbreviated Injury Scale (AIS) 3 (serious) and one as AIS 5 (critical). The NASS-CDS field data investigation of small females identified 205 occupants who met the search criteria. Rib fractures were reported for 2.7% of the female occupants. The small elderly test subjects sustained a higher number of rib cage fractures than expected in what was intended to be a minimally injurious frontal crash test condition. Neither field studies nor

  8. Chromospheric oscillations observed with OSO 8. III. Average phase spectra for Si II

    International Nuclear Information System (INIS)

    White, O.R.; Athay, R.G.

    1979-01-01

    Time series of intensity and Doppler-shift fluctuations in the Si II emission lines λ816.93 and λ817.45 are Fourier analyzed to determine the frequency variation of phase differences between intensity and velocity and between these two lines formed 300 km apart in the middle chromosphere. Average phase spectra show that oscillations between 2 and 9 mHz in the two lines have time delays from 35 to 40 s, which is consistent with the upward propagation of sound waves at 8.6-7.5 km s^-1. In this same frequency band near 3 mHz, maximum brightness leads maximum blueshift by 60°. At frequencies above 11 mHz, where the power spectrum is flat, the phase differences are uncertain, but approximately 65% of the cases indicate upward propagation. At these higher frequencies, the phase lead between intensity and blue Doppler shift ranges from 0° to 180° with an average value of 90°. However, the phase estimates in this upper band are corrupted by both aliasing and randomness inherent to the measured signals. Phase differences in the two narrow spectral features seen at 10.5 and 27 mHz in the power spectra are shown to be consistent with properties expected for aliases of the wheel rotation rate of the spacecraft wheel section.
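
    For reference, the quoted propagation speeds follow directly from the delay over the 300 km formation-height separation: v = d/τ gives 300 km / 35 s ≈ 8.6 km s^-1 and 300 km / 40 s = 7.5 km s^-1.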

  9. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  10. A boundary-optimized rejection region test for the two-sample binomial problem.

    Science.gov (United States)

    Gabriel, Erin E; Nason, Martha; Fay, Michael P; Follmann, Dean A

    2018-03-30

    Testing the equality of 2 proportions for a control group versus a treatment group is a well-researched statistical problem. In some settings, there may be strong historical data that allow one to reliably expect that the control proportion is one, or nearly so. While one-sample tests or comparisons to historical controls could be used, neither can rigorously control the type I error rate in the event the true control rate changes. In this work, we propose an unconditional exact test that exploits the historical information while controlling the type I error rate. We sequentially construct a rejection region by first maximizing the rejection region in the space where all controls have an event, subject to the constraint that our type I error rate does not exceed α for any true event rate; then with any remaining α we maximize the additional rejection region in the space where one control avoids the event, and so on. When the true control event rate is one, our test is the most powerful nonrandomized test for all points in the alternative space. When the true control event rate is nearly one, we demonstrate that our test has equal or higher mean power, averaging over the alternative space, than a variety of well-known tests. For the comparison of 4 controls and 4 treated subjects, our proposed test has higher power than all comparator tests. We demonstrate the properties of our proposed test by simulation and use our method to design a malaria vaccine trial. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
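
    A small sketch of the key property being exploited, under assumptions: for any fixed rejection region over the 4-versus-4 outcome space (the region below is a hypothetical example, not the authors' boundary-optimized one), the unconditional type I error is the supremum over the common event rate of the rejection probability, which can be checked numerically.

        # Sketch: checking that a rejection region controls unconditional type I error
        # (the region here is a hypothetical example, not the paper's optimized one)
        import numpy as np
        from scipy.stats import binom

        n_c = n_t = 4                      # 4 controls, 4 treated
        region = {(4, 0), (4, 1)}          # reject if all controls have events and
                                           # treated have at most one (illustrative)

        def type1(p):
            """P(reject) when control and treated share the same true event rate p."""
            return sum(binom.pmf(xc, n_c, p) * binom.pmf(xt, n_t, p)
                       for xc, xt in region)

        p_grid = np.linspace(0, 1, 1001)
        print(f"sup type I error = {max(type1(p) for p in p_grid):.4f}")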

  11. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  12. An application of commercial data averaging techniques in pulsed photothermal experiments

    International Nuclear Information System (INIS)

    Grozescu, I.V.; Moksin, M.M.; Wahab, Z.A.; Yunus, W.M.M.

    1997-01-01

    We present an application of a data averaging technique commonly implemented in many commercial digital oscilloscopes or waveform digitizers. The technique was used for transient data averaging in pulsed photothermal radiometry experiments. Photothermal signals are accompanied by a significant amount of noise, which affects the precision of the measurements. The effect of the noise level on photothermal signal parameters (in our particular case, the fitted decay time) is shown. The results of the analysis can be used in choosing the most effective averaging technique and estimating the averaging parameter values. This helps to reduce the data acquisition time while improving the signal-to-noise ratio.
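
    A minimal sketch of why transient averaging helps, assuming uncorrelated noise on a synthetic decaying transient: the residual noise on the average falls roughly as the square root of the number of averaged transients.

        # Sketch: transient averaging improves SNR roughly as sqrt(N)
        import numpy as np

        rng = np.random.default_rng(7)
        t = np.linspace(0, 5, 500)
        signal = np.exp(-t / 1.2)          # synthetic decaying photothermal transient
        for n_avg in (1, 16, 256):
            shots = signal + 0.3 * rng.standard_normal((n_avg, t.size))
            avg = shots.mean(axis=0)
            noise = np.std(avg - signal)
            print(f"N = {n_avg:4d}: residual noise ≈ {noise:.4f}")  # ~0.3/sqrt(N)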

  13. Runoff and leaching of metolachlor from Mississippi River alluvial soil during seasons of average and below-average rainfall.

    Science.gov (United States)

    Southwick, Lloyd M; Appelboom, Timothy W; Fouss, James L

    2009-02-25

    The movement of the herbicide metolachlor [2-chloro-N-(2-ethyl-6-methylphenyl)-N-(2-methoxy-1-methylethyl)acetamide] via runoff and leaching from 0.21 ha plots planted to corn on Mississippi River alluvial soil (Commerce silt loam) was measured for a 6-year period, 1995-2000. The first three years received normal rainfall (30 year average); the second three years experienced reduced rainfall. The 4-month periods prior to application plus the following 4 months after application were characterized by 1039 ± 148 mm of rainfall for 1995-1997 and by 674 ± 108 mm for 1998-2000. During the normal rainfall years 216 ± 150 mm of runoff occurred during the study seasons (4 months following herbicide application), accompanied by 76.9 ± 38.9 mm of leachate. For the low-rainfall years these amounts were 16.2 ± 18.2 mm of runoff (92% less than the normal years) and 45.1 ± 25.5 mm of leachate (41% less than the normal seasons). Runoff of metolachlor during the normal-rainfall seasons was 4.5-6.1% of application, whereas leaching was 0.10-0.18%. For the below-normal periods, these losses were 0.07-0.37% of application in runoff and 0.22-0.27% in leachate. When averages over the three normal and the three less-than-normal seasons were taken, a 35% reduction in rainfall was characterized by a 97% reduction in runoff loss and a 71% increase in leachate loss of metolachlor on a percent of application basis. The data indicate an increase in preferential flow in the leaching movement of metolachlor from the surface soil layer during the reduced rainfall periods. Even with increased preferential flow through the soil during the below-average rainfall seasons, leachate loss (percent of application) of the herbicide remained below 0.3%. Compared to the average rainfall seasons of 1995-1997, the below-normal seasons of 1998-2000 were characterized by a 79% reduction in total runoff and leachate flow and by a 93% reduction in corresponding metolachlor movement via these routes

  14. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  15. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body

  16. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    Wetterich, C.

    1991-08-01

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  17. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is a much-studied quantity in research on complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique based on finite patterns of the integral of the geodesic distance with respect to a self-similar measure on the Sierpinski tetrahedron.
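
    For a concrete handle on the quantity being studied, a two-line sketch using networkx; the level-0 skeleton of the Sierpinski tetrahedron is just the complete graph on its four vertices (higher-level skeleton networks would follow the paper's construction, which is not reproduced here).

        # Sketch: numerical average geodesic distance of a graph (not the paper's
        # asymptotic derivation; just the quantity being studied)
        import networkx as nx

        g = nx.tetrahedral_graph()                  # level-0 skeleton: K4
        print(nx.average_shortest_path_length(g))  # mean over all vertex pairs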

  18. On quality control procedures for solar radiation and meteorological measures, from subhourly to monthly average time periods

    Science.gov (United States)

    Espinar, B.; Blanc, P.; Wald, L.; Hoyer-Klick, C.; Schroedter-Homscheidt, M.; Wanderer, T.

    2012-04-01

    Meteorological data measured by ground stations are often a key element in the development and validation of methods exploiting satellite images. These data are considered a reference against which satellite-derived estimates are compared. Long-term radiation and meteorological measurements are available from a large number of measuring stations. However, close examination of the data often reveals a lack of quality, frequently for extended periods of time. This lack of quality has been, in many cases, the reason for rejecting large amounts of available data. Data quality must be checked before use in order to guarantee the inputs for the methods used in modelling, monitoring, forecasting, etc. To control their quality, data should be subjected to several conditions or tests. After this checking, data that are not flagged by any of the tests are released as plausible data. In this work, a bibliographical survey of quality control tests has been performed for the common meteorological variables (ambient temperature, relative humidity and wind speed) and for the usual solar radiometric variables (the horizontal global and diffuse components of the solar radiation and the beam normal component). The different tests have been grouped according to the variable and the averaging time period (sub-hourly, hourly, daily and monthly averages). The quality tests may be classified as follows: • Range checks: tests that verify values are within a specific range. There are two types of range checks, those based on extrema and those based on rare observations. • Step checks: tests aimed at detecting unrealistic jumps or stagnation in the time series. • Consistency checks: tests that verify the relationship between two or more time series. The gathered quality tests are applicable to all latitudes, as they have not been optimized regionally or seasonally, with the aim of remaining generic. They have been applied to ground measurements in several geographic locations, what
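
    A toy sketch of the three families of checks listed above; all thresholds below are placeholders, and operational limits would come from the cited literature.

        # Sketch: the three families of QC flags (thresholds are placeholders)
        import numpy as np

        def qc_flags(ghi, dif, temp, dt_hours=1.0):
            """Return boolean flag arrays; True marks a suspect sample."""
            f_range = (ghi < 0) | (ghi > 1367) | (temp < -60) | (temp > 60)  # extrema
            f_step = np.abs(np.diff(temp, prepend=temp[0])) > 10 * dt_hours  # jumps
            f_cons = dif > ghi + 1e-6        # diffuse cannot exceed global
            return f_range, f_step, f_cons

        ghi = np.array([0.0, 350.0, 800.0, 640.0])   # W/m^2
        dif = np.array([0.0, 120.0, 900.0, 200.0])   # third sample inconsistent
        temp = np.array([12.0, 13.0, 35.0, 14.0])    # °C; jump flagged at sample 3
        print(qc_flags(ghi, dif, temp))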

  19. Higher cigarette prices influence cigarette purchase patterns.

    Science.gov (United States)

    Hyland, A; Bauer, J E; Li, Q; Abrams, S M; Higbee, C; Peppone, L; Cummings, K M

    2005-04-01

    To examine cigarette purchasing patterns of current smokers and to determine the effects of cigarette price on use of cheaper sources, discount/generic cigarettes, and coupons. Higher cigarette prices result in decreased cigarette consumption, but price sensitive smokers may seek lower priced or tax-free cigarette sources, especially if they are readily available. This price avoidance behaviour costs states excise tax money and dampens the health impact of higher cigarette prices. Telephone survey data from 3602 US smokers who were originally in the COMMIT (community intervention trial for smoking cessation) study were analysed to assess cigarette purchase patterns, use of discount/generic cigarettes, and use of coupons. 59% reported engaging in a high price avoidance strategy, including 34% who regularly purchase from a low or untaxed venue, 28% who smoke a discount/generic cigarette brand, and 18% who report using cigarette coupons more frequently than they did five years ago. The report of engaging in a price avoidance strategy was associated with living within 40 miles of a state or Indian reservation with lower cigarette excise taxes, higher average cigarette consumption, white, non-Hispanic race/ethnicity, and female sex. Data from this study indicate that most smokers are price sensitive and seek out measures to purchase less expensive cigarettes, which may decrease future cessation efforts.

  20. Post-test analysis of ROSA-III experiment Run 702

    International Nuclear Information System (INIS)

    Koizumi, Yasuo; Kikuchi, Osamu; Soda, Kunihisa

    1980-01-01

    The purpose of the ROSA-III experiment with a scaled BWR test facility is to examine primary coolant thermal-hydraulic behavior and the performance of the ECCS during a postulated loss-of-coolant accident in a BWR. The results provide information for verification and improvement of reactor safety analysis codes. Run 702 assumed a 200% split break at the recirculation pump suction line under average core power without ECCS activation. Post-test analysis of the Run 702 experiment was made with the computer code RELAP4J. Agreement between the calculated system pressure and the experimental one was good. However, the calculated heater surface temperatures were higher than the measured ones, and the calculated axial temperature distribution differed in tendency from the experimental one. These results indicate the need to improve the analytical model of void distribution in the core and the nodalization in the pressure vessel, in order to make the analysis more realistic. They also indicate the need for characterization tests of ROSA-III test facility components, such as the jet pump and piping form loss coefficients; likewise, flow rate measurements must be increased and refined. (author)