WorldWideScience

Sample records for higher average test

  1. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
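
    A minimal sketch of the first-order detrending moving average (DMA) procedure the abstract builds on; this is illustrative Python, not the authors' code, and the synthetic Brownian input and function names are assumptions:

    ```python
    import numpy as np

    def dma_variance(x, n):
        """Variance of the series around its trailing n-point moving average
        (first-order DMA; higher orders replace the flat window with a
        polynomial fit over the same window)."""
        kernel = np.ones(n) / n
        ma = np.convolve(x, kernel, mode="valid")  # length len(x) - n + 1
        resid = x[n - 1:] - ma                     # align series with its MA
        return np.mean(resid ** 2)

    def hurst_from_dma(x, windows):
        """Estimate H from the scaling sigma_DMA^2(n) ~ n^(2H)."""
        v = [dma_variance(x, n) for n in windows]
        slope, _ = np.polyfit(np.log(windows), np.log(v), 1)
        return slope / 2

    # toy input: ordinary Brownian motion, for which H should be ~0.5
    rng = np.random.default_rng(0)
    x = np.cumsum(rng.standard_normal(100_000))
    print(hurst_from_dma(x, windows=[10, 20, 50, 100, 200]))
    ```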

  2. Average gluon and quark jet multiplicities at higher orders

    Energy Technology Data Exchange (ETDEWEB)

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

    We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q² terms by the renormalization group, in excellent agreement with the present world average.

  3. The true bladder dose: on average thrice higher than the ICRU reference

    International Nuclear Information System (INIS)

    Barillot, I.; Horiot, J.C.; Maingon, P.; Bone-Lepinoy, M.C.; D'Hombres, A.; Comte, J.; Delignette, A.; Feutray, S.; Vaillant, D.

    1996-01-01

    The aim of this study is to compare the ICRU dose to doses at the bladder base located from ultrasonography measurements. Since 1990, the dose delivered to the bladder during utero-vaginal brachytherapy was systematically calculated at 3 or 4 points representative of the bladder base determined with ultrasonography. The ICRU Reference Dose (IRD) from films, the Maximum Dose (Dmax), the Mean Dose (Dmean) representative of the dose received by a large area of bladder mucosa, the Reference Dose Rate (RDR) and the Mean Dose Rate (MDR) were recorded. Material: from 1990 to 1994, 198 measurements were performed in 152 patients; 98 patients were treated for cervix carcinomas, 54 for endometrial carcinomas. Methods: bladder complications were classified using the French-Italian Syllabus. The influence of doses and dose rates on complications was tested using a non-parametric t test. Results: on average, IRD is 21 Gy ± 12 Gy, Dmax is 51 Gy ± 21 Gy, and Dmean is 40 Gy ± 16 Gy. On average Dmax is thrice higher than IRD and Dmean twice higher than IRD. The same results are obtained for cervix and endometrium. Comparisons of dose rates were also performed: MDR is on average twice higher than RDR (RDR 48 cGy/h vs MDR 88 cGy/h). The five observed complications consist of incontinence only (3 G1, 1 G2, 1 G3). They are statistically correlated only with RDR, p=0.01 (46 cGy/h in patients without complications vs 74 cGy/h in patients with complications). However, the full responsibility of RT remains doubtful and should be shared with surgery in all cases. In summary: bladder mucosa seems to tolerate much higher doses than previously recorded without increased risk of severe sequelae. However, this finding is probably explained by our efforts to spare most of the bladder mucosa by 1) customised external irradiation therapy (4 fields, full bladder) and 2) reproduction of physiologic bladder filling during brachytherapy by intermittent clamping of the Foley catheter.

  4. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  5. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consists of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  6. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consists of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  7. Control of underactuated driftless systems using higher-order averaging theory

    OpenAIRE

    Vela, Patricio A.; Burdick, Joel W.

    2003-01-01

    This paper applies a recently developed "generalized averaging theory" to construct stabilizing feedback control laws for underactuated driftless systems. These controls exponentially stabilize in the average; the actual system may orbit around the average. Conditions under which the orbit collapses to the averaged trajectory are given. An example validates the theory, demonstrating its utility.

  8. Predicting Freshman Grade Point Average From College Admissions Test Scores and State High School Test Scores

    OpenAIRE

    Koretz, Daniel; Yu, C; Mbekeani, Preeya Pandya; Langi, M.; Dhaliwal, Tasminda Kaur; Braslow, David Arthur

    2016-01-01

    The current focus on assessing “college and career readiness” raises an empirical question: How do high school tests compare with college admissions tests in predicting performance in college? We explored this using data from the City University of New York and public colleges in Kentucky. These two systems differ in the choice of college admissions test, the stakes for students on the high school test, and demographics. We predicted freshman grade point average (FGPA) from high school GPA an...

  9. PSA testing for men at average risk of prostate cancer

    Directory of Open Access Journals (Sweden)

    Bruce K Armstrong

    2017-07-01

    Prostate-specific antigen (PSA) testing of men at normal risk of prostate cancer is one of the most contested issues in cancer screening. There is no formal screening program, but testing is common – arguably a practice that ran ahead of the evidence. Public and professional communication about PSA screening has been highly varied and potentially confusing for practitioners and patients alike. There has been much research and policy activity relating to PSA testing in recent years. Landmark randomised controlled trials have been reported; authorities – including the 2013 Prostate Cancer World Congress, the Prostate Cancer Foundation of Australia, Cancer Council Australia, and the National Health and Medical Research Council – have made or endorsed public statements and/or issued clinical practice guidelines; and the US Preventive Services Task Force is revising its recommendations. But disagreement continues. The contention is partly over what the new evidence means. It is also a result of different valuing and prioritisation of outcomes that are hard to compare: prostate cancer deaths prevented (a small and disputed number); prevention of metastatic disease (somewhat more common); and side-effects of treatment such as incontinence, impotence and bowel trouble (more common again). A sizeable proportion of men diagnosed through PSA testing (somewhere between 20% and 50%) would never have had prostate cancer symptoms sufficient to prompt investigation; many of these men are older, with competing comorbidities. It is a complex picture. Below are four viewpoints from expert participants in the evolving debate, commissioned for this cancer screening themed issue of Public Health Research & Practice. We asked the authors to respond to the challenge of PSA testing of asymptomatic, normal-risk men. They raise important considerations: uncertainty, harms, the trustworthiness and interpretation of the evidence, cost (e.g. of using multiparametric

  10. Predicting Freshman Grade Point Average From College Admissions Test Scores and State High School Test Scores

    Directory of Open Access Journals (Sweden)

    Daniel Koretz

    2016-09-01

    The current focus on assessing “college and career readiness” raises an empirical question: How do high school tests compare with college admissions tests in predicting performance in college? We explored this using data from the City University of New York and public colleges in Kentucky. These two systems differ in the choice of college admissions test, the stakes for students on the high school test, and demographics. We predicted freshman grade point average (FGPA) from high school GPA and both college admissions and high school tests in mathematics and English. In both systems, the choice of tests had only trivial effects on the aggregate prediction of FGPA. Adding either test to an equation that included the other had only trivial effects on prediction. Although the findings suggest that the choice of test might advantage or disadvantage different students, it had no substantial effect on the over- and underprediction of FGPA for students classified by race-ethnicity or poverty.

  11. 76 FR 28947 - Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight, and Public Meeting...

    Science.gov (United States)

    2011-05-19

    ...-0015] RIN 2132-AB01 Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight, and... of proposed rulemaking (NPRM) regarding the calculation of average passenger weights and test vehicle... passenger weights and actual transit vehicle loads. Specifically, FTA proposed to change the average...

  12. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    Science.gov (United States)

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.

  13. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    Science.gov (United States)

    Gao, Peng

    2018-04-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out; the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.
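
    Schematically, the slow-fast setup described here has the form below (a generic sketch in standard averaging-principle notation, not the paper's exact equations):

    ```latex
    \begin{aligned}
    du^{\varepsilon} &= \bigl[\mathcal{A}u^{\varepsilon}
        + f(u^{\varepsilon}, v^{\varepsilon})\bigr]\,dt
        && \text{(slow: higher-order NLS type)}\\
    dv^{\varepsilon} &= \tfrac{1}{\varepsilon}\,\mathcal{B}v^{\varepsilon}\,dt
        + \tfrac{1}{\sqrt{\varepsilon}}\,dW_t
        && \text{(fast: stochastic reaction-diffusion)}\\
    d\bar{u} &= \bigl[\mathcal{A}\bar{u} + \bar{f}(\bar{u})\bigr]\,dt,
        \qquad \bar{f}(u) = \int f(u, v)\,\mu(dv)
    \end{aligned}
    ```

    Here μ is the stationary measure of the fast equation, and the Khasminskii technique gives the rate at which u^ε converges strongly to the averaged solution ū as ε → 0.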

  14. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    Science.gov (United States)

    Gao, Peng

    2018-06-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out; the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  15. 76 FR 13580 - Bus Testing; Calculation of Average Passenger Weight and Test Vehicle Weight

    Science.gov (United States)

    2011-03-14

    ... averages (See, Advisory Circular 120-27E, ``Aircraft Weight and Balance Control,'' June 10, 2005) and the...' needs, or they may choose to upgrade individual components, such as chassis, wheels, tires, brakes, or...

  16. Gender Gaps in High School GPA and ACT Scores: High School Grade Point Average and ACT Test Score by Subject and Gender. Information Brief 2014-12

    Science.gov (United States)

    ACT, Inc., 2014

    2014-01-01

    Female students who graduated from high school in 2013 averaged higher grades than their male counterparts in all subjects, but male graduates earned higher scores on the math and science sections of the ACT. This information brief looks at high school grade point average and ACT test score by subject and gender.

  17. An Extended Quadratic Frobenius Primality Test with Average and Worst Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t… for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^(-143) for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller…-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.
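
    For context, a sketch of the Miller-Rabin baseline that EQFT is benchmarked against (the standard algorithm; EQFT itself is not reproduced here). Each round catches a composite with probability at least 3/4, so t rounds err with probability at most 4^(-t):

    ```python
    import random

    def miller_rabin(n, t=2):
        """Return True if n is probably prime; error probability <= 4**-t."""
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        d, s = n - 1, 0
        while d % 2 == 0:          # write n - 1 = 2**s * d with d odd
            d //= 2
            s += 1
        for _ in range(t):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False        # a witnesses that n is composite
        return True

    print(miller_rabin(2**127 - 1))  # True: a known Mersenne prime
    ```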

  18. An Extended Quadratic Frobenius Primality Test with Average- and Worst-Case Error Estimate

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2006-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t… for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^(-143) for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller…-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.

  19. An Extended Quadratic Frobenius Primality Test with Average Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2001-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t… for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^(-143) for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller…-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.

  20. Raven's Test Performance of Sub-Saharan Africans: Average Performance, Psychometric Properties, and the Flynn Effect

    Science.gov (United States)

    Wicherts, Jelte M.; Dolan, Conor V.; Carlson, Jerry S.; van der Maas, Han L. J.

    2010-01-01

    This paper presents a systematic review of published data on the performance of sub-Saharan Africans on Raven's Progressive Matrices. The specific goals were to estimate the average level of performance, to study the Flynn Effect in African samples, and to examine the psychometric meaning of Raven's test scores as measures of general intelligence.…

  1. Changes in Student Populations and Average Test Scores of Dutch Primary Schools

    Science.gov (United States)

    Luyten, Hans; de Wolf, Inge

    2011-01-01

    This article focuses on the relation between student population characteristics and average test scores per school in the final grade of primary education from a dynamic perspective. Aggregated data of over 5,000 Dutch primary schools covering a 6-year period were used to study the relation between changes in school populations and shifts in mean…

  2. 77 FR 74452 - Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight

    Science.gov (United States)

    2012-12-14

    ... require FTA to work with bus manufacturers and transit agencies to establish a new pass/ fail standard for... buses from the current value of 150 pounds to a new value of 175 pounds. This increase was proposed to... new pass/fail standards that require a more comprehensive review of its overall bus testing program...

  3. A benchmark test of computer codes for calculating average resonance parameters

    International Nuclear Information System (INIS)

    Ribon, P.; Thompson, A.

    1983-01-01

    A set of resonance parameters has been generated from known, but secret, average values; the parameters have then been adjusted to mimic experimental data by including the effects of Doppler broadening, resolution broadening and statistical fluctuations. Average parameters calculated from the dataset by various computer codes are compared with each other, and also with the true values. The benchmark test is fully described in the report NEANDC160-U (NEA Data Bank Newsletter No. 27 July 1982); the present paper is a summary of this document. (Auth.)

  4. Testing VRIN framework: Resource value and rareness as sources of competitive advantage and above average performance

    OpenAIRE

    Talaja, Anita

    2012-01-01

    In this study, a structural equation model that analyzes the impact of resource and capability characteristics, more specifically value and rareness, on sustainable competitive advantage and above-average performance is developed and empirically tested. According to the VRIN framework, if a company possesses and exploits valuable, rare, inimitable and non-substitutable resources and capabilities, it will achieve sustainable competitive advantage. Although the above-mentioned statement is widely...

  5. Contemporary and prospective fuel cycles for WWER-440 based on new assemblies with higher uranium capacity and higher average fuel enrichment

    International Nuclear Information System (INIS)

    Gagarinskiy, A.A.; Saprykin, V.V.

    2009-01-01

    RRC 'Kurchatov Institute' has performed an extensive cycle of calculations intended to validate the opportunities for improving different fuel cycles for WWER-440 reactors. Work was performed to upgrade and improve WWER-440 fuel cycles on the basis of second-generation fuel assemblies allowing core thermal power to be uprated to 107-108 % of its nominal value (1375 MW), while maintaining the same fuel operation lifetime. Currently, intensive work is underway to develop fuel cycles based on second-generation assemblies with higher fuel capacity and average fuel enrichment per assembly increased up to 4.87 % of U-235. The fuel capacity of second-generation assemblies was increased by eliminating the central apertures of the fuel pellets and by extending the pellet diameter through reduced fuel cladding thickness. This paper summarizes the results of work performed in the field of WWER-440 fuel cycle modernization, and presents yet unemployed opportunities and prospects for further improvement of WWER-440 neutronic and operating parameters by means of additional optimization of the fuel assembly designs and fuel element arrangements applied. (Authors)

  6. The weighted average cost of capital over the lifecycle of the firm: Is the overinvestment problem of mature firms intensified by a higher WACC?

    Directory of Open Access Journals (Sweden)

    Carlos S. Garcia

    2016-08-01

    Firm lifecycle theory predicts that the Weighted Average Cost of Capital (WACC) will tend to fall over the lifecycle of the firm (Mueller, 2003, p. 80-81). However, given that previous research finds that corporate governance deteriorates as firms get older (Mueller and Yun, 1998; Saravia, 2014), there is good reason to suspect that the opposite could be the case, that is, that the WACC is higher for older firms. Since our literature review indicates that no direct tests to clarify this question have been carried out until now, this paper aims to fill the gap by testing this prediction empirically. Our findings support the proposition that the WACC of younger firms is higher than that of mature firms. Thus, we find that the mature firm overinvestment problem is not intensified by a higher cost of capital; on the contrary, our results suggest that mature firms manage to invest in negative net present value projects even though they have access to cheaper capital. This finding sheds new light on the magnitude of the corporate governance problems found in mature firms.

  7. Acceleration test with mixed higher harmonics in HIMAC

    International Nuclear Information System (INIS)

    Kanazawa, M.; Sugiura, A.; Misu, T.

    2004-01-01

    In the HIMAC synchrotron, beam tests with a magnetically loaded cavity have been performed. This cavity has a very low Q-value of about 0.5, and higher harmonics can be added to the fundamental acceleration frequency. In the tested system for higher harmonics, the waveform of a DDS (Direct Digital Synthesizer) can be rewritten, so an arbitrary waveform can be used for beam acceleration. In the beam test, second and third harmonic waves were added to the fundamental acceleration frequency, and increases in the accelerated beam intensity were achieved. In this paper, the results of the beam test and the acceleration system are presented. (author)

  8. The Use of Tests in Admissions to Higher Education.

    Science.gov (United States)

    Fruen, Mary

    1978-01-01

    There are both strengths and weaknesses of using standardized test scores as a criterion for admission to institutions of higher education. The relative importance of scores is dependent on the institution's degree of selectivity. In general, decision processes and admissions criteria are not well defined. Advantages of test scores include: use of…

  9. Beauty is in the ease of the beholding: A neurophysiological test of the averageness theory of facial attractiveness

    Science.gov (United States)

    Trujillo, Logan T.; Jankowitsch, Jessica M.; Langlois, Judith H.

    2014-01-01

    Multiple studies show that people prefer attractive over unattractive faces. But what is an attractive face and why is it preferred? Averageness theory claims that faces are perceived as attractive when their facial configuration approximates the mathematical average facial configuration of the population. Conversely, faces that deviate from this average configuration are perceived as unattractive. The theory predicts that both attractive and mathematically averaged faces should be processed more fluently than unattractive faces, whereas the averaged faces should be processed marginally more fluently than the attractive faces. We compared neurocognitive and behavioral responses to attractive, unattractive, and averaged human faces to test these predictions. We recorded event-related potentials (ERPs) and reaction times (RTs) from 48 adults while they discriminated between human and chimpanzee faces. Participants categorized averaged and high attractive faces as “human” faster than low attractive faces. The posterior N170 (150 – 225 ms) face-evoked ERP component was smaller in response to high attractive and averaged faces versus low attractive faces. Single-trial EEG analysis indicated that this reduced ERP response arose from the engagement of fewer neural resources and not from a change in the temporal consistency of how those resources were engaged. These findings provide novel evidence that faces are perceived as attractive when they approximate a facial configuration close to the population average and suggest that processing fluency underlies preferences for attractive faces. PMID:24326966

  10. Field test analysis of concentrator photovoltaic system focusing on average photon energy and temperature

    Science.gov (United States)

    Husna, Husyira Al; Ota, Yasuyuki; Minemoto, Takashi; Nishioka, Kensuke

    2015-08-01

    The concentrator photovoltaic (CPV) system is unique and different from the common flat-plate PV system. It uses a multi-junction solar cell and a Fresnel lens to concentrate direct solar radiation onto the cell while tracking the sun throughout the day. The cell efficiency can reach over 40% under a high concentration ratio. In this study, we analyzed a one-year set of environmental condition data from the University of Miyazaki, Japan, where the CPV system was installed. The performance ratio (PR) was used to describe the system’s performance, while the average photon energy (APE) was used to describe the spectrum distribution at the site where the CPV system was installed. A circuit simulator network was used to simulate the CPV system’s electrical characteristics under various environmental conditions. We found that the PR of the CPV system depends on the APE level rather than on the cell temperature.
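
    As a rough sketch of the APE metric itself (with made-up spectral data; not the authors' pipeline): APE is the integrated spectral irradiance divided by the integrated photon flux, expressed in eV:

    ```python
    import numpy as np

    H = 6.626e-34   # Planck constant, J s
    C = 2.998e8     # speed of light, m/s
    Q = 1.602e-19   # J per eV

    def average_photon_energy(wavelength_nm, irradiance):
        """APE (eV) = integrated irradiance / integrated photon flux,
        with photon flux density E(lambda) * lambda / (h c)."""
        lam_m = wavelength_nm * 1e-9
        photon_flux = irradiance * lam_m / (H * C)
        energy = np.trapz(irradiance, wavelength_nm)   # W m^-2
        flux = np.trapz(photon_flux, wavelength_nm)    # photons m^-2 s^-1
        return energy / (flux * Q)

    # illustrative flat spectrum over 350-1050 nm (hypothetical data)
    lam = np.linspace(350, 1050, 701)
    print(average_photon_energy(lam, np.ones_like(lam)))  # ~1.8 eV
    ```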

  11. Average structure of the upper earth mantle and crust between Albuquerque and the Nevada Test Site

    International Nuclear Information System (INIS)

    Garbin, H.D.

    1979-08-01

    Models of Earth structures were constructed by inverting seismic data obtained from nuclear events with a 1600-m-long laser strain meter. With these models the general structure of the earth's upper mantle and crust between Albuquerque and the Nevada Test Site was determined. 3 figures, 3 tables

  12. FY-2016 Methyl Iodide Higher NOx Adsorption Test Report

    Energy Technology Data Exchange (ETDEWEB)

    Soelberg, Nicholas Ray [Idaho National Lab. (INL), Idaho Falls, ID (United States); Watson, Tony Leroy [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    Deep-bed methyl iodide adsorption testing has continued in Fiscal Year 2016 under the Department of Energy (DOE) Fuel Cycle Technology (FCT) Program Offgas Sigma Team to further research and advance the technical maturity of solid sorbents for capturing iodine-129 in off-gas streams during used nuclear fuel reprocessing. Adsorption testing with higher levels of NO (approximately 3,300 ppm) and NO2 (up to about 10,000 ppm) indicates that high-efficiency iodine capture by silver aerogel remains possible. Maximum iodine decontamination factors (DFs, the ratio of the iodine flowrate in the sorbent bed inlet gas to the iodine flowrate in the outlet gas) exceeded 3,000 until bed breakthrough rapidly decreased the DF levels to as low as about 2, when the adsorption capability was near depletion. After breakthrough, nearly all of the uncaptured iodine that remains in the bed outlet gas stream is no longer in the form of the original methyl iodide. The methyl iodide molecules are cleaved in the sorbent bed, even after iodine adsorption is no longer efficient, so that uncaptured iodine is in the form of iodine species soluble in caustic scrubber solutions, detected and reported here as diatomic I2. The mass transfer zone depths were estimated at 8 inches, somewhat deeper than the 2-5 inch range estimated for both silver aerogels and silver zeolites in prior deep-bed tests, which had lower NOx levels. The maximum iodine adsorption capacity and silver utilization for these higher-NOx tests, at about 5-15% of the original sorbent mass and about 12-35% of the total silver, respectively, were lower than trends from prior silver aerogel and silver zeolite tests with lower NOx levels. Additional deep-bed testing and analyses are recommended to expand the database for organic iodide adsorption and increase the technical maturity of iodine adsorption processes.

  13. Warfarin maintenance dose in older patients: higher average dose and wider dose frequency distribution in patients of African ancestry than those of European ancestry.

    Science.gov (United States)

    Garwood, Candice L; Clemente, Jennifer L; Ibe, George N; Kandula, Vijay A; Curtis, Kristy D; Whittaker, Peter

    2010-06-15

    Studies report that warfarin doses required to maintain therapeutic anticoagulation decrease with age; however, these studies almost exclusively enrolled patients of European ancestry. Consequently, universal application of dosing paradigms based on such evidence may be confounded because ethnicity also influences dose. Therefore, we determined if warfarin dose decreased with age in Americans of African ancestry, if older African and European ancestry patients required different doses, and if their daily dose frequency distributions differed. Our chart review examined 170 patients of African ancestry and 49 patients of European ancestry cared for in our anticoagulation clinic. We calculated the average weekly dose required for each stable, anticoagulated patient to maintain an international normalized ratio of 2.0 to 3.0, determined dose averages for age groups (including 70- to 79-year-old and >80-year-old groups) and plotted dose as a function of age. The maintenance dose in patients of African ancestry decreased with age. Older patients of African ancestry required higher average weekly doses than patients of European ancestry: 33% higher in the 70- to 79-year-old group (38.2±1.9 vs. 28.8±1.7 mg; P=0.006) and 52% in the >80-year-old group (33.2±1.7 vs. 21.8±3.8 mg; P=0.011). Therefore, 43% of older patients of African ancestry required daily doses >5 mg and hence would have been under-dosed using current starting-dose guidelines. The dose frequency distribution was wider for older patients of African ancestry compared to those of European ancestry. These findings in patients of African ancestry indicate that strategies for initiating warfarin therapy based on studies of patients of European ancestry could result in insufficient anticoagulation and thereby potentially increase their thromboembolism risk. Copyright 2010 Elsevier Inc. All rights reserved.

  14. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  15. Dose calculation for photon-emitting brachytherapy sources with average energy higher than 50 keV: report of the AAPM and ESTRO.

    Science.gov (United States)

    Perez-Calatayud, Jose; Ballester, Facundo; Das, Rupak K; Dewerd, Larry A; Ibbott, Geoffrey S; Meigooni, Ali S; Ouhib, Zoubir; Rivard, Mark J; Sloboda, Ron S; Williamson, Jeffrey F

    2012-05-01

    Recommendations of the American Association of Physicists in Medicine (AAPM) and the European Society for Radiotherapy and Oncology (ESTRO) on dose calculations for high-energy (average energy higher than 50 keV) photon-emitting brachytherapy sources are presented, including the physical characteristics of specific ¹⁹²Ir, ¹³⁷Cs, and ⁶⁰Co source models. This report has been prepared by the High Energy Brachytherapy Source Dosimetry (HEBD) Working Group. This report includes considerations in the application of the TG-43U1 formalism to high-energy photon-emitting sources with particular attention to phantom size effects, interpolation accuracy dependence on dose calculation grid size, and dosimetry parameter dependence on source active length. Consensus datasets for commercially available high-energy photon sources are provided, along with recommended methods for evaluating these datasets. Recommendations on dosimetry characterization methods, mainly using experimental procedures and Monte Carlo, are established and discussed. Also included are methodological recommendations on detector choice, detector energy response characterization and phantom materials, and measurement specification methodology. Uncertainty analyses are discussed and recommendations for high-energy sources without consensus datasets are given. Recommended consensus datasets for high-energy sources have been derived for sources that were commercially available as of January 2010. Data are presented according to the AAPM TG-43U1 formalism, with modified interpolation and extrapolation techniques of the AAPM TG-43U1S1 report for the 2D anisotropy function and radial dose function.
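
    For reference, the TG-43U1 two-dimensional dose-rate equation that the report's consensus datasets feed (standard notation from the formalism):

    ```latex
    \dot{D}(r,\theta) = S_K \,\Lambda\,
      \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
    \qquad r_0 = 1\ \text{cm},\quad \theta_0 = \pi/2,
    ```

    where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L the radial dose function, and F the 2D anisotropy function.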

  16. Unit testing as a teaching tool in higher education

    Directory of Open Access Journals (Sweden)

    Peláez Canek

    2016-01-01

    Unit testing in the programming world has had a profound impact on the way modern complex systems are developed. Many Open Source and Free Software projects encourage (and in some cases, mandate) the use of unit tests for new code submissions, and many software companies around the world have incorporated unit testing as part of their standard development practices. And although not all software engineers use them, very few (if any) object to their use. However, there is almost no research available pertaining to the use of unit tests as a teaching tool in introductory programming courses. I have been teaching introductory programming courses in the Computer Sciences program at the Sciences Faculty of the National Autonomous University of Mexico for almost ten years, and since 2013 I have been using unit testing as a teaching tool in those courses. The intent of this paper is to discuss the results of this experience.

  17. Average Skin-Friction Drag Coefficients from Tank Tests of a Parabolic Body of Revolution (NACA RM-10)

    Science.gov (United States)

    Mottard, Elmo J; Loposer, J Dan

    1954-01-01

    Average skin-friction drag coefficients were obtained from boundary-layer total-pressure measurements on a parabolic body of revolution (NACA RM-10, basic fineness ratio 15) in water at Reynolds numbers from 4.4 × 10⁶ to 70 × 10⁶. The tests were made in the Langley tank no. 1 with the body sting-mounted at a depth of two maximum body diameters. The arithmetic mean of three drag measurements taken around the body was in good agreement with flat-plate results, but, apparently because of the slight surface wave caused by the body, the distribution of the boundary layer around the body was not uniform over part of the Reynolds number range.

  18. Results of faecal immunochemical test for colorectal cancer screening, in average risk population, in a cohort of 1389 subjects.

    Science.gov (United States)

    Miuţescu, Bogdan; Sporea, Ioan; Popescu, Alina; Bota, Simona; Iovănescu, Dana; Burlea, Amelia; Mos, Liana; Miuţescu, Eftimie

    2013-01-01

    The aim of this study is to evaluate the usefulness of the fecal immunochemical test (FIT) in colorectal cancer screening, detection of precancerous lesions and early colorectal cancer. The study evaluated asymptomatic patients with average risk (no personal or family antecedents of polyps or colorectal cancer), aged between 50 and 74 years. The presence of the occult haemorrhage was tested with the immunochemical faecal test Hem Check 1 (Veda Lab, France). The subjects were not requested to have any dietary or drug restrictions. Colonoscopy was recommended in all subjects that tested positive. In our study, we had a total of 1389 participants who met the inclusion criteria, with a mean age of 61.2 ± 12.8 years, 565 (40.7%) men and 824 (59.3%) women. FIT was positive in 87 individuals (6.3%). In 57/87 subjects (65.5%) with positive FIT, colonoscopy was performed, while the rest of the subjects refused or delayed the investigation. A number of 5 (8.8%) patients were not able to have a complete colonoscopy, due to neoplastic stenosis. The colonoscopies revealed in 10 cases (0.7%) cancer, in 29 cases (2.1%) advanced adenomas and in 15 cases (1.1%) non advanced adenomas from the total participants in the study. The colonoscopies performed revealed a greater percentage of advanced adenomas in the left colon compared to the right colon, 74.1% vs. 28.6% (p<0.001). In our study, FIT had a positivity rate of 6.3%. The detection rate for advanced neoplasia was 2.8% (0.7% for cancer, 2.1% for advanced adenomas) in our study group. Adherence to colonoscopy for FIT-positive subjects was 65.5%.

  19. Two Questions about Critical-Thinking Tests in Higher Education

    Science.gov (United States)

    Benjamin, Roger

    2014-01-01

    In this article, the author argues first, that critical-thinking skills do exist independent of disciplinary thinking skills and are not compromised by interaction effects with the major; and second, that standardized tests (e.g., the Collegiate Learning Assessment, or CLA, which is his example throughout the article) are the best way to measure…

  20. MODEL TESTING OF LOW PRESSURE HYDRAULIC TURBINE WITH HIGHER EFFICIENCY

    Directory of Open Access Journals (Sweden)

    V. K. Nedbalsky

    2007-01-01

    A design of a low-pressure turbine has been developed; it is covered by an invention patent and a useful model patent. Testing of the hydraulic turbine model was carried out with the turbine installed on a vertical shaft. The efficiency was equal to 76-78 %, which exceeds the efficiency of the known low-pressure blade turbines.

  1. Practicing the Test Produces Strength Equivalent to Higher Volume Training.

    Science.gov (United States)

    Mattocks, Kevin T; Buckner, Samuel L; Jessee, Matthew B; Dankel, Scott J; Mouser, J Grant; Loenneke, Jeremy P

    2017-09-01

    To determine if muscle growth is important for increasing muscle strength or if changes in strength can be entirely explained from practicing the strength test. Thirty-eight untrained individuals performed knee extension and chest press exercise for 8 wk. Individuals were randomly assigned to either a high-volume training group (HYPER) or a group just performing the one repetition maximum (1RM) strength test (TEST). The HYPER group performed four sets to volitional failure (~8RM-12RM), whereas the TEST group performed up to five attempts to lift as much weight as possible one time each visit. Data are presented as mean (90% confidence interval). The change in muscle size was greater in the HYPER group for both the upper and lower bodies at most but not all sites. The change in 1RM strength for both the upper body (difference of -1.1 [-4.8, 2.4] kg) and lower body (difference of 1.0 [-0.7, 2.8] kg for dominant leg) was not different between groups (similar for nondominant). Changes in isometric and isokinetic torque were not different between groups. The HYPER group observed a greater change in muscular endurance (difference of 2 [1,4] repetitions) only in the dominant leg. There were no differences in the change between groups in upper body endurance. There were between-group differences for exercise volume (mean [95% confidence interval]) of the dominant (difference of 11,049.3 [9,254.6-12,844.0] kg) leg (similar for nondominant) and chest press with the HYPER group completing significantly more total volume (difference of 13,259.9 [9,632.0-16,887.8] kg). These findings suggest that neither exercise volume nor the change in muscle size from training contributed to greater strength gains compared with just practicing the test.

  2. Destroying charged black holes in higher dimensions with test particles

    Science.gov (United States)

    Wu, Bin; Liu, Weiyang; Tang, Hao; Yue, Rui-Hong

    2017-07-01

    A possible way to destroy the Tangherlini Reissner-Nordström black hole is discussed in the spirit of Wald’s gedanken experiment. By neglecting radiation and self-force effects, the absorbing condition and destruction condition of the test point particle which is capable of destroying the black hole are obtained. We find that it is impossible to challenge the weak cosmic censorship for an initially extremal black hole in all dimensions. Instead, it is shown that the near extremal black hole will turn into a naked singularity in this particular process, in which case the allowed range of the particle’s energy is very narrow. The result indicates that the self-force effects may well change the outcome of the calculation.

  3. The use of averages and other summation quantities in the testing of evaluated fission product yield and decay data. Applications to ENDF/B(IV)

    International Nuclear Information System (INIS)

    Walker, W.H.

    1976-01-01

    Averages of some fission product properties can be obtained by multiplying the fission product yield for each fission product by the value of the property (e.g. mass, atomic number, mass defect) for that fission product and summing all significant contributions. These averages can be used to test the reliability of the yield set or provide useful data for reactor calculations. The report gives the derivation of these averages and discusses their application using the ENDF/B(IV) fission product library. The following quantities are treated here: the number of fission products per fission, ΣY_i; the average mass number and the average number of neutrons per fission; the average atomic number of the stable fission products and the average number of β-decays per fission; the average mass defect of the stable fission products and the total energy release per fission; the average decay energy per fission (beta, gamma and anti-neutrino); the average β-decay energy per fission; individual and group-averaged delayed neutron emission; and the total yield for each fission product element. Wherever it is meaningful to do so, a sum is subdivided into its light and heavy mass components. The most significant differences between calculated values based on ENDF/B(IV) and measurements are the β and γ decay energies for ²³⁵U thermal fission and delayed neutron yields for other fissile nuclides, most notably ²³⁸U. (author)
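
    A minimal sketch of the summation quantities described above, using a deliberately tiny hypothetical yield set (not ENDF/B(IV) data); a complete set would sum to about 2 fission products per fission:

    ```python
    # fission products as (mass number A, yield Y per fission) - illustrative only
    products = [(95, 0.065), (98, 0.058), (137, 0.062), (140, 0.063)]

    total_yield = sum(y for _, y in products)                 # Sigma Y_i
    avg_mass = sum(a * y for a, y in products) / total_yield  # yield-weighted <A>

    print(f"Sigma Y_i = {total_yield:.3f}")
    print(f"<A>       = {avg_mass:.2f}")
    ```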

  4. Raven’s test performance of sub-Saharan Africans: average performance, psychometric properties, and the Flynn Effect

    NARCIS (Netherlands)

    Wicherts, J.M.; Dolan, C.V.; Carlson, J.S.; van der Maas, H.L.J.

    2010-01-01

    This paper presents a systematic review of published data on the performance of sub-Saharan Africans on Raven's Progressive Matrices. The specific goals were to estimate the average level of performance, to study the Flynn Effect in African samples, and to examine the psychometric meaning of Raven's

  5. Pedestrian headform testing: inferring performance at impact speeds and for headform masses not tested, and estimating average performance in a range of real-world conditions.

    Science.gov (United States)

    Hutchinson, T Paul; Anderson, Robert W G; Searson, Daniel J

    2012-01-01

    Tests are routinely conducted where instrumented headforms are projected at the fronts of cars to assess pedestrian safety. Better information would be obtained by accounting for performance over the range of expected impact conditions in the field. Moreover, methods will be required to integrate the assessment of secondary safety performance with primary safety systems that reduce the speeds of impacts. Thus, we discuss how to estimate performance over a range of impact conditions from performance in one test and how this information can be combined with information on the probability of different impact speeds to provide a balanced assessment of pedestrian safety. Theoretical consideration is given to two distinct aspects of impact safety performance: the test impact severity (measured by the head injury criterion, HIC) at a speed at which a structure does not bottom out and the speed at which bottoming out occurs. Further consideration is given to an injury risk function, the distribution of impact speeds likely in the field, and the effect of primary safety systems on impact speeds. These are used to calculate curves that estimate injuriousness for combinations of test HIC, bottoming-out speed, and alternative distributions of impact speeds. The injuriousness of a structure that may be struck by the head of a pedestrian depends not only on the result of the impact test but also on the bottoming-out speed and the distribution of impact speeds. Example calculations indicate that the relationship between the test HIC and injuriousness extends over a larger range than is presently used by the European New Car Assessment Programme (Euro NCAP), that bottoming out at speeds only slightly higher than the test speed can significantly increase the injuriousness of an impact location, and that effective primary safety systems that reduce impact speeds significantly modify the relationship between the test HIC and injuriousness. Present testing regimes do not take fully into
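
    For reference, the head injury criterion used as the severity measure here is the standard windowed functional of the resultant head acceleration a(t) (in g, with t in seconds and the window t2 - t1 typically capped at 15 ms):

    ```latex
    \mathrm{HIC} = \max_{t_1 < t_2}\,
      (t_2 - t_1)\left[\frac{1}{t_2 - t_1}\int_{t_1}^{t_2} a(t)\,dt\right]^{2.5}
    ```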

  6. Test of Axel-Brink predictions by a discrete approach to resonance-averaged (n,γ) spectroscopy

    International Nuclear Information System (INIS)

    Raman, S.; Shahal, O.; Slaughter, G.G.

    1981-01-01

    The limitations imposed by Porter-Thomas fluctuations in the study of primary γ rays following neutron capture have been partly overcome by obtaining individual γ-ray spectra from 48 resonances in the ¹⁷³Yb(n,γ) reaction and summing them after appropriate normalizations. The resulting average radiation widths (and hence the γ-ray strength function) are in good agreement with the Axel-Brink predictions based on a giant dipole resonance model

  7. Noninjured Knees of Patients With Noncontact ACL Injuries Display Higher Average Anterior and Internal Rotational Knee Laxity Compared With Healthy Knees of a Noninjured Population.

    Science.gov (United States)

    Mouton, Caroline; Theisen, Daniel; Meyer, Tim; Agostinis, Hélène; Nührenbörger, Christian; Pape, Dietrich; Seil, Romain

    2015-08-01

    Excessive physiological anterior and rotational knee laxity is thought to be a risk factor for noncontact anterior cruciate ligament (ACL) injuries and inferior reconstruction outcomes, but no thresholds have been established to identify patients with increased laxity. (1) To determine if the healthy contralateral knees of ACL-injured patients have greater anterior and rotational knee laxity, leading to different laxity profiles (combination of laxities), compared with healthy control knees and (2) to set a threshold to help discriminate anterior and rotational knee laxity between these groups. Case-sectional study; Level of evidence, 3. A total of 171 healthy contralateral knees of noncontact ACL-injured patients (ACL-H group) and 104 healthy knees of control participants (CTL group) were tested for anterior and rotational laxity. Laxity scores (measurements corrected for sex and body mass) were used to classify knees as hypolax (score < -1), normolax, or hyperlax (score > 1). Proportions of patients in each group were compared using χ² tests. Receiver operating characteristic curves were computed to discriminate laxity between the groups. Odds ratios were calculated to determine the probability of being in the ACL-H group. The ACL-H group displayed greater laxity scores for anterior displacement and internal rotation in their uninjured knee compared with the CTL group (P < .05). Noninjured knees of patients with noncontact ACL injuries display different laxity values both for internal rotation and anterior displacement compared with healthy control knees. The identification of knee laxity profiles may be of relevance for primary and secondary prevention programs of noncontact ACL injuries. © 2015 The Author(s).

  8. The influence of age, education and experience on the grade point average (GPA) of trainees of nondestructive testing (NDT) courses

    International Nuclear Information System (INIS)

    Loterina, Roel A.; Relunia, Estrella D.

    2008-01-01

    The Philippine National Standard, PNS/ISO9712:2006, entitled ''Nondestructive Testing Qualification and Certification of Personnel'', requires education, training and experience to qualify personnel to take the National Certifying Body (NCB) examination. The NDT training courses offered by the Philippine Society for Nondestructive Testing (PSNT) in cooperation with the Philippine Nuclear Research Institute (PNRI) are designed to qualify trainees to take the NCB examination for NDT. (author)

  9. Vouchers, Tests, Loans, Privatization: Will They Help Tackle Corruption in Russian Higher Education?

    Science.gov (United States)

    Osipian, Ararat L.

    2009-01-01

    Higher education in Russia is currently being reformed. A standardized computer-graded test and educational vouchers were introduced to make higher education more accessible, fund it more effectively, and reduce corruption in admissions to public colleges. The voucher project failed and the test faces furious opposition. This paper considers…

  10. Prepharmacy predictors of success in pharmacy school: grade point averages, pharmacy college admissions test, communication abilities, and critical thinking skills.

    Science.gov (United States)

    Allen, D D; Bond, C A

    2001-07-01

    Good admissions decisions are essential for identifying successful students and good practitioners. Various parameters have been shown to have predictive power for academic success. Previous academic performance, the Pharmacy College Admissions Test (PCAT), and specific prepharmacy courses have been suggested as academic performance indicators. However, critical thinking abilities have not been evaluated. We evaluated the connection between academic success and each of the following predictive parameters: the California Critical Thinking Skills Test (CCTST) score, PCAT score, interview score, overall academic performance prior to admission at a pharmacy school, and performance in specific prepharmacy courses. We confirmed previous reports but demonstrated intriguing results in predicting practice-based skills. Critical thinking skills predict practice-based course success. Also, the CCTST and PCAT scores were significantly correlated (Pearson correlation [pc] = 0.448), consistent with a role for critical thinking skills in pharmacy practice courses and clerkships. Further study is needed to confirm this finding and determine which PCAT components predict critical thinking abilities.

  11. Introducing Vouchers and Standardized Tests for Higher Education in Russia: Expectations and Measurements

    OpenAIRE

    Osipian, Ararat

    2008-01-01

    The reform of higher education in Russia, based on standardized tests and educational vouchers, was intended to reduce inequalities in access to higher education. The initiative with the vouchers has failed and is by now already forgotten, while the national test is planned to be introduced nationwide in 2009. The national test, called to replace the present corrupt system of entry examinations, has experienced numerous problems so far and will likely have even more problems in the future. This ...

  12. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    Science.gov (United States)

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

    This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectance theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  13. The Changing Faces of Corruption in Georgian Higher Education: Access through Times and Tests

    Science.gov (United States)

    Orkodashvili, Mariam

    2012-01-01

    This article presents a comparative-historical analysis of access to higher education in Georgia. It describes the workings of corrupt channels during the Soviet and early post-Soviet periods and the role of standardized tests in fighting corruption in higher education admission processes after introduction of the Unified National Entrance…

  14. The effects of sweep numbers per average and protocol type on the accuracy of the p300-based concealed information test.

    Science.gov (United States)

    Dietrich, Ariana B; Hu, Xiaoqing; Rosenfeld, J Peter

    2014-03-01

    In the first of two experiments, we compared the accuracy of the P300 concealed information test protocol as a function of numbers of trials experienced by subjects and ERP averages analyzed by investigators. Contrary to Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), we found no evidence that 100 trial based averages are more accurate than 66 or 33 trial based averages (all numbers led to accuracies of 84-94 %). There was actually a trend favoring the lowest trial numbers. The second study compared numbers of irrelevant stimuli recalled and recognized in the 3-stimulus protocol versus the complex trial protocol (Rosenfeld in Memory detection: theory and application of the concealed information test, Cambridge University Press, New York, pp 63-89, 2011). Again, in contrast to expectations from Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), there were no differences between protocols, although there were more irrelevant stimuli recognized than recalled, and irrelevant 4-digit number group stimuli were neither recalled nor recognized as well as irrelevant city name stimuli. We therefore conclude that stimulus processing in the P300-based complex trial protocol-with no more than 33 sweep averages-is adequate to allow accurate detection of concealed information.
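
    The sweep-number question in this record comes down to averaging statistics: residual noise in an ERP average shrinks roughly as 1/sqrt(N), so the gain from 33 to 100 sweeps is modest when single-trial noise is moderate. A toy illustration (synthetic signal and noise levels chosen arbitrarily, not the study's data):

    ```python
    # Toy ERP-averaging demo: a 5 uV Gaussian "P300" buried in 20 uV noise,
    # averaged over 33/66/100 sweeps; residual noise follows ~noise/sqrt(N).
    import numpy as np

    rng = np.random.default_rng(1)
    fs, dur = 250, 1.0                                # Hz, seconds
    t = np.arange(0, dur, 1 / fs)
    p300 = 5e-6 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))

    for n_sweeps in (33, 66, 100):
        trials = p300 + rng.normal(0, 20e-6, (n_sweeps, t.size))
        erp = trials.mean(axis=0)                     # the averaged waveform
        noise_rms = np.sqrt(np.mean((erp - p300) ** 2))
        print(f"N={n_sweeps:3d}: residual noise {noise_rms * 1e6:.2f} uV "
              f"(theory ~ {20 / np.sqrt(n_sweeps):.2f} uV)")
    ```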

  15. Pigeons exhibit higher accuracy for chosen memory tests than for forced memory tests in duration matching-to-sample.

    Science.gov (United States)

    Adams, Allison; Santi, Angelo

    2011-03-01

    Following training to match 2- and 8-sec durations of feederlight to red and green comparisons with a 0-sec baseline delay, pigeons were allowed to choose to take a memory test or to escape the memory test. The effects of sample omission, increases in retention interval, and variation in trial spacing on selection of the escape option and accuracy were studied. During initial testing, escaping the test did not increase as the task became more difficult, and there was no difference in accuracy between chosen and forced memory tests. However, with extended training, accuracy for chosen tests was significantly greater than for forced tests. In addition, two pigeons exhibited higher accuracy on chosen tests than on forced tests at the short retention interval and greater escape rates at the long retention interval. These results have not been obtained in previous studies with pigeons when the choice to take the test or to escape the test is given before test stimuli are presented. It appears that task-specific methodological factors may determine whether a particular species will exhibit the two behavioral effects that were initially proposed as potentially indicative of metacognition.

  16. Standard Test Method for Measuring Neutron Fluence and Average Energy from 3H(d,n)4He Neutron Generators by Radioactivation Techniques 1

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers a general procedure for the measurement of the fast-neutron fluence rate produced by neutron generators utilizing the 3H(d,n)4He reaction. Neutrons so produced are usually referred to as 14-MeV neutrons, but range in energy depending on a number of factors. This test method does not adequately cover fusion sources where the velocity of the plasma may be an important consideration. 1.2 This test method uses threshold activation reactions to determine the average energy of the neutrons and the neutron fluence at that energy. At least three activities, chosen from an appropriate set of dosimetry reactions, are required to characterize the average energy and fluence. The required activities are typically measured by gamma ray spectroscopy. 1.3 The measurement of reaction products in their metastable states is not covered. If the metastable state decays to the ground state, the ground state reaction may be used. 1.4 The values stated in SI units are to be regarded as standard. No oth...

  17. Higher mind-brain development in successful leaders: testing a unified theory of performance.

    Science.gov (United States)

    Harung, Harald S; Travis, Frederick

    2012-05-01

    This study explored mind-brain characteristics of successful leaders as reflected in scores on the Brain Integration Scale, Gibbs's Socio-moral Reasoning questionnaire, and an inventory of peak experiences. These variables, which in previous studies distinguished world-class athletes and professional classical musicians from average-performing controls, were recorded in 20 Norwegian top-level managers and in 20 low-level managers-matched for age, gender, education, and type of organization (private or public). Top-level managers were characterized by higher Brain Integration Scale scores, higher levels of moral reasoning, and more frequent peak experiences. These multilevel measures could be useful tools in selection and recruiting of potential managers and in assessing leadership education and development programs. Future longitudinal research could further investigate the relationship between leadership success and these and other multilevel variables.

  18. Will Higher Education Pass "A Test of Leadership"? An Interview with Spellings Commission Chairman Charles Miller

    Science.gov (United States)

    Callan, Pat

    2007-01-01

    Charles Miller, former chairman of the University of Texas System's Board of Regents, chaired the recent Commission on the Future of Higher Education created by Secretary of Education Margaret Spellings. Here he is interviewed regarding the panel's widely discussed report, "A Test of Leadership," by Pat Callan, president of the National…

  19. Dropouts and Budgets: A Test of a Dropout Reduction Model among Students in Israeli Higher Education

    Science.gov (United States)

    Bar-Am, Ran; Arar, Osama

    2017-01-01

    This article deals with the problem of student dropout during the first year in a higher education institution. To date, no dropout-reduction model that works within a limited budget has been developed and tested among engineering students. This case study was conducted among first-year students taking evening classes in two practical engineering colleges in Israel.…

  20. Testes mass, but not sperm length, increases with higher levels of polyandry in an ancient sex model.

    Directory of Open Access Journals (Sweden)

    David E Vrech

    Full Text Available There is strong evidence that polyandrous taxa have evolved relatively larger testes than monogamous relatives. Sperm size may either increase or decrease across species with the risk or intensity of sperm competition. Scorpions represent an ancient lineage with direct, spermatophore-mediated sperm transfer and are particularly well suited for studies of sperm competition. This work aims to analyze for the first time the variables affecting testes mass, ejaculate volume and sperm length, according to the levels of polyandry, in species belonging to the Neotropical family Bothriuridae. Variables influencing testes mass and sperm length were identified by model selection analysis using the corrected Akaike Information Criterion. Testes mass varied greatly among the seven species analyzed, ranging from 1.6 ± 1.1 mg in Timogenes dorbignyi to 16.3 ± 4.5 mg in Brachistosternus pentheri, with an average of 8.4 ± 5.0 mg across all species. The relationship between testes mass and body mass was not significant. Body allocation to testes mass, taken as the Gonadosomatic Index, was high in Bothriurus cordubensis and Brachistosternus ferrugineus and low in Timogenes species. The best-fitting model for testes mass considered only polyandry as a predictor, with a positive influence. Model selection showed that body mass influenced sperm length negatively, but after correcting for body mass, none of the variables analyzed explained sperm length. Both body mass and testes mass positively influenced spermatophore volume. There was a strong phylogenetic effect on the model containing testes mass. As predicted by sperm competition theory, and in line with what happens in other arthropods, testes mass increased in species with higher levels of sperm competition and positively influenced spermatophore volume, but the data were not conclusive for sperm length.

  1. Higher emotional intelligence is related to lower test anxiety among students

    Science.gov (United States)

    Ahmadpanah, Mohammad; Keshavarz, Mohammadreza; Haghighi, Mohammad; Jahangard, Leila; Bajoghli, Hafez; Sadeghi Bahmani, Dena; Holsboer-Trachsler, Edith; Brand, Serge

    2016-01-01

    Background For students attending university courses, experiencing test anxiety (TA) dramatically impairs cognitive performance and success at exams. Whereas TA is a specific case of social phobia, emotional intelligence (EI) is an umbrella term covering interpersonal and intrapersonal skills, along with positive stress management, adaptability, and mood. In the present study, we tested the hypothesis that higher EI and lower TA are associated. Further, sex differences were explored. Method During an exam week, a total of 200 university students completed questionnaires covering sociodemographic information, TA, and EI. Results Higher scores on EI traits were associated with lower TA scores. Relative to male participants, female participants reported higher TA scores, but not EI scores. Intrapersonal and interpersonal skills and mood predicted low TA, while sex, stress management, and adaptability were excluded from the equation. Conclusion The pattern of results suggests that efforts to improve intrapersonal and interpersonal skills, and mood might benefit students with high TA. Specifically, social commitment might counteract TA. PMID:26834474

  2. The Generalized Higher Criticism for Testing SNP-Set Effects in Genetic Association Studies

    Science.gov (United States)

    Barnett, Ian; Mukherjee, Rajarshi; Lin, Xihong

    2017-01-01

    It is of substantial interest to study the effects of genes, genetic pathways, and networks on the risk of complex diseases. These genetic constructs each contain multiple SNPs, which are often correlated and function jointly, and might be large in number. However, only a sparse subset of SNPs in a genetic construct is generally associated with the disease of interest. In this article, we propose the generalized higher criticism (GHC) to test for the association between an SNP set and a disease outcome. The higher criticism is a test traditionally used in high-dimensional signal detection settings when marginal test statistics are independent and the number of parameters is very large. However, these assumptions do not always hold in genetic association studies, due to linkage disequilibrium among SNPs and the finite number of SNPs in an SNP set in each genetic construct. The proposed GHC overcomes the limitations of the higher criticism by allowing for arbitrary correlation structures among the SNPs in an SNP-set, while performing accurate analytic p-value calculations for any finite number of SNPs in the SNP-set. We obtain the detection boundary of the GHC test. We compared empirically using simulations the power of the GHC method with existing SNP-set tests over a range of genetic regions with varied correlation structures and signal sparsity. We apply the proposed methods to analyze the CGEM breast cancer genome-wide association study. Supplementary materials for this article are available online. PMID:28736464
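
    For orientation, the classical statistic that the GHC generalizes can be computed in a few lines. The sketch below implements the standard higher criticism of Donoho and Jin on sorted p-values; it assumes independent marginal tests, which is precisely the assumption the GHC relaxes for correlated SNPs (the data are simulated, and the alpha0 truncation is a conventional choice):

    ```python
    # Classical higher criticism: maximal standardized exceedance of the
    # empirical p-value CDF over the uniform CDF (independent-test setting).
    import numpy as np

    def higher_criticism(pvals, alpha0=0.5):
        p = np.sort(np.asarray(pvals))
        n = p.size
        i = np.arange(1, n + 1)
        hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
        k = max(1, int(alpha0 * n))      # conventional truncation at alpha0*n
        return hc[:k].max()

    rng = np.random.default_rng(2)
    null_p = rng.uniform(size=1000)        # global null: uniform p-values
    alt_p = null_p.copy()
    alt_p[:20] = rng.uniform(0, 1e-3, 20)  # sparse signal in 2% of tests
    print(higher_criticism(null_p), higher_criticism(alt_p))
    ```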

  3. Effects on fatigue life of gate valves due to higher torque switch settings during operability testing

    International Nuclear Information System (INIS)

    Richins, W.D.; Snow, S.D.; Miller, G.K.; Russell, M.J.; Ware, A.G.

    1995-12-01

    Some motor operated valves now have higher torque switch settings due to regulatory requirements to ensure valve operability with appropriate margins at design basis conditions. Verifying operability with these settings imposes higher stem loads during periodic inservice testing. These higher test loads increase stresses in the various valve internal parts, which may in turn increase the fatigue usage factors. This increased fatigue is judged to be a concern primarily in the valve disks, seats, yokes, stems, and stem nuts. Although the motor operators may also have significantly increased loading, they are being evaluated by the manufacturers and are beyond the scope of this study. Two gate valves representative of both relatively weak and strong valves commonly used in commercial nuclear applications were selected for fatigue analyses. Detailed dimensional and test data were available for both valves from previous studies at the Idaho National Engineering Laboratory. Finite element models were developed to estimate maximum stresses in the internal parts of the valves and to identify the critical areas within the valves where fatigue may be a concern. Loads were estimated using industry standard equations for calculating torque switch settings prior and subsequent to the testing requirements of USNRC Generic Letter 89-10. Test data were used to determine both (1) the overshoot load between torque switch trip and final seating of the disk during valve closing and (2) the stem thrust required to open the valves. The ranges of peak stresses thus determined were then used to estimate the increase in the fatigue usage factors due to the higher stem thrust loads. The usages that would be accumulated by 100 base cycles plus one or eight test cycles per year over 40 and 60 years of operation were calculated

  4. Impression formation of tests: retrospective judgments of performance are higher when easier questions come first.

    Science.gov (United States)

    Jackson, Abigail; Greene, Robert L

    2014-11-01

    Four experiments are reported on the importance of retrospective judgments of performance (postdictions) on tests. Participants answered general knowledge questions and estimated how many questions they answered correctly. They gave higher postdictions when easy questions preceded difficult questions. This was true when time to answer each question was equalized and constrained, when participants were instructed not to write answers, and when questions were presented in a multiple-choice format. Results are consistent with the notion that first impressions predominate in overall perception of test difficulty.

  5. Average male and female virtual dummy model (BioRID and EvaRID) simulations with two seat concepts in the Euro NCAP low severity rear impact test configuration.

    Science.gov (United States)

    Linder, Astrid; Holmqvist, Kristian; Svensson, Mats Y

    2018-05-01

    Soft tissue neck injuries, also referred to as whiplash injuries, which can lead to long-term suffering, account for more than 60% of the cost of all injuries leading to permanent medical impairment for the insurance companies, with respect to injuries sustained in vehicle crashes. These injuries are sustained in all impact directions; however, they are most common in rear impacts. Injury statistics have since the mid-1960s consistently shown that females are subject to a higher risk of sustaining this type of injury than males: on average, twice the risk. Furthermore, some recently developed anti-whiplash systems have revealed that they provide less protection for females than males. The protection of both males and females should be addressed equally when designing and evaluating vehicle safety systems, to ensure maximum safety for everyone. This is currently not the case. The norm for crash test dummies representing humans in crash test laboratories is an average male. The female part of the population is not represented in tests performed by consumer information organisations such as NCAP or in regulatory tests, due to the absence of a physical dummy representing an average female. Recently, the world's first virtual model of an average female crash test dummy was developed. In this study, simulations were run with both this model and an average male dummy model, seated in a simplified model of a vehicle seat. The results of the simulations were compared to earlier published results from simulations run in the same test set-up with a vehicle concept seat. The three crash pulse severities of the Euro NCAP low severity rear impact test were applied. The motion of the neck, head and upper torso were analysed in addition to the accelerations and the Neck Injury Criterion (NIC). Furthermore, the response of the virtual models was compared to the response of volunteers, as well as the average male model, to that of the response of a physical dummy model. Simulations
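
    For reference, the Neck Injury Criterion analysed in such rear-impact simulations is conventionally computed from the relative horizontal acceleration and velocity between the T1 vertebra and the head. The generic published form is sketched below; it is the standard definition, not a formula quoted from this record:

    ```latex
    % Generic Neck Injury Criterion (NIC) for rear impacts; the factor 0.2
    % carries units of metres, so NIC has units of m^2/s^2.
    \[
      \mathrm{NIC}(t) = 0.2\, a_{\mathrm{rel}}(t) + v_{\mathrm{rel}}(t)^{2},
      \qquad
      a_{\mathrm{rel}}(t) = a_{x}^{\mathrm{T1}}(t) - a_{x}^{\mathrm{head}}(t),
      \qquad
      v_{\mathrm{rel}}(t) = \int_{0}^{t} a_{\mathrm{rel}}(\tau)\, d\tau
    \]
    ```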

  6. Higher emotional intelligence is related to lower test anxiety among students

    Directory of Open Access Journals (Sweden)

    Ahmadpanah M

    2016-01-01

    Full Text Available Mohammad Ahmadpanah,1 Mohammadreza Keshavarz,1 Mohammad Haghighi,1 Leila Jahangard,1 Hafez Bajoghli,2 Dena Sadeghi Bahmani,3 Edith Holsboer-Trachsler,3 Serge Brand3,4 1Behavioral Disorders and Substances Abuse Research Center, Hamadan University of Medical Sciences, Hamadan, Iran; 2Iranian National Center for Addiction Studies (INCAS), Iranian Institute for Reduction of High-Risk Behaviors, Tehran University of Medical Sciences, Tehran, Iran; 3Psychiatric Clinics of the University of Basel, Center for Affective, Stress and Sleep Disorders, 4Department of Sport, Exercise and Health Science, Sport Science Section, University of Basel, Basel, Switzerland Background: For students attending university courses, experiencing test anxiety (TA) dramatically impairs cognitive performance and success at exams. Whereas TA is a specific case of social phobia, emotional intelligence (EI) is an umbrella term covering interpersonal and intrapersonal skills, along with positive stress management, adaptability, and mood. In the present study, we tested the hypothesis that higher EI and lower TA are associated. Further, sex differences were explored. Method: During an exam week, a total of 200 university students completed questionnaires covering sociodemographic information, TA, and EI. Results: Higher scores on EI traits were associated with lower TA scores. Relative to male participants, female participants reported higher TA scores, but not EI scores. Intrapersonal and interpersonal skills and mood predicted low TA, while sex, stress management, and adaptability were excluded from the equation. Conclusion: The pattern of results suggests that efforts to improve intrapersonal and interpersonal skills, and mood might benefit students with high TA. Specifically, social commitment might counteract TA. Keywords: test anxiety, emotional intelligence, students, interpersonal skills, intrapersonal skills

  7. Change of direction ability test differentiates higher level and lower level soccer referees

    Science.gov (United States)

    Los, Arcos A; Grande, I; Casajús, JA

    2016-01-01

    This report examines the agility and level of acceleration capacity of Spanish soccer referees and investigates the possible differences between field referees of different categories. The speed test consisted of 3 maximum acceleration stretches of 15 metres. The change of direction ability (CODA) test used in this study was a modification of the Modified Agility Test (MAT). The study included a sample of 41 Spanish soccer field referees from the Navarre Committee of Soccer Referees divided into two groups: i) the higher level group (G1, n = 20): 2ndA, 2ndB and 3rd division referees from the Spanish National Soccer League (28.43 ± 1.39 years); and ii) the lower level group (G2, n = 21): Navarre Provincial League soccer referees (29.54 ± 1.87 years). Significant differences were found with respect to the CODA between G1 (5.72 ± 0.13 s) and G2 (6.06 ± 0.30 s), while no differences were encountered between groups in acceleration ability. No significant correlations were obtained in G1 between agility and the capacity to accelerate. Significant correlations were found between sprint and agility times in the G2 and in the total group. The results of this study showed that agility can be used as a discriminating factor for differentiating between national and regional field referees; however, no observable differences were found over the 5 and 15 m sprint tests. PMID:27274111

  8. Effects of Organic Pesticides on Enchytraeids (Oligochaeta in Agroecosystems: Laboratory and Higher-Tier Tests

    Directory of Open Access Journals (Sweden)

    Jörg Römbke

    2017-05-01

    Full Text Available Enchytraeidae (Oligochaeta, Annelida) are often considered to be typical forest-living organisms, but they are regularly found in agroecosystems of the temperate regions of the world. Although less known than their larger relatives, the earthworms, these saprophagous organisms play similar roles in agricultural soils (but at a smaller scale), e.g., influencing soil structure and organic matter dynamics via microbial communities, and having a central place in soil food webs. Their diversity is rarely studied or often underestimated due to difficulties in distinguishing the species. New genetic techniques reveal that even in anthropogenically highly influenced soils, more than 10 species per site can be found. Because of their close contact with the soil pore water, a high ingestion rate and a thin cuticle, they often react very sensitively to a broad range of pesticides. Firstly, we provide a short overview of the diversity and abundance of enchytraeid communities in agroecosystems. Afterwards, we explore the available data on enchytraeid sensitivity toward pesticides at different levels of biological organization, focusing on pesticides used in (mainly European) agroecosystems. Starting with non-standardized studies on the effects of pesticides at the sub-individual level, we compile the results of standard laboratory tests performed following OECD and ISO guidelines as well as those of higher-tier studies (i.e., semi-field and field tests). The number of comparable test data is still limited, because tests with enchytraeids are not a regulatory requirement in the European Union. While focusing on the effects of pesticides, attention is also given to their interactions with environmental stressors (e.g., climate change). In conclusion, we recommend increasing the use of enchytraeids in pesticide risk assessment because of their diversity and functional importance, as well as their increasingly simplified use in (mostly standardized) tests at all levels

  9. [Correlation and concordance between the national test of medicine (ENAM) and the grade point average (GPA): analysis of the peruvian experience in the period 2007 - 2009].

    Science.gov (United States)

    Huamaní, Charles; Gutiérrez, César; Mezones-Holguín, Edward

    2011-03-01

    To evaluate the correlation and concordance between the 'Peruvian National Exam of Medicine' (ENAM) and the mean grade point average (GPA) in recently graduated medical students in the period 2007 to 2009. We carried out a secondary data analysis, using the records of the physicians applying to the Rural and Urban Marginal Service in Health of Peru (SERUMS) processes for the years 2008 to 2010. From these registers, we extracted the grades obtained in the ENAM and the GPA. We performed a descriptive analysis using medians and 1st and 3rd quartiles (q1/q3); we calculated the correlation between both scores using the Spearman correlation coefficient and, additionally, conducted a linear regression analysis; the concordance was measured using the Bland and Altman coefficient. A total of 6,117 physicians were included; the overall median for the GPA was 13.4 (12.7/14.2) and for the ENAM was 11.6 (10.2/13.0). Of the total assessed, 36.8% failed the exam. We observed an increase in the annual median of ENAM scores, with a consequent decrease in the difference between both grades. The correlation between the ENAM and the GPA is direct and moderate (0.582), independent of the year, type of university management (public or private) and location. However, the concordance between both ratings is only fair, with a global coefficient of 0.272 (95% CI: 0.260 to 0.284). Independently of the year, location or type of university management, there is a moderate correlation between the ENAM and the GPA; however, there is only a fair concordance between both grades.
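
    The two analyses named in this record, Spearman correlation and Bland-Altman concordance, can be sketched on simulated grades. Only the medians echo the record; everything else is made up, and the sketch reports the common bias and limits-of-agreement summary rather than the specific concordance coefficient used in the paper:

    ```python
    # Spearman correlation plus a Bland-Altman bias/limits-of-agreement view
    # of ENAM-vs-GPA concordance, on simulated grades (not the registry data).
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(3)
    gpa = rng.normal(13.4, 1.0, 500)          # median GPA ~13.4 in the record
    enam = 11.6 + 0.6 * (gpa - 13.4) + rng.normal(0, 1.5, 500)  # moderate link

    rho, pval = spearmanr(enam, gpa)
    print(f"Spearman rho = {rho:.3f} (p = {pval:.3g})")

    diff = enam - gpa                          # Bland-Altman differences
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)              # 95% limits of agreement
    print(f"bias = {bias:.2f}, LoA = [{bias - loa:.2f}, {bias + loa:.2f}]")
    ```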

  10. The measurement of power losses at high magnetic field densities or at small cross-section of test specimen using the averaging

    CERN Document Server

    Gorican, V; Hamler, A; Nakata, T

    2000-01-01

    It is difficult to achieve sufficient accuracy of power loss measurement at high magnetic field densities where the magnetic field strength gets more and more distorted, or in cases where the influence of noise increases (small specimen cross section). The influence of averaging on the accuracy of power loss measurement was studied on the cast amorphous magnetic material Metglas 2605-TCA. The results show that the accuracy of power loss measurements can be improved by using the averaging of data acquisition points.

  11. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  12. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  13. Centers for Disease Control and Prevention Funding for HIV Testing Associated With Higher State Percentage of Persons Tested.

    Science.gov (United States)

    Hayek, Samah; Dietz, Patricia M; Van Handel, Michelle; Zhang, Jun; Shrestha, Ram K; Huang, Ya-Lin A; Wan, Choi; Mermin, Jonathan

    2015-01-01

    To assess the association between state per capita allocations of Centers for Disease Control and Prevention (CDC) funding for HIV testing and the percentage of persons tested for HIV. We examined data from 2 sources: 2011 Behavioral Risk Factor Surveillance System and 2010-2011 State HIV Budget Allocations Reports. Behavioral Risk Factor Surveillance System data were used to estimate the percentage of persons aged 18 to 64 years who had reported testing for HIV in the last 2 years in the United States by state. State HIV Budget Allocations Reports were used to calculate the state mean annual per capita allocations for CDC-funded HIV testing reported by state and local health departments in the United States. The association between the state fixed-effect per capita allocations for CDC-funded HIV testing and self-reported HIV testing in the last 2 years among persons aged 18 to 64 years was assessed with a hierarchical logistic regression model adjusting for individual-level characteristics. The percentage of persons tested for HIV in the last 2 years. In 2011, 18.7% (95% confidence interval = 18.4-19.0) of persons reported being tested for HIV in last 2 years (state range, 9.7%-28.2%). During 2010-2011, the state mean annual per capita allocation for CDC-funded HIV testing was $0.34 (state range, $0.04-$1.04). A $0.30 increase in per capita allocation for CDC-funded HIV testing was associated with an increase of 2.4 percentage points (14.0% vs 16.4%) in the percentage of persons tested for HIV per state. Providing HIV testing resources to health departments was associated with an increased percentage of state residents tested for HIV.
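
    The headline association here, roughly 2.4 percentage points more testing per $0.30 of per-capita funding around an 18.7% baseline, can be mimicked with a plain logistic regression. The study's model was hierarchical with individual-level covariates; the sketch below collapses that to a single-level fit on simulated data, so the slope value is an assumption chosen to reproduce the reported effect size:

    ```python
    # Simplified (non-hierarchical) logistic regression of "tested in the
    # last 2 years" on state per-capita HIV-testing allocation, simulated.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 20000
    alloc = rng.uniform(0.04, 1.04, n)        # per-capita allocation range ($)
    # assumed slope ~0.55 logits per $1 => ~2.4 pp per $0.30 near 18.7%
    logit_p = np.log(0.187 / 0.813) + 0.55 * (alloc - 0.34)
    tested = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    fit = sm.Logit(tested, sm.add_constant(alloc)).fit(disp=0)
    for a in (0.34, 0.64):                    # +$0.30 per capita
        p = fit.predict(np.array([[1.0, a]]))[0]
        print(f"${a:.2f}: {100 * p:.1f}% tested")
    ```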

  14. Using Aptitude Testing to Diversify Higher Education Intake--An Australian Case Study

    Science.gov (United States)

    Edwards, Daniel; Coates, Hamish; Friedman, Tim

    2013-01-01

    Australian higher education is currently entering a new phase of growth. Within the remit of this expansion is an express commitment to widen participation in higher education among under-represented groups--in particular those from low socioeconomic backgrounds. This paper argues that one key mechanism for achieving this goal should be the…

  15. Test of the local form of higher-spin equations via AdS/CFT

    Directory of Open Access Journals (Sweden)

    V.E. Didenko

    2017-12-01

    Full Text Available The local form of higher-spin equations found recently to the second order [1] is shown to properly reproduce the anticipated AdS/CFT correlators for appropriate boundary conditions. It is argued that consistent AdS/CFT holography for the parity-broken boundary models needs a nontrivial modification of the bosonic truncation of the original higher-spin theory with the doubled number of fields, as well as a nonlinear deformation of the boundary conditions in the higher orders.

  16. Biodegradation testing of chemicals with high Henry’s constants – separating mass and effective concentration reveals higher rate constants

    DEFF Research Database (Denmark)

    Birch, Heidi; Andersen, Henrik Rasmus; Comber, Mike

    Headspace solid-phase microextraction (HS-SPME) was applied directly on the test systems to measure substrate depletion by biodegradation relative to abiotic controls. HS-SPME was also applied to determine air to water partitioning ratios. Water phase biodegradation rate constants, kwater, were up to 72 times higher than test system...
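
    The idea flagged in the title, separating total mass from the effective aqueous concentration, can be sketched as follows. All numbers (the air-water partitioning ratio, phase volumes, and the depletion series) are illustrative assumptions, not the study's measurements; the point is only that headspace storage makes the whole-system depletion rate underestimate the water-phase rate constant:

    ```python
    # If a chemical equilibrates between headspace and water and only the
    # aqueous fraction biodegrades, k_water = k_system * (Vw + Kaw*Va) / Vw.
    import numpy as np

    K_aw = 10.0                  # air-water partitioning ratio (assumed)
    V_air, V_water = 0.5, 0.1    # headspace and water volumes in litres (assumed)

    # Whole-system first-order depletion fitted from a (made-up) time series
    t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])        # days
    c = np.exp(-0.02 * t)                          # relative substrate signal
    k_system = -np.polyfit(t, np.log(c), 1)[0]     # slope of ln C versus t

    mass_ratio = (V_water + K_aw * V_air) / V_water
    k_water = k_system * mass_ratio
    print(f"k_system = {k_system:.3f}/d, k_water = {k_water:.2f}/d "
          f"({mass_ratio:.0f}x higher)")
    ```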

  17. MCQ testing in higher education: Yes, there are bad items and invalid scores—A case study identifying solutions

    OpenAIRE

    Brown, Gavin

    2017-01-01

    This is a lecture given at Umea University, Sweden in September 2017. It is based on the published study: Brown, G. T. L., & Abdulnabi, H. (2017). Evaluating the quality of higher education instructor-constructed multiple-choice tests: Impact on student grades. Frontiers in Education: Assessment, Testing, & Applied Measurement, 2(24). doi:10.3389/feduc.2017.00024

  18. NMR measurement of dynamic nuclear polarization: a technique to test the quality of its volume average obtained with different NMR coil configurations

    International Nuclear Information System (INIS)

    Zhao, W.H.; Cox, S.F.J.

    1980-07-01

    In the NMR measurement of dynamic nuclear polarization, a volume average is obtained where the contribution from different parts of the sample is weighted according to the local intensity of the RF field component perpendicular to the large static field. A method of mapping this quantity is described. A small metallic object whose geometry is chosen to perturb the appropriate RF component is scanned through the region to be occupied by the sample. The response of the phase angle of the impedance of a tuned circuit comprising the NMR coil gives a direct measurement of the local weighting factor. The correlation between theory and experiment was obtained by using a circular coil. The measuring method, checked in this way, was then used to investigate the field profiles of practical coils which are required to be rectangular for a proposed experimental neutron polarizing filter. This method can be used to evaluate other practical RF coils. (author)

  19. Higher Quality of Molecular Testing, an Unfulfilled Priority: Results from External Quality Assessment for KRAS Mutation Testing in Colorectal Cancer

    NARCIS (Netherlands)

    Tembuyser, L.; Ligtenberg, M.J.L.; Normanno, N.; Delen, S.; Krieken, J.H.J.M. van; Dequeker, E.M.

    2014-01-01

    Precision medicine is now a key element in clinical oncology. RAS mutational status is a crucial predictor of responsiveness to anti-epidermal growth factor receptor agents in metastatic colorectal cancer. In an effort to guarantee high-quality testing services in molecular pathology, the European

  20. Higher quality of molecular testing, an unfulfilled priority: Results from external quality assessment for KRAS mutation testing in colorectal cancer.

    Science.gov (United States)

    Tembuyser, Lien; Ligtenberg, Marjolijn J L; Normanno, Nicola; Delen, Sofie; van Krieken, J Han; Dequeker, Elisabeth M C

    2014-05-01

    Precision medicine is now a key element in clinical oncology. RAS mutational status is a crucial predictor of responsiveness to anti-epidermal growth factor receptor agents in metastatic colorectal cancer. In an effort to guarantee high-quality testing services in molecular pathology, the European Society of Pathology has been organizing an annual KRAS external quality assessment program since 2009. In 2012, 10 formalin-fixed, paraffin-embedded samples, of which 8 from invasive metastatic colorectal cancer tissue and 2 artificial samples of cell line material, were sent to more than 100 laboratories from 26 countries with a request for routine KRAS testing. Both genotyping and clinical reports were assessed independently. Twenty-seven percent of the participants genotyped at least 1 of 10 samples incorrectly. In total, less than 5% of the distributed specimens were genotyped incorrectly. Genotyping errors consisted of false negatives, false positives, and incorrectly genotyped mutations. Twenty percent of the laboratories reported a technical error for one or more samples. A review of the written reports showed that several essential elements were missing, most notably a clinical interpretation of the test result, the method sensitivity, and the use of a reference sequence. External quality assessment serves as a valuable educational tool in assessing and improving molecular testing quality and is an important asset for monitoring quality assurance upon incorporation of new biomarkers in diagnostic services. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  1. Impact of Answer-Switching Behavior on Multiple-Choice Test Scores in Higher Education

    Directory of Open Access Journals (Sweden)

    Ramazan BAŞTÜRK

    2011-06-01

    Full Text Available The multiple-choice format is one of the most popular selected-response item formats used in educational testing. Researchers have shown that the multiple-choice test is a useful vehicle for student assessment in core university subjects that usually have large student numbers. Even though educators, test experts and various test resources maintain that the first answer should be retained, many researchers have argued that this advice is not supported by empirical findings. The main question of this study is how answer-switching behavior affects multiple-choice test scores. Additionally, gender differences and the relationship between the number of answer switches and item parameters (item difficulty and item discrimination) were investigated. The participants in this study consisted of 207 upper-level College of Education students from mid-sized universities. A midterm exam consisting of 20 multiple-choice questions was used. According to the results of this study, answer-switching behavior statistically significantly increases test scores. On the other hand, there is no significant gender difference in answer-switching behavior. Additionally, there is a significant negative relationship between answer-switching behavior and item difficulties.
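
    The item-level quantities this study relates, difficulty, discrimination, and answer-switch counts, are straightforward to compute from a score matrix. A hypothetical sketch (207 examinees by 20 items as in the record, but with simulated responses and switch counts):

    ```python
    # Classical item analysis: difficulty (proportion correct), discrimination
    # (corrected item-total correlation), and their link to answer switching.
    import numpy as np

    rng = np.random.default_rng(5)
    responses = rng.binomial(1, 0.7, (207, 20))   # 1 = correct (simulated)
    switches = rng.poisson(1.0, 20)               # answer changes per item

    difficulty = responses.mean(axis=0)           # classical item p-values

    total = responses.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
        for j in range(responses.shape[1])
    ])                                            # corrected item-total r

    r = np.corrcoef(switches, difficulty)[0, 1]
    print(f"difficulty range {difficulty.min():.2f}-{difficulty.max():.2f}")
    print(f"corr(switches, difficulty) = {r:.2f}")  # the record reports r < 0
    ```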

  2. Results of physics start-up tests of Mochovce and Bohunice units with 2-nd generation Gd fuel (average enrichment 4.87 %)

    International Nuclear Information System (INIS)

    Polakovic, F.

    2015-01-01

    The main features of the fuel and the list of experimental neutron-physical characteristics measured during physics start-up tests are presented. Altogether, 14 physics start-ups were carried out at the Bohunice and Mochovce units with the new type of fuel. Differences between theoretical and experimental neutron-physical characteristics were statistically processed and compared with the test acceptance criteria. The results of reactor physics start-ups using 2-nd generation Gd fuel are summarized.

  3. 10 CFR Appendix R to Subpart B of... - Uniform Test Method for Measuring Average Lamp Efficacy (LE), Color Rendering Index (CRI), and...

    Science.gov (United States)

    2010-01-01

    Appendix R to Subpart B of Part 430 (Department of Energy, Energy Conservation Program for Consumer Products: Test Procedures): Uniform Test Method for Measuring Average Lamp Efficacy (LE), Color Rendering Index (CRI), and Correlated Color Temperature (CCT) of Electric Lamps.

  4. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion

  5. Measurements of the acoustic field on austenitic welds: a way to higher reliability in ultrasonic tests

    International Nuclear Information System (INIS)

    Kemnitz, P.; Richter, U.; Klueber, H.

    1997-01-01

    In nuclear power plants many of the welds in austenitic tubes have to be inspected by means of ultrasonic techniques. If component-identical test pieces are available, they are used to qualify the ultrasonic test technology. Acoustic field measurements on such test blocks give information on whether the beam of the ultrasonic transducer reaches all critical parts of the weld region and which transducer type is best suited. Acoustic fields have been measured at a bimetallic, a V-shaped and a narrow gap weld in test pieces of wall thickness 33, 25 and 17 mm, respectively. Compression wave transducers of 45°, 60° and 70° and 45° shear wave transducers have been included in the investigation. The results are presented: (1) as acoustic C-scans for one definite probe position, (2) as series of C-scans for the probe moving on a track perpendicular to the weld, (3) as scans along the weld and (4) as effective beam profiles. The influence of the scanning electrodynamic probe is also discussed. (orig.)

  6. Results of a pilot scale melter test to attain higher production rates

    International Nuclear Information System (INIS)

    Elliott, M.L.; Perez, J.M. Jr.; Chapman, C.C.

    1991-01-01

    A pilot-scale melter test was completed as part of the effort to enhance glass production rates. The experiment was designed to evaluate the effects of bulk glass temperature and feed oxide loading. The maximum glass production rate obtained, 86 kg/hr-m², was over 200% better than the previous record for the melter used

  7. Visibility-Based Hypothesis Testing Using Higher-Order Optical Interference

    Science.gov (United States)

    Jachura, Michał; Jarzyna, Marcin; Lipka, Michał; Wasilewski, Wojciech; Banaszek, Konrad

    2018-03-01

    Many quantum information protocols rely on optical interference to compare data sets with efficiency or security unattainable by classical means. Standard implementations exploit first-order coherence between signals whose preparation requires a shared phase reference. Here, we analyze and experimentally demonstrate the binary discrimination of visibility hypotheses based on higher-order interference for optical signals with a random relative phase. This provides a robust protocol implementation primitive when a phase lock is unavailable or impractical. With the primitive cost quantified by the total detected optical energy, optimal operation is typically reached in the few-photon regime.

  8. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  9. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  10. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  11. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Role of positive definite matrices: in diffusion tensor imaging, 3 × 3 pd matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 pd matrices model stress tensors; in machine learning, n × n pd matrices occur as kernel matrices. (Tanvi Jain, Averaging operations on matrices)

  12. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
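
    To make the objective concrete: where mean-payoff averages the per-step weights, average-energy averages the running sums of those weights. A minimal numeric illustration of the two values on one finite play prefix (the weight sequence is arbitrary):

    ```python
    # Mean-payoff vs. average-energy on a fixed finite weight sequence.
    import numpy as np

    weights = np.array([2, -1, -1, 3, -2, 0, -1])  # per-step energy deltas
    energy = np.cumsum(weights)                    # accumulated energy E_1..E_n
    print("energy levels :", energy)
    print("mean-payoff   :", weights.mean())       # long-run average gain
    print("average-energy:", energy.mean())        # average accumulated energy
    ```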

  13. Plastic spatula with narrow long tip provides higher satisfactory smears for Pap test

    Directory of Open Access Journals (Sweden)

    Pervinder Kaur

    2013-01-01

    Full Text Available Background: Ayre spatula for cervical smear collection is being used despite the suggestion that different modified spatulas provide more satisfactory sampling. Aims: To see whether the cytological pickup improves with the use of long tipped spatula. Setting and Design: Rurally based University Hospital; crossover study. Materials and Methods: Pap smear using Ayre spatula in 500 and with plastic narrow long tip (Szalay) spatula in 500 clinic attending women was taken and analyzed. Crossover smears were taken with modified spatula in 163 and using Ayre spatula in 187 women after 2 weeks of initial smears. The same pathologist made cytological reporting for all smears and was unaware of the type of spatula used. Results: Smears from Ayre spatula had significantly higher reports of inadequate smears (94 of 500 vs. 68 of 500 for Ayre and Szalay, respectively; P = 0.032) and it remained so even after crossover (94 of 187 vs. 70 of 163 for Ayre and Szalay, respectively; P = 0.2). Cellular quality appeared better with smears taken using Szalay spatula, but the overall abnormal smear detection rate remained similar with either collection tool (χ2 = 1.5; P = 0.2). Conclusions: Proportion of satisfactory smears is higher when long tip plastic spatula is used for collection of sample.

  14. Plastic spatula with narrow long tip provides higher satisfactory smears for Pap test.

    Science.gov (United States)

    Kaur, Pervinder; Kushtagi, Pralhad

    2013-07-01

    Ayre spatula for cervical smear collection is being used despite the suggestion that different modified spatulas provide more satisfactory sampling. To see whether the cytological pickup improves with the use of long tipped spatula. Rurally based University Hospital; crossover study. Pap smear using Ayre spatula in 500 and with plastic narrow long tip (Szalay) spatula in 500 clinic attending women was taken and analyzed. Crossover smears were taken with modified spatula in 163 and using Ayre spatula in 187 women after 2 weeks of initial smears. The same pathologist made cytological reporting for all smears and was unaware of the type of spatula used. Smears from Ayre spatula had significantly higher reports of inadequate smears (94 of 500 vs. 68 of 500 for Ayre and Szalay, respectively; P = 0.032) and it remained so even after crossover (94 of 187 vs. 70 of 163 for Ayre and Szalay, respectively; P = 0.2). Cellular quality appeared better with smears taken using Szalay spatula, but the overall abnormal smear detection rate remained similar with either collection tool (χ(2) = 1.5; P = 0.2). Proportion of satisfactory smears is higher when long tip plastic spatula is used for collection of sample.
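
    The headline 2x2 comparison in this record (94/500 vs. 68/500 inadequate smears) is easy to verify. The snippet below uses scipy's chi-squared test of independence, which applies Yates' continuity correction to 2x2 tables by default and lands close to the reported P = 0.032:

    ```python
    # Chi-squared test on the inadequate-smear counts reported in the abstract.
    from scipy.stats import chi2_contingency

    table = [[94, 500 - 94],   # Ayre: inadequate, adequate
             [68, 500 - 68]]   # Szalay: inadequate, adequate
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # approximately 4.60 and 0.032
    ```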

  15. Global testing under sparse alternatives: ANOVA, multiple comparisons and the higher criticism

    OpenAIRE

    Arias-Castro, Ery; Candès, Emmanuel J.; Plan, Yaniv

    2011-01-01

    Testing for the significance of a subset of regression coefficients in a linear model, a staple of statistical analysis, goes back at least to the work of Fisher who introduced the analysis of variance (ANOVA). We study this problem under the assumption that the coefficient vector is sparse, a common situation in modern high-dimensional settings. Suppose we have $p$ covariates and that under the alternative, the response only depends upon the order of $p^{1-\alpha}$ of those, $0 \le \alpha \le 1$...

  16. Higher-Order Asymptotics and Its Application to Testing the Equality of the Examinee Ability Over Two Sets of Items.

    Science.gov (United States)

    Sinharay, Sandip; Jensen, Jens Ledet

    2018-06-27

    In educational and psychological measurement, researchers and/or practitioners are often interested in examining whether the ability of an examinee is the same over two sets of items. Such problems can arise in measurement of change, detection of cheating on unproctored tests, erasure analysis, detection of item preknowledge, etc. Traditional frequentist approaches that are used in such problems include the Wald test, the likelihood ratio test, and the score test (e.g., Fischer, Appl Psychol Meas 27:3-26, 2003; Finkelman, Weiss, & Kim-Kang, Appl Psychol Meas 34:238-254, 2010; Glas & Dagohoy, Psychometrika 72:159-180, 2007; Guo & Drasgow, Int J Sel Assess 18:351-364, 2010; Klauer & Rettig, Br J Math Stat Psychol 43:193-206, 1990; Sinharay, J Educ Behav Stat 42:46-68, 2017). This paper shows that approaches based on higher-order asymptotics (e.g., Barndorff-Nielsen & Cox, Inference and asymptotics. Springer, London, 1994; Ghosh, Higher order asymptotics. Institute of Mathematical Statistics, Hayward, 1994) can also be used to test for the equality of the examinee ability over two sets of items. The modified signed likelihood ratio test (e.g., Barndorff-Nielsen, Biometrika 73:307-322, 1986) and the Lugannani-Rice approximation (Lugannani & Rice, Adv Appl Prob 12:475-490, 1980), both of which are based on higher-order asymptotics, are shown to provide some improvement over the traditional frequentist approaches in three simulations. Two real data examples are also provided.
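
    The two higher-order devices named in this record have compact standard forms. As a reference sketch, with $r$ the signed likelihood root and $u$ a model-specific adjustment quantity (these are the generic published approximations, not expressions taken from this paper):

    ```latex
    % Modified signed likelihood ratio statistic and its tail approximation,
    % followed by the Lugannani-Rice tail approximation.
    \[
      r^{*} = r + \frac{1}{r}\,\log\!\left(\frac{u}{r}\right),
      \qquad
      P(R \ge r) \approx 1 - \Phi(r^{*})
    \]
    \[
      P(R \ge r) \approx 1 - \Phi(r) + \phi(r)\left(\frac{1}{u} - \frac{1}{r}\right)
    \]
    ```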

  17. Unconscious fearful priming followed by a psychosocial stress test results in higher cortisol levels.

    Science.gov (United States)

    Hänsel, Alexander; von Känel, Roland

    2013-10-01

    Human perception of stress includes an automatic pathway that processes subliminally presented stimuli below the threshold of conscious awareness. Subliminal stimuli can therefore activate the physiologic stress system. Unconscious emotional signals were shown to significantly moderate reactions and responses to subsequent stimuli, an effect called 'priming'. We hypothesized that subliminal presentation of a fearful signal during the Stroop task, compared with an emotionally neutral one, would prime stress reactivity in a subsequently applied psychosocial stress task, thereby yielding a significant increase in salivary cortisol. Half of the 36 participants were repeatedly presented either a fearful face or a neutral one. After this, all underwent a psychosocial stress task. The fearful group showed a significant increase in cortisol levels (p = 0.022). This change was not affected by sex, age and body mass index, and it also did not change when taking resting cortisol levels into account. Post-hoc analyses showed that the increase in cortisol in the fearful group started immediately after the psychosocial stress test. Hence, subliminal exposure to a fearful signal in combination with the Stroop task and followed by a psychosocial stress test leads to an increase in stress reactivity. Copyright © 2012 John Wiley & Sons, Ltd.

  18. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  19. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables
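
    The truncated Porter-Thomas fit mentioned here rests on a simple correction: reduced neutron widths follow a chi-squared distribution with one degree of freedom, so the fraction of levels lost below a detection threshold can be estimated and the observed spacing rescaled. A sketch with assumed numbers (threshold, mean width, and observed spacing are all illustrative):

    ```python
    # Missed-level correction from the Porter-Thomas (chi^2, 1 dof) width law:
    # the true average spacing is the observed one times the fraction observed.
    from scipy.stats import chi2

    mean_width = 1.0     # average reduced width, arbitrary units (assumed)
    threshold = 0.05     # smallest detectable width (assumed)
    frac_seen = chi2.sf(threshold / mean_width, df=1)  # P(width > threshold)

    D_obs = 10.0         # observed average level spacing in eV (assumed)
    D_true = D_obs * frac_seen   # unseen levels make the true spacing smaller
    print(f"fraction observed = {frac_seen:.3f}, "
          f"corrected spacing = {D_true:.2f} eV")
    ```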

  20. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  1. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.

  2. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, past the neutron-drip line, up to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  3. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body

  4. Discriminant Analysis of Essay, Mathematics/Science Type of Essay, College Scholastic Ability Test, and Grade Point Average as Predictors of Acceptance to a Pre-med Course at a Korean Medical School

    OpenAIRE

    Geum-Hee Jeong

    2008-01-01

    A discriminant analysis was conducted to investigate how an essay, a mathematics/science type of essay, a college scholastic ability test, and grade point average affect acceptance to a pre-med course at a Korean medical school. Subjects included 122 and 385 applicants for, respectively, early and regular admission to a medical school in Korea. The early admission examination was conducted in October 2007, and the regular admission examination was conducted in January 2008. The analysis of ea...

  5. Higher Drop in Speed during a Repeated Sprint Test in Soccer Players Reporting Former Hamstring Strain Injury

    Science.gov (United States)

    Røksund, Ola D.; Kristoffersen, Morten; Bogen, Bård E.; Wisnes, Alexander; Engeseth, Merete S.; Nilsen, Ann-Kristin; Iversen, Vegard V.; Mæland, Silje; Gundersen, Hilde

    2017-01-01

    Aim: Hamstring strain injury is common in soccer. The aim of this study was to evaluate the physical capacity of players who have and have not suffered from hamstring strain injury in a sample of semi-professional and professional Norwegian soccer players in order to evaluate characteristics and to identify possible indications of insufficient rehabilitation. Method: Seventy-five semi-professional and professional soccer players (19 ± 3 years) playing at the second and third level in the Norwegian league participated in the study. All players answered a questionnaire, including one question about hamstring strain injury (yes/no) during the previous 2 years. They also performed a 40 m maximal sprint test, a repeated sprint test (8 × 20 m), a countermovement jump, a maximal oxygen consumption (VO2max) test, strength tests and flexibility tests. Independent sample t-tests were used to evaluate differences in the physical capacity of the players who had suffered from hamstring strain injury and those who had not. A mixed between-within subjects analysis of variance was used to compare changes in speed during the repeated sprint test between groups. Results: Players who reported hamstring strain injury during the previous two years (16%) had a significantly higher drop in speed (0.07 vs. 0.02 s, p = 0.007) during the repeated sprint test, compared to players reporting no previous hamstring strain injury. In addition, there was a significant interaction (groups × time) (F = 3.22, p = 0.002), showing that speed in the two groups changed differently during the repeated sprint test. There were no significant differences in relation to age, weight, height, body fat, linear speed, countermovement jump height, leg strength, VO2max, or hamstring flexibility between the groups. Conclusion: Soccer players who reported hamstring strain injury during the previous 2 years showed a significantly higher drop in speed during the repeated sprint test compared to players with no hamstring
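
    The headline comparison in this record is a standard two-sample test and is easy to reproduce in outline. The sketch below uses synthetic numbers loosely inspired by the reported means (0.07 vs. 0.02 s) and group proportions; it is illustrative only, not the authors' data or analysis code.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Drop in speed (s) during the repeated sprint test, synthetic data.
        drop_injured = rng.normal(0.07, 0.03, size=12)   # prior hamstring injury
        drop_healthy = rng.normal(0.02, 0.03, size=63)   # no prior injury

        # Welch's independent-samples t-test (unequal variances assumed).
        t, p = stats.ttest_ind(drop_injured, drop_healthy, equal_var=False)
        print(f"t = {t:.2f}, p = {p:.4f}")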

  6. Discriminant analysis of essay, mathematics/science type of essay, college scholastic ability test, and grade point average as predictors of acceptance to a pre-med course at a Korean medical school.

    Science.gov (United States)

    Jeong, Geum-Hee

    2008-01-01

    A discriminant analysis was conducted to investigate how an essay, a mathematics/science type of essay, a college scholastic ability test, and grade point average affect acceptance to a pre-med course at a Korean medical school. Subjects included 122 and 385 applicants for, respectively, early and regular admission to a medical school in Korea. The early admission examination was conducted in October 2007, and the regular admission examination was conducted in January 2008. The analysis of early admission data revealed significant F values for the mathematics/science type of essay (51.64; P<0.0001) and grade point average (10.66; P=0.0014). The analysis of regular admission data revealed the following F values: 28.81 (P<0.0001) for grade point average, 27.47 (P<0.0001) for college scholastic ability test, 10.67 (P=0.0012) for the essay, and 216.74 (P<0.0001) for the mathematics/science type of essay. Since the mathematics/science type of essay had a strong effect on acceptance, an emphasis on this requirement and exclusion of other kinds of essays would be effective in subsequent entrance examinations for this premed course.
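
    A discriminant analysis of this kind can be sketched in a few lines. The example below is a hedged, synthetic illustration using scikit-learn's linear discriminant analysis; the construction (acceptance driven mainly by the mathematics/science essay) mimics the reported pattern, but the data and coefficients are invented.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(2)
        n = 385
        # Columns: essay, math/science essay, scholastic ability test, GPA.
        X = rng.normal(size=(n, 4))
        # Synthetic acceptance, dominated by the math/science essay score.
        accepted = (2.0 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(size=n)) > 0

        lda = LinearDiscriminantAnalysis().fit(X, accepted)
        print("discriminant coefficients:", lda.coef_.round(2))
        print("training accuracy:", round(lda.score(X, accepted), 2))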

  7. Production characteristics of lettuce Lactuca sativa L. in the frame of the first crop tests in the Higher Plant Chamber integrated into the MELiSSA Pilot Plant

    Science.gov (United States)

    Tikhomirova, Natalia; Lawson, Jamie; Stasiak, Michael; Dixon, Mike; Paille, Christel; Peiro, Enrique; Fossen, Arnaud; Godia, Francesc

    Micro-Ecological Life Support System Alternative (MELiSSA) is an artificial closed ecosystem that is considered a tool for the development of a bioregenerative life support system for manned space missions. One of the five compartments of the MELiSSA loop, the Higher Plant Chamber, was recently integrated into the MELiSSA Pilot Plant facility at Universitat Autònoma de Barcelona. The main contributions expected from the integration of this photosynthetic compartment are oxygen, water and vegetable food production, and CO2 consumption. Production characteristics of Lactuca sativa L., as a MELiSSA candidate crop, were investigated in this work in the first crop experiments in the MELiSSA Pilot Plant facility. The plants were grown in batch culture and totaled 100 plants with a growing area 5 m long and 1 m wide in a sealed controlled environment. Several replicates of the experiments were carried out with varying duration. It was shown that after 46 days of lettuce cultivation dry edible biomass averaged 27.2 g per plant. However, accumulation of oxygen in the chamber, which required purging of the chamber, and a decrease in the food value of the plants were observed. Reducing the duration of the tests allowed uninterrupted tests without opening the system and also allowed estimation of the crop's carbon balance. Results of productivity, tissue composition, nutrient uptake and canopy photosynthesis of lettuce regardless of test duration are discussed in the paper.

  8. Intelligence Tests with Higher G-Loadings Show Higher Correlations with Body Symmetry: Evidence for a General Fitness Factor Mediated by Developmental Stability

    Science.gov (United States)

    Prokosch, M.D.; Yeo, R.A.; Miller, G.F.

    2005-01-01

    Just as body symmetry reveals developmental stability at the morphological level, general intelligence may reveal developmental stability at the level of brain development and cognitive functioning. These two forms of developmental stability may overlap by tapping into a "general fitness factor." If so, then intellectual tests with higher…

  9. Preoptometry and optometry school grade point average and optometry admissions test scores as predictors of performance on the national board of examiners in optometry part I (basic science) examination.

    Science.gov (United States)

    Bailey, J E; Yackle, K A; Yuen, M T; Voorhees, L I

    2000-04-01

    To evaluate preoptometry and optometry school grade point averages and Optometry Admission Test (OAT) scores as predictors of performance on the National Board of Examiners in Optometry (NBEO) Part I (Basic Science) (NBEOPI) examination. Simple and multiple correlation coefficients were computed from data obtained from a sample of three consecutive classes of optometry students (1995-1997; n = 278) at Southern California College of Optometry. The GPA after year two of optometry school showed the highest correlation (r = 0.75) among all predictor variables; the average of all scores on the OAT showed the highest correlation among preoptometry predictor variables (r = 0.46). Stepwise regression analysis indicated that a combination of the optometry GPA, the OAT Academic Average, and the GPA in certain optometry curricular tracks resulted in an improved correlation (multiple r = 0.81). Predicted NBEOPI scores were computed from the regression equation and then analyzed by receiver operating characteristic (ROC) and statistic of agreement (kappa) methods. From this analysis, we identified the predicted score that maximized identification of true and false NBEOPI failures (71% and 10%, respectively). Cross validation of this result on a separate class of optometry students resulted in a slightly lower correlation between actual and predicted NBEOPI scores (r = 0.77) but showed the criterion-predicted score to be somewhat lax. The optometry school GPA after 2 years is a reasonably good predictor of performance on the full NBEOPI examination, but the prediction is enhanced by adding the Academic Average OAT score. However, predicting performance in certain subject areas of the NBEOPI examination, for example Psychology and Ocular/Visual Biology, was rather insubstantial. Nevertheless, predicting NBEOPI performance from the best combination of year two optometry GPAs and preoptometry variables is better than has been shown in previous studies predicting optometry GPA from the best

  10. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  11. Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment

    Science.gov (United States)

    Boevé, Anja J.; Meijer, Rob R.; Albers, Casper J.; Beetsma, Yta; Bosker, Roel J.

    2015-01-01

    The introduction of computer-based testing in high-stakes examining in higher education is developing rather slowly due to institutional barriers (the need of extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams will result in similar results as paper-based exams and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance and change in acceptance of computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and get familiar with this new mode of test administration.

  12. Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment.

    Science.gov (United States)

    Boevé, Anja J; Meijer, Rob R; Albers, Casper J; Beetsma, Yta; Bosker, Roel J

    2015-01-01

    The introduction of computer-based testing in high-stakes examining in higher education is developing rather slowly due to institutional barriers (the need of extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams will result in similar results as paper-based exams and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance and change in acceptance of computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and get familiar with this new mode of test administration.

  13. Measurements of higher-order mode damping in the PEP-II low-power test cavity

    International Nuclear Information System (INIS)

    Rimmer, R.A.; Goldberg, D.A.

    1993-05-01

    The paper describes the results of measurements of the Higher-Order Mode (HOM) spectrum of the low-power test model of the PEP-II RF cavity and the reduction in the Q's of the modes achieved by the addition of dedicated damping waveguides. All the longitudinal (monopole) and deflecting (dipole) modes below the beam pipe cut-off are identified by comparing their measured frequencies and field distributions with calculations using the URMEL code. Field configurations were determined using a perturbation method with an automated bead positioning system. The loaded Q's agree well with the calculated values reported previously, and the strongest HOMs are damped by more than three orders of magnitude. This is sufficient to reduce the coupled-bunch growth rates to within the capability of a reasonable feedback system. A high power test cavity will now be built to validate the thermal design at the 150 kW nominal operating level, as described elsewhere at this conference

  14. Age related neuromuscular changes in sEMG of m. Tibialis Anterior using higher order statistics (Gaussianity & linearity test).

    Science.gov (United States)

    Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K

    2016-08-01

    Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributed to neuromuscular alterations that precede strength loss. We used our sEMG model of the Tibialis Anterior to interpret the age-related changes and compared it with the experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) participants performed isometric dorsiflexion at 6 different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVCs as in the experiment. The maximal power of the spectrum and the Gaussianity and linearity test statistics were computed from the simulated and experimental sEMG. A correlation analysis at α=0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show that the loss of motor units was distinguished by the Gaussianity and linearity test statistics, while the maximal power of the PSD distinguished between the muscular factors. The simulated condition of 40% loss of motor units with the number of fast fibers halved correlated best with the age-related change observed in the experimental sEMG higher order statistical features. The simulated aging condition found by this study corresponds with the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.
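
    The record relies on higher-order-statistics Gaussianity and linearity tests. As a much simpler stand-in, the sketch below checks Gaussianity of two simulated signals via skewness and kurtosis (D'Agostino's K² test in SciPy); the bispectrum-based tests actually used in such studies are more involved and are not reproduced here.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        # Many small, overlapping motor unit potentials sum to a near-
        # Gaussian signal; fewer, larger units give a heavier-tailed one.
        gaussian_like = rng.normal(size=5000)
        sparse_like = rng.laplace(size=5000)

        for name, sig in [("gaussian-like", gaussian_like),
                          ("sparse-like", sparse_like)]:
            k2, p = stats.normaltest(sig)   # D'Agostino-Pearson K^2
            print(f"{name}: K^2 = {k2:.1f}, p = {p:.3g}")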

  15. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
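
    For contrast with the FTDA, conventional time domain averaging is easy to sketch: cut the signal into revolutions of known period and average them, which suppresses everything not synchronous with that period. The toy example below assumes an exact integer period in samples; relaxing exactly that assumption (the source of period cutting error) is what the FTDA is designed for.

        import numpy as np

        rng = np.random.default_rng(4)
        fs, period = 1000, 100              # sample rate; samples per revolution
        t = np.arange(50 * period) / fs     # 50 revolutions
        clean = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
        noisy = clean + rng.normal(0.0, 1.0, size=t.size)

        # Classic TDA: average the 50 period-long segments.
        averaged = noisy.reshape(-1, period).mean(axis=0)

        # Residual noise shrinks roughly by sqrt(number of averages).
        gain = (noisy - clean).std() / (averaged - clean[:period]).std()
        print(f"noise reduction factor: {gain:.1f}")   # about sqrt(50) = 7.1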

  16. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
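
    The stated result is easy to verify numerically. In the sketch below, for a variable x and weighting functions v and w, the difference of the two weighted averages equals the v-weighted covariance of x with the ratio r = w/v, divided by the v-weighted mean of r; the data are arbitrary synthetic values.

        import numpy as np

        rng = np.random.default_rng(5)
        x = rng.normal(10, 2, size=1000)           # variable of interest
        v = rng.uniform(0.5, 1.5, size=1000)       # one weighting function
        w = v * rng.uniform(0.5, 1.5, size=1000)   # an alternative weighting

        def wmean(a, weights):
            return np.sum(a * weights) / np.sum(weights)

        r = w / v
        lhs = wmean(x, w) - wmean(x, v)
        rhs = (wmean(x * r, v) - wmean(x, v) * wmean(r, v)) / wmean(r, v)
        print(np.isclose(lhs, rhs))                # True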

  17. Using Movement Test to Evaluate Effectiveness of Health and Fitness Activities of Students in Higher Education Institutions

    Directory of Open Access Journals (Sweden)

    S. Pashkevich

    2018-03-01

    Full Text Available The study objective is to evaluate the possibility of using screening methods for determining the effectiveness of health and fitness activities of students in higher education institutions. Materials and methods. The participants in the experiment were 37 first-year students (17 boys and 20 girls) of the School of History of H. S. Skovoroda Kharkiv National Pedagogical University. The experiment lasted through the fall semester. Using the Framingham method for analyzing weekly timing, the study surveyed the students on their level of motor activity and performed functional movement screen testing. To tentatively evaluate the cause and effect relationship between the level of motor activity and the occurrence of a pathological movement pattern, the study used Spearman's rank correlation coefficient. The characteristics between the groups were analyzed using the Mann-Whitney test for comparing the distribution of ordinal variables. Results. The correlation analysis showed that the first-year students' motor activity was positively related to the results of functional movement screening (R=+0.69, p<0.05). At the same time, the students (EG1) who mainly had a high level of physical activity at physical education classes showed low values of functional movement evaluation, compared to the students (EG2) participating in extra-curricular physical activity. The overall screening score was 10.3±0.7 in EG1 and 14.2±0.9 in EG2 (p<0.05). Conclusions. The students with insufficient weekly motor activity had risk values of the test (10.3±0.7), which requires further analysis of the causes of a pathological movement pattern. The study results have confirmed the existence of a relationship between motor activity indicators and functional movement evaluation (R=+0.69, p<0.05). This provides a way to use the screening method of determining motor competence for the effectiveness evaluation of health and fitness programs, but further

  18. Raising test scores vs. teaching higher order thinking (HOT): senior science teachers' views on how several concurrent policies affect classroom practices

    Science.gov (United States)

    Zohar, Anat; Alboher Agmon, Vered

    2018-04-01

    This study investigates how senior science teachers viewed the effects of a Raising Test Scores policy and its implementation on instruction of higher order thinking (HOT), and on teaching thinking to students with low academic achievements.

  19. Predictive value of grade point average (GPA), Medical College Admission Test (MCAT), internal examinations (Block) and National Board of Medical Examiners (NBME) scores on Medical Council of Canada qualifying examination part I (MCCQE-1) scores.

    Science.gov (United States)

    Roy, Banibrata; Ripstein, Ira; Perry, Kyle; Cohen, Barry

    2016-01-01

    To determine whether the pre-medical Grade Point Average (GPA), Medical College Admission Test (MCAT), internal examination (Block) and National Board of Medical Examiners (NBME) scores are correlated with and predict the Medical Council of Canada Qualifying Examination Part I (MCCQE-1) scores. Data from 392 admitted students in the graduating classes of 2010-2013 at the University of Manitoba (UofM), College of Medicine were considered. Pearson's correlation to assess the strength of the relationship, multiple linear regression to estimate the MCCQE-1 score, and stepwise linear regression to investigate the amount of variance explained were employed. Complete data from 367 (94%) students were studied. The MCCQE-1 had a moderate-to-large positive correlation with NBME scores and Block scores but a low correlation with GPA and MCAT scores. The multiple linear regression model gives a good estimate of the MCCQE-1 (R2 = 0.604). Stepwise regression analysis demonstrated that 59.2% of the variation in the MCCQE-1 was accounted for by the NBME, but only 1.9% by the Block exams, and negligible variation came from the GPA and the MCAT. Amongst all the examinations used at UofM, the NBME is most closely correlated with the MCCQE-1.
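
    A minimal sketch of the kind of model described, assuming synthetic data: a multiple linear regression of MCCQE-1 scores on GPA, MCAT, Block and NBME scores, reporting R². Variable scales and effect sizes are invented for illustration and do not reproduce the study's data.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 367
        nbme = rng.normal(70, 8, size=n)
        block = 0.5 * nbme + rng.normal(35, 5, size=n)
        gpa = rng.normal(3.7, 0.2, size=n)
        mcat = rng.normal(30, 3, size=n)
        # Synthetic outcome dominated by NBME, echoing the reported pattern.
        mccqe1 = 5.0 * nbme + 0.5 * block + rng.normal(0, 30, size=n)

        X = sm.add_constant(np.column_stack([gpa, mcat, block, nbme]))
        fit = sm.OLS(mccqe1, X).fit()
        print(f"R^2 = {fit.rsquared:.3f}")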

  20. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  1. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
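
    The core bias is a Jensen's-inequality effect and can be demonstrated in a few lines: for a lognormal ensemble, exponentiating the mean of the log-abundance (logarithmic averaging) falls below the true mean abundance, and the gap grows with the natural variability. This is a toy sketch, not the authors' system simulator; it ignores retrieval noise and priors entirely.

        import numpy as np

        rng = np.random.default_rng(7)
        for sigma in (0.1, 0.5, 1.0):     # local natural variability
            abundance = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)
            linear_avg = abundance.mean()
            log_avg = np.exp(np.log(abundance).mean())
            print(f"sigma={sigma}: linear={linear_avg:.3f} "
                  f"logarithmic={log_avg:.3f} "
                  f"bias={100 * (log_avg / linear_avg - 1):.0f}%")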

  2. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  3. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  4. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  5. Opening a Side-Gate: Engaging the Excluded in Chilean Higher Education through Test-Blind Admission

    Science.gov (United States)

    Koljatic, Mladen; Silva, Monica

    2013-01-01

    The article describes a test-blind admission initiative in a Chilean research university aimed at expanding the inclusion of talented, albeit educationally and socially disadvantaged, students. The outcomes of the test-blind admission cohort were compared with those of students admitted via the regular admission procedure to the same academic…

  6. Aptitude Tests Versus School Exams as Selection Tools for Higher Education and the Case for Assessing Educational Achievement in Context

    Science.gov (United States)

    Stringer, Neil

    2008-01-01

    Advocates of using a US-style SAT for university selection claim that it is fairer to applicants from disadvantaged backgrounds than achievement tests because it assesses potential, not achievement, and that it allows finer discrimination between top applicants than GCEs. The pros and cons of aptitude tests in principle are discussed, focusing on…

  7. Testing compound-specific δ13C of amino acids in mussels as a new approach to determine the average 13C values of primary production in littoral ecosystems

    Science.gov (United States)

    Vokhshoori, N. L.; Larsen, T.; McCarthy, M.

    2012-12-01

    Compound-specific isotope analysis of amino acids (CSI-AA) is a technique used to decouple trophic enrichment patterns from source changes at the base of the food web. With this new emerging tool, it is possible to precisely determine both trophic position and δ15N or δ13C source values in higher feeding organisms. While most work to date has focused on nitrogen (N) isotopic values, early work has suggested that δ13C CSI-AA has great potential as a new tracer, both to record δ13C values of primary production (unaltered by trophic transfers) and to "fingerprint" specific carbon source organisms. Since essential amino acids (EAA) cannot be made de novo in metazoans but must be obtained from diet, the δ13C value of the primary producer is preserved through the food web. Therefore, the δ13C values of EAAs act as a unique signature of different primary producers and can be used to fingerprint the dominant carbon (C) source driving primary production at the base of the food web. In littoral ecosystems, such as the California Upwelling System (CUS), the likely dominant C sources of the suspended particulate organic matter (POM) pool are kelp, upwelling phytoplankton or estuarine phytoplankton. While bulk isotopes of C and N are used extensively to resolve relative consumer hierarchy or shifting diet in a food web, we found that bulk δ13C values in mussels cannot distinguish the exact source in littoral ecosystems. Here we present 15 sites within the CUS, between Cape Blanco, OR, and La Jolla, CA, where mussels were sampled and analyzed for both bulk δ13C and CSI-AA. We found no latitudinal trends; rather, average bulk δ13C values for the entire coastal record were highly consistent (-15.7 ± 0.9‰). The bulk record would suggest either nutrient provisioning from kelp or upwelled phytoplankton, but 13C-AA fingerprinting confines these two sources to upwelling. This suggests that mussels are recording integrated coastal phytoplankton values, with the enriched

  8. Does provider-initiated HIV testing and counselling lead to higher HIV testing rate and HIV case finding in Rwandan clinics?

    NARCIS (Netherlands)

    Kayigamba, Felix R.; van Santen, Daniëla; Bakker, Mirjam I.; Lammers, Judith; Mugisha, Veronicah; Bagiruwigize, Emmanuel; de Naeyer, Ludwig; Asiimwe, Anita; Schim van der Loeff, Maarten F.

    2016-01-01

    Provider-initiated HIV testing and counselling (PITC) is promoted as a means to increase HIV case finding. We assessed the effectiveness of PITC to increase HIV testing rate and HIV case finding among outpatients in Rwandan health facilities (HF). PITC was introduced in six HFs in 2009-2010. HIV

  9. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  10. ENGLISH IN INDONESIAN ISLAMIC HIGHER EDUCATION: Examining The Relationship between Performance in The Yes/No Test and Reading Skills

    Directory of Open Access Journals (Sweden)

    Sahiruddin Sahiruddin

    2008-12-01

    Full Text Available This study examines the relationship between performance in the Yes/No test of English recognition vocabulary and reading skills in Indonesian Islamic learners of English as a foreign language (EFL). Participants in the study were 83 Indonesian undergraduate students, comprising an Advanced group (n=41) and an Intermediate group (n=42) of EFL learners enrolled in the English department at the State Islamic University (UIN) of Malang, Indonesia. All participants completed both tests. The results reveal that the difference in hits accuracy between the Advanced EFL group and the Intermediate EFL group was statistically significant, indicating that Yes/No test performance, in terms of hits accuracy, did discriminate between levels of English proficiency. However, the differences disappeared with corrected scores, since both groups showed a high false alarm rate. In addition, this study reveals no evidence of a relationship between Yes/No performance and reading scores. Several pedagogical implications for EFL language teachers are discussed.

  11. Development and testing of improved polyimide actuator rod seals at higher temperatures for use in advanced aircraft hydraulic systems

    Science.gov (United States)

    Robinson, E. D.; Waterman, A. W.; Nelson, W. G.

    1972-01-01

    Polyimide second stage rod seals were evaluated to determine their suitability for application in advanced aircraft systems. The configurations of the seals are described. The conditions of the life cycle tests are provided. It was determined that external rod seal leakage was within prescribed limits and that the seals showed no signs of structural degradation.

  12. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  13. Glycogen with short average chain length enhances bacterial durability

    Science.gov (United States)

    Wang, Liang; Wise, Michael J.

    2011-09-01

    Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability, that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call that a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology will be assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.

  14. Bias associated with delayed verification in test accuracy studies: accuracy of tests for endometrial hyperplasia may be much higher than we think!

    OpenAIRE

    Clark, T Justin; ter Riet, Gerben; Coomarasamy, Aravinthan; Khan, Khalid S

    2004-01-01

    Abstract Background To empirically evaluate bias in estimation of accuracy associated with delay in verification of diagnosis among studies evaluating tests for predicting endometrial hyperplasia. Methods Systematic reviews of all published research on accuracy of miniature endometrial biopsy and endometrial ultrasonography for diagnosing endometrial hyperplasia identified 27 test accuracy studies (2,982 subjects). Of these, 16 had immediate histological verification of diagnosis while 11 ha...

  15. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
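
    In the spirit of the record above, a first-order ARMA graph filter can be run as a simple distributed recursion: y ← ψ·L·y + φ·x converges, when |ψ| times the spectral norm of L is below one, to the rational graph-frequency response y = φ(I − ψL)⁻¹x. The sketch below uses a random graph and arbitrary coefficients; it illustrates the recursion, not the authors' design procedure.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 20
        A = (rng.random((n, n)) < 0.2).astype(float)
        A = np.triu(A, 1)
        A = A + A.T                                  # undirected graph
        L = np.diag(A.sum(axis=1)) - A               # graph Laplacian
        L = L / np.linalg.eigvalsh(L).max()          # scale spectrum into [0, 1]

        psi, phi = 0.5, 1.0                          # ARMA(1) coefficients
        x = rng.normal(size=n)                       # input graph signal

        y = np.zeros(n)
        for _ in range(100):                         # distributed iterations
            y = psi * (L @ y) + phi * x

        closed_form = phi * np.linalg.solve(np.eye(n) - psi * L, x)
        print(np.allclose(y, closed_form))           # True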

  16. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^−6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^−8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  17. Bias associated with delayed verification in test accuracy studies: accuracy of tests for endometrial hyperplasia may be much higher than we think!

    Directory of Open Access Journals (Sweden)

    Coomarasamy Aravinthan

    2004-05-01

    Full Text Available Abstract Background To empirically evaluate bias in estimation of accuracy associated with delay in verification of diagnosis among studies evaluating tests for predicting endometrial hyperplasia. Methods Systematic reviews of all published research on accuracy of miniature endometrial biopsy and endometrial ultrasonography for diagnosing endometrial hyperplasia identified 27 test accuracy studies (2,982 subjects). Of these, 16 had immediate histological verification of diagnosis while 11 had verification delayed > 24 hrs after testing. The effect of delay in verification of diagnosis on estimates of accuracy was evaluated using meta-regression with diagnostic odds ratio (dOR) as the accuracy measure. This analysis was adjusted for study quality and type of test (miniature endometrial biopsy or endometrial ultrasound). Results Compared to studies with immediate verification of diagnosis (dOR 67.2, 95% CI 21.7–208.8), those with delayed verification (dOR 16.2, 95% CI 8.6–30.5) underestimated the diagnostic accuracy by 74% (95% CI 7%–99%; P value = 0.048). Conclusion Among studies of miniature endometrial biopsy and endometrial ultrasound, diagnostic accuracy is considerably underestimated if there is a delay in histological verification of diagnosis.

  18. Exceptional F(4) higher-spin theory in AdS{sub 6} at one-loop and other tests of duality

    Energy Technology Data Exchange (ETDEWEB)

    Günaydin, Murat [Institute for Gravitation and the Cosmos Physics Department, Pennsylvania State University, University Park, PA 16802 (United States); Skvortsov, Evgeny [Arnold Sommerfeld Center for Theoretical Physics, Ludwig-Maximilians University Munich, Theresienstr. 37, D-80333 Munich (Germany); Lebedev Institute of Physics, Leninsky ave. 53, 119991 Moscow (Russian Federation); Tran, Tung [Department of Physics, Brown University, Providence, Rhode Island 02912 (United States)

    2016-11-28

    We study the higher-spin gauge theory in six-dimensional anti-de Sitter space AdS{sub 6} that is based on the exceptional Lie superalgebra F(4). The relevant higher-spin algebra was constructed in http://arxiv.org/abs/1409.2185. We determine the spectrum of the theory and show that it contains the physical fields of the Romans F(4) gauged supergravity. The full spectrum consists of an infinite tower of unitary supermultiplets of F(4) which extend the Romans multiplet to higher spins plus a single short supermultiplet. Motivated by applications to this novel supersymmetric higher-spin theory as well as to other theories, we extend the known one-loop tests of AdS/CFT duality in various directions. The spectral zeta-function is derived for the most general case of fermionic and mixed-symmetry fields, which allows one to test the Type-A and B theories and supersymmetric extensions thereof in any dimension. We also study higher-spin doubletons and partially-massless fields. While most of the tests are successfully passed, the Type-B theory in all even dimensional anti-de Sitter spacetimes presents an interesting puzzle: the free energy as computed from the bulk is not equal to that of the free fermion on the CFT side, though there is some systematics to the discrepancy.

  19. Test Analysis Tools to Ensure Higher Quality of On-Board Real Time Software for Space Applications

    Science.gov (United States)

    Boudillet, O.; Mescam, J.-C.; Dalemagne, D.

    2008-08-01

    EADS Astrium Space Transportation, at its Les Mureaux premises, is responsible for the onboard SW of the French M51 nuclear deterrent missile. It has also developed over 1 million lines of code, mostly in Ada, for the Automated Transfer Vehicle (ATV) onboard SW and for the flight control SW of the ARIANE5 launcher which put the ATV into orbit. As part of the ATV SW, ASTRIUM ST has developed the first Category A SW ever qualified for a European space application. To ensure that all this embedded SW is developed to the highest quality and reliability level, specific development tools have been designed to cover the steps of source code verification, automated validation testing and complete target instruction coverage verification. Three such dedicated tools are presented here.

  20. Potential impact on HIV incidence of higher HIV testing rates and earlier antiretroviral therapy initiation in MSM

    DEFF Research Database (Denmark)

    Phillips, Andrew N; Cambiano, Valentina; Miners, Alec

    2015-01-01

    BACKGROUND: Increased rates of testing, with early antiretroviral therapy (ART) initiation, represent a key potential HIV-prevention approach. Currently, in MSM in the United Kingdom, it is estimated that 36% are diagnosed by 1 year from infection, and the ART initiation threshold is at CD4 cell count 350/μl. We investigated what would be required to reduce HIV incidence in MSM to below 1 per 1000 person-years (i.e. ... cost-effective. METHODS: A dynamic, individual-based simulation model was calibrated to multiple data sources ... with viral suppression to 80%, and it would be 90% if ART is initiated at diagnosis. The scenarios required for such a policy to be cost-effective are presented. CONCLUSION: This analysis provides targets for the proportion of all HIV-positive MSM with viral suppression required to achieve substantial ...

  1. Raising Test Scores vs. Teaching Higher Order Thinking (HOT): Senior Science Teachers' Views on How Several Concurrent Policies Affect Classroom Practices

    Science.gov (United States)

    Zohar, Anat; Alboher Agmon, Vered

    2018-01-01

    Purpose: This study investigates how senior science teachers viewed the effects of a Raising Test Scores policy and its implementation on instruction of higher order thinking (HOT), and on teaching thinking to students with low academic achievements. Background: The study was conducted in the context of three concurrent policies advocating: (a)…

  2. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  3. Experimental study on the natural gas dual fuel engine test and the higher the mixture ratio of hydrogen to natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Kim, B.S.; Lee, Y.S.; Park, C.K. [Cheonnam University, Kwangju (Korea); Masahiro, S. [Kyoto University, Kyoto (Japan)

    1999-05-28

    One of the unsolved problems of the natural gas dual fuel engine is that it emits too much total hydrocarbon (THC) at a low equivalence ratio. To address this, natural gas mixed with hydrogen was applied in an engine test. The results showed that the higher the mixture ratio of hydrogen to natural gas, the higher the combustion efficiency. In addition, when the amount of intake air reached 90% of WOT, the combustion efficiency was improved. However, as when the injection timing is advanced, the equivalence ratio at the knocking limit decreases and NOx production increases. 5 refs., 9 figs., 1 tab.

  4. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  5. Trend analysis of δ18O composition of precipitation in Germany: Combining Mann-Kendall trend test and ARIMA models to correct for higher order serial correlation

    Science.gov (United States)

    Klaus, Julian; Pan Chun, Kwok; Stumpp, Christine

    2015-04-01

    Spatio-temporal dynamics of stable oxygen (18O) and hydrogen (2H) isotopes in precipitation can be used as proxies for changing hydro-meteorological conditions and regional and global climate patterns. While spatial patterns and distributions have gained much attention in recent years, temporal trends in stable isotope time series are rarely investigated and our understanding of them is still limited. This might result from a lack of proper trend detection tools and of effort spent exploring trend processes. Here we make use of an extensive data set of stable isotopes in German precipitation. In this study we investigate temporal trends of δ18O in precipitation at 17 observation stations in Germany between 1978 and 2009. For that we test different approaches for proper trend detection, accounting for first and higher order serial correlation. We test whether significant trends in the isotope time series can be observed based on different models. We apply the Mann-Kendall trend test to the isotope series, using general multiplicative seasonal autoregressive integrated moving average (ARIMA) models which account for first and higher order serial correlations. With this approach we can also account for the effects of temperature and precipitation amount on the trend. Further, we investigate the role of geographic parameters on isotope trends. To benchmark our proposed approach, the ARIMA results are compared to a trend-free prewhitening (TFPW) procedure, the state of the art method for removing first order autocorrelation in environmental trend studies. Moreover, we explore whether higher order serial correlation in the isotope series affects our trend results. The results show that three out of the 17 stations have significant changes when higher order autocorrelation is adjusted for, and four stations show a significant trend when temperature and precipitation effects are considered. Significant trends in the isotope time series are generally observed at low elevation stations (≤315 m a
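
    The Mann-Kendall statistic itself is short enough to write out directly. The sketch below implements the basic test with the usual normal approximation and continuity correction, ignoring ties and, importantly, the serial-correlation corrections (ARIMA, TFPW) that the study is actually about; the example series is synthetic.

        import numpy as np
        from scipy import stats

        def mann_kendall(x):
            n = len(x)
            # S: sum of signs of all pairwise later-minus-earlier differences.
            s = sum(np.sign(x[j] - x[i])
                    for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance without ties
            z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
            p = 2 * stats.norm.sf(abs(z))              # two-sided p-value
            return z, p

        rng = np.random.default_rng(9)
        series = -0.02 * np.arange(120) + rng.normal(0, 1, size=120)
        print("z = %.2f, p = %.4f" % mann_kendall(series))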

  6. Development of a Mechanical Engineering Test Item Bank to promote learning outcomes-based education in Japanese and Indonesian higher education institutions

    Directory of Open Access Journals (Sweden)

    Jeffrey S. Cross

    2017-11-01

    Full Text Available Following on from the 2008-2012 OECD Assessment of Higher Education Learning Outcomes (AHELO) feasibility study of civil engineering, a mechanical engineering learning outcomes assessment working group was established in Japan within the National Institute of Education Research (NIER), which became the Tuning National Center for Japan. The purpose of the project is to develop, among engineering faculty members, common understandings of engineering learning outcomes through the collaborative process of test item development, scoring, and sharing of results. By substantiating abstract level learning outcomes into concrete level learning outcomes that are attainable and assessable, and through measuring and comparing the students' achievement of learning outcomes, it is anticipated that faculty members will be able to draw practical implications for educational improvement at the program and course levels. The development of a mechanical engineering test item bank began with test item development workshops, which led to a series of trial tests, and then to a large scale test implementation in 2016 of 348 first semester master's students in 9 institutions in Japan, using both multiple choice questions designed to measure the mastery of basic and engineering sciences, and a constructive response task designed to measure "how well students can think like an engineer." The same set of test items was translated from Japanese into English and Indonesian, and used to measure achievement of learning outcomes at Indonesia's Institut Teknologi Bandung (ITB) on 37 rising fourth year undergraduate students. This paper highlights how learning outcomes assessment can effectively facilitate learning outcomes-based education, by documenting the experience of Japanese and Indonesian mechanical engineering faculty members engaged in the NIER Test Item Bank project. First published online: 30 November 2017

  7. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  8. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies above the Friedmannian one, and below it at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  9. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest, then accelerating the particles to full energy to produce distinct and independently controlled phase-energy correlations, or chirps, on each bunch train, set by the choice of phase offset. The earlier trains are more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56 selected to compress all three bunch trains at the FEL, with higher-order terms managed.

  10. Increasing average period lengths by switching of robust chaos maps in finite precision

    Science.gov (United States)

    Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.

    2008-12-01

    Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits as compared to simple sequential switching or absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
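
    A minimal sketch of the underlying idea, not the authors' implementation: emulate finite precision by rounding states onto a grid of spacing ε, measure the period of the attracting cycle, and compare a single map against deterministic chaotic switching between two maps. The tent/logistic pair and all parameters here are illustrative assumptions, not the Robust Chaos maps of the paper.

        # Python sketch: period length of quantized chaotic maps, with and
        # without chaotic switching (illustrative parameters throughout).
        EPS = 1e-5

        def q(x):
            # emulate finite precision by rounding onto a grid of spacing EPS
            return round(x / EPS) * EPS

        def tent(x):
            return 2 * x if x < 0.5 else 2 * (1 - x)

        def logistic(x):
            return 4.0 * x * (1.0 - x)

        def period(step, state, max_iter=1_000_000):
            # Iterate a deterministic map until the quantized state repeats;
            # return the length of the attracting cycle.
            seen = {}
            for i in range(max_iter):
                if state in seen:
                    return i - seen[state]
                seen[state] = i
                state = step(state)
            return None

        # No switching: the logistic map alone.
        print("single map:", period(lambda x: q(logistic(x)), q(0.123456)))

        # Chaotic switching: a driver map deterministically selects which map
        # to apply, so the composite system (x, y) stays deterministic and its
        # period is well defined; it is typically much longer.
        def switched(state):
            x, y = state
            f = tent if y < 0.5 else logistic
            return (q(f(x)), q(logistic(y)))

        print("switched:  ", period(switched, (q(0.123456), q(0.654321))))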

  11. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....

  12. Do later wake times and increased sleep duration of 12th graders result in more studying, higher grades, and improved SAT/ACT test scores?

    Science.gov (United States)

    Cole, James S

    2016-09-01

    The aim of this study was to investigate the relationship between sleep duration, wake time, and hours studying on high school grades and performance on the Scholastic Aptitude Test (SAT)/American College Testing (ACT) college entrance exams. Data were collected from 13,071 recently graduated high school seniors who were entering college in the fall of 2014. A column proportions z test with a Bonferroni adjustment was used to analyze proportional differences. Analysis of covariance (ANCOVA) was used to examine mean group differences. Students who woke up prior to 6 a.m. and got less than 8 h of sleep (27 %) were significantly more likely to report studying 11 or more hours per week (30 %), almost double the rate compared to students who got more than 8 h of sleep and woke up the latest (16 %). Post hoc results revealed that students who woke up at 7 a.m. or later reported significantly higher high school grades than all other groups, except students who woke up between 6:01 a.m. and 7:00 a.m. and got eight or more hours of sleep. The highest reported SAT/ACT scores were from the group that woke up after 7 a.m. but got less than 8 h sleep (M = 1099.5). Their scores were significantly higher than all other groups. This study provides additional evidence that increased sleep and later wake time are associated with increased high school grades. However, this study also found that students who sleep the longest also reported less studying and lower SAT/ACT scores.

  13. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons
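
    For orientation, a schematic of the type of model involved, in notation assumed here rather than taken from the paper: the nonlinearity-managed NLS and the mean coefficient that survives averaging,

        i\,u_t + \tfrac{1}{2} u_{xx} + \gamma(t/\varepsilon)\,|u|^2 u = 0,
        \qquad
        \bar{\gamma} = \frac{1}{T} \int_0^T \gamma(s)\, ds,

    with γ a T-periodic management function and ε the fast period. To leading order the soliton obeys the equation with γ replaced by γ̄; the Hamiltonian averaged equation derived in the paper refines this leading-order picture.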

  14. A puzzle form of a non-verbal intelligence test gives significantly higher performance measures in children with severe intellectual disability.

    Science.gov (United States)

    Bello, Katrina D; Goharpey, Nahal; Crewther, Sheila G; Crewther, David P

    2008-08-01

    Assessment of 'potential intellectual ability' of children with severe intellectual disability (ID) is limited, as current tests designed for normal children do not maintain their interest. Thus a manual puzzle version of the Raven's Coloured Progressive Matrices (RCPM) was devised to appeal to the attentional and sensory preferences and language limitations of children with ID. It was hypothesized that performance on the book and manual puzzle forms would not differ for typically developing children but that children with ID would perform better on the puzzle form. The first study assessed the validity of this puzzle form of the RCPM for 76 typically developing children in a test-retest crossover design, with a 3 week interval between tests. A second study tested performance and completion rate for the puzzle form compared to the book form in a sample of 164 children with ID. In the first study, no significant difference was found between performance on the puzzle and book forms in typically developing children, irrespective of the order of completion. The second study demonstrated a significantly higher performance and completion rate for the puzzle form compared to the book form in the ID population. Similar performance on book and puzzle forms of the RCPM by typically developing children suggests that both forms measure the same construct. These findings suggest that the puzzle form does not require greater cognitive ability but demands sensory-motor attention and limits distraction in children with severe ID. Thus, we suggest the puzzle form of the RCPM is a more reliable measure of the non-verbal mentation of children with severe ID than the book form.

  15. Higher Education

    African Journals Online (AJOL)

    Kunle Amuwo: Higher Education Transformation: A Paradigm Shift in South Africa? ... ty of such skills, especially at the middle management levels within the higher ... istics and virtues of differentiation and diversity. ... may be forced to close shop for lack of capacity to attract ... necessarily lead to racial and gender equity, ...

  16. HOM (higher-order mode) test of the storage ring single-cell cavity with a 20-MeV e- beam for the Advanced Photon Source (APS)

    International Nuclear Information System (INIS)

    Song, J.; Kang, Y.W.; Kustom, R.

    1993-01-01

    To test the effectiveness of damping techniques for the APS storage ring single-cell cavity, a beamline has been designed and assembled to use the ANL Chemistry Division linac beam (20 MeV, FWHM of 20 ps). A single-cell cavity will be excited by the electron beam to investigate the effect on higher-order modes (HOMs) with and without coaxial dampers (H-loop damper, E-probe damper) and wideband aperture dampers. In order for the beam to propagate on- and off-center of the cavity, the beamline consists of two sections -- a beam collimating section and a cavity measurement section -- separated by two double aluminum-foil windows. RF cavity measurements were made with coupling loops and E-probes. The results are compared with both the TBCI calculations and 'cold' measurements with the bead-perturbation method. The data acquisition system and beam diagnostics will be described in a separate paper.

  17. The Higher the Insulin Resistance the Lower the Cardiac Output in Men with Type 1 Diabetes During the Maximal Exercise Test.

    Science.gov (United States)

    Niedzwiecki, Pawel; Naskret, Dariusz; Pilacinski, Stanislaw; Pempera, Maciej; Uruska, Aleksandra; Adamska, Anna; Zozulinska-Ziolkiewicz, Dorota

    2017-06-01

    The aim of this study was to assess the hemodynamic parameters analyzed in bioimpedance cardiography during maximal exercise in patients with type 1 diabetes differing in insulin resistance. The study group consisted of 40 men with type 1 diabetes. Tissue sensitivity to insulin was assessed on the basis of the glucose disposal rate (GDR) analyzed during a hyperinsulinemic-euglycemic clamp. Patients were divided into groups with GDR < 4.5 mg/kg/min (G1 group, lower insulin sensitivity) and GDR ≥ 4.5 mg/kg/min (G2 group, higher insulin sensitivity). During the exercise test, the heart rate, systolic volume, cardiac output, and cardiac index were measured by the impedance meter (PhysioFlow). Compared with the G2 group, the G1 group had a lower cardiac output (CO) during exercise: 8.6 (IQR 7.7-10.0) versus 12.8 (IQR 10.8-13.7) L/min. Insulin resistance is associated with cardiac hemodynamic parameters assessed during and after exercise: the higher the insulin resistance, the lower the cardiac output during maximal exercise in men with type 1 diabetes.

  19. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceeds $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of application tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  20. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was begun, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wavefront aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  1. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  2. Higher Education.

    Science.gov (United States)

    Hendrickson, Robert M.

    This chapter reports 1982 cases involving aspects of higher education. Interesting cases noted dealt with the federal government's authority to regulate state employees' retirement and raised the questions of whether Title IX covers employment, whether financial aid makes a college a program under Title IX, and whether sex segregated mortality…

  3. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
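
    Schematically, and in notation assumed here rather than taken from the paper, the relation reads

        \frac{dA}{d\xi} \;=\; -\,\langle F_\xi \rangle_{\xi},
        \qquad
        \Delta A \;=\; -\int_{\xi_0}^{\xi_1} \langle F_\xi \rangle \, d\xi,

    where A is the free energy, ξ the selected generalized coordinate, F_ξ the instantaneous force defined above, and the average is taken over configurations with the coordinate at (or binned near) ξ; free energy profiles then follow by integrating the average force.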

  4. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  5. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  6. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  7. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  8. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  9. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which

  10. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    Full Text Available The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4 and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.

  11. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.

  12. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    40 CFR 76.11 (2010), Protection of Environment: ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM, § 76.11 Emissions averaging. (a) General...

  13. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  14. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  15. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Full Text Available Before 1998, the definition of wages in Poland did not include the value of social security contributions. The changed definition produces a higher level of reported wages, but was expected not to influence take-home pay. Nevertheless, after a short period, the trend of average wages returned to its previous line. This effect is interpreted in terms of money illusion.

  16. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...

  17. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ and J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended.
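
    In assumed standard notation (the code's exact conventions may differ), the bounce average of a quantity Q along a field line between turning points is

        \langle Q \rangle_b \;=\; \frac{1}{\tau_b} \oint \frac{d\ell}{|v_\parallel|}\, Q,
        \qquad
        \tau_b \;=\; \oint \frac{d\ell}{|v_\parallel|},

    i.e., a time average over one bounce orbit; this is the kind of field-line average applied to the collision operator and the sources.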

  18. Test

    DEFF Research Database (Denmark)

    Bendixen, Carsten

    2014-01-01

    A contribution giving a brief, introductory, perspective-setting, and concept-clarifying account of the concept of the test in the educational universe.

  19. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  20. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  1. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  2. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  3. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  4. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  5. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We verify the model's outcomes with examples and simulation results using the NS2 simulator.
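
    One plausible reading of such an iterative scheme, sketched here under the assumption that it amounts to weighted max-min fairness (the paper's exact iteration may differ): split the link capacity in proportion to the WFQ weights, cap each flow at its offered input rate, and redistribute the unused capacity.

        def wfq_average_bandwidth(link_rate, weights, demands):
            # Weighted max-min allocation: give each active flow a share of the
            # remaining capacity proportional to its weight, cap demand-limited
            # flows at their input rate, and redistribute what they leave unused.
            n = len(weights)
            alloc = [0.0] * n
            active = set(range(n))
            capacity = float(link_rate)
            while active and capacity > 1e-12:
                total_w = sum(weights[i] for i in active)
                share = {i: capacity * weights[i] / total_w for i in active}
                limited = {i for i in active if demands[i] - alloc[i] <= share[i]}
                if not limited:
                    for i in active:        # nobody is demand-limited: done
                        alloc[i] += share[i]
                    break
                for i in limited:           # satisfy demand-limited flows fully
                    capacity -= demands[i] - alloc[i]
                    alloc[i] = demands[i]
                active -= limited
            return alloc

        # Three flows, weights 5:3:2 on a 10 Mb/s link; flow 1 offers only
        # 1 Mb/s, so its leftover share is split 5:2 between the other flows.
        print(wfq_average_bandwidth(10.0, [5, 3, 2], [8.0, 1.0, 6.0]))
        # -> approximately [6.43, 1.0, 2.57]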

  6. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  7. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  8. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2(k) for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
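
    As a concrete instance of the entropy bound mentioned above (a sketch; the chapter's precise statement may differ in details):

        import math

        def entropy_lower_bound(probabilities, k=2):
            # H(p) / log2(k): lower bound on the minimum average depth of a
            # decision tree for a problem over a k-valued information system.
            h = -sum(p * math.log2(p) for p in probabilities if p > 0)
            return h / math.log2(k)

        # Uniform distribution over 8 outcomes with binary attributes: the
        # bound is 3, attained by a balanced tree of depth 3.
        print(entropy_lower_bound([1 / 8] * 8, k=2))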

  9. The HART-II Test: Rotor Wakes and Aeroacoustics with Higher-Harmonic Pitch Control (HHC) Inputs - The Joint German/French/Dutch/US Project

    National Research Council Canada - National Science Library

    Yu, Yung H; Tung, Chee; van der Wall, Berend; Pausder, Heinz-Juergen; Burley, Casey; Brooks, Thomas; Beaumier, Philippe; Delrieux, Yves; Mercker, Edzard; Pengel, Kurt

    2002-01-01

    ...). The main objective of the program is to improve the basic understanding and the analytical modeling capabilities of rotor blade-vortex interaction noise with and without higher harmonic pitch control (HHC...

  10. Can the bivariate Hurst exponent be higher than an average of the separate Hurst exponents?

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    2015-01-01

    Roč. 431, č. 1 (2015), s. 124-127 ISSN 0378-4371 R&D Projects: GA ČR(CZ) GP14-11402P Institutional support: RVO:67985556 Keywords: Correlations * Power-law cross-correlations * Bivariate Hurst exponent * Spectrum coherence Subject RIV: AH - Economics Impact factor: 1.785, year: 2015 http://library.utia.cas.cz/separaty/2015/E/kristoufek-0452314.pdf

  11. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
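
    A minimal numerical sketch of robust pixelwise averaging in this spirit (illustrative only; the authors' algorithm also prunes unreliable areas, removes alignment drift, and exposes tunable rejection criteria):

        import numpy as np

        def robust_phase_average(maps, nan_frac_reject=0.3, sigma_clip=3.0):
            # Reject maps with too many invalid pixels (NaNs marking voids or
            # unwrapping defects), then sigma-clip the surviving stack before
            # taking the final per-pixel mean and standard deviation.
            stack = [m for m in maps if np.isnan(m).mean() <= nan_frac_reject]
            data = np.stack(stack)                        # (n_maps, rows, cols)
            mean = np.nanmean(data, axis=0)
            std = np.nanstd(data, axis=0)
            data[np.abs(data - mean) > sigma_clip * std] = np.nan
            return np.nanmean(data, axis=0), np.nanstd(data, axis=0)

        # Ten noisy 64x64 phase maps; one is ruined by a large-area defect.
        rng = np.random.default_rng(0)
        maps = [rng.normal(0.0, 0.01, (64, 64)) for _ in range(10)]
        maps[3][:, :40] = np.nan                          # large void: rejected
        avg, std_map = robust_phase_average(maps)
        print(avg.shape, float(std_map.mean()))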

  12. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used in 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
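
    A toy simulation of the classical pairwise gossip baseline that the paper builds on (synchronous picture only; the asynchronous subtleties and the reinforcement-learning fix are the paper's contribution and are not reproduced here):

        import random

        def gossip_average(values, n_rounds=2000, seed=1):
            # At each tick two randomly chosen nodes replace their values by
            # the pair's average; every exchange preserves the global mean,
            # so all values converge to it.
            x = list(values)
            rng = random.Random(seed)
            n = len(x)
            for _ in range(n_rounds):
                i, j = rng.sample(range(n), 2)
                x[i] = x[j] = (x[i] + x[j]) / 2
            return x

        vals = [3.0, 9.0, 1.0, 7.0]
        print(sum(vals) / len(vals))    # 5.0, the target
        print(gossip_average(vals))     # all entries close to 5.0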

  14. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  15. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  16. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
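
    The core computation is a trailing moving average followed by a correlation. A schematic version with synthetic stand-in series (the study's real inputs are annual book-derived and economic indices; names and parameters here are illustrative):

        import numpy as np

        def trailing_moving_average(series, window):
            # average of the previous `window` values, excluding the current year
            s = np.asarray(series, dtype=float)
            return np.array([s[t - window:t].mean() for t in range(window, len(s))])

        rng = np.random.default_rng(3)
        economic_misery = rng.normal(size=120)          # stand-in annual index
        # Build a literary index that tracks the prior decade's average misery
        # (the paper reports the best fit near an 11-year window).
        ma = trailing_moving_average(economic_misery, 11)
        literary_misery = ma + rng.normal(scale=0.2, size=ma.size)

        r = np.corrcoef(ma, literary_misery)[0, 1]
        print(f"correlation with 11-year trailing average: {r:.2f}")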

  17. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  18. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  19. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor in strong oceanic turbulence is also presented.
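
    For reference, the aperture averaging factor is conventionally defined (notation assumed here) as the irradiance flux variance on a receiver of diameter D normalized by the point-receiver scintillation index,

        A(D) \;=\; \frac{\sigma_I^2(D)}{\sigma_I^2(0)},

    so that A → 1 as D → 0 and decreases as the aperture grows, quantifying how much a larger lens smooths the intensity fluctuations.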

  20. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m^B̄ + Ω̄_R^B̄ + Ω̄_Λ^B̄ + Ω̄_Q^B̄ = 1, where Ω̄_m^B̄, Ω̄_R^B̄ and Ω̄_Λ^B̄ correspond to the standard Friedmannian parameters, while Ω̄_Q^B̄ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  1. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  2. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  3. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
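
    The element-wise trimmed average at the heart of TGA can be sketched in a few lines (requires SciPy; illustrative only, since the full method averages one-dimensional subspaces and scales to image data):

        import numpy as np
        from scipy.stats import trim_mean

        # Rows are observations (e.g., flattened images); a few rows carry
        # gross pixel outliers, which ruin the plain mean but not the
        # per-element trimmed mean.
        rng = np.random.default_rng(7)
        data = rng.normal(0.0, 1.0, size=(100, 5))
        data[:5] += 50.0                        # 5% gross outliers

        print(np.mean(data, axis=0))            # pulled toward the outliers
        print(trim_mean(data, 0.1, axis=0))     # stays close to 0 everywhere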

  4. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.

  5. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  6. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)

  7. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness, and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we describe a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  8. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization-optimal given that the prior matches the teacher parameter distribution, the situation is l...

  9. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  10. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  11. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L2-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.

  12. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of Physics, July 2007, pp. 31–47. A singularity theorem based on spatial ... In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no. ...

  13. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  14. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  15. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e., the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  16. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

Title 7 (Agriculture), 2010-01-01 edition. Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements …), Mushroom Promotion, Research, and Consumer Information Order, Definitions, § 1209.12 — On average.

  17. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  18. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  19. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, growing quadratically with the strength of the error sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  20. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  1. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  2. Are women positive for the One Step but negative for the Two Step screening tests for gestational diabetes at higher risk for adverse outcomes?

    Science.gov (United States)

    Caissutti, Claudia; Khalifeh, Adeeb; Saccone, Gabriele; Berghella, Vincenzo

    2018-02-01

The aim of this study was to evaluate if women meeting criteria for gestational diabetes mellitus (GDM) by the One Step test as per International Association of the Diabetes and Pregnancy Study Groups (IADPSG) criteria but not by other less strict criteria have adverse pregnancy outcomes compared with GDM-negative controls. The primary outcome was the incidence of macrosomia, defined as birthweight > 4000 g. Electronic databases were searched from their inception until May 2017. All studies identifying pregnant women negative at the Two Step test, but positive at the One Step test for IADPSG criteria were included. We excluded studies that randomized women to the One Step vs. the Two Step tests; studies that compared different criteria within the same screening method; randomized studies comparing treatments for GDM; and studies comparing incidence of GDM in women undergoing the One Step vs. the Two Step test. Eight retrospective cohort studies, encompassing 29 983 women, were included. Five study groups and four control groups were identified. The heterogeneity between the studies was high. Gestational hypertension, preeclampsia and large for gestational age, as well as in some analyses cesarean delivery, macrosomia and preterm birth, were significantly more frequent, and small for gestational age in some analyses significantly less frequent, in women GDM-positive by the One Step, but not the Two Step. Women meeting criteria for GDM by IADPSG criteria but not by other less strict criteria have an increased risk of adverse pregnancy outcomes such as gestational hypertension, preeclampsia and large for gestational age, compared with GDM-negative controls. Based on these findings, and evidence from other studies that treatment decreases these adverse outcomes, we suggest screening for GDM using the One Step IADPSG criteria. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

  3. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoresis measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
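
    [Illustrative sketch] A minimal Python sketch of a migration-velocity-adaptive smoothing step, assuming a window that grows linearly with migration time (later-eluting, slower analytes give lower-frequency peaks and tolerate a wider window). The growth constant, sampling rate and synthetic peaks are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def adaptive_moving_average(signal, fs, k=0.02, min_win=3):
    """Smooth each point with a window proportional to its migration time."""
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        t = i / fs                                  # migration time, s
        half = max(min_win, int(k * t * fs)) // 2   # window grows with t
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out

fs = 25.0                                            # sampling frequency, Hz
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(2)
# Two Gaussian peaks: an early (sharp) one and a late (broad) one, plus noise.
clean = np.exp(-((t - 30) / 0.8) ** 2) + np.exp(-((t - 90) / 2.5) ** 2)
noisy = clean + 0.05 * rng.standard_normal(t.size)

smoothed = adaptive_moving_average(noisy, fs)
print("noise std before/after:",
      (noisy - clean).std().round(4), (smoothed - clean).std().round(4))
```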

  4. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder – an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
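
    [Illustrative sketch] To illustrate the weighting step such an approach involves, a schematic Python sketch that combines two cluster-membership matrices using BIC-approximated posterior model weights. The membership matrices, the BIC values and the BIC approximation itself are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 10, 2                       # individuals, phenotype clusters

# Cluster-membership probabilities from two methods (rows sum to 1),
# standing in for latent class analysis (LCA) and grade of membership (GoM).
p_lca = rng.dirichlet(np.ones(k), size=n)
p_gom = rng.dirichlet(np.ones(k), size=n)

# Posterior model weights approximated from BIC: w_m proportional to exp(-BIC_m / 2).
bic = np.array([512.3, 508.9])     # hypothetical BIC values for (LCA, GoM)
w = np.exp(-(bic - bic.min()) / 2)
w /= w.sum()

p_avg = w[0] * p_lca + w[1] * p_gom      # model-averaged memberships
phenotype = p_avg.argmax(axis=1)         # consensus phenotype per person
print("model weights:", w.round(3))
print("averaged assignments:", phenotype)
```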

  5. Averaged differential expression for the discovery of biomarkers in the blood of patients with prostate cancer.

    Directory of Open Access Journals (Sweden)

    V Uma Bai

The identification of a blood-based diagnostic marker is a goal in many areas of medicine, including the early diagnosis of prostate cancer. We describe the use of averaged differential display as an efficient mechanism for biomarker discovery in whole blood RNA. The process of averaging reduces the problem of clinical heterogeneity while simultaneously minimizing sample handling. RNA was isolated from the blood of prostate cancer patients and healthy controls. Samples were pooled and subjected to the averaged differential display process. Transcripts present at different levels between patients and controls were purified and sequenced for identification. Transcript levels in the blood of prostate cancer patients and controls were verified by quantitative RT-PCR. Means were compared using a t-test and a receiver-operating curve was generated. The Ring finger protein 19A (RNF19A) transcript was identified as having higher levels in prostate cancer patients compared to healthy men through the averaged differential display process. Quantitative RT-PCR analysis confirmed a more than 2-fold higher level of RNF19A mRNA in the blood of patients with prostate cancer than in healthy controls (p = 0.0066). The accuracy of distinguishing cancer patients from healthy men using RNF19A mRNA levels in blood, as determined by the area under the receiver-operating curve, was 0.727. Averaged differential display offers a simplified approach for the comprehensive screening of body fluids, such as blood, to identify biomarkers in patients with prostate cancer. Furthermore, this proof-of-concept study warrants further analysis of RNF19A as a clinically relevant biomarker for prostate cancer detection.

  6. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
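
    [Illustrative sketch] A minimal numerical sketch of the estimator analysed above: each new periodogram is folded into the running PSD estimate with weight a, which plays the role of the inverse time constant. Block length, weight and the white-noise test signal are illustrative assumptions.

```python
import numpy as np

# Exponential averaging of subsequent periodograms: S_k = (1-a)*S_{k-1} + a*P_k.
rng = np.random.default_rng(4)
fs, nblock, a = 100.0, 256, 0.1          # Hz, samples per block, smoothing weight

psd = None
for _ in range(200):                      # 200 subsequent data blocks
    x = rng.standard_normal(nblock)       # independent (white) process
    X = np.fft.rfft(x)
    periodogram = (np.abs(X) ** 2) / (fs * nblock)
    psd = periodogram if psd is None else (1 - a) * psd + a * periodogram

# For unit-variance white noise the two-sided PSD is 1/fs = 0.01, which is
# what this (unscaled) estimate should hover around away from DC/Nyquist.
print("mean PSD estimate:", psd[1:-1].mean())
```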

  7. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  8. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average work productivity across the factors affecting it is conducted by means of the u-substitution method.

  9. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  10. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  11. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  12. Size and emotion averaging: costs of dividing attention after all.

    Science.gov (United States)

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  13. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  14. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  15. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  16. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
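
    [Illustrative sketch] For concreteness, a tiny Python sketch of the Maxwell constraint counting (MCC) bound that the VPG refines, under the standard body-bar convention of 6 degrees of freedom per body and one removed per bar; the example network is hypothetical.

```python
def maxwell_count(n_bodies: int, n_bars: int) -> int:
    """MCC lower bound on internal DOF of a body-bar network:
    6 per body, minus one per bar, minus the 6 global rigid-body motions."""
    return 6 * n_bodies - n_bars - 6

# A hypothetical network: 10 bodies joined by 40 distance constraints (bars).
dof = maxwell_count(10, 40)
print("internal DOF lower bound:", dof)
print("under-constrained" if dof > 0 else "over-constrained (or isostatic)")
```

    MCC treats the constraints as uniformly distributed; the VPG described above keeps their spatial inhomogeneity while still avoiding the ensemble of PG runs.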

  17. New Nordic diet versus average Danish diet

    DEFF Research Database (Denmark)

    Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco

    2016-01-01

… and 3-hydroxybutanoic acid were related to a higher weight loss, while higher concentrations of salicylic, lactic and N-aspartic acids, and 1,5-anhydro-D-sorbitol were related to a lower weight loss. Specific gender- and seasonal differences were also observed. The study strongly indicates that healthy … metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis, formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects higher levels of vaccenic acid …

  18. An in vitro drug sensitivity test using a higher 3H-TdR incorporation and a modified human tumor stem cell assay

    International Nuclear Information System (INIS)

    Wang Enzhong

    1991-01-01

An in vitro drug sensitivity test was developed to evaluate the lethal effects of drugs on human pulmonary carcinoma cells (HPCC). This method was a variant and combination of the Human Tumor Stem Cell Assay (HTSCA) and a short-term test using ³H-TdR incorporation. It consisted of a cell-containing liquid top layer and a soft agar bottom layer in 24-well microplates. The medium was RPMI 1640 supplemented with 20% malignant pleural effusion, which could enhance ³H-TdR incorporation into malignant cells. When cell survival rates of 50%, 40%, 30% and 30% were defined as the sensitivity thresholds for VCR, MMC, DDP and ADM, respectively, the in vitro effectiveness was close to that of clinical single-drug treatment of HPCC reported by Wright et al. This method was also compared with the HTSCA in ten human lung cancer cell lines and four pulmonary carcinoma tissues. The agreement rates were 83% and 100%, respectively. Thus we presume this system is more useful for oncological clinics than the others

  19. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  20. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  1. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  2. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
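
    [Illustrative sketch] A schematic Python sketch of how a BDA scheme assigns longer averaging intervals to shorter baselines, which see slower fringe rotation. The inverse-length scaling rule, the clip limits and the baseline values are illustrative assumptions, not SKA parameters or the paper's exact expressions.

```python
import numpy as np

def bda_interval(baseline_m, t_max=10.0, t_min=0.1, b_ref=100.0):
    """Averaging time per baseline: t proportional to 1/|b|, clipped to [t_min, t_max]."""
    return np.clip(t_max * b_ref / baseline_m, t_min, t_max)

baselines = np.array([50.0, 200.0, 1_000.0, 60_000.0])   # metres (hypothetical)
t_avg = bda_interval(baselines)

# Data-volume reduction relative to dumping every baseline at t_min seconds.
kept = (0.1 / t_avg).mean()        # fraction of visibility rows kept
print("averaging times (s):", t_avg)
print(f"data volume reduced by {100 * (1 - kept):.1f}%")
```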

  3. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter' is presented, which can reduce the time jitter, introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  4. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution

  5. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  6. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  7. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust

  8. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

The rough properties of nuclei were investigated with a statistical model, in systems with the same and with different numbers of protons and neutrons, treated separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exerted by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta stability lines and mass excess, the surface symmetry energy was established. (M.C.K.) [pt

  9. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  10. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares

  11. Spatial analysis based on variance of moving window averages

    OpenAIRE

    Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J

    2006-01-01

    A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
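
    [Illustrative sketch] A minimal Python sketch of the VMWA computation on a synthetic lattice with one implanted cluster: slide a w-by-w window over the grid, average within each window, and track the variance of those averages as the window size changes. Grid size, cluster size and the Poisson background are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
grid = rng.poisson(0.2, size=(60, 60)).astype(float)
grid[20:26, 30:36] += 3.0            # an implanted 6x6 cluster

def vmwa(data, w):
    """Variance of all w-by-w moving window averages of `data`."""
    n, m = data.shape
    means = [data[i:i + w, j:j + w].mean()
             for i in range(n - w + 1) for j in range(m - w + 1)]
    return np.var(means)

# The window size where the variance profile bends is read as the cluster scale.
for w in (2, 4, 6, 8, 12):
    print(f"window {w:2d}: VMWA = {vmwa(grid, w):.4f}")
```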

  12. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
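
    [Illustrative sketch] To show the estimator itself, a toy Robbins-Monro recursion with Polyak-Ruppert-style trajectory averaging in Python; the target, gain schedule and run length are illustrative stand-ins for a SAMCMC run.

```python
import numpy as np

rng = np.random.default_rng(6)
theta_true = 2.5                                 # quantity being estimated

theta = 0.0
trajectory = []
for k in range(1, 5001):
    y = theta_true + rng.standard_normal()       # noisy observation
    theta += (y - theta) * k ** -0.6             # slowly decaying gain
    trajectory.append(theta)

# Trajectory averaging: report the mean of the iterates rather than the
# (noisier) final iterate; here the first half is discarded as transient.
theta_avg = float(np.mean(trajectory[len(trajectory) // 2:]))
print("last iterate:", round(trajectory[-1], 3),
      " trajectory average:", round(theta_avg, 3))
```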

  13. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by 'exact' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality
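
    [Illustrative sketch] A minimal numeric sketch of what a spectrum-averaged beta energy means, <E> = ∫ E N(E) dE / ∫ N(E) dE, using a simple allowed-shape spectrum with the Fermi function omitted; the endpoint energy and spectral shape are illustrative assumptions, not the paper's approximation.

```python
import numpy as np

E0 = 2.0                                    # endpoint kinetic energy, MeV (assumed)
me = 0.511                                  # electron rest energy, MeV
E = np.linspace(1e-4, E0, 2000)             # uniform kinetic-energy grid

p = np.sqrt(E ** 2 + 2 * me * E)            # electron momentum * c, MeV
N = p * (E + me) * (E0 - E) ** 2            # allowed shape, Fermi function omitted

# Uniform grid, so the average is a weighted mean of grid values.
E_avg = (E * N).sum() / N.sum()
print(f"average beta energy: {E_avg:.3f} MeV (endpoint {E0} MeV)")
```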

  14. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

Consider an arbitrary nonnegative deterministic process {X(t), t≥0} (in a stochastic setting, a fixed realization, i.e., sample-path, of the underlying stochastic process) with state space S=(−∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.

  15. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms

  16. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  17. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  18. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  19. [Quetelet, the average man and medical knowledge].

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  20. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
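
    [Illustrative sketch] A schematic Python sketch of one plausible reading of such a measure: each node's real-valued number relative to a source is relaxed iteratively as a weighted harmonic mean of (neighbor's number + step cost), so stronger edge weights pull nodes "closer". The toy graph and the update rule are illustrative assumptions, not necessarily the paper's exact definition of the GEN.

```python
import numpy as np

# Undirected weighted graph as an adjacency dict; weights = tie strength.
graph = {
    "erdos": {"a": 3.0, "b": 1.0},
    "a": {"erdos": 3.0, "c": 1.0},
    "b": {"erdos": 1.0, "c": 2.0},
    "c": {"a": 1.0, "b": 2.0},
}

def gen_numbers(graph, source, sweeps=200):
    """Fixed-point relaxation of a harmonic-average closeness to `source`."""
    gen = {v: (0.0 if v == source else np.inf) for v in graph}
    for _ in range(sweeps):
        for v in graph:
            if v == source:
                continue
            finite = [(u, w) for u, w in graph[v].items() if np.isfinite(gen[u])]
            if not finite:
                continue
            # Weighted harmonic mean of (neighbor number + 1/weight).
            num = sum(w for _, w in finite)
            den = sum(w / (gen[u] + 1.0 / w) for u, w in finite)
            gen[v] = num / den
    return gen

print(gen_numbers(graph, "erdos"))
```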

  1. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV
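
    [Illustrative sketch] As a concrete special case (electron at rest, β = 0, so only the polar angle θ matters), the following Python sketch averages the Klein-Nishina cross section over θ with the solid-angle weight sin θ; the photon energy is an illustrative choice.

```python
import numpy as np

r_e2 = 7.9407e-26                 # classical electron radius squared, cm^2

def klein_nishina(alpha, theta):
    """Differential cross section dsigma/dOmega for a free electron at rest."""
    ratio = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))   # alpha_s / alpha
    return 0.5 * r_e2 * ratio ** 2 * (ratio + 1.0 / ratio - np.sin(theta) ** 2)

alpha = 0.2                        # photon energy in units of m0*c^2 (~102 keV)
theta = np.linspace(0.0, np.pi, 20001)
dtheta = theta[1] - theta[0]
w = np.sin(theta)                  # solid-angle weight

dsigma = klein_nishina(alpha, theta)
avg = (dsigma * w).sum() / w.sum()                 # angle-averaged dsigma/dOmega
total = 2.0 * np.pi * (dsigma * w).sum() * dtheta  # total cross section
print(f"angle-averaged dsigma/dOmega: {avg:.3e} cm^2/sr")
print(f"total cross section: {total:.3e} cm^2 (Thomson limit 6.652e-25)")
```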

  2. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with less memory consumption in gait-based recognition.
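
    [Illustrative sketch] A minimal Python sketch of the AGDI feature itself: average the absolute silhouette differences between adjacent frames; the random frames below are illustrative stand-ins for real segmented silhouettes.

```python
import numpy as np

rng = np.random.default_rng(7)

def agdi(frames):
    """frames: (T, H, W) array of binary silhouettes -> (H, W) AGDI feature."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=0)

# 30 fake 64x44 silhouette frames (1 = body pixel); real input would be
# segmented walking silhouettes, e.g., from the CASIA dataset.
frames = (rng.random((30, 64, 44)) > 0.7).astype(np.uint8)
feature = agdi(frames)
print("AGDI shape:", feature.shape,
      " mean motion energy:", feature.mean().round(3))
```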

  3. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better concurrence with available experimental data than has been achieved with steady computation

  4. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  5. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
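
    [Illustrative sketch] A minimal Python sketch of the simple estimator described above: compare mean outcomes between equivalent fractions of the longest-surviving patients in each arm; the synthetic data and the chosen fraction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
# Treatment arm: slightly longer survival and a true outcome effect of +2.0.
surv_t = rng.exponential(6.0, n)
y_t = 52.0 + rng.standard_normal(n)
surv_c = rng.exponential(5.0, n)
y_c = 50.0 + rng.standard_normal(n)

def top_fraction_mean(surv, y, frac):
    """Mean outcome among the longest-surviving `frac` of patients."""
    k = int(round(frac * len(surv)))
    idx = np.argsort(surv)[-k:]            # indices of longest survivors
    return y[idx].mean()

frac = 0.6                                  # assumed always-survivor fraction
est = top_fraction_mean(surv_t, y_t, frac) - top_fraction_mean(surv_c, y_c, frac)
print(f"balanced-SACE-style estimate: {est:.2f} (true effect 2.0)")
```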

  6. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  7. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  8. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

Multi-model ensemble (MME) average is considered the most reliable for simulating both present-day and future climates. It has been a primary reference for making conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel out each other in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling that achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs to conduct a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions with the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling

  9. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū_P, the average, Ū, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak k_PPV,kVp and the average k_PPV,Uav conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
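
    In outline, the proposed conversion multiplies the kV-meter reading by a calibration coefficient and a ripple-dependent conversion factor. A hedged sketch follows; the regression coefficients below are placeholders, not the published values:

        # Illustrative conversion of a kV-meter average-peak reading to PPV.
        def conversion_factor(ripple_percent):
            # Hypothetical regression k(r) = c0 + c1*r + c2*r**2; the published
            # factors also depend on tube voltage, which is ignored here.
            c0, c1, c2 = 0.998, -1.2e-3, 4.0e-6   # placeholder coefficients
            return c0 + c1 * ripple_percent + c2 * ripple_percent ** 2

        def ppv_from_average_peak(reading_kv, ripple_percent, n_cal=1.0):
            # PPV = calibration coefficient x conversion factor x reading
            return n_cal * conversion_factor(ripple_percent) * reading_kv

        print(ppv_from_average_peak(81.3, ripple_percent=13.0))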

  10. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the averaged Euler equations in two space dimensions.

  11. A 35-year comparison of children labelled as gifted, unlabelled as gifted and average-ability

    Directory of Open Access Journals (Sweden)

    Joan Freeman

    2014-09-01

    Full Text Available http://dx.doi.org/10.5902/1984686X14273 Why are some children seen as gifted while others of identical ability are not? To find out why, and what the consequences might be, in 1974 I began in England with 70 children labelled as gifted. Each one was matched for age, sex and socio-economic level with two comparison children in the same school class: the first comparison child had an identical gift, and the second was taken at random. Investigation was by a battery of tests and deep questioning of pupils, teachers and parents in their schools and homes, which went on for 35 years. A major difference was that those labelled gifted had significantly more emotional problems than either the unlabelled but identically gifted children or the random controls. The vital aspects of success for the entire sample, whether gifted or not, have been hard work, emotional support and a positive personal outlook. But in general, the higher the individual's intelligence, the better their chances in life.

  12. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of n and √n, respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy ε using O((n^1.5 / √(log n)) · log(1/ε)) radio transmissions, which yields a √(n / log n) factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
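
    For orientation, the standard pairwise gossip baseline that geographic gossip improves on can be sketched in a few lines; here on a complete communication graph, where convergence is fast. On rings or grids, where only neighbor exchanges are possible, the same scheme mixes far more slowly, which is the inefficiency the paper targets.

        # Randomized pairwise gossip: each step, a random node averages its
        # value with another random node; all values converge to the mean.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 100
        x = rng.normal(50.0, 10.0, size=n)   # initial sensor readings
        target = x.mean()                    # pairwise averaging preserves the mean

        for step in range(1, 100001):
            i = rng.integers(n)
            j = (i + rng.integers(1, n)) % n     # any other node (complete graph)
            x[i] = x[j] = 0.5 * (x[i] + x[j])    # one gossip exchange
            if np.abs(x - target).max() < 1e-3:
                print("converged to the average after", step, "exchanges")
                break
        else:
            print("residual deviation:", np.abs(x - target).max())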

  13. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    A concept for determining average LET (linear energy transfer) values, i.e., ordinary moments of LET in the absorbed dose distribution vs. LET, for ionizing radiation of any kind and any spectrum (even unknown ones) is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or of ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on the LET of radiation it is not necessary to know the full dose distribution but only a number of parameters of the distribution, i.e., the LET moments. (author)
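
    A hedged sketch of the fitting step, with a generic saturation-type expression standing in for the author's algebraic form; the functional form and the data below are illustrative only:

        # Fit an algebraic expression to ionization current measured at several
        # chamber voltages; the fitted coefficients play the role of LET-moment
        # parameters. Placeholder model, not the author's published expression.
        import numpy as np
        from scipy.optimize import curve_fit

        def current_model(V, i_sat, a1, a2):
            # generic recombination correction: i(V) = i_sat / (1 + a1/V + a2/V**2)
            return i_sat / (1.0 + a1 / V + a2 / V ** 2)

        V = np.array([50., 100., 200., 400., 800.])      # chamber voltages (V)
        i = np.array([0.82, 0.90, 0.95, 0.975, 0.988])   # measured currents (rel.)

        params, cov = curve_fit(current_model, V, i, p0=[1.0, 10.0, 100.0])
        print("fitted coefficients (moment-like parameters):", params)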

  14. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum for the configuration space traces is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  15. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the double bounded interval (a, b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.

  16. Database of average-power damage thresholds at 1064 nm

    International Nuclear Information System (INIS)

    Rainer, F.; Hildum, E.A.; Milam, D.

    1987-01-01

    We have completed a database of average-power, laser-induced damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples, which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from 2 J/cm² for some metals to >46 J/cm² for a bare polished glass substrate. 4 refs., 7 figs., 1 tab

  17. Precision Medicine-Nobody Is Average.

    Science.gov (United States)

    Vinks, A A

    2017-03-01

    Medicine gets personal and tailor-made treatments are underway. Hospitals have started to advertise their advanced genomic testing capabilities and even their disruptive technologies to help foster a culture of innovation. The prediction in the lay press is that in decades from now we may look back and see 2017 as the year precision medicine blossomed. It is all part of the Precision Medicine Initiative that takes into account individual differences in people's genes, environments, and lifestyles. © 2017 ASCPT.

  18. Cognitive Capitalism: Economic Freedom Moderates the Effects of Intellectual and Average Classes on Economic Productivity.

    Science.gov (United States)

    Coyle, Thomas R; Rindermann, Heiner; Hancock, Dale

    2016-10-01

    Cognitive ability stimulates economic productivity. However, the effects of cognitive ability may be stronger in free and open economies, where competition rewards merit and achievement. To test this hypothesis, ability levels of intellectual classes (top 5%) and average classes (country averages) were estimated using international student assessments (Programme for International Student Assessment; Trends in International Mathematics and Science Study; and Progress in International Reading Literacy Study) (N = 99 countries). The ability levels were correlated with indicators of economic freedom (Fraser Institute), scientific achievement (patent rates), innovation (Global Innovation Index), competitiveness (Global Competitiveness Index), and wealth (gross domestic product). Ability levels of intellectual and average classes strongly predicted all economic criteria. In addition, economic freedom moderated the effects of cognitive ability (for both classes), with stronger effects at higher levels of freedom. Effects were particularly robust for scientific achievements when the full range of freedom was analyzed. The results support cognitive capitalism theory: cognitive ability stimulates economic productivity, and its effects are enhanced by economic freedom. © The Author(s) 2016.

  19. Recent developments in high average power driver technology

    International Nuclear Information System (INIS)

    Prestwich, K.R.; Buttram, M.T.; Rohwein, G.J.

    1979-01-01

    Inertial confinement fusion (ICF) reactors will require driver systems operating with tens to hundreds of megawatts of average power. The pulse power technology that will be required to build such drivers is in a primitive state of development. Recent developments in repetitive pulse power are discussed. A high-voltage transformer has been developed and operated at 3 MV in a single pulse experiment and is being tested at 1.5 MV, 5 kJ and 10 pps. A low-loss, 1 MV, 10 kJ, 10 pps Marx generator is being tested. Test results from gas-dynamic spark gaps that operate both in the 100 kV and 700 kV range are reported. A 250 kV, 1.5 kA/cm², 30 ns electron beam diode has operated stably for 1.6 × 10⁵ pulses

  20. Applications of ordered weighted averaging (OWA) operators in environmental problems

    Directory of Open Access Journals (Sweden)

    Carlos Llopis-Albert

    2017-04-01

    Full Text Available This paper presents an application of a prioritized weighted aggregation operator based on ordered weighted averaging (OWA) to deal with stakeholders' constructive participation in water resources projects. The stakeholders have different degrees of acceptance or preference regarding the measures and policies to be carried out, which lead to different environmental and socio-economic outcomes, and hence to different levels of stakeholder satisfaction. The methodology establishes a prioritization relationship upon the stakeholders, whose preferences are aggregated by means of weights depending on the satisfaction of the higher-priority policy maker. The methodology has been successfully applied to a Public Participation Project (PPP) in watershed management, thus obtaining efficient environmental measures in conflict resolution problems under actors' preference uncertainties.

  1. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have a lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using a moving average histogram technique, followed by the correction of color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low resolution images for testing, but the proposed method has been tested on various image resolutions to give a clear assessment of the technique. The proposed method has been tested on 35 images with varying resolution and has been compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio) and computational complexity. The outcome shows that the proposed methodology is a better trade-off technique in terms of compression ratio, PSNR, which determines the quality of the image, and computational complexity. (author)
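
    A minimal sketch of the level-reduction stage only, assuming a simple peak-picking rule on the moving-average-smoothed histogram; the paper's RBF correction stage is not reproduced here.

        # Reduce intensity levels: smooth the grayscale histogram with a moving
        # average, keep its local maxima as the palette, quantize to nearest level.
        import numpy as np

        def compress_levels(img, window=9):
            hist = np.bincount(img.ravel(), minlength=256).astype(float)
            kernel = np.ones(window) / window
            smooth = np.convolve(hist, kernel, mode="same")   # moving average
            peaks = [k for k in range(1, 255)
                     if smooth[k] > smooth[k - 1] and smooth[k] >= smooth[k + 1]]
            levels = np.array(peaks if peaks else [int(img.mean())])
            idx = np.abs(img[..., None] - levels).argmin(axis=-1)
            return levels[idx].astype(np.uint8), levels

        rng = np.random.default_rng(3)
        img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
        out, levels = compress_levels(img)
        print(len(levels), "levels kept")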

  2. Analysis of nonlinear systems using ARMA [autoregressive moving average] models

    International Nuclear Information System (INIS)

    Hunter, N.F. Jr.

    1990-01-01

    While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and modal testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed, and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models, and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs
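
    For the linear ARMA identification step that such analyses start from, a minimal sketch using statsmodels; the nonlinear extensions discussed in the paper are not reproduced, and the simulated system below is illustrative.

        # Identify a linear ARMA(2,1) model from a simulated vibration-like series.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(4)
        n = 500
        e = rng.normal(size=n)
        y = np.zeros(n)
        for t in range(2, n):
            # damped-oscillator-like AR(2) response with MA(1) noise colouring
            y[t] = 1.5 * y[t - 1] - 0.75 * y[t - 2] + e[t] + 0.4 * e[t - 1]

        model = ARIMA(y, order=(2, 0, 1)).fit()   # ARMA(2,1), no differencing
        print(model.params)                        # AR and MA coefficients
        pred = model.forecast(steps=10)            # response prediction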

  3. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. To further enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA excels the other modulation schemes and achieves an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  4. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    Full Text Available The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can construct classifiers at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, averaged k-dependence Bayesian (AKDB) classifiers, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  5. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul [Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, Korea 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Department of Radiation Oncology, Asan Medical Center, Seoul, 138-736 (Korea, Republic of); Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Radiation Physics Laboratory, Sydney Medical School, University of Sydney, 2006 (Australia)

    2011-07-15

    Conclusions: The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric accuracy and dosimetric accuracy of the moving average algorithm were between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the γ-test compared with no compensation.

  6. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    Science.gov (United States)

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric accuracy and dosimetric accuracy of the moving average algorithm were between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the γ-test compared with no compensation.

  7. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  8. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide number of important applications in various fields. In some cases, during transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.

  9. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as the development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.

  10. Time-averaged molluscan death assemblages: Palimpsests of richness, snapshots of abundance

    Science.gov (United States)

    Kidwell, Susan M.

    2002-09-01

    Field tests that compare living communities to associated dead remains are the primary means of estimating the reliability of biological information in the fossil record; such tests also provide insights into the dynamics of skeletal accumulation. Contrary to expectations, molluscan death assemblages capture a strong signal of living species' rank-order abundances. This finding, combined with independent evidence for exponential postmortem destruction of dead cohorts, argues that, although the species richness of a death assemblage may be a time-averaged palimpsest of the habitat (molluscan death assemblages contain, on average, ~25% more species than any single census of the local live community, after sample-size standardization), species' relative-abundance data from the same assemblage probably constitute a much higher acuity record dominated by the most recent dead cohorts (e.g., from the past few hundred years or so, rather than the several thousand years recorded by the total assemblage and usually taken as the acuity of species-richness information). The pervasive excess species richness of molluscan death assemblages requires further analysis and modeling to discriminate among possible sources. However, time averaging alone cannot be responsible unless rare species (species with low rates of dead-shell production) are collectively more durable (have longer taphonomic half-lives) than abundant species. Species richness and abundance data thus appear to present fundamentally different taphonomic qualities for paleobiological analysis. Relative-abundance information is more snapshot-like and thus taphonomically more straightforward than expected, especially compared to the complex origins of dead-species richness.

  11. Results from tests of the Delphi TPC prototype

    International Nuclear Information System (INIS)

    Vilanova, D.

    1985-01-01

    Results from beam tests of a half-scale sector of the Delphi TPC are presented. The spatial resolution is slightly higher than predicted by Monte Carlo simulations, corresponding to an average value of about 300 μm. (orig.)

  12. Nonlinearity management in higher dimensions

    International Nuclear Information System (INIS)

    Kevrekidis, P G; Pelinovsky, D E; Stefanov, A

    2006-01-01

    In the present paper, we revisit nonlinearity management of the time-periodic nonlinear Schroedinger equation and the related averaging procedure. By means of rigorous estimates, we show that the averaged nonlinear Schroedinger equation does not blow up in the higher dimensional case so long as the corresponding solution remains smooth. In particular, we show that the H¹ norm remains bounded, in contrast with the usual blow-up mechanism for the focusing Schroedinger equation. This conclusion agrees with earlier works in the case of strong nonlinearity management but contradicts those in the case of weak nonlinearity management. The apparent discrepancy is explained by the divergence of the averaging procedure in the limit of weak nonlinearity management

  13. The average crossing number of equilateral random polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A

    2003-01-01

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n − n₀) ln(n − n₀) + b(n − n₀) + c, where a, b and c are constants depending on K and n₀ is the minimal number of segments required to form K. The ⟨ACN⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length n_e(K) at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type K') does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.

  14. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    Matsukawa, Isamu

    2008-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  15. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 2)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

    The article proves that the so-called "Simpson's paradox" is a special case of the manifestation of the properties of the weighted average. In such cases it always comes down to comparing two weighted averages, where the average of the larger variables is less than that of the smaller. The article demonstrates one method for analyzing the relative change of magnitudes of the type S = Σ_{i=1}^{k} x_i y_i, which answers the question: what is the reason for the weighted average of a few variables with higher values to ...

  16. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 1)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

    The article proves that the so-called "Simpson's paradox" is a special case of the manifestation of the properties of the weighted average. In such cases it always comes down to comparing two weighted averages, where the average of the larger variables is less than that of the smaller. The article demonstrates one method for analyzing the relative change of magnitudes of the type S = Σ_{i=1}^{k} x_i y_i, which answers the question: what is the reason for the weighted average of a few variables with higher values to be...
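
    A tiny numeric illustration of the effect (the classic treatment-comparison example, not data from the article): each group's average favors A, yet the overall weighted averages favor B, because the group weights differ.

        # Simpson's paradox as a weighted-average effect.
        succ_a = {"mild": (81, 87), "severe": (192, 263)}   # A: (successes, n)
        succ_b = {"mild": (234, 270), "severe": (55, 80)}   # B: (successes, n)

        for g in ("mild", "severe"):
            ra = succ_a[g][0] / succ_a[g][1]
            rb = succ_b[g][0] / succ_b[g][1]
            print(g, f"A={ra:.2f} B={rb:.2f}")   # A wins within each group

        ta = sum(s for s, _ in succ_a.values()) / sum(n for _, n in succ_a.values())
        tb = sum(s for s, _ in succ_b.values()) / sum(n for _, n in succ_b.values())
        print(f"overall A={ta:.2f} B={tb:.2f}")  # yet B wins overall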

  17. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  18. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory is used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out, with peaks in the atom density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion pathways is expected due to the large variation in the local motifs

  19. Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.

    Science.gov (United States)

    Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D

    2018-04-19

    The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013-June 2016. The ANN was constructed using temporal (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018 ± 0.002; validation r = 0.8899 ± 0.005; testing r = 0.8940 ± 0.006). We were able to successfully predict trauma and emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10) and a mean ISS score of 15.99 (actual: 13.12); see figure 3. This may prove useful for predicting trauma needs across the system and for hospital administration when allocating limited resources. Level III STUDY TYPE: Prognostic/Epidemiological.
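
    A hedged sketch of the described setup using synthetic placeholder data: a small feed-forward network with 10 sigmoid hidden units mapping (day of week, daily high, precipitation) to daily trauma counts. scikit-learn provides no Levenberg-Marquardt solver, so L-BFGS stands in for it here.

        # Two-layer feed-forward regressor with k-fold cross validation;
        # all data below are synthetic stand-ins, not TRACS/NOAA records.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        n_days = 1096
        X = np.column_stack([
            rng.integers(0, 7, n_days),        # day of week
            rng.normal(20.0, 8.0, n_days),     # daily high temperature
            rng.integers(0, 2, n_days),        # active precipitation flag
        ])
        y = 8 + 0.1 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 2, n_days)

        net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                           solver="lbfgs", max_iter=2000, random_state=0)
        scores = cross_val_score(net, X, y, cv=10, scoring="r2")
        print("mean R^2:", scores.mean())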

  20. Virginia Tech freshman class becoming more competitive; Rise in grades and test scores noted

    OpenAIRE

    Virginia Tech News

    2004-01-01

    Admission to Virginia Tech continues to become more competitive as applicants report higher grade point averages and test scores than in previous years. The incoming class of 4,975 students has an average grade point average (GPA) of 3.68 and an average SAT score of 1203, up from a 3.60 GPA and 1197 SAT in 2003.

  1. Monthly streamflow forecasting with auto-regressive integrated moving average

    Science.gov (United States)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
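
    The SSA pre-processing step can be sketched compactly: embed the series in a trajectory matrix, truncate its SVD, and reconstruct by anti-diagonal averaging. The paper's clustering of eigenvector pairs is omitted, and the window length and rank below are illustrative.

        # Minimal singular spectrum analysis (SSA) smoothing of a monthly series.
        import numpy as np

        def ssa_smooth(y, L=12, r=3):
            n = len(y)
            K = n - L + 1
            X = np.column_stack([y[i:i + L] for i in range(K)])  # L x K trajectory
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            Xr = (U[:, :r] * s[:r]) @ Vt[:r]                     # rank-r approximation
            out = np.zeros(n)                                    # diagonal (Hankel)
            cnt = np.zeros(n)                                    # averaging back
            for j in range(K):
                out[j:j + L] += Xr[:, j]
                cnt[j:j + L] += 1
            return out / cnt

        t = np.arange(240)
        flow = (50 + 20 * np.sin(2 * np.pi * t / 12)
                + np.random.default_rng(6).normal(0, 5, 240))    # synthetic series
        smooth = ssa_smooth(flow)   # feed this pre-processed series to ARIMA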

  2. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Henriques, A.

    2006-01-01

    Different ranges of sizes of interfaces and eddies are involved in multiphase flow phenomena, and classical formalisms focus on a specific range of sizes. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to distinguish Large Interface (LI) simulation from Small Interface (SI) modelization. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms calls on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop a LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibited regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author) [fr

  3. Average cross sections calculated in various neutron fields

    International Nuclear Information System (INIS)

    Shibata, Keiichi

    2002-01-01

    Average cross sections have been calculated for the reactions contained in the dosimetry files JENDL/D-99, IRDF-90V2, and RRDF-98 in order to select the best data for the new library IRDF-2002. The neutron spectra used in the calculations are as follows: 1) the 252Cf spontaneous fission spectrum (NBS evaluation), 2) the 235U thermal fission spectrum (NBS evaluation), 3) the Intermediate-energy Standard Neutron Field (ISNF), 4) the Coupled Fast Reactivity Measurement Facility (CFRMF), 5) the coupled thermal/fast uranium and boron carbide spherical assembly (ΣΣ), 6) the fast neutron source reactor (YAYOI), 7) the experimental fast reactor (JOYO), 8) the Japan Material Testing Reactor (JMTR), 9) the d-Li neutron spectrum with a 2-MeV deuteron beam. Items 3)-7) represent fast neutron spectra, while JMTR is a light water reactor. The Q-value for the d-Li reaction mentioned above is 15.02 MeV; therefore, neutrons with energies up to 17 MeV can be produced in the d-Li reaction. The calculated average cross sections were compared with the measurements. Figures 1-9 show the ratios of the calculated values to the experimental data. It is found from these figures that the 58Fe(n, γ) cross section in JENDL/D-99 reproduces the measurements in the thermal and fast reactor spectra better than that in IRDF-90V2. (author)

  4. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

    Full Text Available Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 (1Pfizer Inc, New York, NY; 2Eliassen Group, New London, CT; 3Outcomes Research Consultant, New York, NY; 4Pfizer Inc, Groton, CT, USA) Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness, for mild vs moderate pain) and the highest probability was 76.4% (on cold/heat, for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with the pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner.
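
    The "probability of a more favorable outcome" reading of ridit analysis is closely related to the Mann-Whitney U statistic: U divided by the product of the two group sizes estimates that probability. A sketch on synthetic ordinal scores (not the study's data):

        # Probability that a random subject from one severity group scores
        # higher than a random subject from another, via the U statistic.
        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(8)
        mild = rng.integers(0, 4, size=120)        # synthetic 0-5 item scores
        moderate = rng.integers(1, 6, size=150)

        u, p = mannwhitneyu(moderate, mild, alternative="two-sided")
        prob = u / (len(moderate) * len(mild))     # P(moderate > mild) + 0.5*P(tie)
        print(f"P(moderate scores higher than mild) ~ {prob:.1%}, p={p:.3g}")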

  5. Correlation between Grade Point Averages and Student Evaluation of Teaching Scores: Taking a Closer Look

    Science.gov (United States)

    Griffin, Tyler J.; Hilton, John, III.; Plummer, Kenneth; Barret, Devynne

    2014-01-01

    One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations…

  6. Relationships between feeding behavior and average daily gain in cattle

    Directory of Open Access Journals (Sweden)

    Bruno Fagundes Cunha Lage

    2013-12-01

    Full Text Available Several studies have reported relationships between eating behavior and performance in feedlot cattle. The evaluation of behavior traits demands a high degree of work and trained manpower; therefore, in recent years an automated feed intake measurement system (GrowSafe System®) that identifies and records individual feeding patterns has been used. The aim of this study was to evaluate the relationship between feeding behavior traits and average daily gain (ADG) in Nellore calves undergoing a feed efficiency test. Data from 85 Nellore males were recorded during the feed efficiency test performed in 2012 at Centro APTA Bovinos de Corte, Instituto de Zootecnia, São Paulo State. The behavioral traits analyzed were: time at feeder (TF), head down duration (HD), representing the time when the animal is actually eating, frequency of visits (FV), and feed rate (FR), calculated as the amount of dry matter (DM) consumed per unit of time at the feeder (g·min⁻¹). The ADG was calculated by linear regression of individual weights on days in test. ADG classes were obtained considering the average ADG and standard deviation (SD): high ADG (> mean + 1.0 SD), medium ADG (± 1.0 SD from the mean) and low ADG (< mean − 1.0 SD), with ... on test as a covariate. Low gain animals spent 21.8% less time with the head down than medium or high gain animals (P<0.05). Significant effects of ADG class on FR were observed (P<0.01): high ADG animals consumed more feed per unit of time (g·min⁻¹) than the low and medium ADG animals. No differences were observed (P>0.05) among ADG classes for FV, indicating that these traits are not related to each other. These results show that the ADG is related to how quickly animals eat, and not to the time spent at the bunk or to the number of visits within a 24 hour period.

  7. Geometrical optics in general relativity: A study of the higher order corrections

    International Nuclear Information System (INIS)

    Anile, A.M.

    1976-01-01

    The higher order corrections to geometrical optics are studied in general relativity for an electromagnetic test wave. An explicit expression is found for the average energy-momentum tensor which takes into account the first-order corrections. Finally, the first-order corrections to the well-known area-intensity law of geometrical optics are derived

  8. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
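
    A minimal sketch of the average-of-delta bookkeeping, assuming a simple fixed control limit; the paper derives limits from analytical and biological variation, so the threshold and window size below are illustrative only.

        # Track delta = current - previous result per patient, and alarm when
        # the running mean of the last N deltas drifts beyond a control limit.
        from collections import deque

        class AverageOfDelta:
            def __init__(self, n_deltas=10, limit=2.0):
                self.previous = {}                    # patient id -> last result
                self.deltas = deque(maxlen=n_deltas)  # most recent deltas
                self.limit = limit

            def add_result(self, patient_id, value):
                if patient_id in self.previous:
                    self.deltas.append(value - self.previous[patient_id])
                self.previous[patient_id] = value
                if len(self.deltas) == self.deltas.maxlen:
                    mean_delta = sum(self.deltas) / len(self.deltas)
                    if abs(mean_delta) > self.limit:
                        return f"bias alert: average of delta = {mean_delta:.2f}"
                return None

        qc = AverageOfDelta(n_deltas=2, limit=2.0)   # tiny window for the demo
        for pid, val in [("a", 140), ("b", 120), ("a", 145), ("b", 126)]:
            alert = qc.add_result(pid, val)
            if alert:
                print(alert)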

  9. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  10. Comparison of depth-averaged concentration and bed load flux sediment transport models of dam-break flow

    Directory of Open Access Journals (Sweden)

    Jia-heng Zhao

    2017-10-01

    Full Text Available This paper presents numerical simulations of dam-break flow over a movable bed. Two different mathematical models were compared: a fully coupled formulation of shallow water equations with erosion and deposition terms (a depth-averaged concentration flux model), and shallow water equations with a fully coupled Exner equation (a bed load flux model). Both models were discretized using the cell-centered finite volume method, and a second-order Godunov-type scheme was used to solve the equations. The numerical flux was calculated using a Harten, Lax, and van Leer approximate Riemann solver with the contact wave restored (HLLC). A novel slope source term treatment that considers the density change was introduced to the depth-averaged concentration flux model to obtain higher-order accuracy. A source term that accounts for the sediment flux was added to the bed load flux model to reflect the influence of sediment movement on the momentum of the water. In a one-dimensional test case, a sensitivity study on different model parameters was carried out. For the depth-averaged concentration flux model, Manning's coefficient and sediment porosity values showed an almost linear relationship with the bottom change, and for the bed load flux model, the sediment porosity was identified as the most sensitive parameter. The capabilities and limitations of both model concepts are demonstrated in a benchmark experimental test case dealing with dam-break flow over variable bed topography.

  11. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  12. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  13. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  14. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  15. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  16. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  17. A study of beam position diagnostics with beam-excited dipole higher order modes using a downconverter test electronics in third harmonic 3.9 GHz superconducting accelerating cavities at FLASH

    International Nuclear Information System (INIS)

    Zhang, P.; Baboi, N.; Lorbeer, B.; Wamsat, T.; Eddy, N.; Fellenz, B.; Wendt, M.; Jones, R.M.

    2012-08-01

    Beam-excited higher order modes (HOMs) in accelerating cavities contain transverse beam position information. Previous studies have narrowed down three modal options for beam position diagnostics in the third harmonic 3.9 GHz cavities at FLASH. Localized modes in the beam pipes at approximately 4.1 GHz and in the fifth cavity dipole band at approximately 9 GHz were found that can provide a local measurement of the beam position. In contrast, propagating modes in the first and second dipole bands between 4.2 and 5.5 GHz can reach a better resolution. All the options were assessed with specially designed test electronics built by Fermilab. The aim is to define a mode or spectral region suitable for the HOM electronics. Two data analysis techniques are used and compared in extracting beam position information from the dipole HOMs: direct linear regression and singular value decomposition. Current experiments suggest a resolution of 50 μm in predicting local beam position using modes in the fifth dipole band, and a global resolution of 20 μm over the complete module. Based on these results we decided to build HOM electronics for the second dipole band and the fifth dipole band, so that we will have both high resolution measurements for the whole module and localized measurements for individual cavities. The prototype electronics is being built by Fermilab and is planned to be tested at FLASH by the end of 2012.

  18. A study of beam position diagnostics with beam-excited dipole higher order modes using a downconverter test electronics in third harmonic 3.9 GHz superconducting accelerating cavities at FLASH

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, P. [Manchester Univ. (United Kingdom); Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Baboi, N.; Lorbeer, B.; Wamsat, T. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Eddy, N.; Fellenz, B.; Wendt, M. [Fermi National Accelerator Lab., Batavia, IL (United States); Jones, R.M. [Manchester Univ. (United Kingdom); The Cockcroft Institute, Daresbury (United Kingdom)

    2012-08-15

    Beam-excited higher order modes (HOMs) in accelerating cavities contain transverse beam position information. Previous studies have narrowed down three modal options for beam position diagnostics in the third harmonic 3.9 GHz cavities at FLASH. Localized modes in the beam pipes at approximately 4.1 GHz and in the fifth cavity dipole band at approximately 9 GHz were found that can provide a local measurement of the beam position. In contrast, propagating modes in the first and second dipole bands between 4.2 and 5.5 GHz can reach a better resolution. All the options were assessed with specially designed test electronics built by Fermilab. The aim is to define a mode or spectral region suitable for the HOM electronics. Two data analysis techniques are used and compared in extracting beam position information from the dipole HOMs: direct linear regression and singular value decomposition. Current experiments suggest a resolution of 50 μm in predicting local beam position using modes in the fifth dipole band, and a global resolution of 20 μm over the complete module. Based on these results we decided to build HOM electronics for the second dipole band and the fifth dipole band, so that we will have both high resolution measurements for the whole module and localized measurements for individual cavities. The prototype electronics is being built by Fermilab and is planned to be tested at FLASH by the end of 2012.

  19. Testing Testing Testing.

    Science.gov (United States)

    Deville, Craig; O'Neill, Thomas; Wright, Benjamin D.; Woodcock, Richard W.; Munoz-Sandoval, Ana; Gershon, Richard C.; Bergstrom, Betty

    1998-01-01

    Articles in this special section consider (1) flow in test taking (Craig Deville); (2) testwiseness (Thomas O'Neill); (3) test length (Benjamin Wright); (4) cross-language test equating (Richard W. Woodcock and Ana Munoz-Sandoval); (5) computer-assisted testing and testwiseness (Richard Gershon and Betty Bergstrom); and (6) Web-enhanced testing…

  20. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
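
    A minimal numerical sketch of the trade-off described above (all names hypothetical; plain averaging of synthetic noisy images rather than the paper's MRF formulation): averaging groups of m images cuts the residual noise variance by roughly 1/m, but also cuts the number of images left for analysis by the same factor.

        import numpy as np

        rng = np.random.default_rng(0)
        truth = rng.normal(size=(32, 32))                          # stand-in clean image
        noisy = truth + rng.normal(scale=1.0, size=(96, 32, 32))   # 96 noisy observations

        def average_in_groups(stack, m):
            # Average consecutive groups of m images; len(stack)//m images remain.
            n = (len(stack) // m) * m
            return stack[:n].reshape(-1, m, *stack.shape[1:]).mean(axis=1)

        for m in (1, 4, 16):
            averaged = average_in_groups(noisy, m)
            print(m, len(averaged), np.var(averaged - truth))      # variance falls as ~1/m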

  1. Sedimentological regimes for turbidity currents: Depth-averaged theory

    Science.gov (United States)

    Halsey, Thomas C.; Kumar, Amit; Perillo, Mauricio M.

    2017-07-01

    Turbidity currents are one of the most significant means by which sediment is moved from the continents into the deep ocean; their properties are interesting both as elements of the global sediment cycle and due to their role in contributing to the formation of deep water oil and gas reservoirs. One of the simplest models of the dynamics of turbidity current flow was introduced three decades ago, and is based on depth-averaging of the fluid mechanical equations governing the turbulent gravity-driven flow of relatively dilute turbidity currents. We examine the sedimentological regimes of a simplified version of this model, focusing on the role of the Richardson number Ri [dimensionless inertia] and Rouse number Ro [dimensionless sedimentation velocity] in determining whether a current is net depositional or net erosional. We find that for large Rouse numbers, the currents are strongly net depositional due to the disappearance of local equilibria between erosion and deposition. At lower Rouse numbers, the Richardson number also plays a role in determining the degree of erosion versus deposition. The currents become more erosive at lower values of the product Ro × Ri, due to the effect of clear water entrainment. At higher values of this product, the turbulence becomes insufficient to maintain the sediment in suspension, as first pointed out by Knapp and Bagnold. We speculate on the potential for two-layer solutions in this insufficiently turbulent regime, which would comprise substantial bedload flow with an overlying turbidity current.

  2. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
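
    As a rough illustration of the two standard hourly-value types compared above, the sketch below builds spot samples and 1-h boxcar averages from synthetic 1-min data (the signal model is an assumption, not the paper's data):

        import numpy as np

        rng = np.random.default_rng(1)
        minutes = np.arange(24 * 60)
        # synthetic 1-min variation: slow daily wave plus noise, in nT
        b = 30 * np.sin(2 * np.pi * minutes / (24 * 60)) + rng.normal(0, 2, minutes.size)

        hourly = b.reshape(24, 60)
        spot = hourly[:, 0]            # instantaneous value at the top of each hour
        boxcar = hourly.mean(axis=1)   # simple 1-h average of the 60 1-min values
        print(np.max(np.abs(spot - boxcar)), "nT max spot-boxcar difference")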

  3. Average glandular dose in digital mammography and breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Olgar, T. [Ankara Univ. (Turkey). Dept. of Engineering Physics; Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Kahn, T.; Gosch, D. [Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie

    2012-10-15

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode, while a good correlation (coefficient of 0.98) was found in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than for 2 D imaging mode for patients examined with the same CBT.

  4. Average glandular dose in digital mammography and breast tomosynthesis

    International Nuclear Information System (INIS)

    Olgar, T.; Universitaetsklinikum Leipzig AoeR; Kahn, T.; Gosch, D.

    2012-01-01

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode, while a good correlation (coefficient of 0.98) was found in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than for 2 D imaging mode for patients examined with the same CBT.

  5. Strengthened glass for high average power laser applications

    International Nuclear Information System (INIS)

    Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.

    1987-01-01

    Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd-doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capabilities for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications

  6. Average chewing pattern improvements following Disclusion Time reduction.

    Science.gov (United States)

    Kerstein, Robert B; Radke, John

    2017-05-01

    Studies involving electrognathographic (EGN) recordings of chewing improvements obtained following occlusal adjustment therapy are rare, as most studies lack 'chewing' within the research. The objectives of this study were to determine if reducing long Disclusion Time to short Disclusion Time with the immediate complete anterior guidance development (ICAGD) coronoplasty in symptomatic subjects altered their average chewing pattern (ACP) and their muscle function. Twenty-nine muscularly symptomatic subjects underwent simultaneous EMG and EGN recordings of right and left gum chewing, before and after the ICAGD coronoplasty. Statistical differences in the mean Disclusion Time, the mean muscle contraction cycle, and the mean ACP resultant from ICAGD underwent the Student's paired t-test (α = 0.05). Disclusion Time reductions from ICAGD were significant (2.11-0.45 s, p = 0.0000). Post-ICAGD muscle changes were significant in the mean area (p = 0.000001), the peak amplitude (p = 0.00005), and the time to peak contraction (p …); the chewing position became closer to centric occlusion (p …) and chewing velocities increased (p …). Average chewing pattern (ACP) shape, speed, consistency, muscular coordination, and vertical opening can be significantly improved in muscularly dysfunctional TMD patients within one week of undergoing the ICAGD enameloplasty. Computer-measured and guided occlusal adjustments quickly and physiologically improved chewing, without requiring the patients to wear pre- or post-treatment appliances.

  7. Plate with a hole obeys the averaged null energy condition

    International Nuclear Information System (INIS)

    Graham, Noah; Olum, Ken D.

    2005-01-01

    The negative energy density of Casimir systems appears to violate general relativity energy conditions. However, one cannot test the averaged null energy condition (ANEC) using standard calculations for perfectly reflecting plates, because the null geodesic would have to pass through the plates, where the calculation breaks down. To avoid this problem, we compute the contribution to ANEC for a geodesic that passes through a hole in a single plate. We consider both Dirichlet and Neumann boundary conditions in two and three space dimensions. We use a Babinet's principle argument to reduce the problem to a complementary finite disk correction to the perfect mirror result, which we then compute using scattering theory in elliptical and spheroidal coordinates. In the Dirichlet case, we find that the positive correction due to the hole overwhelms the negative contribution of the infinite plate. In the Neumann case, where the infinite plate gives a positive contribution, the hole contribution is smaller in magnitude, so again ANEC is obeyed. These results can be extended to the case of two plates in the limits of large and small hole radii. This system thus provides another example of a situation where ANEC turns out to be obeyed when one might expect it to be violated

  8. Are average and symmetric faces attractive to infants? Discrimination and looking preferences.

    Science.gov (United States)

    Rhodes, Gillian; Geddes, Keren; Jeffery, Linda; Dziurawiec, Suzanne; Clark, Alison

    2002-01-01

    Young infants prefer to look at faces that adults find attractive, suggesting a biological basis for some face preferences. However, the basis for infant preferences is not known. Adults find average and symmetric faces attractive. We examined whether 5-8-month-old infants discriminate between different levels of averageness and symmetry in faces, and whether they prefer to look at faces with higher levels of these traits. Each infant saw 24 pairs of female faces. Each pair consisted of two versions of the same face differing either in averageness (12 pairs) or symmetry (12 pairs). Data from the mothers confirmed that adults preferred the more average and more symmetric versions in each pair. The infants were sensitive to differences in both averageness and symmetry, but showed no looking preference for the more average or more symmetric versions. On the contrary, longest looks were significantly longer for the less average versions, and both longest looks and first looks were marginally longer for the less symmetric versions. Mean looking times were also longer for the less average and less symmetric versions, but those differences were not significant. We suggest that the infant looking behaviour may reflect a novelty preference rather than an aesthetic preference.

  9. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.

  10. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
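
    For orientation, the quoted minimum average depth works out to roughly 15.38 comparisons per input permutation:

        \frac{620160}{8!} = \frac{620160}{40320} \approx 15.3810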

  11. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
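
    A minimal sketch of time synchronous averaging with linear interpolation, the simplest of the interpolation techniques such a study might compare (function and argument names are hypothetical): each revolution is resampled onto a common angular grid before averaging, so shaft-synchronous components reinforce while asynchronous vibration averages out.

        import numpy as np

        def tsa(signal, rev_indices, samples_per_rev=256):
            # Resample each revolution onto a common angular grid, then average.
            grid = np.linspace(0.0, 1.0, samples_per_rev, endpoint=False)
            revs = []
            for start, stop in zip(rev_indices[:-1], rev_indices[1:]):
                seg = signal[start:stop]
                x = np.linspace(0.0, 1.0, seg.size, endpoint=False)
                revs.append(np.interp(grid, x, seg))   # linear interpolation
            return np.mean(revs, axis=0)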

  12. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  13. Globalisation and Higher Education

    NARCIS (Netherlands)

    Marginson, Simon; van der Wende, Marijk

    2007-01-01

    Economic and cultural globalisation has ushered in a new era in higher education. Higher education was always more internationally open than most sectors because of its immersion in knowledge, which never showed much respect for juridical boundaries. In global knowledge economies, higher education

  14. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method consisted of averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques, there was no warping or filling in of spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
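
    For scans already registered on a common (x, y) grid, the core averaging step described above reduces to a pointwise mean over the depth maps; a minimal sketch under that assumption:

        import numpy as np

        def average_faces(depth_maps):
            # depth_maps: list of (H, W) arrays of z coordinates for registered scans.
            # The holistic average is the pointwise mean of corresponding z values.
            return np.stack(depth_maps).mean(axis=0)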

  15. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors’ experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average, in terms of concentration, work pace, efficiency and perception.

  16. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging -- a property established only in the macroscopic limit. By enumerating the states and energies of compact 18, 27, and 36mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations

  17. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
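
    One standard way to recover each average from an intercept-only regression, in the spirit of the note (the note's own examples are not reproduced here): regressing y, log y, or 1/y on a constant yields the arithmetic, geometric, and harmonic mean, respectively.

        import numpy as np

        y = np.array([2.0, 4.0, 8.0])
        ones = np.ones((y.size, 1))
        fit = lambda target: np.linalg.lstsq(ones, target, rcond=None)[0][0]

        arithmetic = fit(y)                  # 4.667
        geometric = np.exp(fit(np.log(y)))   # 4.0
        harmonic = 1.0 / fit(1.0 / y)        # 3.429
        print(arithmetic, geometric, harmonic)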

  18. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  19. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...

  20. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...

  1. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  2. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  3. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...

  4. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
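
    The abstract does not spell out the analytic expression; the quantity itself is a fission-rate-weighted mean energy, which in general form reads as below (phi is the scalar flux spectrum and Sigma_f the macroscopic fission cross section; the report's own derivation may differ in detail):

        \bar{E} \;=\; \frac{\int_0^\infty E\,\phi(E)\,\Sigma_f(E)\,dE}{\int_0^\infty \phi(E)\,\Sigma_f(E)\,dE}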

  5. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws and afterwards the average is performed. This scheme is more economic from the standpoint of time and algebraic calculations than the usual procedure of Bogolyubov's method. (Author)

  6. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth.

  7. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  8. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  9. Preference for Averageness in Faces Does Not Generalize to Non-Human Primates

    Directory of Open Access Journals (Sweden)

    Olivia B. Tomeo

    2017-07-01

    Facial attractiveness is a long-standing topic of active study in both neuroscience and social science, motivated by its positive social consequences. Over the past few decades, it has been established that averageness is a major factor influencing judgments of facial attractiveness in humans. Non-human primates share similar social behaviors as well as neural mechanisms related to face processing with humans. However, it is unknown whether monkeys, like humans, also find particular faces attractive and, if so, which kind of facial traits they prefer. To address these questions, we investigated the effect of averageness on preferences for faces in monkeys. We tested three adult male rhesus macaques using a visual paired comparison (VPC) task, in which they viewed pairs of faces (both individual faces, or one individual face and one average face); viewing time was used as a measure of preference. We did find that monkeys looked longer at certain individual faces than others. However, unlike humans, monkeys did not prefer the average face over individual faces. In fact, the more the individual face differed from the average face, the longer the monkeys looked at it, indicating that the average face likely plays a role in face recognition rather than in judgments of facial attractiveness: in models of face recognition, the average face operates as the norm against which individual faces are compared and recognized. Taken together, our study suggests that the preference for averageness in faces does not generalize to non-human primates.

  10. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases
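
    For reference, the q-average discussed here is the escort-distribution average standard in nonextensive statistical mechanics, which reduces to the ordinary average at q = 1:

        \langle A \rangle_q \;=\; \frac{\sum_i p_i^{\,q} A_i}{\sum_j p_j^{\,q}}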

  11. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...
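
    A minimal sketch of the pre-averaging step over all overlapping blocks, using the common weight g(x) = min(x, 1 - x) (the block length kn and the weight choice follow the usual convention in this literature, not necessarily every detail of the paper):

        import numpy as np

        def preaveraged_returns(returns, kn):
            # Weighted sums of kn - 1 consecutive high-frequency returns,
            # computed for every overlapping block of the series.
            j = np.arange(1, kn) / kn
            g = np.minimum(j, 1 - j)
            return np.convolve(returns, g[::-1], mode="valid")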

  12. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations for other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate various processes in studies of plume dispersion

  13. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
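
    A minimal sketch of one common Granger-Ramanathan variant (unconstrained, no intercept; names hypothetical): the combination weights are simply the least-squares coefficients from regressing the observations on the member simulations.

        import numpy as np

        def gra_weights(member_preds, obs):
            # member_preds: list of arrays of simulated flows, one per model.
            X = np.column_stack(member_preds)            # (n_times, n_models)
            w, *_ = np.linalg.lstsq(X, obs, rcond=None)  # least-squares weights
            return w

        # combined forecast: np.column_stack(member_preds) @ gra_weights(member_preds, obs)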

  14. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
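
    With the symbols defined in the excerpt (Vi, the batch volume, is truncated at the start above), the annual average is the volume-weighted mean over the n batches:

        \bar{S} \;=\; \frac{\sum_{i=1}^{n} V_i S_i}{\sum_{i=1}^{n} V_i}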

  15. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  16. TRIGA research reactors with higher power density

    International Nuclear Information System (INIS)

    Whittemore, W.L.

    1994-01-01

    The recent trend in new or upgraded research reactors is to higher power densities (hence higher neutron flux levels) but not necessarily to higher power levels. The TRIGA LEU fuel with burnable poison is available in small-diameter fuel rods capable of high power per rod (≅48 kW/rod) with acceptable peak fuel temperatures. The performance of a 10-MW research reactor with a compact core of hexagonal TRIGA fuel clusters has been calculated in detail. With its light water coolant, beryllium and D2O reflector regions, this reactor can provide in-core experiments with thermal fluxes in excess of 3 x 10^14 n/cm^2·s and fast fluxes (>0.1 MeV) of 2 x 10^14 n/cm^2·s. The core centerline thermal neutron flux in the D2O reflector is about 2 x 10^14 n/cm^2·s and the average core power density is about 230 kW/liter. Using other TRIGA fuel developed for 25-MW test reactors but arranged in hexagonal arrays, power densities in excess of 300 kW/liter are readily available. A core with TRIGA fuel operating at 15 MW and generating such a power density is capable of producing thermal neutron fluxes in a D2O reflector of 3 x 10^14 n/cm^2·s. A beryllium-filled central region of the core can further enhance the core leakage and hence the neutron flux in the reflector. (author)

  17. Introduction to the method of average magnitude analysis and application to natural convection in cavities

    International Nuclear Information System (INIS)

    Lykoudis, P.S.

    1995-01-01

    The method of Average Magnitude Analysis is a mixture of the Integral Method and the Order of Magnitude Analysis. The paper shows how the differential equations of conservation for steady-state, laminar, boundary layer flows are converted to a system of algebraic equations, where the result is a sum of the order of magnitude of each term, multiplied by a weight coefficient. These coefficients are determined from integrals containing the assumed velocity and temperature profiles. The method is illustrated by applying it to the case of drag and heat transfer over an infinite flat plate. It is then applied to the case of natural convection over an infinite flat plate with and without the presence of a horizontal magnetic field, and subsequently to enclosures of aspect ratios of one or higher. The final correlation in this instance yields the Nusselt number as a function of the aspect ratio and the Rayleigh and Prandtl numbers. This correlation is tested against a wide range of small and large values of these parameters. 19 refs., 4 figs

  18. Reynolds-Averaged Navier-Stokes Simulation of a 2D Circulation Control Wind Tunnel Experiment

    Science.gov (United States)

    Allan, Brian G.; Jones, Greg; Lin, John C.

    2011-01-01

    Numerical simulations are performed using a Reynolds-averaged Navier-Stokes (RANS) flow solver for a circulation control airfoil. 2D and 3D simulation results are compared to a circulation control wind tunnel test conducted at the NASA Langley Basic Aerodynamics Research Tunnel (BART). The RANS simulations are compared to a low blowing case with a jet momentum coefficient, C(sub u), of 0.047 and a higher blowing case of 0.115. Three-dimensional simulations of the model and tunnel walls show wall effects on the lift and airfoil surface pressures. These wall effects include a 4% decrease of the midspan sectional lift for the C(sub u) = 0.115 blowing condition. Simulations comparing the performance of the Spalart-Allmaras (SA) and Shear Stress Transport (SST) turbulence models are also made, showing the SST model compares best to the experimental data. A Rotational/Curvature Correction (RCC) to the turbulence model is also evaluated, demonstrating an improvement in the CFD predictions.

  19. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handle the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are resolved.

  20. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization

  1. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family-wise error rate.

  2. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  3. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
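
    A minimal sketch of the single-interferogram idea for one scan line (the background-removal step and all names are assumptions; in time-average holography the fringe function follows the zeroth-order Bessel function J0, so a further mapping is needed to convert the recovered phase into vibration amplitude):

        import numpy as np
        from scipy.signal import hilbert

        def fringe_phase(intensity):
            # Remove the background, form the analytic signal via the Hilbert
            # transform, and take the unwrapped angle as the fringe phase.
            ac = intensity - intensity.mean()
            return np.unwrap(np.angle(hilbert(ac)))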

  4. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. As the energy of the final hadron state increases, the leading contribution to the average multiplicity comes from the parton subprocess, due to the production of massive quark and gluon jets and their further fragmentation, while the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e− annihilation at high energies tends to unity

  5. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...). We apply this method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
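
    WLS-ICE itself is the authors' method; the generic building block it rests on is least squares with a full error covariance, sketched below (a textbook generalized least-squares solve under that assumption, not the paper's implementation):

        import numpy as np

        def gls_fit(X, y, cov):
            # Solve X' C^-1 X beta = X' C^-1 y, where C is the (correlated)
            # error covariance of the averaged data y and X the design matrix.
            ci = np.linalg.inv(cov)
            return np.linalg.solve(X.T @ ci @ X, X.T @ ci @ y)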

  6. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated by the averaged statistics

  7. Higher Education and Inequality

    Science.gov (United States)

    Brown, Roger

    2018-01-01

    After climate change, rising economic inequality is the greatest challenge facing the advanced Western societies. Higher education has traditionally been seen as a means to greater equality through its role in promoting social mobility. But with increased marketisation higher education now not only reflects the forces making for greater inequality…

  8. Higher Education in California

    Science.gov (United States)

    Public Policy Institute of California, 2016

    2016-01-01

    Higher education enhances Californians' lives and contributes to the state's economic growth. But population and education trends suggest that California is facing a large shortfall of college graduates. Addressing this short­fall will require strong gains for groups that have been historically under­represented in higher education. Substantial…

  9. Reimagining Christian Higher Education

    Science.gov (United States)

    Hulme, E. Eileen; Groom, David E., Jr.; Heltzel, Joseph M.

    2016-01-01

    The challenges facing higher education continue to mount. The shifting of the U.S. ethnic and racial demographics, the proliferation of advanced digital technologies and data, and the move from traditional degrees to continuous learning platforms have created an unstable environment to which Christian higher education must adapt in order to remain…

  10. Happiness in Higher Education

    Science.gov (United States)

    Elwick, Alex; Cannizzaro, Sara

    2017-01-01

    This paper investigates the higher education literature surrounding happiness and related notions: satisfaction, despair, flourishing and well-being. It finds that there is a real dearth of literature relating to profound happiness in higher education: much of the literature using the terms happiness and satisfaction interchangeably as if one were…

  11. Gender and Higher Education

    Science.gov (United States)

    Bank, Barbara J., Ed.

    2011-01-01

    This comprehensive, encyclopedic review explores gender and its impact on American higher education across historical and cultural contexts. Challenging recent claims that gender inequities in U.S. higher education no longer exist, the contributors--leading experts in the field--reveal the many ways in which gender is embedded in the educational…

  12. Quality of Higher Education

    DEFF Research Database (Denmark)

    Zou, Yihuan

    is about constructing a more inclusive understanding of quality in higher education through combining the macro, meso and micro levels, i.e. from the perspectives of national policy, higher education institutions as organizations in society, individual teaching staff and students. It covers both... Quality in higher education was not invented in recent decades – universities have always possessed mechanisms for assuring the quality of their work. The rising concern over quality is closely related to the changes in higher education and its social context. Among others, the most conspicuous changes are the massive expansion, diversification and increased cost in higher education, and new mechanisms of accountability initiated by the state. With these changes the traditional internally enacted academic quality-keeping has been given an important external dimension – quality assurance, which...

  13. Higher Efficiency HVAC Motors

    Energy Technology Data Exchange (ETDEWEB)

    Flynn, Charles Joseph [QM Power, Inc., Kansas City, MO (United States)

    2018-02-13

    The objective of this project was to design and build a cost competitive, more efficient heating, ventilation, and air conditioning (HVAC) motor than what is currently available on the market. Though different potential motor architectures among QMP’s primary technology platforms were investigated and evaluated, including through the building of numerous prototypes, the project ultimately focused on scaling up QM Power, Inc.’s (QMP) Q-Sync permanent magnet synchronous motors from available sub-fractional horsepower (HP) sizes for commercial refrigeration fan applications to larger fractional horsepower sizes appropriate for HVAC applications, and on adding multi-speed functionality. The more specific goal became the research, design, development, and testing of a prototype 1/2 HP Q-Sync motor that has at least two operating speeds and 87% peak efficiency, compared to incumbent electronically commutated motors (EC or ECM, also known as brushless direct current (DC) motors), the heretofore highest-efficiency HVACR fan motor solution, at approximately 82% peak efficiency. The resulting prototype motor achieved these goals, hitting 90% efficiency and 0.95 power factor at full load and speed, and 80% efficiency and 0.7 power factor at half speed. Q-Sync, developed in part through a DOE SBIR grant (Award # DE-SC0006311), is a novel, patented motor technology that improves on electronically commutated permanent magnet motors through an advanced electronic circuit technology. It allows a motor to “sync” with the alternating current (AC) power flow. It does so by eliminating the constant, wasteful power conversions from AC to DC and back to AC through the synthetic creation of a new AC wave on the primary circuit board (PCB) by a process called pulse width modulation (PWM; aka electronic commutation) that is incessantly required to sustain motor operation in an EC permanent magnet motor. The Q-Sync circuit improves the power factor of the motor by removing all

  14. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.

  15. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturers ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  16. High Average Power Fiber Laser for Satellite Communications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  17. A time averaged background compensator for Geiger-Mueller counters

    International Nuclear Information System (INIS)

    Bhattacharya, R.C.; Ghosh, P.K.

    1983-01-01

    The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel providing time averaged compensation. The method suits portable instruments. (orig.)

  18. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
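
    A minimal sketch of the central observable, the time averaged MSD of a single price trajectory (the geometric Brownian motion parameters below are illustrative only, not fitted values from the paper):

        import numpy as np

        def tamsd(x, lags):
            # Time averaged mean squared displacement: average of
            # (x(t + lag) - x(t))^2 over the sliding time t, for each lag.
            return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

        rng = np.random.default_rng(2)
        dt, mu, sigma = 1.0 / 252, 0.05, 0.2        # illustrative GBM parameters
        log_s = np.cumsum((mu - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * rng.normal(size=2000))
        print(tamsd(np.exp(log_s), lags=[1, 10, 100]))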

  19. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  20. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  1. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of one sample per 2.5 μs and a memory capacity of 256 x 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
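
    The quoted 36 dB maximum is consistent with coherent averaging of 2¹² sweeps: S/N improves by √N = 64, i.e. 20·log₁₀(64) ≈ 36 dB. A minimal sketch of a 'stable averaging' recurrence, assuming it keeps a calibrated running mean after every sweep (the exact hardware algorithm may differ):

    ```python
    import numpy as np

    def stable_average(sweeps):
        """Running ('stable') average: after each sweep the stored value is
        already a calibrated average, so it can be displayed at any time."""
        avg = np.zeros_like(sweeps[0], dtype=float)
        for n, sweep in enumerate(sweeps, start=1):
            avg += (sweep - avg) / n   # A_n = A_{n-1} + (x_n - A_{n-1}) / n
        return avg

    # coherent signal plus noise: averaging 2**12 sweeps improves S/N by
    # sqrt(4096) = 64, about 36 dB, as quoted for the instrument
    rng = np.random.default_rng(1)
    signal = np.sin(np.linspace(0, 4 * np.pi, 256))
    sweeps = [signal + rng.normal(0, 1, 256) for _ in range(2 ** 12)]
    fid = stable_average(sweeps)
    ```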

  2. The average-shadowing property and topological ergodicity for flows

    International Nuclear Information System (INIS)

    Gu Rongbao; Guo Wenjing

    2005-01-01

    In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic

  3. Testing the Limits of the Price Elasticity of Potential Students at Colleges and Universities: Has the Increased Direct Cost to the Student Begun to Drive down Higher Education Enrolment?

    Science.gov (United States)

    Fincher, Mark; Katsinas, Stephen

    2017-01-01

    Higher education enrolment has long been known to rise and fall counter to the current economic situation. This counter-cyclical enrolment response represents an economic principle where a price-elastic consumer is more likely to make a consumption choice when another valuable use of resources is not available. Higher unemployment has historically…

  4. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out

  5. Annual average equivalent dose of workers from the health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    The personnel monitoring data from 1985 to 1991 for workers in the health area were studied, obtaining a general overview of the change in the annual average equivalent dose. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses for the same sectors in different hospitals. (C.G.C.)

  6. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; 
Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  7. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
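
    A sketch of the basic EWMA/ARL machinery the paper builds on, for independent exponential observations; the copula-induced dependence studied in the paper is omitted, and the chart parameters below are illustrative:

    ```python
    import numpy as np

    def ewma_arl(mean, lam=0.1, L=2.7, mu0=1.0, n_sim=2000, n_max=5000, seed=2):
        """Monte Carlo Average Run Length of an EWMA chart for exponential data.

        z_i = lam*x_i + (1-lam)*z_{i-1}; the chart signals when z leaves
        mu0 +/- L*sigma*sqrt(lam/(2-lam)) (sigma = mu0 for the exponential).
        """
        rng = np.random.default_rng(seed)
        half_width = L * mu0 * np.sqrt(lam / (2.0 - lam))
        ucl, lcl = mu0 + half_width, mu0 - half_width
        run_lengths = []
        for _ in range(n_sim):
            z = mu0
            for i in range(1, n_max + 1):
                z = lam * rng.exponential(mean) + (1 - lam) * z
                if not (lcl < z < ucl):
                    break
            run_lengths.append(i)
        return np.mean(run_lengths)

    in_control_arl = ewma_arl(mean=1.0)   # large ARL desired in control
    shifted_arl = ewma_arl(mean=1.5)      # small ARL desired after a shift
    ```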

  8. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    Wetterich, C.

    1991-08-01

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  9. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  10. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is a central quantity in the study of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique of finite patterns for the integral of the geodesic distance with respect to the self-similar measure on the Sierpinski tetrahedron.

  11. An Experimental Study Related to Planning Abilities of Gifted and Average Students

    Directory of Open Access Journals (Sweden)

    Marilena Z. Leana-Taşcılar

    2016-02-01

    Gifted students differ from their average peers in psychological, social, emotional and cognitive development. One of these differences in the cognitive domain is related to executive functions. One of the most important executive functions is planning and organization ability. The aim of this study was to compare the planning abilities of gifted students with those of their average peers and to test the effectiveness of a training program on the planning abilities of gifted and average students. First, students' intelligence and planning abilities were measured and the students were then assigned to either an experimental or a control group. The groups were matched by intelligence and planning ability (experimental: 13 gifted and 8 average; control: 14 gifted and 8 average). In total 182 students (79 gifted and 103 average) participated in the study. Then, a training program was implemented in the experimental group to find out if it improved students' planning ability. Results showed that boys had better planning abilities than girls did, and gifted students had better planning abilities than their average peers did. Significant results were obtained in favor of the experimental group in the posttest scores

  12. Higher English for CFE

    CERN Document Server

    Bridges, Ann; Mitchell, John

    2015-01-01

    A brand new edition of the former Higher English: Close Reading, completely revised and updated for the new Higher element (Reading for Understanding, Analysis and Evaluation) - worth 30% of marks in the final exam! We are working with SQA to secure endorsement for this title. Written by two highly experienced authors, this book shows you how to practice for the Reading for Understanding, Analysis and Evaluation section of the new Higher English exam. This book introduces the terms and concepts that lie behind success and offers guidance on the interpretation of questions and targeting answers

  13. Planning for Higher Education.

    Science.gov (United States)

    Lindstrom, Caj-Gunnar

    1984-01-01

    Decision processes for strategic planning for higher education institutions are outlined using these parameters: institutional goals and power structure, organizational climate, leadership attitudes, specific problem type, and problem-solving conditions and alternatives. (MSE)

  14. Advert for higher education

    OpenAIRE

    N.V. Provozin; А.S. Teletov

    2011-01-01

    The article discusses the features of advertising for higher education institutions. It analyzes the results of marketing research on students' choice of institution and further study. Principles of the advertising campaign are considered on three levels: the university, the faculty, and the separate department.

  15. The Chicken Soup Effect: The Role of Recreation and Intramural Participation in Boosting Freshman Grade Point Average

    Science.gov (United States)

    Gibbison, Godfrey A.; Henry, Tracyann L.; Perkins-Brown, Jayne

    2011-01-01

    Freshman grade point average, in particular first semester grade point average, is an important predictor of survival and eventual student success in college. As many institutions of higher learning are searching for ways to improve student success, one would hope that policies geared towards the success of freshmen have long term benefits…

  16. On higher derivative gravity

    International Nuclear Information System (INIS)

    Accioly, A.J.

    1987-01-01

    A possible classical route leading towards a general relativity theory with higher derivatives, starting, in a sense, from first principles, is analysed. A completely causal vacuum solution with the symmetries of the Goedel universe is obtained in the framework of this higher-derivative gravity. This very peculiar and rare result is the first known vacuum solution of the fourth-order gravity theory that is not a solution of the corresponding Einstein's equations.(Author) [pt

  17. Higher Spins & Strings

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The conjectured relation between higher spin theories on anti de-Sitter (AdS) spaces and weakly coupled conformal field theories is reviewed. I shall then outline the evidence in favour of a concrete duality of this kind, relating a specific higher spin theory on AdS3 to a family of 2d minimal model CFTs. Finally, I shall explain how this relation fits into the framework of the familiar stringy AdS/CFT correspondence.

  18. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which were calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods, which are the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC), and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
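
    The best-performing Granger-Ramanathan variants amount to least-squares regression of the observed flows on the member simulations. A minimal sketch of variant-A-style weights on synthetic data; the constraint choices that distinguish the other variants are only noted in comments:

    ```python
    import numpy as np

    def granger_ramanathan_weights(sims, obs, intercept=False):
        """Least-squares model-averaging weights (Granger-Ramanathan style).

        sims : (n_time, n_models) matrix of simulated flows
        obs  : (n_time,) observed flows
        Variant A solves obs ~ sims @ w without constraints; other variants
        add an intercept or constrain the weights (e.g. non-negativity or
        summing to one).
        """
        X = np.column_stack([np.ones(len(obs)), sims]) if intercept else sims
        w, *_ = np.linalg.lstsq(X, obs, rcond=None)
        return w

    # combine hydrographs from three synthetic ensemble members
    rng = np.random.default_rng(3)
    truth = np.sin(np.linspace(0, 10, 500)) ** 2 * 100
    members = np.column_stack([truth + rng.normal(0, s, 500) for s in (5, 10, 20)])
    w = granger_ramanathan_weights(members, truth)
    averaged_hydrograph = members @ w
    ```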

  19. Identification of Large-Scale Structure Fluctuations in IC Engines using POD-Based Conditional Averaging

    Directory of Open Access Journals (Sweden)

    Buhl Stefan

    2016-01-01

    Cycle-to-Cycle Variations (CCV) in IC engines are a well-known phenomenon, and their definition and quantification are well-established for global quantities such as the mean pressure. On the other hand, the definition of CCV for local quantities, e.g. the velocity or the mixture distribution, is less straightforward. This paper proposes a new method to identify and calculate cyclic variations of the flow field in IC engines, emphasizing the different contributions from large-scale energetic (coherent) structures, identified by a combination of Proper Orthogonal Decomposition (POD) and conditional averaging, and small-scale fluctuations. Suitable subsets required for the conditional averaging are derived from combinations of the POD coefficients of the second and third modes. Within each subset, the velocity is averaged and these averages are compared to the ensemble-averaged velocity field, which is based on all cycles. The resulting difference between the subset average and the global average is identified as a cyclic fluctuation of the coherent structures. Then, within each subset, the remaining fluctuations are obtained from the difference between the instantaneous fields and the corresponding subset average. The proposed methodology is tested for two data sets obtained from scale-resolving engine simulations. For the first test case, the numerical database consists of 208 independent samples of a simplified engine geometry. For the second case, 120 cycles for the well-established Transparent Combustion Chamber (TCC) benchmark engine are considered. For both applications, the suitability of the method to identify the two contributions to CCV is discussed and the results are directly linked to the observed flow field structures.
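
    A compact sketch of the snapshot-POD step plus sign-based conditioning on the coefficients of modes 2 and 3. The actual subset definitions in the paper are more refined, so the grouping rule here is only a stand-in:

    ```python
    import numpy as np

    def pod_conditional_averages(snapshots):
        """POD-based conditional averaging of cycle-resolved velocity fields.

        snapshots : (n_cycles, n_points) array, one flattened field per cycle.
        Cycles are grouped by the signs of the POD coefficients of modes 2
        and 3, and a subset-averaged field is returned for each group.
        """
        mean = snapshots.mean(axis=0)
        fluct = snapshots - mean
        # snapshot POD via SVD: rows of vt are spatial modes,
        # coeffs[i, k] is the coefficient of mode k in cycle i
        u, s, vt = np.linalg.svd(fluct, full_matrices=False)
        coeffs = u * s
        subsets = {}
        for key in [(sa, sb) for sa in (-1, 1) for sb in (-1, 1)]:
            mask = (np.sign(coeffs[:, 1]) == key[0]) & (np.sign(coeffs[:, 2]) == key[1])
            if mask.any():
                subsets[key] = fluct[mask].mean(axis=0) + mean
        return mean, subsets

    rng = np.random.default_rng(4)
    ensemble_avg, cond_avgs = pod_conditional_averages(rng.normal(size=(120, 400)))
    ```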

  20. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
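
    For reference, the van Genuchten retention function mentioned above can be fitted to averaged retention data in a few lines; the numbers below are illustrative, not the paper's measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def van_genuchten(psi, alpha, n):
        """Relative saturation Se as a function of matric potential psi (< 0).

        Se = [1 + (alpha*|psi|)**n]**(-m), with m = 1 - 1/n (van Genuchten, 1980).
        """
        m = 1.0 - 1.0 / n
        return (1.0 + (alpha * np.abs(psi)) ** n) ** (-m)

    # fit averaged drying-branch data (synthetic example values, in cm)
    psi_obs = -np.array([5, 10, 20, 30, 40, 60, 80, 100], dtype=float)
    se_obs = np.array([0.99, 0.97, 0.90, 0.70, 0.45, 0.20, 0.10, 0.06])
    (alpha_fit, n_fit), _ = curve_fit(van_genuchten, psi_obs, se_obs, p0=(0.03, 4.0))
    ```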

  1. Analysis of litter size and average litter weight in pigs using a recursive model

    DEFF Research Database (Denmark)

    Varona, Luis; Sorensen, Daniel; Thompson, Robin

    2007-01-01

    An analysis of litter size and average piglet weight at birth in Landrace and Yorkshire using a standard two-trait mixed model (SMM) and a recursive mixed model (RMM) is presented. The RMM establishes a one-way link from litter size to average piglet weight. It is shown that there is a one-to-one correspondence between the parameters of SMM and RMM and that they generate equivalent likelihoods. As parameterized in this work, the RMM tests for the presence of a recursive relationship between additive genetic values, permanent environmental effects, and specific environmental effects of litter size, on average piglet weight. The equivalent standard mixed model tests whether or not the covariance matrices of the random effects have a diagonal structure. In Landrace, posterior predictive model checking supports a model without any form of recursion or, alternatively, a SMM with diagonal covariance

  2. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  3. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and 0.092 standard error of estimate. The data were also analyzed for a seasonal dependence and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed which has 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs

  4. The Average IQ of Sub-Saharan Africans: Comments on Wicherts, Dolan, and van der Maas

    Science.gov (United States)

    Lynn, Richard; Meisenberg, Gerhard

    2010-01-01

    Wicherts, Dolan, and van der Maas (2009) contend that the average IQ of sub-Saharan Africans is about 80. A critical evaluation of the studies presented by WDM shows that many of these are based on unrepresentative elite samples. We show that studies of 29 acceptably representative samples on tests other than the Progressive Matrices give a…

  5. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by

  6. 40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.

    Science.gov (United States)

    2010-07-01

    ... efficiency credits earned according to the provisions of § 86.1866-12(c); (iii) Off-cycle technology credits... change Selective Enforcement Auditing or in-use testing failures from a failure to a non-failure. The.... (l) Maintenance of records and submittal of information relevant to compliance with fleet average CO...

  7. Fuel Class Higher Alcohols

    KAUST Repository

    Sarathy, Mani

    2016-08-17

    This chapter focuses on the production and combustion of alcohol fuels with four or more carbon atoms, which we classify as higher alcohols. It assesses the feasibility of utilizing various C4-C8 alcohols as fuels for internal combustion engines. Utilizing higher-molecular-weight alcohols as fuels requires careful analysis of their fuel properties. ASTM standards provide fuel property requirements for spark-ignition (SI) and compression-ignition (CI) engines such as the stability, lubricity, viscosity, and cold filter plugging point (CFPP) properties of blends of higher alcohols. Important combustion properties that are studied include laminar and turbulent flame speeds, flame blowout/extinction limits, ignition delay under various mixing conditions, and gas-phase and particulate emissions. The chapter focuses on the combustion of higher alcohols in reciprocating SI and CI engines and discusses higher alcohol performance in SI and CI engines. Finally, the chapter identifies the sources, production pathways, and technologies currently being pursued for production of some fuels, including n-butanol, iso-butanol, and n-octanol.

  8. Average cross sections for the 252Cf neutron spectrum

    International Nuclear Information System (INIS)

    Dezso, Z.; Csikai, J.

    1977-01-01

    A number of average cross sections have been measured for 252Cf neutrons in (n,γ), (n,p), (n,2n) and (n,α) reactions by the activation method, and for fission with a fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb. The data as a function of target neutron number increase up to about N=60, with minima near closed shells. The values lie between 0.3 mb and 113 mb. These cross sections decrease significantly with increasing threshold energy. The values are below 20 mb. The data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Zsup(4/3)/A are shown. The results obtained are summarized in tables
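
    Spectrum-averaged cross sections of this kind are defined by weighting σ(E) with the 252Cf fission-neutron spectrum. A minimal numerical sketch using a Maxwellian approximation to that spectrum (T ≈ 1.42 MeV) and a schematic threshold excitation function of our own invention:

    ```python
    import numpy as np

    def cf252_spectrum_average(sigma, t_mev=1.42, e_max=20.0, n=20_000):
        """Average a cross section over a Maxwellian fit to the 252Cf
        fission-neutron spectrum, chi(E) ~ sqrt(E)*exp(-E/T), T ~ 1.42 MeV.

        sigma : callable returning the cross section (mb) at energy E (MeV)
        """
        e = np.linspace(1e-6, e_max, n)
        chi = np.sqrt(e) * np.exp(-e / t_mev)
        return np.trapz(sigma(e) * chi, e) / np.trapz(chi, e)

    # schematic threshold reaction rising above 5 MeV (not a measured curve)
    sigma_np = lambda e: np.where(e > 5.0, 30.0 * (1 - np.exp(-(e - 5.0))), 0.0)
    avg_mb = cf252_spectrum_average(sigma_np)
    ```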

  9. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)

  10. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  11. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Science.gov (United States)

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  12. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
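
    The azimuthal-averaging step is straightforward to sketch: bin reference pixels by radius about the centre, replace each pixel by its radial-bin mean, then correlate candidate patches against the symmetrised reference. The paper's rotational correlation coefficient is a further refinement not reproduced here:

    ```python
    import numpy as np

    def azimuthal_average(img):
        """Replace a square reference projection by its azimuthal average:
        pixels are binned by integer radius from the centre and each bin is
        set to its mean, giving a rotationally symmetric reference."""
        ny, nx = img.shape
        y, x = np.indices((ny, nx))
        r = np.hypot(x - (nx - 1) / 2, y - (ny - 1) / 2).astype(int)
        radial_sum = np.bincount(r.ravel(), weights=img.ravel())
        radial_n = np.bincount(r.ravel())
        return (radial_sum / radial_n)[r]

    def correlation_coefficient(patch, ref):
        """Plain (Pearson) correlation of an image patch with the reference."""
        p, q = patch - patch.mean(), ref - ref.mean()
        return float(np.sum(p * q) / np.sqrt(np.sum(p * p) * np.sum(q * q)))

    rng = np.random.default_rng(5)
    reference = azimuthal_average(rng.random((64, 64)))
    score = correlation_coefficient(rng.random((64, 64)), reference)
    ```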

  13. Measurement of average radon gas concentration at workplaces

    International Nuclear Information System (INIS)

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

    In this paper, results of measurements of the average radon gas concentration at workplaces (schools, kindergartens and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailing pond). Consequently, at workplaces where considerable seasonal changes of the radon concentration are expected, measurements should last for 12 months. If that is not possible, the chosen six-month period should contain summer and winter months as well. The average radon concentration during working hours can differ considerably from the average over the whole time in the case of frequent opening of doors and windows or the use of artificial ventilation. (authors)
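
    The working-hours versus whole-time distinction is simply two different averages over the same record; a synthetic illustration (all numbers invented):

    ```python
    import numpy as np

    # Hourly radon concentrations over one week (synthetic): the working-hours
    # average can differ considerably from the whole-time average when doors,
    # windows or ventilation change the air exchange during the day.
    rng = np.random.default_rng(8)
    hours = np.arange(7 * 24)
    diurnal = 400 + 150 * np.sin(2 * np.pi * hours / 24)     # Bq/m3
    conc = diurnal + rng.normal(0, 30, hours.size)
    working = (hours % 24 >= 8) & (hours % 24 < 16) & (hours // 24 < 5)

    whole_time_avg = conc.mean()
    working_hours_avg = conc[working].mean()
    ```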

  14. A Martian PFS average spectrum: Comparison with ISO SWS

    Science.gov (United States)

    Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.

    2005-08-01

    The evaluation of the planetary Fourier spectrometer performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393.]. First, the average conditions of the Martian atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO2 are compared, from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm⁻¹ for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshooting on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm⁻¹ or better. A large number of narrow features still to be identified are discovered.

  15. Higher spin gauge theories

    CERN Document Server

    Henneaux, Marc; Vasiliev, Mikhail A

    2017-01-01

    Symmetries play a fundamental role in physics. Non-Abelian gauge symmetries are the symmetries behind theories for massless spin-1 particles, while the reparametrization symmetry is behind Einstein's gravity theory for massless spin-2 particles. In supersymmetric theories these particles can be connected also to massless fermionic particles. Does Nature stop at spin-2 or can there also be massless higher spin theories. In the past strong indications have been given that such theories do not exist. However, in recent times ways to evade those constraints have been found and higher spin gauge theories have been constructed. With the advent of the AdS/CFT duality correspondence even stronger indications have been given that higher spin gauge theories play an important role in fundamental physics. All these issues were discussed at an international workshop in Singapore in November 2015 where the leading scientists in the field participated. This volume presents an up-to-date, detailed overview of the theories i...

  16. INTERNATIONALIZATION IN HIGHER EDUCATION

    Directory of Open Access Journals (Sweden)

    Catalina Crisan-Mitra

    2016-03-01

    Full Text Available Internationalization of higher education is one of the key trends of development. There are several approaches on how to achieve competitiveness and performance in higher education and international academic mobility; students’ exchange programs, partnerships are some of the aspects that can play a significant role in this process. This paper wants to point out the student’s perception regarding two main directions: one about the master students’ expectation regarding how an internationalized master should be organized and should function, and second the degree of satisfaction of the beneficiaries of internationalized master programs from Babe-Bolyai University. This article is based on an empirical qualitative research that was implemented to students of an internationalized master from the Faculty of Economics and Business Administration. This research can be considered a useful example for those preoccupied to increase the quality of higher education and conclusions drawn have relevance both theoretically and especially practically.

  17. Quality of Higher Education

    DEFF Research Database (Denmark)

    Zou, Yihuan; Zhao, Yingsheng; Du, Xiangyun

    This paper starts with a critical approach to reflect on the current practice of quality assessment and assurance in higher education. This is followed by a proposal that, in response to the global challenges for improving the quality of higher education, universities should take active actions of change by improving the quality of teaching and learning. This transformation involves a broad scale of change at the individual level, organizational level, and societal level. In this change process in higher education, staff development remains one of the key elements for university innovation and at the same time demands a systematic and holistic approach. From a constructivist perspective of understanding education and learning, this paper also discusses why and how universities should give more weight to learning and change the traditional role of teaching to an innovative approach of facilitation

  18. Exactly averaged equations for flow and transport in random media

    International Nuclear Information System (INIS)

    Shvidler, Mark; Karasaki, Kenzi

    2001-01-01

    It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood; for example, the convergence behavior and the accuracy of truncated perturbation series. Furthermore, the calculation of the high-order perturbations is very complicated. These problems have for a long time stimulated attempts to answer the question: are there in existence some exact, general and sufficiently universal forms of averaged equations? If the answer is positive, there arises the problem of constructing and analyzing these equations. There exist many publications related to these problems, oriented to different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method of finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of the conductivity, porosity and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases: 1. steady-state flow with sources in porous media with random conductivity; 2. transient flow with sources in compressible media with random conductivity and porosity; 3. non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversely isotropic, orthotropic), and we analyze the hypothesized structure of the non-local equations in the general case of stochastically homogeneous fields. (author)

  19. Increase in average foveal thickness after internal limiting membrane peeling

    Directory of Open Access Journals (Sweden)

    Kumagai K

    2017-04-01

    Kazuyuki Kumagai,1 Mariko Furukawa,1 Tetsuyuki Suetsugu,1 Nobuchika Ogino2 1Department of Ophthalmology, Kami-iida Daiichi General Hospital, 2Department of Ophthalmology, Nishigaki Eye Clinic, Aichi, Japan Purpose: To report the findings in three cases in which the average foveal thickness was increased after a thin epiretinal membrane (ERM was removed by vitrectomy with internal limiting membrane (ILM peeling.Methods: The foveal contour was normal preoperatively in all eyes. All cases underwent successful phacovitrectomy with ILM peeling for a thin ERM. The optical coherence tomography (OCT images were examined before and after the surgery. The changes in the average foveal (1 mm thickness and the foveal areas within 500 µm from the foveal center were measured. The postoperative changes in the inner and outer retinal areas determined from the cross-sectional OCT images were analyzed.Results: The average foveal thickness and the inner and outer foveal areas increased significantly after the surgery in each of the three cases. The percentage increase in the average foveal thickness relative to the baseline thickness was 26% in Case 1, 29% in Case 2, and 31% in Case 3. The percentage increase in the foveal inner retinal area was 71% in Case 1, 113% in Case 2, and 110% in Case 3, and the percentage increase in foveal outer retinal area was 8% in Case 1, 13% in Case 2, and 18% in Case 3.Conclusion: The increase in the average foveal thickness and the inner and outer foveal areas suggests that a centripetal movement of the inner and outer retinal layers toward the foveal center probably occurred due to the ILM peeling. Keywords: internal limiting membrane, optical coherence tomography, average foveal thickness, epiretinal membrane, vitrectomy

  20. Reputation in Higher Education

    DEFF Research Database (Denmark)

    Martensen, Anne; Grønholdt, Lars

    2005-01-01

    The purpose of this paper is to develop a reputation model for higher education programmes, provide empirical evidence for the model and illustrate its application by using Copenhagen Business School (CBS) as the recurrent case. The developed model is a cause-and-effect model linking image... The model can help leaders of higher education institutions to set strategic directions and support their decisions in an effort to create even better study programmes with a better reputation. Finally, managerial implications and directions for future research are discussed. Keywords: Reputation, image, corporate identity

  1. Reputation in Higher Education

    DEFF Research Database (Denmark)

    Plewa, Carolin; Ho, Joanne; Conduit, Jodie

    2016-01-01

    Reputation is critical for institutions wishing to attract and retain students in today's competitive higher education setting. Drawing on the resource based view and configuration theory, this research proposes that Higher Education Institutions (HEIs) need to understand not only the impact of independent resources but of resource configurations when seeking to achieve a strong, positive reputation. Utilizing fuzzy set qualitative comparative analysis (fsQCA), the paper provides insight into different configurations of resources that HEIs can utilize to build their reputation within their domestic...

  2. Navigating in higher education

    DEFF Research Database (Denmark)

    Thingholm, Hanne Balsby; Reimer, David; Keiding, Tina Bering

    This report is based on the questionnaire survey – Navigating in Higher Education (NiHE) – which comprises responses from 1410 bachelor students and 283 lecturers across nine degree programmes at Aarhus Universitet: Uddannelsesvidenskab, Historie, Nordisk sprog og litteratur, Informati...

  3. Positivity of the spherically averaged atomic one-electron density

    DEFF Research Database (Denmark)

    Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas

    2008-01-01

    We investigate the positivity of the spherically averaged atomic one-electron density ρ̃(r). For a ρ̃ which stems from a physical ground state we prove that ρ̃(r) > 0 for r ≥ 0. This article may be reproduced in its entirety for non-commercial purposes.

  4. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  5. MAIN STAGES SCIENTIFIC AND PRODUCTION MASTERING THE TERRITORY AVERAGE URAL

    Directory of Open Access Journals (Sweden)

    V.S. Bochko

    2006-09-01

    Questions of the shaping of the Average Ural as an industrial territory, on the basis of its scientific study and production development, are considered in the article. It is shown that the study of Ural resources and of the particularities of the vital activity of its population engaged Russian and foreign scientists in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to the systematic organizational-economic study of the productive forces, society and nature of the Average Ural. More attention is paid to the new problems of the region and to the need for their scientific solution.

  6. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  7. Non-self-averaging nucleation rate due to quenched disorder

    International Nuclear Information System (INIS)

    Sear, Richard P

    2012-01-01

    We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)

  8. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well known models, such as the Matérn covariance family and the Gaussian covariance, falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
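
    The moving-average construction itself is concrete: convolve a noise basis with a kernel. A minimal Gaussian-kernel sketch; the paper's power kernel and Lévy basis would slot into the same recipe:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def moving_average_field(n, kernel_width=10.0, seed=7):
        """Gaussian random field on an n x n grid as a moving average:
        white Gaussian noise convolved with a (here Gaussian) kernel.
        Other kernels, e.g. a power kernel, fit the same recipe."""
        rng = np.random.default_rng(seed)
        noise = rng.normal(size=(n, n))
        ax = np.arange(-3 * kernel_width, 3 * kernel_width + 1)
        xx, yy = np.meshgrid(ax, ax)
        kernel = np.exp(-(xx**2 + yy**2) / (2 * kernel_width**2))
        kernel /= np.sqrt(np.sum(kernel**2))   # unit-variance output
        return fftconvolve(noise, kernel, mode="same")

    field = moving_average_field(256)
    ```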

  9. Assembly Test of Elastic Averaging Technique to Improve Mechanical Alignment for Accelerating Structure Assemblies in CLIC

    CERN Document Server

    Huopana, J

    2010-01-01

    The CLIC (Compact LInear Collider) is being studied at CERN as a potential multi-TeV e+e- collider [1]. The manufacturing and assembly tolerances for the required RF-components are important for the final efficiency and for the operation of CLIC. The proper function of an accelerating structure is very sensitive to errors in shape and location of the accelerating cavity. This causes considerable issues in the field of mechanical design and manufacturing. Currently the design of the accelerating structures is a disk design. Alternatively it is possible to create the accelerating assembly from quadrants, which favours mass manufacturing. The functional shape inside of the accelerating structure remains the same and a single assembly uses fewer parts. The alignment of these quadrants has previously been made kinematic by using steel pins or spheres to align the pieces together. This method proved to be a tedious and time-consuming method of assembly. To limit the number of different error sources, a meth...

  10. 26 CFR 1.410(b)-5 - Average benefit percentage test.

    Science.gov (United States)

    2010-04-01

    ... benefit percentages may be determined on the basis of any definition of compensation that satisfies § 1... underlying definition of compensation that satisfies section 414(s). Except as otherwise specifically... definitions of section 414(s) compensation in the determination of rates; (B) Use of different definitions of...

  11. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for the single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale, by applying the mass weighted average, when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
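
    The phasic versus mass-weighted distinction can be made concrete with a toy ensemble; a sketch with invented concentration and velocity samples:

    ```python
    import numpy as np

    # Two averages used in two-phase flow statistics: the phasic (ensemble)
    # average of a grain quantity u, and the mass-weighted average, which
    # weights each realization by the local solid concentration c. With few
    # grains per control volume, c varies between realizations and the two
    # averages differ, as the abstract argues.
    rng = np.random.default_rng(6)
    n_real = 10_000
    c = rng.uniform(0.05, 0.55, n_real)                  # concentration per realization
    u = 1.0 + 0.5 * c + rng.normal(0, 0.1, n_real)       # velocity, correlated with c

    phasic_avg = u.mean()                                # simple ensemble average
    mass_weighted_avg = np.mean(c * u) / np.mean(c)      # mass-weighted average
    ```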

  12. Average and local structure of selected metal deuterides

    Energy Technology Data Exchange (ETDEWEB)

    Soerby, Magnus H.

    2005-07-01

    The main topic of this thesis is improved understanding of site preference and mutual interactions of deuterium (D) atoms in selected metallic metal deuterides. The work was partly motivated by reports of abnormally short D-D distances in RENiInD1.33 compounds (RE = rare-earth element; D-D ≈ 1.6 Å) which show that the so-called Switendick criterion, which demands a D-D separation of at least 2 Å, is not a universal rule. The work is experimental and heavily based on scattering measurements using x-rays (lab and synchrotron) and neutrons. In order to enhance data quality, deuterium is almost exclusively used instead of natural hydrogen in sample preparations. The data analyses are in some cases taken beyond 'conventional' analysis of the Bragg scattering, as the diffuse scattering contains important information on D-D distances in disordered deuterides (Papers 3 and 4). A considerable part of this work is devoted to determination of the crystal structure of the saturated Zr2Ni deuteride, Zr2NiD4.8. The structure remained unsolved when only a few months remained of the scholarship. The route to the correct structure was found at the last moment. In Chapter II this winding road towards the structure determination is described; an interesting exercise in how to cope with triclinic superstructures of metal hydrides. The solution emerged by combining data from synchrotron radiation powder x-ray diffraction (SR-PXD), powder neutron diffraction (PND) and electron diffraction (ED). The triclinic crystal structure, described in space group P1, is fully ordered with composition Zr4Ni2D9 (Zr2NiD4.5). The unit cell is doubled as compared to lower Zr2Ni deuterides due to a deuterium superstructure: asuper = a, bsuper = b - c, csuper = b + c. The deviation from higher symmetry is very small. The metal lattice is pseudo-I-centred tetragonal and the deuterium lattice is pseudo-C-centred monoclinic. The deuterium site preference in Zr2Ni

  13. Average and local structure of selected metal deuterides

    International Nuclear Information System (INIS)

    Soerby, Magnus H.

    2004-01-01

    The main topic of this thesis is improved understanding of site preference and mutual interactions of deuterium (D) atoms in selected metallic metal deuterides. The work was partly motivated by reports of abnormally short D-D distances in RENiInD1.33 compounds (RE = rare-earth element; D-D ≈ 1.6 Å) which show that the so-called Switendick criterion, which demands a D-D separation of at least 2 Å, is not a universal rule. The work is experimental and heavily based on scattering measurements using x-rays (lab and synchrotron) and neutrons. In order to enhance data quality, deuterium is almost exclusively used instead of natural hydrogen in sample preparations. The data analyses are in some cases taken beyond 'conventional' analysis of the Bragg scattering, as the diffuse scattering contains important information on D-D distances in disordered deuterides (Papers 3 and 4). A considerable part of this work is devoted to determination of the crystal structure of saturated Zr2Ni deuteride, Zr2NiD4.8. The structure remained unsolved when only a few months remained of the scholarship. The route to the correct structure was found at the last moment. In Chapter II this winding road towards the structure determination is described; an interesting exercise in how to cope with triclinic superstructures of metal hydrides. The solution emerged by combining data from synchrotron radiation powder x-ray diffraction (SR-PXD), powder neutron diffraction (PND) and electron diffraction (ED). The triclinic crystal structure, described in space group P1, is fully ordered with composition Zr4Ni2D9 (Zr2NiD4.5). The unit cell is doubled as compared to lower Zr2Ni deuterides due to a deuterium superstructure: asuper = a, bsuper = b - c, csuper = b + c. The deviation from higher symmetry is very small. The metal lattice is pseudo-I-centred tetragonal and the deuterium lattice is pseudo-C-centred monoclinic. The deuterium site preference in Zr2Ni deuterides at 1 bar D2 and

  14. Exploring Higher Thinking.

    Science.gov (United States)

    Conover, Willis M.

    1992-01-01

    Maintains that the social studies reform movement includes a call for the de-emphasis of rote memory and more attention to the development of higher-order thinking skills. Discusses the "thinking tasks" concept derived from the work of Hilda Taba and asserts that the tasks can be used with almost any social studies topic. (CFR)

  15. Higher-Order Hierarchies

    DEFF Research Database (Denmark)

    Ernst, Erik

    2003-01-01

    This paper introduces the notion of higher-order inheritance hierarchies. They are useful because they provide well-known benefits of object-orientation at the level of entire hierarchies - benefits which are not available with current approaches. Three facets must be addressed: First, it must be po...

  16. Inflation from higher dimensions

    International Nuclear Information System (INIS)

    Shafi, Q.

    1987-01-01

    We argue that an inflationary phase in the very early universe is related to the transition from a higher dimensional to a four-dimensional universe. We present details of a previously considered model which gives sufficient inflation without fine tuning of parameters. (orig.)

  17. Higher Education Funding Formulas.

    Science.gov (United States)

    McKeown-Moak, Mary P.

    1999-01-01

    One of the most critical components of the college or university chief financial officer's job is budget planning, especially using formulas. A discussion of funding formulas looks at advantages, disadvantages, and types of formulas used by states in budgeting for higher education, and examines how chief financial officers can position the campus…

  18. Liberty and Higher Education.

    Science.gov (United States)

    Thompson, Dennis F.

    1989-01-01

    John Stuart Mill's principle of liberty is discussed with the view that it needs to be revised to guide moral judgments in higher education. Three key elements need to be modified: the action that is constrained; the constraint on the action; and the agent whose action is constrained. (MLW)

  19. Fuel Class Higher Alcohols

    KAUST Repository

    Sarathy, Mani

    2016-01-01

    This chapter focuses on the production and combustion of alcohol fuels with four or more carbon atoms, which we classify as higher alcohols. It assesses the feasibility of utilizing various C4-C8 alcohols as fuels for internal combustion engines

  20. Evaluation in Higher Education

    Science.gov (United States)

    Bognar, Branko; Bungic, Maja

    2014-01-01

    One of the means of transforming classroom experience is by conducting action research with students. This paper reports on action research with university students, carried out within a semester of the course "Methods of Upbringing". Its goal has been to improve the evaluation of higher education teaching. Different forms…

  1. Higher-level Innovization

    DEFF Research Database (Denmark)

    Bandaru, Sunith; Tutum, Cem Celal; Deb, Kalyanmoy

    2011-01-01

    we introduce the higher-level innovization task through an application of a manufacturing process simulation for the Friction Stir Welding (FSW) process where commonalities among two different Pareto-optimal fronts are analyzed. Multiple design rules are simultaneously deciphered from each front...

  2. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  3. Creativity in Higher Education

    Science.gov (United States)

    Gaspar, Drazena; Mabic, Mirela

    2015-01-01

    The paper presents results of research related to perception of creativity in higher education made by the authors at the University of Mostar from Bosnia and Herzegovina. This research was based on a survey conducted among teachers and students at the University. The authors developed two types of questionnaires, one for teachers and the other…

  4. California's Future: Higher Education

    Science.gov (United States)

    Johnson, Hans

    2015-01-01

    California's higher education system is not keeping up with the changing economy. Projections suggest that the state's economy will continue to need more highly educated workers. In 2025, if current trends persist, 41 percent of jobs will require at least a bachelor's degree and 36 percent will require some college education short of a bachelor's…

  5. Cyberbullying in Higher Education

    Science.gov (United States)

    Minor, Maria A.; Smith, Gina S.; Brashen, Henry

    2013-01-01

    Bullying has extended beyond the schoolyard into online forums in the form of cyberbullying. Cyberbullying is a growing concern due to the effect on its victims. Current studies focus on grades K-12; however, cyberbullying has entered the world of higher education. The focus of this study was to identify the existence of cyberbullying in higher…

  6. Deblurring of class-averaged images in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Park, Wooram; Chirikjian, Gregory S; Madden, Dean R; Rockmore, Daniel N

    2010-01-01

    This paper proposes a method for the deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre–Fourier expansions, and Hermite expansion and Laguerre–Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method
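
    The paper's method works on the group SE(2) with Hermite expansions; the core idea, that convolution becomes multiplication under a Fourier transform so deconvolution becomes a regularized division, can be sketched in the ordinary planar setting. A minimal NumPy illustration, with an assumed Gaussian blur and damping constant, neither taken from the paper:

        import numpy as np

        def gaussian_kernel(shape, sigma):
            """Centered 2-D Gaussian point-spread function (wraps around for FFT use)."""
            y = np.fft.fftfreq(shape[0])[:, None] * shape[0]
            x = np.fft.fftfreq(shape[1])[None, :] * shape[1]
            k = np.exp(-(x**2 + y**2) / (2 * sigma**2))
            return k / k.sum()

        def deblur(blurred, psf, eps=1e-3):
            """Inverse filter in Fourier space, damping modes where |H| is tiny."""
            H = np.fft.fft2(psf)
            B = np.fft.fft2(blurred)
            X = B * np.conj(H) / (np.abs(H)**2 + eps)   # Wiener-like regularization
            return np.real(np.fft.ifft2(X))

        # Toy "class average": a sharp square blurred by the PSF.
        img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
        psf = gaussian_kernel(img.shape, sigma=2.0)
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
        restored = deblur(blurred, psf)
        print("restoration error:", np.abs(restored - img).mean())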

  7. Honest signaling in trust interactions: smiles rated as genuine induce trust and signal higher earning opportunities

    OpenAIRE

    Centorrino, S.; Djemai, E.; Hopfensitz, A.; Milinski, M.; Seabright, P.

    2015-01-01

    We test the hypothesis that smiles perceived as honest serve as a signal that has evolved to induce cooperation in situations requiring mutual trust. Potential trustees (84 participants from Toulouse, France) made two video clips averaging around 15 seconds for viewing by potential senders before the latter decided whether to ‘send’ or ‘keep’ a lower stake (4 euros) or higher stake (8 euros). Senders (198 participants from Lyon, France) made trust decisions with respect to the recorded clips....

  8. Competitiveness - higher education

    Directory of Open Access Journals (Sweden)

    Labas Istvan

    2016-03-01

    Full Text Available The involvement of the European Union plays an important role in the areas of education and training equally. The member states are responsible for organizing and operating their education and training systems themselves, and EU policy is aimed at supporting the efforts of member states and trying to find solutions for the common challenges which appear. The key to making our future maximally sustainable lies in education: a highly qualified workforce is the key to the development, advancement and innovation of the world. Nowadays, the competitiveness of higher education institutions has become more and more appreciated in the national economy. In recent years, the frameworks of operation of higher education systems have gone through a total transformation. The number of applying students is continuously decreasing in some European countries, therefore only those institutions can “survive” this shortfall which are able to minimize the loss in student numbers. In this process, the factors forming the competitiveness of these budgetary institutions play an important role from the point of view of survival. The more competitive a higher education institution is, the greater the chance that students will want to continue their studies there, and thus the greater that institution's chance of survival, compared to ones lagging behind in the competition. The aim of our treatise is to present the current situation and main data of EU higher education, and to examine the performance of higher education: to what extent it fulfils the strategy for smart, sustainable and inclusive growth worded in the framework of the Europe 2020 programme. The treatise is based on the analysis of statistical data.

  9. High Average Power UV Free Electron Laser Experiments At JLAB

    International Nuclear Information System (INIS)

    Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn

    2012-01-01

    Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  10. Establishment of Average Body Measurement and the Development ...

    African Journals Online (AJOL)

    body measurement for height and back-neck to waist for ages 2, 3, 4 and 5 years. ... average measurements of the different parts of the body must be established.

  11. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  12. Determination of the average lifetime of bottom hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W [Technische Hochschule Aachen (Germany, F.R.). Lehrstuhl fuer Experimentalphysik 1A und 1. Physikalisches Inst.; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e+e− annihilation to be τB = 1.83 × 10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes.

  13. Determination of the average lifetime of bottom hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e+e− annihilation to be τB = 1.83 × 10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes. (orig./HSI).

  14. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
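
    A minimal sketch of the three-stage Box-Jenkins workflow using the statsmodels library; the GPA series and the ARIMA(1,1,1) order below are invented for illustration:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # Hypothetical term-by-term GPA series, for illustration only.
        gpa = np.array([2.9, 3.0, 3.1, 3.0, 3.2, 3.3, 3.1, 3.2, 3.4, 3.3,
                        3.4, 3.5, 3.4, 3.5, 3.6, 3.5, 3.6, 3.7, 3.6, 3.7])

        # Identification is normally guided by ACF/PACF plots; here we simply
        # posit an ARIMA(1,1,1), then estimate and diagnose it.
        fit = ARIMA(gpa, order=(1, 1, 1)).fit()
        print(fit.summary())            # estimation and diagnostics
        print(fit.forecast(steps=3))    # forecast the next three terms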

  15. Crystallographic extraction and averaging of data from small image areas

    NARCIS (Netherlands)

    Perkins, GA; Downing, KH; Glaeser, RM

    The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that

  16. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
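
    The central fact of signal averaging, that uncorrelated noise falls as the square root of the number of repetitions, can be demonstrated in a few lines of NumPy (the waveform and noise level are invented):

        import numpy as np

        rng = np.random.default_rng(1)

        # A repeatable "evoked response" buried in Gaussian noise.
        t = np.linspace(0.0, 1.0, 500)
        signal = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)

        def snr_after_averaging(n_trials, noise_sd=2.0):
            trials = signal + rng.normal(0.0, noise_sd, size=(n_trials, t.size))
            averaged = trials.mean(axis=0)
            residual_rms = np.std(averaged - signal)
            return signal.std() / residual_rms

        for n in (1, 4, 16, 64, 256):
            print(f"N = {n:3d}  ->  SNR ~ {snr_after_averaging(n):5.2f}")
        # Each factor-of-4 increase in the number of averaged repetitions
        # roughly doubles the SNR: the noise falls as 1/sqrt(N).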

  17. Environmental stresses can alleviate the average deleterious effect of mutations

    Directory of Open Access Journals (Sweden)

    Leibler Stanislas

    2003-05-01

    Full Text Available Background: Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results: We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions: Our results show a qualitative difference between various environmental stresses, ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.

  18. The background effective average action approach to quantum gravity

    DEFF Research Database (Denmark)

    D’Odorico, G.; Codello, A.; Pagani, C.

    2016-01-01

    of a UV attractive non-Gaussian fixed-point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016....

  19. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotic to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth.

  20. Moving average rules as a source of market instability

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets
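
    A minimal sketch of the kind of moving-average trading rule the paper studies, applied to a synthetic random-walk price series (the windows, price model and trading convention are assumptions, not the paper's market model):

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic price path (geometric random walk), for illustration only.
        prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))

        def moving_average(x, window):
            return np.convolve(x, np.ones(window) / window, mode="valid")

        short, long_ = 10, 50
        ma_l = moving_average(prices, long_)
        ma_s = moving_average(prices, short)[-len(ma_l):]   # align the ends

        # Classic MA rule: hold the asset while the short MA is above the long MA.
        position = (ma_s > ma_l).astype(float)
        aligned = prices[-len(position):]
        returns = np.diff(np.log(aligned))
        strategy = position[:-1] * returns                  # trade on yesterday's signal
        print("buy-and-hold log return:", returns.sum())
        print("MA-rule log return:     ", strategy.sum())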

  1. arXiv Averaged Energy Conditions and Bouncing Universes

    CERN Document Server

    Giovannini, Massimo

    2017-11-16

    The dynamics of bouncing universes is characterized by violating certain coordinate-invariant restrictions on the total energy-momentum tensor, customarily referred to as energy conditions. Although there could be epochs in which the null energy condition is locally violated, it may perhaps be enforced in an averaged sense. Explicit examples of this possibility are investigated in different frameworks.

  2. 26 CFR 1.1301-1 - Averaging of farm income.

    Science.gov (United States)

    2010-04-01

    ... January 1, 2003, rental income based on a share of a tenant's production determined under an unwritten... the Collection of Income Tax at Source on Wages (Federal income tax withholding), or the amount of net... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Averaging of farm income. 1.1301-1 Section 1...

  3. Implications of Methodist clergies' average lifespan and missional ...

    African Journals Online (AJOL)

    2015-06-09

    The author of Genesis 5 paid meticulous attention to the lifespan of several people ... of Southern Africa (MCSA), and to argue that memories of the ... average ages at death were added up and the sum was divided by 12 (which represents the 12 ... not explicit in how the departed Methodist ministers were.

  4. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new datamining problem: which subset of the input space has the highest average output, given a minimal size for this subset. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.
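
    As a naive baseline for the problem statement (not the paper's algorithm), one can exhaustively scan intervals of the input space and keep the one with the highest mean output subject to the minimum-size constraint; the data and thresholds below are invented:

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy data: one input feature x, noisy output y with a high-output pocket.
        x = rng.uniform(0, 10, 2000)
        y = np.where((x > 6) & (x < 8), 5.0, 1.0) + rng.normal(0, 1.0, x.size)

        def best_interval(x, y, min_frac=0.1, n_edges=50):
            """Scan axis-aligned intervals covering at least min_frac of the
            samples and return the one with the highest mean output."""
            edges = np.quantile(x, np.linspace(0, 1, n_edges))
            best = (None, -np.inf)
            for i in range(len(edges)):
                for j in range(i + 1, len(edges)):
                    mask = (x >= edges[i]) & (x <= edges[j])
                    if mask.mean() >= min_frac and y[mask].mean() > best[1]:
                        best = ((edges[i], edges[j]), y[mask].mean())
            return best

        interval, mean_out = best_interval(x, y)
        print(f"best interval ~ {interval}, mean output {mean_out:.2f} "
              f"(global mean {y.mean():.2f})")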

  5. Average Distance Travelled To School by Primary and Secondary ...

    African Journals Online (AJOL)

    This study investigated average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and effect on attendance. These are among the top ten densely populated and educationally advantaged States in Nigeria. Research evidences report high dropout rates in ...

  6. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model in which we consider only the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides, requires an accurately averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, 125I and 109Cd. (Author) 4 refs

  7. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
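
    For the special case of a depolarizing channel (a much simpler setting than the general bound derived in the paper), the conversion from reported average fidelity to an error rate is elementary; a sketch, with the abstract's headline fidelities plugged in:

        def depolarizing_rate_from_fidelity(f_avg, d):
            """Invert F = 1 - p(d-1)/d for a d-dimensional depolarizing channel.
            This is the benign special case only, not the paper's general bound."""
            return d * (1.0 - f_avg) / (d - 1.0)

        # Reported fidelities: 99.9% for single-qubit (d=2), 99% for two-qubit (d=4).
        print(depolarizing_rate_from_fidelity(0.999, 2))  # -> 0.002
        print(depolarizing_rate_from_fidelity(0.990, 4))  # -> ~0.0133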

  8. 75 FR 78157 - Farmer and Fisherman Income Averaging

    Science.gov (United States)

    2010-12-15

    ... to the averaging of farm and fishing income in computing income tax liability. The regulations...: PART 1--INCOME TAXES 0 Paragraph 1. The authority citation for part 1 continues to read in part as... section 1 tax would be increased if one-third of elected farm income were allocated to each year. The...

  9. Domain-averaged Fermi-hole Analysis for Solids

    Czech Academy of Sciences Publication Activity Database

    Baranov, A.; Ponec, Robert; Kohout, M.

    2012-01-01

    Roč. 137, č. 21 (2012), s. 214109 ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords : bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.164, year: 2012

  10. Characteristics of phase-averaged equations for modulated wave groups

    NARCIS (Netherlands)

    Klopman, G.; Petit, H.A.H.; Battjes, J.A.

    2000-01-01

    The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).

  11. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their "height : radius" (H/R) ratio, as well as of the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6×10⁻² m³ as the central hole diameter of the ribs is changed. It has been shown that increasing the H/R ratio of tanks with smooth inner walls up to the limiting values significantly increases the average tank productivity and reduces the filling time. Increasing the H/R ratio of a 1.0 m³ tank to the limiting values (in comparison with the standard tank having H/R equal to 3.49) raises tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that the maximum average productivity and minimum filling time are reached for the 6×10⁻² m³ tank with a central hole diameter of the horizontal ribs of 6.4×10⁻² m.
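
    The geometric side of the H/R trade-off can be sketched directly: for a fixed volume, the wall area of a closed cylinder grows with the height-to-radius ratio. A small illustration (assuming the heat-exchange surface is the full cylinder wall, a simplification of the paper's model):

        import math

        def cylinder_area(volume, h_over_r):
            """Total wall area of a closed cylinder of given volume and H/R ratio."""
            r = (volume / (math.pi * h_over_r)) ** (1.0 / 3.0)
            h = h_over_r * r
            return 2 * math.pi * r * (r + h)

        v = 1.0  # m^3, as for the abstract's largest tank
        for ratio in (1.0, 3.49, 6.0, 10.0):
            print(f"H/R = {ratio:5.2f} -> area = {cylinder_area(v, ratio):.3f} m^2")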

  12. An averaged polarizable potential for multiscale modeling in phospholipid membranes

    DEFF Research Database (Denmark)

    Witzke, Sarah; List, Nanna Holmgaard; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    A set of average atom-centered charges and polarizabilities has been developed for three types of phospholipids for use in polarizable embedding calculations. The lipids investigated are 1,2-dimyristoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, and 1-palmitoyl...

  13. Understanding coastal morphodynamic patterns from depth-averaged sediment concentration

    NARCIS (Netherlands)

    Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.

    This review highlights the important role of the depth-averaged sediment concentration (DASC) to understand the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand

  14. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M. A.

    2004-01-01

    A previous method used for the determination of the average neutron flux within bulky samples has been applied for the measurements of hydrogen contents of different samples. An analytical function is given for the description of the correlation between the activity of Dy foils and the hydrogen concentrations. Results obtained by the activation and the thermal neutron reflection methods are compared

  15. Grade Point Average: What's Wrong and What's the Alternative?

    Science.gov (United States)

    Soh, Kay Cheng

    2011-01-01

    Grade point average (GPA) has been around for more than two centuries. However, it has created a lot of confusion, frustration, and anxiety to GPA-producers and users alike, especially when used across-nation for different purposes. This paper looks into the reasons for such a state of affairs from the perspective of educational measurement. It…

  16. The Effect of Honors Courses on Grade Point Averages

    Science.gov (United States)

    Spisak, Art L.; Squires, Suzanne Carter

    2016-01-01

    High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…

  17. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both those used by the official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research comprises laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. In the process of conducting the research, the following methods were used: analytical, statistical, calculated-mathematical and graphical. The main result of the research is an option for supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence lies in forming separate indexes for different categories of employees, mainly those engaged in internal secondary jobs. The need for introducing this correction factor comes from the current working conditions in a wide range of organizations, where an employee is often forced, in addition to the main position, to fulfil additional job duties. As a result, the average salary at an enterprise is frequently difficult to assess objectively, because multiple rates may be paid per staff member. In other words, the average salary of
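
    The proposed correction can be sketched as dividing payroll by full-time-equivalent positions rather than by headcount; all figures and weights below are hypothetical:

        # Illustrative only: a correction in the spirit the article argues for.
        staff = [
            {"salary": 30000, "fte": 1.0},   # single full-time position
            {"salary": 45000, "fte": 1.5},   # main position plus internal second job
            {"salary": 28000, "fte": 1.0},
            {"salary": 52000, "fte": 1.75},
        ]

        headcount_avg = sum(p["salary"] for p in staff) / len(staff)
        corrected_avg = sum(p["salary"] for p in staff) / sum(p["fte"] for p in staff)

        print(f"per-head average:      {headcount_avg:,.0f}")
        print(f"FTE-corrected average: {corrected_avg:,.0f}")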

  18. Developing an Empirical Test of the Impact of Vouchers on Elasticity of Demand for Post-Secondary Education and on the Financing of Higher Education; and Economic Efficiency in Post-Secondary Education. Final Project Report.

    Science.gov (United States)

    Newton, Jan N.; And Others

    Two separate NIE research projects in higher education, closely related in substance and complementary, were undertaken in Oregon in 1973-75. During the first year, the objectives were to: (1) compute and analyze various configurations of student schooling costs and financial resources according to institutional type and to student sex and…

  19. High-average-power diode-pumped Yb:YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scalable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods

  20. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  1. Construction of average adult Japanese voxel phantoms for dose assessment

    International Nuclear Information System (INIS)

    Sato, Kaoru; Takahashi, Fumiaki; Satoh, Daiki; Endo, Akira

    2011-12-01

    The International Commission on Radiological Protection (ICRP) adopted the adult reference voxel phantoms based on the physiological and anatomical reference data of Caucasians in October 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In the future, the dose coefficients for internal exposure and the dose conversion coefficients for external exposure calculated using the adult reference voxel phantoms will be widely used in radiation protection. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasians. In addition, there are cases in which anatomical characteristics such as body size, organ mass and posture of subjects influence the organ doses in dose assessment for medical treatments and radiation accidents. Therefore, it is necessary to use human phantoms with the average anatomical characteristics of Japanese. The authors constructed the averaged adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms. They have been modified in the following three aspects: (1) the heights and weights were made to agree with the Japanese averages; (2) the masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) the organs and tissues which were newly added for evaluation of the effective dose in ICRP Publication 103 were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the averaged adult Japanese male and female voxel phantoms developed as reference phantoms of adult Japanese. (author)

  2. An averaging battery model for a lead-acid battery operating in an electric car

    Science.gov (United States)

    Bozek, J. M.

    1979-01-01

    A battery model is developed based on time averaging the current or power, and is shown to be an effective means of predicting the performance of a lead acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.
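
    A minimal sketch of the time-averaging idea (capacity, currents and the repeating driving cycle are invented stand-ins, not SAE J227a schedule values, and regeneration is credited one-to-one as the abstract assumes):

        import numpy as np

        capacity_ah = 100.0            # assumed nominal battery capacity

        # One repeating driving cycle, sampled at 1 s: accelerate, cruise, brake, rest.
        cycle_current_a = np.concatenate([
            np.full(20, 150.0),        # acceleration draw
            np.full(60, 60.0),         # cruising draw
            np.full(10, -40.0),        # regenerative braking, credited one-to-one
            np.full(30, 0.0),          # stop
        ])

        avg_current = cycle_current_a.mean()
        run_time_h = capacity_ah / avg_current   # averaged-current depletion
        print(f"time-averaged current: {avg_current:.1f} A")
        print(f"predicted run time:    {run_time_h:.2f} h")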

  3. Average arterial input function for quantitative dynamic contrast enhanced magnetic resonance imaging of neck nodal metastases

    International Nuclear Information System (INIS)

    Shukla-Dave, Amita; Lee, Nancy; Stambuk, Hilda; Wang, Ya; Huang, Wei; Thaler, Howard T; Patel, Snehal G; Shah, Jatin P; Koutcher, Jason A

    2009-01-01

    The present study determines the feasibility of generating an average arterial input function (Avg-AIF) from a limited population of patients with neck nodal metastases to be used for pharmacokinetic modeling of dynamic contrast-enhanced MRI (DCE-MRI) data in clinical trials of larger populations. Twenty patients (mean age 50 years [range 27–77 years]) with neck nodal metastases underwent pretreatment DCE-MRI studies with a temporal resolution of 3.75 to 7.5 sec on a 1.5T clinical MRI scanner. Eleven individual AIFs (Ind-AIFs) met the criteria of expected enhancement pattern and were used to generate Avg-AIF. The Tofts model was used to calculate pharmacokinetic DCE-MRI parameters. Bland-Altman plots and paired Student t-tests were used to describe significant differences between the pharmacokinetic parameters obtained from individual and average AIFs. Ind-AIFs obtained from eleven patients were used to calculate the Avg-AIF. No overall significant difference (bias) was observed for the transfer constant (Ktrans) measured with Ind-AIFs compared to Avg-AIF (p = 0.20 for region-of-interest (ROI) analysis and p = 0.18 for histogram median analysis). Similarly, no overall significant difference was observed for interstitial fluid space volume fraction (ve) measured with Ind-AIFs compared to Avg-AIF (p = 0.48 for ROI analysis and p = 0.93 for histogram median analysis). However, the Bland-Altman plot suggests that as Ktrans increases, the Ind-AIF estimates tend to become proportionally higher than the Avg-AIF estimates. We found no statistically significant overall bias in Ktrans or ve estimates derived from Avg-AIF, generated from a limited population, as compared with Ind-AIFs. However, further study is needed to determine whether calibration is needed across the range of Ktrans. The Avg-AIF obtained from a limited population may be used for pharmacokinetic modeling of DCE-MRI data in larger population studies with neck nodal metastases. Further validation of
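
    For reference, the standard Tofts model used above can be evaluated numerically in a few lines; the bi-exponential AIF shape, sampling and parameter values below are invented for illustration and are not the study's Avg-AIF:

        import numpy as np

        def tofts_ct(t, cp, ktrans, ve):
            """Standard Tofts model:
            Ct(t) = Ktrans * integral_0^t cp(tau) exp(-(Ktrans/ve)(t - tau)) dtau."""
            kep = ktrans / ve
            dt = t[1] - t[0]
            ct = np.zeros_like(t)
            for i in range(t.size):
                tau = t[: i + 1]
                ct[i] = ktrans * np.trapz(cp[: i + 1] * np.exp(-kep * (t[i] - tau)), dx=dt)
            return ct

        # Invented bi-exponential stand-in for an average AIF, sampled every 5 s.
        t = np.arange(0, 300, 5.0) / 60.0                  # time in minutes
        cp = 5.0 * (np.exp(-1.5 * t) - np.exp(-6.0 * t))   # plasma concentration (mM)
        ct = tofts_ct(t, cp, ktrans=0.25, ve=0.30)         # Ktrans in 1/min
        print(np.round(ct[:5], 4))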

  4. An Enhanced Method to Estimate Heart Rate from Seismocardiography via Ensemble Averaging of Body Movements at Six Degrees of Freedom

    Directory of Open Access Journals (Sweden)

    Hyunwoo Lee

    2018-01-01

    Full Text Available Continuous cardiac monitoring has been developed to evaluate cardiac activity outside of clinical environments due to the advancement of novel instruments. Seismocardiography (SCG) is one of the vital components that could develop such a monitoring system. Although SCG has shown lower accuracy, this novel cardiac indicator has steadily been proposed as an alternative to traditional methods such as electrocardiography (ECG). Thus, it is necessary to develop an enhanced method by combining the significant cardiac indicators. In this study, the six-axis signals of accelerometer and gyroscope were measured and integrated by the L2 normalization and multi-dimensional kineticardiography (MKCG) approaches, respectively. The waveforms of accelerometer and gyroscope were standardized and combined via ensemble averaging, and the heart rate was calculated from the dominant frequency. Thirty participants (15 females) were asked to stand or sit in relaxed and aroused conditions. Their SCG was measured during the task. As a result, the proposed method showed higher accuracy than traditional SCG methods in all measurement conditions. The three main contributions are as follows: (1) the ensemble averaging enhanced heart rate estimation with the benefits of the six-axis signals; (2) the proposed method was compared with the previous SCG method that employs fewer axes; and (3) the method was tested in various measurement conditions for a more practical application.
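
    A toy version of the pipeline, standardize each axis, ensemble-average, then read the heart rate off the dominant spectral peak, can be sketched as follows (the sampling rate, phases and noise levels are invented; real SCG processing segments beats before averaging):

        import numpy as np

        rng = np.random.default_rng(4)
        fs = 100.0                                  # assumed sampling rate in Hz
        t = np.arange(0, 30, 1 / fs)

        # Stand-in six-axis recording: a 1.2 Hz (72 bpm) cardiac component plus
        # noise on three accelerometer and three gyroscope channels.
        axes = [np.sin(2 * np.pi * 1.2 * t + ph) + rng.normal(0, 1.0, t.size)
                for ph in np.linspace(0, np.pi / 4, 6)]

        # Standardize each axis (zero mean, unit L2 norm), then ensemble-average.
        standardized = [(a - a.mean()) / np.linalg.norm(a - a.mean()) for a in axes]
        ensemble = np.mean(standardized, axis=0)

        # Heart rate from the dominant frequency within a plausible cardiac band.
        spectrum = np.abs(np.fft.rfft(ensemble))
        freqs = np.fft.rfftfreq(ensemble.size, d=1 / fs)
        band = (freqs > 0.7) & (freqs < 3.0)        # roughly 42-180 bpm
        hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
        print(f"estimated heart rate: {hr_bpm:.1f} bpm")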

  5. Radiosensitivity of higher plants

    International Nuclear Information System (INIS)

    Feng Zhijie

    1992-11-01

    General views on the radiosensitivity of higher plants are introduced from published references. The radiosensitivity varies with species, varieties and organs or tissues. The main factors determining the radiosensitivity of different species are nucleus volume, chromosome volume, DNA content and endogenous compounds. The self-repair ability for DNA damage and the chemical groups of biological molecules, such as the -SH thiohydroxy groups of proteins, are the main factors determining the radiosensitivity of different varieties. Moisture, oxygen, temperature, radiosensitizers and protectors are important external factors for radiosensitivity. Both the multiple-target model and the Chadwick-Leenhouts model are suitable mathematical models for describing the radiosensitivity of higher plants, and the latter has clearer significance in biology.

  6. Higher Education Language Policy

    DEFF Research Database (Denmark)

    Lauridsen, Karen M.

    2013-01-01

    Summary of recommendations: HEIs are encouraged, within the framework of their own societal context, mission, vision and strategies, to develop the aims and objectives of a Higher Education Language Policy (HELP) that allows them to implement these strategies. In this process, they may want ...: As the first step in a Higher Education Language Policy, HEIs should determine the relative status and use of the languages employed in the institution, taking into consideration the answers to the following questions: What is/are the official language(s) of the HEI? What is/are the language... and the level of internationalisation the HEI has or wants to have, and as a direct implication of that, what are the language proficiency levels expected from the graduates of these programmes? Given the profile of the HEI and its educational strategies, which language components are to be offered within...

  7. The effect of wind shielding and pen position on the average daily weight gain and feed conversion rate of grower/finisher pigs

    DEFF Research Database (Denmark)

    Jensen, Dan B.; Toft, Nils; Cornou, Cécile

    2014-01-01

    of the effects of wind shielding, linear mixed models were fitted to describe the average daily weight gain and feed conversion rate of 1271 groups (14 individuals per group) of purebred Duroc, Yorkshire and Danish Landrace boars, as a function of shielding (yes/no), insert season (winter, spring, summer, autumn...... groups placed in the 1st and 4th pen (p=0.0001). A similar effect was not seen on smaller pigs. Pen placement appears to have no effect on feed conversion rate.No interaction effects between shielding and distance to the corridor could be demonstrated. Furthermore, in models including both factors......). The effect could not be tested for Yorkshire and Danish Landrace due to lack of data on these breeds. For groups of pigs above the average start weight, a clear tendency of higher growth rates at greater distances from the central corridor was observed, with the most significant differences being between...

  8. P1-25: Filling-in the Blind Spot with the Average Direction

    Directory of Open Access Journals (Sweden)

    Sang-Ah Yoo

    2012-10-01

    Full Text Available Previous studies have shown that the visual system integrates local motions and perceives the average direction (Watamaniuk & Duchon, 1992, Vision Research, 32, 931–941). We investigated whether the surface of the blind spot is filled in with the average direction of the surrounding local motions. To test this, we varied the direction of a random-dot kinematogram (RDK) both in adaptation and test. Motion aftereffects (MAE) were defined as the difference of motion coherence thresholds between with and without adaptation. The participants were initially adapted to an annular RDK surrounding the blind spot for 30 s in their dominant eyes. The direction of each dot in this RDK was selected equally and randomly from either a normal distribution with the mean of 15° clockwise from vertical, 15° counterclockwise from vertical, or from the mixture of them. Immediately after the adaptation, a disk-shaped test RDK was presented for 1 s to the corresponding blind-spot location in the opposite eye. This RDK moved either 15° clockwise, 15° counterclockwise, or vertically (the average of the two directions). The participants' task was to discriminate the direction of the test RDK across different coherence levels. We found significant MAE when the test RDK had the same directions as the adaptor. More importantly, equally strong MAE was observed even when the direction of the test RDK was vertical, which was not physically present during adaptation. The result demonstrates that the visual system uses the average direction of the local surrounding motions to fill in the blind spot.

  9. Position-Dependent Dynamics Explain Pore-Averaged Diffusion in Strongly Attractive Adsorptive Systems.

    Science.gov (United States)

    Krekelberg, William P; Siderius, Daniel W; Shen, Vincent K; Truskett, Thomas M; Errington, Jeffrey R

    2017-12-12

    Using molecular simulations, we investigate the relationship between the pore-averaged and position-dependent self-diffusivity of a fluid adsorbed in a strongly attractive pore as a function of loading. Previous work (Krekelberg, W. P.; Siderius, D. W.; Shen, V. K.; Truskett, T. M.; Errington, J. R. Connection between thermodynamics and dynamics of simple fluids in highly attractive pores. Langmuir 2013, 29, 14527-14535, doi: 10.1021/la4037327) established that pore-averaged self-diffusivity in the multilayer adsorption regime, where the fluid exhibits a dense film at the pore surface and a lower density interior pore region, is nearly constant as a function of loading. Here we show that this puzzling behavior can be understood in terms of how loading affects the fraction of particles that reside in the film and interior pore regions as well as their distinct dynamics. Specifically, the insensitivity of pore-averaged diffusivity to loading arises from the approximate cancellation of two factors: an increase in the fraction of particles in the higher diffusivity interior pore region with loading and a corresponding decrease in the particle diffusivity in that region. We also find that the position-dependent self-diffusivities scale with the position-dependent density. We present a model for predicting the pore-average self-diffusivity based on the position-dependent self-diffusivity, which captures the unusual characteristics of pore-averaged self-diffusivity in strongly attractive pores over several orders of magnitude.
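
    The cancellation argument can be made concrete with a two-region mixing model; the fractions and region diffusivities below are invented to mimic the described trend, not simulation data:

        import numpy as np

        # Pore-averaged diffusivity as a population-weighted mix of a slow
        # surface film and a faster interior region.
        loading = np.linspace(0.2, 0.8, 4)     # fraction of pore filled (invented)
        f_interior = loading                    # more particles sit interior at high loading
        d_film = 0.05                           # slow film diffusivity (invented units)
        d_interior = 0.25 / loading             # interior diffusivity falls as it crowds

        d_avg = (1 - f_interior) * d_film + f_interior * d_interior
        for l, d in zip(loading, d_avg):
            print(f"loading {l:.1f} -> pore-averaged D = {d:.3f}")
        # The growing interior fraction and its falling diffusivity roughly
        # offset, leaving the average nearly flat, as described above.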

  10. Are MFT-B Results Biased Because of Students Who Do Not Take the Test?

    Science.gov (United States)

    Valero, Magali; Kocher, Claudia

    2014-01-01

    The authors study the characteristics of students who take the Major Field Test in Business (MFT-B) versus those who do not. The authors find that students with higher cumulative grade point averages (GPAs) are more likely to take the test. Additionally, students are more likely to take the test if it is offered late in the semester. Further…

  11. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  12. Resident characterization of better-than- and worse-than-average clinical teaching.

    Science.gov (United States)

    Haydar, Bishr; Charnin, Jonathan; Voepel-Lewis, Terri; Baker, Keith

    2014-01-01

    Clinical teachers and trainees share a common view of what constitutes excellent clinical teaching, but associations between these behaviors and high teaching scores have not been established. This study used residents' written feedback to their clinical teachers, to identify themes associated with above- or below-average teaching scores. All resident evaluations of their clinical supervisors in a single department were collected from January 1, 2007 until December 31, 2008. A mean teaching score assigned by each resident was calculated. Evaluations that were 20% higher or 15% lower than the resident's mean score were used. A subset of these evaluations was reviewed, generating a list of 28 themes for further study. Two researchers then, independently coded the presence or absence of these themes in each evaluation. Interrater reliability of the themes and logistic regression were used to evaluate the predictive associations of the themes with above- or below-average evaluations. Five hundred twenty-seven above-average and 285 below-average evaluations were evaluated for the presence or absence of 15 positive themes and 13 negative themes, which were divided into four categories: teaching, supervision, interpersonal, and feedback. Thirteen of 15 positive themes correlated with above-average evaluations and nine had high interrater reliability (Intraclass Correlation Coefficient >0.6). Twelve of 13 negative themes correlated with below-average evaluations, and all had high interrater reliability. On the basis of these findings, the authors developed 13 recommendations for clinical educators. The authors developed 13 recommendations for clinical teachers using the themes identified from the above- and below-average clinical teaching evaluations submitted by anesthesia residents.

  13. The calculation of average error probability in a digital fibre optical communication system

    Science.gov (United States)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
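
    For the simplest Gaussian approximation of such a receiver, the exact tail probability and the Chernoff bound can be compared directly (symbol separation and noise level are invented; the paper's analysis additionally includes shot noise and ISI):

        import math

        def q_function(x):
            """Gaussian tail probability Q(x) via the complementary error function."""
            return 0.5 * math.erfc(x / math.sqrt(2))

        def chernoff_bound(x):
            """Classic Chernoff bound on the Gaussian tail: Q(x) <= exp(-x^2/2)/2."""
            return 0.5 * math.exp(-x * x / 2)

        # Illustrative equalized binary receiver: symbol separation d, noise sigma.
        d, sigma = 1.0, 0.15
        x = d / (2 * sigma)
        print(f"exact (erfc) Pe = {q_function(x):.3e}")
        print(f"Chernoff bound  = {chernoff_bound(x):.3e}")
        # The bound is looser but far cheaper than evaluating the full
        # characteristic-function integral.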

  14. Beef steers with average dry matter intake and divergent average daily gain have altered gene expression in the jejunum

    Science.gov (United States)

    The objective of this study was to determine the association of differentially expressed genes (DEG) in the jejunum of steers with average DMI and high or low ADG. Feed intake and growth were measured in a cohort of 144 commercial Angus steers consuming a finishing diet containing (on a DM basis) 67...

  15. Is average daily travel time expenditure constant? In search of explanations for an increase in average travel time.

    NARCIS (Netherlands)

    van Wee, B.; Rietveld, P.; Meurs, H.

    2006-01-01

    Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably

  16. Cast Stone Formulation At Higher Sodium Concentrations

    International Nuclear Information System (INIS)

    Fox, K. M.; Edwards, T. A.; Roberts, K. B.

    2013-01-01

    A low temperature waste form known as Cast Stone is being considered to provide supplemental Low Activity Waste (LAW) immobilization capacity for the Hanford site. Formulation of Cast Stone at high sodium concentrations is of interest since a significant reduction in the necessary volume of Cast Stone and subsequent disposal costs could be achieved if an acceptable waste form can be produced with a high sodium molarity salt solution combined with a high water to premix (or dry blend) ratio. The objectives of this study were to evaluate the factors involved with increasing the sodium concentration in Cast Stone, including production and performance properties and the retention and release of specific components of interest. Three factors were identified for the experimental matrix: the concentration of sodium in the simulated salt solution, the water to premix ratio, and the blast furnace slag portion of the premix. The salt solution simulants used in this study were formulated to represent the overall average waste composition. The cement, blast furnace slag, and fly ash were sourced from a supplier in the Hanford area in order to be representative. The test mixes were prepared in the laboratory and fresh properties were measured. Fresh density increased with increasing sodium molarity and with decreasing water to premix ratio, as expected given the individual densities of these components. Rheology measurements showed that all of the test mixes produced very fluid slurries. The fresh density and rheology data are of potential value in designing a future Cast Stone production facility. Standing water and density gradient testing showed that settling is not of particular concern for the high sodium compositions studied. Heat of hydration measurements may provide some insight into the reactions that occur within the test mixes, which may in turn be related to the properties and performance of the waste form. These measurements showed that increased sodium

  17. Cast Stone Formulation At Higher Sodium Concentrations

    International Nuclear Information System (INIS)

    Fox, K. M.; Roberts, K. A.; Edwards, T. B.

    2014-01-01

    A low temperature waste form known as Cast Stone is being considered to provide supplemental Low Activity Waste (LAW) immobilization capacity for the Hanford site. Formulation of Cast Stone at high sodium concentrations is of interest since a significant reduction in the necessary volume of Cast Stone and subsequent disposal costs could be achieved if an acceptable waste form can be produced with a high sodium molarity salt solution combined with a high water to premix (or dry blend) ratio. The objectives of this study were to evaluate the factors involved with increasing the sodium concentration in Cast Stone, including production and performance properties and the retention and release of specific components of interest. Three factors were identified for the experimental matrix: the concentration of sodium in the simulated salt solution, the water to premix ratio, and the blast furnace slag portion of the premix. The salt solution simulants used in this study were formulated to represent the overall average waste composition. The cement, blast furnace slag, and fly ash were sourced from a supplier in the Hanford area in order to be representative. The test mixes were prepared in the laboratory and fresh properties were measured. Fresh density increased with increasing sodium molarity and with decreasing water to premix ratio, as expected given the individual densities of these components. Rheology measurements showed that all of the test mixes produced very fluid slurries. The fresh density and rheology data are of potential value in designing a future Cast Stone production facility. Standing water and density gradient testing showed that settling is not of particular concern for the high sodium compositions studied. Heat of hydration measurements may provide some insight into the reactions that occur within the test mixes, which may in turn be related to the properties and performance of the waste form. These measurements showed that increased sodium

  18. Cast Stone Formulation At Higher Sodium Concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Roberts, K. A. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2014-02-28

    A low temperature waste form known as Cast Stone is being considered to provide supplemental Low Activity Waste (LAW) immobilization capacity for the Hanford site. Formulation of Cast Stone at high sodium concentrations is of interest since a significant reduction in the necessary volume of Cast Stone and subsequent disposal costs could be achieved if an acceptable waste form can be produced with a high sodium molarity salt solution combined with a high water to premix (or dry blend) ratio. The objectives of this study were to evaluate the factors involved with increasing the sodium concentration in Cast Stone, including production and performance properties and the retention and release of specific components of interest. Three factors were identified for the experimental matrix: the concentration of sodium in the simulated salt solution, the water to premix ratio, and the blast furnace slag portion of the premix. The salt solution simulants used in this study were formulated to represent the overall average waste composition. The cement, blast furnace slag, and fly ash were sourced from a supplier in the Hanford area in order to be representative. The test mixes were prepared in the laboratory and fresh properties were measured. Fresh density increased with increasing sodium molarity and with decreasing water to premix ratio, as expected given the individual densities of these components. Rheology measurements showed that all of the test mixes produced very fluid slurries. The fresh density and rheology data are of potential value in designing a future Cast Stone production facility. Standing water and density gradient testing showed that settling is not of particular concern for the high sodium compositions studied. Heat of hydration measurements may provide some insight into the reactions that occur within the test mixes, which may in turn be related to the properties and performance of the waste form. These measurements showed that increased sodium

  19. Cast Stone Formulation At Higher Sodium Concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Fox, K. M.; Roberts, K. A.; Edwards, T. B.

    2013-10-02

    A low temperature waste form known as Cast Stone is being considered to provide supplemental Low Activity Waste (LAW) immobilization capacity for the Hanford site. Formulation of Cast Stone at high sodium concentrations is of interest since a significant reduction in the necessary volume of Cast Stone and subsequent disposal costs could be achieved if an acceptable waste form can be produced with a high sodium molarity salt solution combined with a high water to premix (or dry blend) ratio. The objectives of this study were to evaluate the factors involved with increasing the sodium concentration in Cast Stone, including production and performance properties and the retention and release of specific components of interest. Three factors were identified for the experimental matrix: the concentration of sodium in the simulated salt solution, the water to premix ratio, and the blast furnace slag portion of the premix. The salt solution simulants used in this study were formulated to represent the overall average waste composition. The cement, blast furnace slag, and fly ash were sourced from a supplier in the Hanford area in order to be representative. The test mixes were prepared in the laboratory and fresh properties were measured. Fresh density increased with increasing sodium molarity and with decreasing water to premix ratio, as expected given the individual densities of these components. Rheology measurements showed that all of the test mixes produced very fluid slurries. The fresh density and rheology data are of potential value in designing a future Cast Stone production facility. Standing water and density gradient testing showed that settling is not of particular concern for the high sodium compositions studied. Heat of hydration measurements may provide some insight into the reactions that occur within the test mixes, which may in turn be related to the properties and performance of the waste form. These measurements showed that increased sodium

  20. Cast Stone Formulation At Higher Sodium Concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Fox, K. M.; Roberts, K. A.; Edwards, T. B.

    2013-09-17

    A low temperature waste form known as Cast Stone is being considered to provide supplemental Low Activity Waste (LAW) immobilization capacity for the Hanford site. Formulation of Cast Stone at high sodium concentrations is of interest since a significant reduction in the necessary volume of Cast Stone and subsequent disposal costs could be achieved if an acceptable waste form can be produced with a high sodium molarity salt solution combined with a high water to premix (or dry blend) ratio. The objectives of this study were to evaluate the factors involved with increasing the sodium concentration in Cast Stone, including production and performance properties and the retention and release of specific components of interest. Three factors were identified for the experimental matrix: the concentration of sodium in the simulated salt solution, the water to premix ratio, and the blast furnace slag portion of the premix. The salt solution simulants used in this study were formulated to represent the overall average waste composition. The cement, blast furnace slag, and fly ash were sourced from a supplier in the Hanford area in order to be representative. The test mixes were prepared in the laboratory and fresh properties were measured. Fresh density increased with increasing sodium molarity and with decreasing water to premix ratio, as expected given the individual densities of these components. Rheology measurements showed that all of the test mixes produced very fluid slurries. The fresh density and rheology data are of potential value in designing a future Cast Stone production facility. Standing water and density gradient testing showed that settling is not of particular concern for the high sodium compositions studied. Heat of hydration measurements may provide some insight into the reactions that occur within the test mixes, which may in turn be related to the properties and performance of the waste form. These measurements showed that increased sodium