WorldWideScience

Sample records for higher average number

  1. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. These asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window; thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
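
A minimal numerical sketch of the first-order case described above: the detrending moving average variance of a fractional Brownian series scales as the window size to the power 2H, so the Hurst exponent can be read off a log-log fit. Series length, window sizes, and the NumPy implementation are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Sketch of the detrending moving average (DMA) idea for the first-order
# (standard moving average) case; parameters are illustrative.
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(20000))  # Brownian series, true H = 0.5

windows = np.array([8, 16, 32, 64, 128, 256])
sigma = []
for n in windows:
    kernel = np.ones(n) / n
    trend = np.convolve(y, kernel, mode="valid")  # moving-average "trend"
    resid = y[n - 1:] - trend                     # detrended series
    sigma.append(np.sqrt(np.mean(resid ** 2)))

# sigma_DMA(n) ~ n**H, so H is the slope of log(sigma) versus log(n).
H = np.polyfit(np.log(windows), np.log(sigma), 1)[0]
print(f"estimated Hurst exponent ~ {H:.2f}")  # close to 0.5 for Brownian motion
```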

  2. The average crossing number of equilateral random polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A

    2003-01-01

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n−n₀)ln(n−n₀) + b(n−n₀) + c, where a, b and c are constants depending on K and n₀ is the minimal number of segments required to form K. The ⟨ACN⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length nₑ(K) at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type K′) does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.
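
The leading-order formula and the knot-type fit form quoted above can be evaluated directly; in the sketch below the constants a, b, c, n₀ are placeholders, since the abstract only states that they depend on the knot type K.

```python
import numpy as np

# Leading term quoted in the abstract: <ACN>(n) ≈ (3/16) n ln n + O(n);
# the O(n) term is unknown, so only the leading term is evaluated here.
def acn_leading(n):
    return (3.0 / 16.0) * n * np.log(n)

for n in (50, 100, 500):
    print(n, round(acn_leading(n), 1))

# Fit form for a fixed knot type K; a, b, c, n0 are placeholder arguments,
# the paper only says they are constants depending on K.
def acn_knot(n, a, b, c, n0):
    return a * (n - n0) * np.log(n - n0) + b * (n - n0) + c
```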

  3. Decreases in average bacterial community rRNA operon copy number during succession.

    Science.gov (United States)

    Nemergut, Diana R; Knelman, Joseph E; Ferrenberg, Scott; Bilinski, Teresa; Melbourne, Brett; Jiang, Lin; Violle, Cyrille; Darcy, John L; Prest, Tiffany; Schmidt, Steven K; Townsend, Alan R

    2016-05-01

    Trait-based studies can help clarify the mechanisms driving patterns of microbial community assembly and coexistence. Here, we use a trait-based approach to explore the importance of rRNA operon copy number in microbial succession, building on prior evidence that organisms with higher copy numbers respond more rapidly to nutrient inputs. We set flasks of heterotrophic media into the environment and examined bacterial community assembly at seven time points. Communities were arrayed along a geographic gradient to introduce stochasticity via dispersal processes and were analyzed using 16S rRNA gene pyrosequencing, and rRNA operon copy number was modeled using ancestral trait reconstruction. We found that taxonomic composition was similar between communities at the beginning of the experiment and then diverged through time; likewise, phylogenetic clustering within communities decreased over time. The average rRNA operon copy number decreased over the experiment, and variance in rRNA operon copy number was lowest both early and late in succession. We then analyzed bacterial community data from other soil and sediment primary and secondary successional sequences from three markedly different ecosystem types. Our results demonstrate that decreases in average copy number are a consistent feature of communities across various drivers of ecological succession. Importantly, our work supports the scaling of the copy number trait over multiple levels of biological organization, ranging from cells to populations and communities, with implications for both microbial ecology and evolution.

  4. The average inter-crossing number of equilateral random walks and polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Stasiak, A

    2005-01-01

    In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ⟨ICN⟩ between two equilateral random walks of the same length n is approximately linear in n, and we were able to determine the prefactor of the linear term, a = 3 ln 2/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ⟨ICN⟩ is also linear, but the prefactor of the linear term is different from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance ρ apart and ρ is small compared to n. We propose a fitting model that captures the theoretical asymptotic behaviour of the mean average ICN for large values of ρ. Our simulation results show that the model in fact works very well for the entire range of ρ. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) still approaches infinity if the length of the other random walk (polygon) approaches infinity. The data provided by our simulations match our theoretical predictions very well.
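
A quick arithmetic check of the prefactor quoted above, which is just 3 ln 2 / 8:

```python
import math

# Prefactor of the linear term quoted in the abstract: a = 3 ln 2 / 8.
a = 3 * math.log(2) / 8
print(round(a, 4))  # 0.2599
```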

  5. Relationships Between Average Depth and Number of Nodes for Decision Trees

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    This paper presents a new tool for the study of relationships between total path length or average depth and number of nodes of decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from UCI ML Repository.

  6. The average number of partons per clan in rapidity intervals in parton showers

    Energy Technology Data Exchange (ETDEWEB)

    Giovannini, A. [Turin Univ. (Italy). Ist. di Fisica Teorica]; Lupia, S. [Max-Planck-Institut fuer Physik, Muenchen (Germany). Werner-Heisenberg-Institut]; Ugoccioni, R. [Lund Univ. (Sweden). Dept. of Theoretical Physics]

    1996-04-01

    The dependence of the average number of partons per clan on virtuality and rapidity variables is analytically predicted in the framework of the Generalized Simplified Parton Shower model, based on the idea that clans are genuine elementary subprocesses. The obtained results are found to be qualitatively consistent with experimental trends. This study extends previous results on the behavior of the average number of clans in virtuality and rapidity, and shows how important physical quantities can be calculated analytically in a model based on essentials of QCD that allows local violations of the energy-momentum conservation law while still requiring its global validity. (orig.)

  7. The average number of partons per clan in rapidity intervals in parton showers

    International Nuclear Information System (INIS)

    Giovannini, A.; Lupia, S.; Ugoccioni, R.

    1996-01-01

    The dependence of the average number of partons per clan on virtuality and rapidity variables is analytically predicted in the framework of the Generalized Simplified Parton Shower model, based on the idea that clans are genuine elementary subprocesses. The obtained results are found to be qualitatively consistent with experimental trends. This study extends previous results on the behavior of the average number of clans in virtuality and rapidity, and shows how important physical quantities can be calculated analytically in a model based on essentials of QCD that allows local violations of the energy-momentum conservation law while still requiring its global validity. (orig.)

  8. Relationships Between Average Depth and Number of Nodes for Decision Trees

    KAUST Repository

    Chikalov, Igor

    2013-07-24

    This paper presents a new tool for the study of relationships between total path length or average depth and number of nodes of decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from UCI ML Repository [1]. © Springer-Verlag Berlin Heidelberg 2014.

  9. On averaging the Kubo-Hall conductivity of magnetic Bloch bands leading to Chern numbers

    International Nuclear Information System (INIS)

    Riess, J.

    1997-01-01

    The authors re-examine the topological approach to the integer quantum Hall effect in its original form, where an average of the Kubo-Hall conductivity of a magnetic Bloch band has been considered. For the precise definition of this average it is crucial to make a sharp distinction between the discrete Bloch wave numbers k₁, k₂ and the two continuous integration parameters α₁, α₂. The average over the parameter domain 0 ≤ αⱼ ≤ 2π, j = 1, 2, is taken at fixed k₁, k₂. They show how this can be transformed into a single integral over the continuous magnetic Brillouin zone (with nⱼ = number of unit cells in the j-direction, j = 1, 2), keeping k₁, k₂ fixed. This average prescription for the Hall conductivity of a magnetic Bloch band is exactly the same as the one used for a many-body system in the presence of disorder.

  10. Relationships between average depth and number of misclassifications for decision trees

    KAUST Repository

    Chikalov, Igor

    2014-02-14

    This paper presents a new tool for the study of relationships between the total path length or the average depth and the number of misclassifications for decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from UCI ML Repository [9] and datasets representing Boolean functions with 10 variables.

  11. Relationships between average depth and number of misclassifications for decision trees

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2014-01-01

    This paper presents a new tool for the study of relationships between the total path length or the average depth and the number of misclassifications for decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from UCI ML Repository [9] and datasets representing Boolean functions with 10 variables.

  12. Average electronegativity, electronic polarizability and optical basicity of lanthanide oxides for different coordination numbers

    International Nuclear Information System (INIS)

    Zhao Xinyu; Wang Xiaoli; Lin Hai; Wang Zhiqiang

    2008-01-01

    On the basis of new electronegativity values, the electronic polarizability and optical basicity of lanthanide oxides are calculated from the concept of average electronegativity given by Asokamani and Manjula. The estimated values are in close agreement with our previous conclusions. In particular, we obtain new data on the electronic polarizability and optical basicity of lanthanide sesquioxides for different coordination numbers (6-12). The present investigation suggests that both electronic polarizability and optical basicity increase gradually with increasing coordination number. We also observe another double-peak effect: the electronic polarizability and optical basicity of trivalent lanthanide oxides show a gradual decrease and then an abrupt increase at europia and ytterbia. Furthermore, close correlations are found among average electronegativity, optical basicity, electronic polarizability and coordination number.

  13. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    Science.gov (United States)

    Gao, Peng

    2018-04-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a multiscale stochastic partial differential equation. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out; the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation, and as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.
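
A schematic of the slow-fast structure described above, in generic averaging-principle notation; the paper's exact equations and coefficients are not reproduced here.

```latex
% Slow-fast system and its averaged coefficient (schematic, hedged):
% u^eps is the slow (Schrodinger) component, Y the fast process with
% stationary measure mu; the averaged equation replaces f by \bar f.
\[
  \mathrm{d}u^{\varepsilon} = \bigl[ A u^{\varepsilon}
      + f\bigl(u^{\varepsilon},\, Y_{t/\varepsilon}\bigr) \bigr]\,\mathrm{d}t,
  \qquad
  \bar{f}(u) = \int f(u, y)\, \mu(\mathrm{d}y).
\]
```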

  14. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    Science.gov (United States)

    Gao, Peng

    2018-06-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a multiscale stochastic partial differential equation. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out; the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation, and as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  15. The true bladder dose: on average thrice higher than the ICRU reference

    International Nuclear Information System (INIS)

    Barillot, I.; Horiot, J.C.; Maingon, P.; Bone-Lepinoy, M.C.; D'Hombres, A.; Comte, J.; Delignette, A.; Feutray, S.; Vaillant, D.

    1996-01-01

    The aim of this study is to compare the ICRU dose to doses at the bladder base located from ultrasonography measurements. Since 1990, the dose delivered to the bladder during utero-vaginal brachytherapy was systematically calculated at 3 or 4 points representative of the bladder base determined with ultrasonography. The ICRU Reference Dose (IRD) from films, the Maximum Dose (Dmax), the Mean Dose (Dmean) representative of the dose received by a large area of bladder mucosa, the Reference Dose Rate (RDR) and the Mean Dose Rate (MDR) were recorded. Material: from 1990 to 1994, 198 measurements were performed in 152 patients; 98 patients were treated for cervix carcinomas and 54 for endometrial carcinomas. Methods: bladder complications were classified using the French-Italian Syllabus. The influence of doses and dose rates on complications was tested using a non-parametric t test. Results: on average, IRD is 21 Gy ± 12 Gy, Dmax is 51 Gy ± 21 Gy and Dmean is 40 Gy ± 16 Gy. On average Dmax is thrice higher than IRD and Dmean twice higher than IRD. The same results are obtained for cervix and endometrium. Comparisons of dose rates were also performed: MDR is on average twice higher than RDR (RDR 48 cGy/h vs MDR 88 cGy/h). The five observed complications consist of incontinence only (3 G1, 1 G2, 1 G3). They are only statistically correlated with RDR, p=0.01 (46 cGy/h in patients without complications vs 74 cGy/h in patients with complications). However the full responsibility of RT remains doubtful and should be shared with surgery in all cases. In summary: bladder mucosa seems to tolerate much higher doses than previously recorded without increased risk of severe sequelae. However this finding is probably explained by our efforts to spare most of the bladder mucosa by (1) customised external irradiation therapy (4 fields, full bladder) and (2) reproduction of physiologic bladder filling during brachytherapy by intermittent clamping of the Foley catheter.

  16. Interpolation of property-values between electron numbers is inconsistent with ensemble averaging

    Energy Technology Data Exchange (ETDEWEB)

    Miranda-Quintana, Ramón Alain [Laboratory of Computational and Theoretical Chemistry, Faculty of Chemistry, University of Havana, Havana (Cuba); Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada); Ayers, Paul W. [Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada)

    2016-06-28

    In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer number of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem’s surroundings in determining its properties.
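
For context, the grand-canonical ensemble result the abstract refers to is the standard piecewise-linear (PPLB) form of the energy between integer electron numbers; this is background knowledge, not an equation taken from this paper.

```latex
% PPLB ensemble energy between integers N_0 and N_0 + 1:
\[
  E(N_0 + \omega) = (1 - \omega)\, E(N_0) + \omega\, E(N_0 + 1),
  \qquad 0 \le \omega \le 1.
\]
% The slope jumps at integer N (from -I to -A), so the ensemble energy is
% not differentiable there; a smooth interpolation therefore cannot arise
% from any grand-canonical ensemble, as the abstract argues.
```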

  17. Higher arithmetic an algorithmic introduction to number theory

    CERN Document Server

    Edwards, Harold M

    2008-01-01

    Although number theorists have sometimes shunned and even disparaged computation in the past, today's applications of number theory to cryptography and computer security demand vast arithmetical computations. These demands have shifted the focus of studies in number theory and have changed attitudes toward computation itself. The important new applications have attracted a great many students to number theory, but the best reason for studying the subject remains what it was when Gauss published his classic Disquisitiones Arithmeticae in 1801: Number theory is the equal of Euclidean geometry--some would say it is superior to Euclidean geometry--as a model of pure, logical, deductive thinking. An arithmetical computation, after all, is the purest form of deductive argument. Higher Arithmetic explains number theory in a way that gives deductive reasoning, including algorithms and computations, the central role. Hands-on experience with the application of algorithms to computational examples enables students to m...

  18. String fields, higher spins and number theory

    CERN Document Server

    Polyakov, Dimitri

    2018-01-01

    The book aims to analyze and explore deep and profound relations between string field theory, higher spin gauge theories and holography, disciplines that have been on the cutting edge of theoretical high energy physics and other fields. These intriguing relations and connections involve some profound ideas in number theory, which appear to be part of a unifying language to describe these connections.

  19. Control of underactuated driftless systems using higher-order averaging theory

    OpenAIRE

    Vela, Patricio A.; Burdick, Joel W.

    2003-01-01

    This paper applies a recently developed "generalized averaging theory" to construct stabilizing feedback control laws for underactuated driftless systems. These controls exponentially stabilize in the average; the actual system may orbit around the average. Conditions under which the orbit collapses to the averaged trajectory are given. An example validates the theory, demonstrating its utility.

  20. Higher Education Faculty in Mexico and the United States: Characteristics and Policy Issues. Understanding the Differences: A Working Paper Series on Higher Education in the U. S. and Mexico. Working Paper Number 2.

    Science.gov (United States)

    Lovell, Cheryl D.; Sanchez, Maria Dolores Soler

    This working paper analyzes higher education faculty characteristics in Mexico and the United States. The first section describes and compares Mexican and U.S. faculty characteristics and conditions, including total number of faculty, student-teacher ratios, full- versus part-time status, rank, tenure, average salaries, gender and ethnicity, and…

  1. Total number albedo and average cosine of the polar angle of low-energy photons reflected from water

    Directory of Open Access Journals (Sweden)

    Marković Srpko

    2007-01-01

    The total number albedo and average cosine of the polar angle for water and an initial photon energy range from 20 keV to 100 keV are presented in this paper. A water shield in the form of a thick, homogeneous plate and perpendicular incidence of the monoenergetic photon beam are assumed. The results were obtained through Monte Carlo simulations of photon reflection by means of the MCNP computer code. Calculated values for the total number albedo were compared with previously published data and good agreement was confirmed. The dependence of the average cosine of the polar angle on energy is studied in detail. It has been found that the total average cosine of the polar angle has values in the narrow interval 0.66-0.67, approximately corresponding to a reflection angle of 48°, and that it does not depend on the initial photon energy.

  2. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  3. On the time-averaging of ultrafine particle number size spectra in vehicular plumes

    Directory of Open Access Journals (Sweden)

    X. H. Yao

    2006-01-01

    Ultrafine vehicular particle (<100 nm) number size distributions presented in the literature are mostly averages of long scan-time (~30 s or more) spectra, mainly due to the non-availability of commercial instruments that can measure particle distributions in the <10 nm to 100 nm range faster than 30 s, even though individual researchers have built faster (1-2.5 s) scanning instruments. With the introduction of the Engine Exhaust Particle Sizer (EEPS) in 2004, high time-resolution (1 full 32-channel spectrum per second) particle size distribution data became possible, allowing atmospheric researchers to study the characteristics of ultrafine vehicular particles in rapidly and perhaps randomly varying high concentration environments such as roadside, on-road and tunnel. In this study, particle size distributions in these environments were frequently found to vary on a timescale as short as one second. This poses the question of the generality of using averages of long scan-time spectra for dynamic and/or mechanistic studies in rapidly and perhaps randomly varying high concentration environments. One-second EEPS data taken at roadside, on roads and in tunnels by a mobile platform are time-averaged to yield 5, 10, 30 and 120 s distributions to answer this question.
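
A minimal sketch of the time-averaging operation the study performs on 1 s spectra; the array sizes and NumPy implementation are illustrative assumptions (a real EEPS record has 32 size channels per second).

```python
import numpy as np

# Average consecutive 1 s spectra into longer averaging windows.
def block_average(spectra, window_s):
    """Average consecutive 1 s spectra over `window_s` seconds."""
    n = (len(spectra) // window_s) * window_s        # drop the ragged tail
    return spectra[:n].reshape(-1, window_s, spectra.shape[1]).mean(axis=1)

one_second = np.random.rand(600, 32)                 # 10 min of fake 1 Hz data
for w in (5, 10, 30, 120):
    print(w, block_average(one_second, w).shape)     # (120, 32), (60, 32), ...
```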

  4. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers' average travel speed over selected sections of the road and is normally called average speed control... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.

  5. Higher-order Nielsen numbers

    Directory of Open Access Journals (Sweden)

    Saveliev Peter

    2005-01-01

    Suppose X, Y are manifolds and f, g: X → Y are maps. The well-known coincidence problem studies the coincidence set C = {x : f(x) = g(x)}. The number m = dim X − dim Y is called the codimension of the problem. More general is the preimage problem. For a map f: X → Z and a submanifold Y of Z, it studies the preimage set C = {x : f(x) ∈ Y}, and the codimension is m = dim X + dim Y − dim Z. In the case of codimension 0, the classical Nielsen number N(f, Y) is a lower estimate of the number of points in C changing under homotopies of f, and for an arbitrary codimension, of the number of components of C. We extend this theory to take into account other topological characteristics of C. The goal is to find a "lower estimate" of the bordism group Ωp(C) of C. The answer is the Nielsen group Sp(f, Y) defined as follows. In the classical definition, the Nielsen equivalence of points of C based on paths is replaced with an equivalence of singular submanifolds of C based on bordisms. We let S′p(f, Y) = Ωp(C)/∼N; then the Nielsen group of order p is the part of S′p(f, Y) preserved under homotopies of f. The Nielsen number Np(f, Y) of order p is the rank of this group (then N(f, Y) = N0(f, Y)). These numbers are new obstructions to removability of coincidences and preimages. Some examples and computations are provided.

  6. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  7. Higher P-Wave Dispersion in Migraine Patients with Higher Number of Attacks

    Directory of Open Access Journals (Sweden)

    A. Koçer

    2012-01-01

    Objective and Aim. An imbalance of the sympathetic system may explain many of the clinical manifestations of migraine. We aimed to evaluate P-waves as a marker of sympathetic system function in migraine patients and healthy controls. Materials and Methods. Thirty-five patients with the episodic type of migraine (migraine complaints for 5 years or more, BMI < 30 kg/m²) and 30 controls were included in our study. We measured P-wave durations (minimum, maximum, and dispersion) from 12-lead ECG recordings during pain-free periods. ECGs were transferred to a personal computer via a scanner and then magnified ×400 using Adobe Photoshop software. Results. P-wave durations were found to be similar between migraine patients and controls. Although PWD (P-wave dispersion) was similar, the mean value was higher in migraine subjects. PWD was positively correlated with Pmax (P<0.01). Attack number per month and male gender were the factors related to PWD (P<0.01). Conclusions. Many previous studies suggested that increased sympathetic activity may cause an increase in PWD. We found that the PWD of migraine patients was higher than that of controls, and PWD was related to attack number per month and male gender. Further studies are needed to explain the chronic effects of migraine.

  8. The effects of sweep numbers per average and protocol type on the accuracy of the p300-based concealed information test.

    Science.gov (United States)

    Dietrich, Ariana B; Hu, Xiaoqing; Rosenfeld, J Peter

    2014-03-01

    In the first of two experiments, we compared the accuracy of the P300 concealed information test protocol as a function of numbers of trials experienced by subjects and ERP averages analyzed by investigators. Contrary to Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), we found no evidence that 100 trial based averages are more accurate than 66 or 33 trial based averages (all numbers led to accuracies of 84-94 %). There was actually a trend favoring the lowest trial numbers. The second study compared numbers of irrelevant stimuli recalled and recognized in the 3-stimulus protocol versus the complex trial protocol (Rosenfeld in Memory detection: theory and application of the concealed information test, Cambridge University Press, New York, pp 63-89, 2011). Again, in contrast to expectations from Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), there were no differences between protocols, although there were more irrelevant stimuli recognized than recalled, and irrelevant 4-digit number group stimuli were neither recalled nor recognized as well as irrelevant city name stimuli. We therefore conclude that stimulus processing in the P300-based complex trial protocol-with no more than 33 sweep averages-is adequate to allow accurate detection of concealed information.
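
A hedged sketch of why the number of sweeps per average matters: averaging n noisy sweeps shrinks the residual noise by roughly 1/sqrt(n). The waveform shape, noise level, and sweep counts below are illustrative, not the study's data.

```python
import numpy as np

# Averaging n sweeps reduces noise SD by ~1/sqrt(n), the key trade-off
# behind the 33 vs 66 vs 100 sweep comparison in the abstract.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
p300 = np.exp(-((t - 0.4) / 0.05) ** 2)          # idealized P300-like bump

def average_of_sweeps(n_sweeps, noise_sd=3.0):
    sweeps = p300 + rng.normal(0, noise_sd, (n_sweeps, t.size))
    return sweeps.mean(axis=0)

for n in (33, 66, 100):
    resid = average_of_sweeps(n) - p300
    print(n, f"residual noise SD ~ {resid.std():.2f}")  # shrinks as 1/sqrt(n)
```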

  9. All ternary permutation constraint satisfaction problems parameterized above average have kernels with quadratic numbers of variables

    DEFF Research Database (Denmark)

    Gutin, Gregory; Van Iersel, Leo; Mnich, Matthias

    2010-01-01

    A ternary Permutation-CSP is specified by a subset Π of the symmetric group S3. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering α of V that maximizes the number of triples whose rearrangement (under α) follows a permutation in Π. We prove that all ternary Permutation-CSPs parameterized above average have kernels with quadratic numbers of variables.
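
A sketch of the objective under one standard reading of the semantics: a triple is satisfied if sorting it into increasing α-order applies a permutation belonging to Π (shown here for betweenness). The encoding of Π as rank tuples is an assumption for illustration.

```python
# Count constraint triples satisfied by a linear ordering alpha.
def satisfied_count(constraints, alpha, Pi):
    pos = {v: i for i, v in enumerate(alpha)}
    n_sat = 0
    for triple in constraints:
        # Permutation that rearranges the triple into increasing alpha-order.
        ranks = tuple(sorted(range(3), key=lambda i: pos[triple[i]]))
        n_sat += ranks in Pi
    return n_sat

# Betweenness example: Pi accepts "middle element stays in the middle".
Pi = {(0, 1, 2), (2, 1, 0)}
print(satisfied_count([("a", "b", "c")], ["a", "b", "c"], Pi))  # 1
print(satisfied_count([("a", "b", "c")], ["b", "a", "c"], Pi))  # 0
```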

  10. Higher-order Nielsen numbers

    Directory of Open Access Journals (Sweden)

    Peter Saveliev

    2005-04-01

    Suppose X, Y are manifolds and f, g: X → Y are maps. The well-known coincidence problem studies the coincidence set C = {x : f(x) = g(x)}. The number m = dim X − dim Y is called the codimension of the problem. More general is the preimage problem. For a map f: X → Z and a submanifold Y of Z, it studies the preimage set C = {x : f(x) ∈ Y}, and the codimension is m = dim X + dim Y − dim Z. In the case of codimension 0, the classical Nielsen number N(f, Y) is a lower estimate of the number of points in C changing under homotopies of f, and for an arbitrary codimension, of the number of components of C. We extend this theory to take into account other topological characteristics of C. The goal is to find a "lower estimate" of the bordism group Ωp(C) of C. The answer is the Nielsen group Sp(f, Y) defined as follows. In the classical definition, the Nielsen equivalence of points of C based on paths is replaced with an equivalence of singular submanifolds of C based on bordisms. We let S′p(f, Y) = Ωp(C)/∼N; then the Nielsen group of order p is the part of S′p(f, Y) preserved under homotopies of f. The Nielsen number Np(f, Y) of order p is the rank of this group (then N(f, Y) = N0(f, Y)). These numbers are new obstructions to removability of coincidences and preimages. Some examples and computations are provided.

  11. The average number of alpha-particle hits to the cell nucleus required to eradicate a tumour cell population

    International Nuclear Information System (INIS)

    Roeske, John C; Stinchcomb, Thomas G

    2006-01-01

    Alpha-particle emitters are currently being considered for the treatment of micrometastatic disease. Based on in vitro studies, it has been speculated that only a few alpha-particle hits to the cell nucleus are considered lethal. However, such estimates do not consider the stochastic variations in the number of alpha-particle hits, energy deposited, or in the cell survival process itself. Using a tumour control probability (TCP) model for alpha-particle emitters, we derive an estimate of the average number of hits to the cell nucleus required to provide a high probability of eradicating a tumour cell population. In simulation studies, our results demonstrate that the average number of hits required to achieve a 90% TCP for 10⁴ clonogenic cells ranges from 18 to 108. Those cells that have large cell nuclei, high radiosensitivities and alpha-particle emissions occurring primarily in the nuclei tended to require more hits. As the clinical implementation of alpha-particle emitters is considered, this type of analysis may be useful in interpreting clinical results and in designing treatment strategies to achieve a favourable therapeutic outcome. (note)
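
A hedged sketch of a Poisson hit-and-kill TCP calculation consistent with the quantities discussed above; the functional form and the lethal-hit probability q are assumptions for illustration, not the authors' model.

```python
import math

# Assumed toy model: each cell receives a Poisson number of alpha hits with
# mean m; a hit is lethal with probability q, so a cell survives with
# probability exp(-m*q), and TCP = exp(-N * survival).
def tcp(n_cells, mean_hits, p_lethal):
    survival = math.exp(-mean_hits * p_lethal)
    return math.exp(-n_cells * survival)

# With 1e4 clonogens, find the mean hits needed for 90% TCP at an assumed q.
q = 0.3
for m in range(1, 200):
    if tcp(1e4, m, q) >= 0.9:
        print("mean hits for 90% TCP:", m)  # lands in the tens, cf. 18-108
        break
```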

  12. Neocortical glial cell numbers in human brains

    DEFF Research Database (Denmark)

    Pelvig, D.P.; Pakkenberg, H.; Stark, A.K.

    2008-01-01

    Stereological cell counting was applied to post-mortem neocortices of human brains from 31 normal individuals, age 18-93 years, 18 females (average age 65 years, range 18-93) and 13 males (average age 57 years, range 19-87). The cells were differentiated in astrocytes, oligodendrocytes, microglia and neurons, and counting was done in each of the four lobes. The study showed that the different subpopulations of glial cells behave differently as a function of age; the number of oligodendrocytes showed a significant 27% decrease over adult life and a strong correlation to the total number of neurons, while the total astrocyte number is constant through life; finally, males have a 28% higher number of neocortical glial cells and a 19% higher neocortical neuron number than females. The overall total number of neocortical neurons and glial cells was 49.3 billion in females and 65.2 billion in males.

  13. Neocortical glial cell numbers in human brains.

    Science.gov (United States)

    Pelvig, D P; Pakkenberg, H; Stark, A K; Pakkenberg, B

    2008-11-01

    Stereological cell counting was applied to post-mortem neocortices of human brains from 31 normal individuals, age 18-93 years, 18 females (average age 65 years, range 18-93) and 13 males (average age 57 years, range 19-87). The cells were differentiated in astrocytes, oligodendrocytes, microglia and neurons, and counting was done in each of the four lobes. The study showed that the different subpopulations of glial cells behave differently as a function of age; the number of oligodendrocytes showed a significant 27% decrease over adult life and a strong correlation to the total number of neurons, while the total astrocyte number is constant through life; finally, males have a 28% higher number of neocortical glial cells and a 19% higher neocortical neuron number than females. The overall total number of neocortical neurons and glial cells was 49.3 billion in females and 65.2 billion in males, a difference of 24% with a high biological variance. These numbers can serve as reference values in quantitative studies of the human neocortex.

  14. Average gluon and quark jet multiplicities at higher orders

    Energy Technology Data Exchange (ETDEWEB)

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

    We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS-bar factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS-bar scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q² terms by the renormalization group, in excellent agreement with the present world average.

  15. Relationship between gustatory function and average number of taste buds per fungiform papilla measured by confocal laser scanning microscopy in humans.

    Science.gov (United States)

    Saito, Takehisa; Ito, Tetsufumi; Ito, Yumi; Manabe, Yasuhiro; Sano, Kazuo

    2017-02-01

    The aim of this study was to elucidate the relationship between the gustatory function and average number of taste buds per fungiform papilla (FP) in humans. Systemically healthy volunteers (n = 211), pre-operative patients with chronic otitis media (n = 79), and postoperative patients, with or without a chorda tympani nerve (CTN) severed during middle ear surgery (n = 63), were included. Confocal laser scanning microscopy was employed to observe fungiform taste buds because it allows many FP to be observed non-invasively in a short period of time. Taste buds in an average of 10 FP in the midlateral region of the tongue were counted. In total, 3,849 FP were observed in 353 subjects. The gustatory function was measured by electrogustometry (EGM). An inverse relationship was found between the gustatory function and average number of fungiform taste buds per papilla. The healthy volunteers showed a lower EGM threshold (better gustatory function) and had more taste buds than did the patients with otitis media, and the patients with otitis media showed a lower EGM threshold and had more taste buds than did postoperative patients, reflecting the severity of damage to the CTN. It was concluded that the confocal laser scanning microscope is a very useful tool for using to observe a large number of taste buds non-invasively. © 2017 Eur J Oral Sci.

  16. Flight Measurements of Average Skin-Friction Coefficients on a Parabolic Body of Revolution (NACA RM-10) at Mach Numbers from 1.0 to 3.7

    Science.gov (United States)

    Loposer, J. Dan; Rumsey, Charles B.

    1954-01-01

    Measurements of average skin-friction coefficients have been made on six rocket-powered free-flight models by using the boundary-layer rake technique. The model configuration was the NACA RM-10, a 12.2-fineness-ratio parabolic body of revolution with a flat base. Measurements were made over a Mach number range from 1.0 to 3.7, a Reynolds number range from 40 × 10⁶ to 170 × 10⁶ based on length to the measurement station, and with aerodynamic heating conditions varying from strong skin heating to strong skin cooling. The measurements show the same trends over the test ranges as Van Driest's theory for the turbulent boundary layer on a flat plate. The measured values are approximately 7 percent higher than the values of the flat-plate theory. A comparison which takes into account the differences in Reynolds number is made between the present results and skin-friction measurements obtained on NACA RM-10 scale models in the Langley 4- by 4-foot supersonic pressure tunnel, the Lewis 8- by 6-foot supersonic tunnel, and the Langley 9-inch supersonic tunnel. Good agreement is shown at all but the lowest tunnel Reynolds number conditions. A simple empirical equation is developed which represents the measurements over the range of the tests.

  17. When larger brains do not have more neurons: Increased numbers of cells are compensated by decreased average cell size across mouse individuals

    Directory of Open Access Journals (Sweden)

    Suzana eHerculano-Houzel

    2015-06-01

    There is a strong trend toward increased brain size in mammalian evolution, with larger brains composed of more and larger neurons than smaller brains across species within each mammalian order. Does the evolution of increased numbers of brain neurons, and thus larger brain size, occur simply through the selection of individuals with more and larger neurons, and thus larger brains, within a population? That is, do individuals with larger brains also have more, and larger, neurons than individuals with smaller brains, such that allometric relationships across species are simply an extension of intraspecific scaling? Here we show that this is not the case across adult male mice of a similar age. Rather, increased numbers of neurons across individuals are accompanied by increased numbers of other cells and smaller average cell size of both types, in a trade-off that explains how increased brain mass does not necessarily ensue. Fundamental regulatory mechanisms thus must exist that tie numbers of neurons to numbers of other cells and to average cell size within individual brains. Finally, our results indicate that changes in brain size in evolution are not an extension of individual variation in numbers of neurons, but rather occur through step changes that must simultaneously increase numbers of neurons and cause cell size to increase, rather than decrease.

  18. Main factors affecting average number of teats in pigs

    Directory of Open Access Journals (Sweden)

    Emil Krupa

    2016-09-01

    The influence of several factors (breed, year and season of farrowing, herd, parity order, sire of litter, total number of piglets born (TNB), number of piglets born alive (NBA), number of weaned piglets (NW), and linear and quadratic regressions) on the number of teats was analysed. The number of teats was recorded for all piglets in the litter within ten days of birth and expressed as the arithmetic mean per litter (the sum of the teat numbers of all piglets in the litter divided by the number of piglets in that litter), separately for the first litter (ANT1) and for second and subsequent litters (ANT2+). The coefficient of determination was 0.46 and 0.33 for ANT1 and ANT2+, respectively. A statistically significant influence (P<0.001) on ANT1 and ANT2+ was determined for the effects of year and season of farrowing, herd, parity order (only for ANT2+) and sire of litter. An effect of breed was found only on ANT2+ (P<0.001). The remaining factors had negligible or no impact on the traits. Based on the data available for analyses, the obtained results will serve as a relevant set-up in developing the model for genetic evaluation of these traits.

  19. Average number of neutrons in π-p, π-n, and π-12C interactions at 4 GeV/c

    International Nuclear Information System (INIS)

    Bekmirzaev, R.N.; Grishin, V.G.; Muminov, M.M.; Suvanov, I.; Trka, Z.; Trkova, J.

    1984-01-01

    The average numbers of secondary neutrons in π⁻p, π⁻n, and π⁻¹²C interactions at 4 GeV/c have been determined by investigating secondary neutral stars produced by neutrons in a propane bubble chamber. The following values were obtained for the charge-exchange coefficients: α(p→n) = 0.39 ± 0.04 and α(n→p) = 0.37 ± 0.08.

  19. Potential host number in cuckoo bees (Psithyrus subgen.) increases toward higher elevations

    Directory of Open Access Journals (Sweden)

    Jean-Nicolas Pradervand

    2013-07-01

    In severe and variable conditions, specialized resource selection strategies should be less frequent because extinction risks increase for species that depend on a single and unstable resource. Psithyrus (Bombus subgenus Psithyrus) are bumblebee parasites that usurp Bombus nests and display inter-specific variation in the number of hosts they parasitize. Using a phylogenetic comparative framework, we show that Psithyrus species at higher elevations display a higher number of host species compared with species restricted to lower elevations. Species inhabiting high elevations also cover a larger temperature range, suggesting that species able to occur in colder conditions may benefit from recruitment from populations occurring in warmer conditions. Our results provide evidence for an 'altitudinal niche breadth hypothesis' in parasitic species, showing a decrease in the parasites' specialization along the elevational gradient, and also suggesting that Rapoport's rule might apply to Psithyrus.

  1. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases during transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
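
The average path length studied above is a plain graph quantity; a minimal BFS-based sketch (on a toy star graph, not a Husimi cactus) shows how it is computed.

```python
from collections import deque

# Average path length: mean BFS distance over all ordered node pairs.
def apl(adj):
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(apl(star))  # 1.5
```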

  2. Regulation of chloroplast number and DNA synthesis in higher plants. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Mullet, J.E.

    1995-11-10

    The long term objective of this research is to understand the process of chloroplast development and its coordination with leaf development in higher plants. This is important because the photosynthetic capacity of plants is directly related to leaf and chloroplast development. This research focuses on obtaining a detailed description of leaf development and the early steps in chloroplast development including activation of plastid DNA synthesis, changes in plastid DNA copy number, activation of chloroplast transcription and increases in plastid number per cell. The grant will also begin analysis of specific biochemical mechanisms by isolation of the plastid DNA polymerase, and identification of genetic mutants which are altered in their accumulation of plastid DNA and plastid number per cell.

  3. Convolutional Code Based PAPR Reduction Scheme for Multicarrier Transmission with Higher Number of Subcarriers

    Directory of Open Access Journals (Sweden)

    SAJJAD ALIMEMON

    2017-10-01

    Multicarrier transmission has become a prominent technique in high-speed wireless communication systems, owing to its frequency diversity, small inter-symbol interference in multipath fading channels, simple equalizer structure, and high bandwidth efficiency. Nevertheless, in the time domain, the multicarrier transmission signal has a high PAPR (Peak-to-Average Power Ratio) that translates into low power amplifier efficiency. To decrease the PAPR, a CCSLM (Convolutional Code Selective Mapping) scheme for multicarrier transmission with a high number of subcarriers is proposed in this paper. The proposed scheme is based on the SLM method and employs an interleaver and convolutional coding. Related works on PAPR reduction have considered either 128 or 256 subcarriers. However, the PAPR of a multicarrier transmission signal increases as the number of subcarriers increases. The proposed method achieves significant PAPR reduction for a higher number of subcarriers as well as better power amplifier efficiency. Simulation outcomes validate the usefulness of the proposed scheme.
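
A minimal sketch of the SLM idea underlying the proposed scheme: try several phase sequences and keep the candidate with the lowest PAPR. This is generic SLM with random phases, not the paper's convolutional-code variant; the subcarrier and candidate counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1024                                        # a "higher" subcarrier count

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

symbols = rng.choice([1, -1, 1j, -1j], N)       # QPSK data on N subcarriers
candidates = []
for _ in range(8):                              # 8 candidate phase sequences
    phases = rng.choice([1, -1], N)
    candidates.append(np.fft.ifft(symbols * phases))
best = min(candidates, key=papr_db)
print(f"baseline {papr_db(np.fft.ifft(symbols)):.1f} dB -> SLM {papr_db(best):.1f} dB")
```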

  4. The average number of critical rank-one approximations to a tensor

    NARCIS (Netherlands)

    Draisma, J.; Horobet, E.

    2014-01-01

    Motivated by the many potential applications of low-rank multi-way tensor approximations, we set out to count the rank-one tensors that are critical points of the distance function to a general tensor v. As this count depends on v, we average over v drawn from a Gaussian distribution, and find

  5. Applicability of higher-order TVD method to low mach number compressible flows

    International Nuclear Information System (INIS)

    Akamatsu, Mikio

    1995-01-01

    Steep gradients of fluid density are an influential factor in spurious oscillations in numerical solutions of low Mach number (M<<1) compressible flows. The total variation diminishing (TVD) scheme is a promising remedy to overcome this problem and obtain accurate solutions. TVD schemes for high-speed flows are, however, not compatible with methods commonly used in low Mach number flows with pressure-based formulations. In the present study a higher-order TVD scheme is constructed on a modified form of each individual scalar equation of primitive variables. It is thus clarified that the concept of TVD is applicable to low Mach number flows within the framework of the existing numerical method. Results of test problems of the moving interface of two-component gases with density ratio ≥ 4 demonstrate the accurate and robust (wiggle-free) profile of the scheme. (author)

  6. Determination of seasonal, diurnal, and height resolved average number concentration in a pollution impacted rural continental location

    Science.gov (United States)

    Bullard, Robert L.; Stanier, Charles O.; Ogren, John A.; Sheridan, Patrick J.

    2013-05-01

    The impact of aerosols on Earth's radiation balance and the associated climate forcing effects of aerosols represent significant uncertainties in assessment reports. The main source of ultrafine aerosols in the atmosphere is the nucleation and subsequent growth of gas phase aerosol precursors into liquid or solid phase particles. Long term records of aerosol number, nucleation event frequency, and vertical profiles of number concentration are rare. The data record from multiagency monitoring assets at Bondville, IL can contribute important information on long term and vertically resolved patterns. Although particle number size distribution data are only occasionally available at Bondville, highly time-resolved particle number concentration data have been measured for nearly twenty years by the NOAA ESRL Global Monitoring Division. Furthermore, vertically-resolved aerosol counts and other aerosol physical parameters are available from more than 300 flights of the NOAA Airborne Aerosol Observatory (AAO). These data sources are used to better understand the seasonal, diurnal, and vertical variation and trends in atmospheric aerosols. The highest peaks in condensation nuclei greater than 14 nm occur during the spring months (May, April) with slightly lower peaks during the fall months (September, October). The diurnal pattern of aerosol number has a midday peak and the timing of the peak has seasonal patterns (earlier during warm months and later during colder months). The seasonal and diurnal patterns of high particle number peaks correspond to seasons and times of day associated with low aerosol mass and surface area. Average vertical profiles show a nearly monotonic decrease with altitude in all months, and with peak magnitudes occurring in the spring and fall. Individual flight tracks show evidence of plumes (i.e., enhanced aerosol number is limited to a small altitude range, is not homogeneous horizontally, or both) as well as periods with enhanced particle number

  7. Contemporary and prospective fuel cycles for WWER-440 based on new assemblies with higher uranium capacity and higher average fuel enrichment

    International Nuclear Information System (INIS)

    Gagarinskiy, A.A.; Saprykin, V.V.

    2009-01-01

    RRC 'Kurchatov Institute' has performed an extensive cycle of calculations intended to validate the opportunities for improving different fuel cycles for WWER-440 reactors. Work was performed to upgrade and improve WWER-440 fuel cycles on the basis of second-generation fuel assemblies, allowing core thermal power to be uprated to 107-108% of its nominal value (1375 MW) while maintaining the same fuel operation lifetime. Currently, intensive work is underway to develop fuel cycles based on second-generation assemblies with higher fuel capacity and average fuel enrichment per assembly increased up to 4.87% of U-235. The fuel capacity of second-generation assemblies was increased by eliminating the central apertures of fuel pellets and extending the pellet diameter through reduced fuel cladding thickness. This paper summarizes the results of work performed in the field of WWER-440 fuel cycle modernization and presents yet unemployed opportunities and prospects for further improvement of WWER-440 neutronic and operating parameters by means of additional optimization of fuel assembly designs and fuel element arrangements. (Authors)

  8. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  9. The growth of the mean average crossing number of equilateral polygons in confinement

    International Nuclear Information System (INIS)

    Arsuaga, J; Borgo, B; Scharein, R; Diao, Y

    2009-01-01

    The physical and biological properties of collapsed long polymer chains, as well as of highly condensed biopolymers (such as DNA in all organisms), are known to be determined, at least in part, by their topological and geometrical properties. With the purpose of characterizing the topological properties of such condensed systems, equilateral random polygons restricted to confined volumes are often used. However, very few analytical results are known. In this paper, we investigate the effect of volume confinement on the mean average crossing number (ACN) of equilateral random polygons. The mean ACN of knots and links under confinement provides a simple alternative measurement for the topological complexity of knots and links in the statistical sense. For an equilateral random polygon of n segments without any volume confinement constraint, it is known that its mean ACN ⟨ACN⟩ is of the order 3/16 n log n + O(n). Here we model the confining volume as a simple sphere of radius R. We provide an analytical argument which shows that ⟨ACN⟩ of an equilateral random polygon of n segments under extreme confinement (R small relative to n) grows as O(n²). We propose to model the growth of ⟨ACN⟩ as a(R)n² + b(R)n ln(n) under a less-extreme confinement condition, where a(R) and b(R) are functions of R, with R being the radius of the confining sphere. Computer simulations performed show a fairly good fit using this model.

  10. Experimental study on the potential of higher octane number fuels for low load partially premixed combustion

    NARCIS (Netherlands)

    Wang, S.; van der Waart, K.; Somers, B.; de Goey, P.

    2017-01-01

    The optimal fuel for partially premixed combustion (PPC) is considered to be a gasoline boiling range fuel with an octane number around 70. Higher octane number fuels are considered problematic with low load and idle conditions. In previous studies mostly the intake air temperature did not exceed 30

  11. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
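As a rough illustration of the idea (not the paper's spreadsheet model), the sketch below computes a running average of the most recent delta values over a stream of patient results; the patient identifiers, window size, and data are invented for the example. A constant bias added to new results pulls the delta average away from zero, which is the signal the tool looks for.

```python
import numpy as np

def average_of_delta(results, window):
    """Running average of the most recent `window` delta values, where a
    delta is the difference between a patient's current and previous
    result. `results` is a chronological list of (patient_id, value)."""
    last_seen, deltas, aod = {}, [], []
    for patient, value in results:
        if patient in last_seen:
            deltas.append(value - last_seen[patient])
            aod.append(np.mean(deltas[-window:]))
        last_seen[patient] = value
    return aod

stream = [("a", 5.0), ("b", 6.1), ("a", 5.1), ("b", 6.0),
          ("a", 6.0), ("b", 6.9)]   # the last two results carry a +0.9 bias
print(average_of_delta(stream, window=4))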

  12. 4π-spectrometer technique for measurements of secondary neutron average number in nuclear fission by 252Cf neutrons

    International Nuclear Information System (INIS)

    Vasil'ev, Yu.A.; Barashkov, Yu.A.; Golovanov, O.A.; Sidorov, L.V.

    1977-01-01

A method for determining the average number of secondary neutrons ν̄ produced in nuclear fission by neutrons of the 252Cf fission spectrum by means of a 4π time-of-flight spectrometer is described. Layers of 252Cf and of the isotope studied are placed close to each other; if the isotope layer density is 1 mg/cm², the probability of its fission is about 10⁻⁵ per spontaneous fission of californium. Fission fragments of 252Cf and of the isotope investigated have been detected by two surface-barrier counters with an efficiency close to 100%. The layers and the counters are situated in a measuring chamber placed at the center of the 4π time-of-flight spectrometer. The latter is utilized as a neutron counter because of its fast response. The method has been verified by carrying out measurements for 235U and 239Pu. A comparison of the experimental and calculated results shows that the suggested method can be applied to determine the number of secondary neutrons in the fission of isotopes that have not yet been investigated

  13. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  14. The association between higher education and approximate number system acuity.

    Science.gov (United States)

    Lindskog, Marcus; Winman, Anders; Juslin, Peter

    2014-01-01

    Humans are equipped with an approximate number system (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS-precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (First year) or late (Third year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity.

  15. The Association Between Higher Education and Approximate Number System Acuity

    Directory of Open Access Journals (Sweden)

    Marcus eLindskog

    2014-05-01

Humans are equipped with an Approximate Number System (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (1st year) or late (3rd year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity.

  16. The association between higher education and approximate number system acuity

    Science.gov (United States)

    Lindskog, Marcus; Winman, Anders; Juslin, Peter

    2014-01-01

    Humans are equipped with an approximate number system (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS-precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (First year) or late (Third year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity. PMID:24904478

  17. Limits on hypothesizing new quantum numbers

    International Nuclear Information System (INIS)

    Goldstein, G.R.; Moravcsik, M.J.

    1986-01-01

According to a recent theorem, for a general quantum-mechanical system undergoing a process, one can tell from measurements on this system whether or not it is characterized by a quantum number whose existence is unknown to the observer, even though the detecting equipment used by the observer is unable to distinguish among the various possible values of the ''secret'' quantum number and hence always averages over them. The present paper deals with situations in which this averaging is avoided and hence the ''secret'' quantum number remains ''secret.'' This occurs when a new quantum number is hypothesized in such a way that all past measurements pertain to the system with one and the same value of the ''secret'' quantum number, or when the new quantum number is related to the old ones by a specific dynamical model providing a one-to-one correspondence. In the first of these cases, however, the one and the same state of the ''secret'' quantum number needs to be a nondegenerate one. If it is degenerate, the theorem can again be applied. This last feature provides a tool for experimentally testing symmetry breaking and the reestablishment of symmetries in asymptotic regions. The situation is illustrated with historical examples such as isospin and strangeness, as well as with some contemporary schemes involving spaces of higher dimensionality

  18. Increasing average period lengths by switching of robust chaos maps in finite precision

    Science.gov (United States)

    Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.

    2008-12-01

Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits as compared to simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps which is found to successfully pass stringent statistical tests of randomness.
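A toy version of the experiment can be written in a few lines: iterate maps at reduced precision and measure how long the trajectory takes to revisit a state. The rounding-based precision model and the choice of maps below are assumptions for illustration (the paper's Robust Chaos generalization of the Logistic map is not reproduced here); the tent map stands in as a standard robust-chaos example. For a single fixed map the recurrence loop length is the period of the orbit entered; under random switching it is the analogous recurrence time.

```python
import random

def logistic(x):                     # classic logistic map at r = 4
    return 4.0 * x * (1.0 - x)

def tent(x):                         # tent map, a standard robust-chaos example
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def recurrence_time(x0, maps, digits, switch, max_iter=10**6):
    """Iterate with states rounded to `digits` decimal digits (a crude
    model of finite precision) until a previously visited state recurs."""
    seen, x = {}, round(x0, digits)
    for t in range(max_iter):
        if x in seen:
            return t - seen[x]       # length of the recurrence loop
        seen[x] = t
        f = random.choice(maps) if switch else maps[0]
        x = round(f(x), digits)
    return None

random.seed(0)
print("single map:", recurrence_time(0.37, [logistic], 5, switch=False))
print("switching :", recurrence_time(0.37, [logistic, tent], 5, switch=True))
```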

  19. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  20. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
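A minimal numpy sketch of this kind of robust averaging is given below; the rejection thresholds, the plane fit used to remove alignment drift, and the NaN convention for voids are assumptions for illustration, not the authors' algorithm or parameters.

```python
import numpy as np

def robust_phase_average(maps, max_void_frac=0.2, min_coverage=0.8):
    """Robust pixelwise average of a stack of phase maps (NaN marks voids).

    1) reject whole maps whose invalid-area fraction exceeds max_void_frac;
    2) remove a best-fit plane from each map (alignment/tilt drift);
    3) keep only pixels valid in at least min_coverage of surviving maps.
    Returns (mean map, per-pixel standard deviation)."""
    kept = [m for m in maps if np.isnan(m).mean() <= max_void_frac]
    ny, nx = kept[0].shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    detrended = []
    for m in kept:
        ok = ~np.isnan(m)
        A = np.column_stack([xx[ok], yy[ok], np.ones(ok.sum())])
        c, *_ = np.linalg.lstsq(A, m[ok], rcond=None)
        detrended.append(m - (c[0] * xx + c[1] * yy + c[2]))
    stack = np.array(detrended)
    good = np.mean(~np.isnan(stack), axis=0) >= min_coverage
    mean = np.where(good, np.nanmean(stack, axis=0), np.nan)
    std = np.where(good, np.nanstd(stack, axis=0), np.nan)
    return mean, std
```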

  1. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  2. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
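For reference, the quantity being approximated is the spectrum-weighted mean kinetic energy, ⟨T⟩ = ∫ T N(T) dT / ∫ N(T) dT. The sketch below evaluates it numerically for an allowed beta shape with the Fermi function set to 1 (precisely the Coulomb correction that makes exact calculations costly), so it illustrates the target quantity rather than the paper's fast approximation; the endpoint energy Q is an arbitrary example.

```python
import numpy as np

ME = 0.511  # electron rest energy, MeV

def allowed_spectrum(T, Q):
    """Allowed beta shape N(T) ~ p*W*(Q-T)^2 with the Fermi function
    F(Z,W) set to 1 for simplicity."""
    W = T + ME                                   # total electron energy
    p = np.sqrt(np.maximum(W**2 - ME**2, 0.0))   # electron momentum
    return p * W * (Q - T) ** 2

def average_beta_energy(Q, n=5000):
    """<T> = int T N(T) dT / int N(T) dT on a uniform grid (dT cancels)."""
    T = np.linspace(0.0, Q, n)
    N = allowed_spectrum(T, Q)
    return np.sum(T * N) / np.sum(N)

print(average_beta_energy(1.0))   # roughly Q/3 for a 1 MeV endpoint
```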

  3. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
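From the variable definitions quoted above, the annual average is a volume-weighted mean over batches, Sa = Σ(Vi·Si) / Σ(Vi). A minimal sketch, with invented batch volumes and sulfur contents:

```python
def annual_average_sulfur(batches):
    """Volume-weighted annual average Sa = sum(Vi * Si) / sum(Vi), where
    Vi is the volume and Si the sulfur content of batch i produced or
    imported during the averaging period."""
    return sum(v * s for v, s in batches) / sum(v for v, _ in batches)

# hypothetical batches as (volume, sulfur ppm) pairs
print(annual_average_sulfur([(10_000, 25.0), (8_000, 32.0), (12_000, 28.5)]))
```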

  4. The influence of poly(acrylic) acid number average molecular weight and concentration in solution on the compressive fracture strength and modulus of a glass-ionomer restorative.

    LENUS (Irish Health Repository)

    Dowling, Adam H

    2011-06-01

    The aim was to investigate the influence of number average molecular weight and concentration of the poly(acrylic) acid (PAA) liquid constituent of a GI restorative on the compressive fracture strength (σ) and modulus (E).

  5. Determination of the average number of electrons released during the oxidation of ethanol in a direct ethanol fuel cell

    International Nuclear Information System (INIS)

    Majidi, Pasha; Pickup, Peter G.

    2015-01-01

    The energy efficiency of a direct ethanol fuel cell (DEFC) is directly proportional to the average number of electrons released per ethanol molecule (n-value) at the anode. An approach to measuring n-values in DEFC hardware is presented, validated for the oxidation of methanol, and shown to provide n-values for ethanol oxidation that are consistent with trends and estimates from full product analysis. The method is based on quantitative oxidation of fuel that crosses through the membrane to avoid the errors that would otherwise result from crossover. It will be useful for rapid screening of catalysts, and allows performances (polarization curves) and n-values to be determined simultaneously under well controlled transport conditions.
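The n-value itself follows from Faraday's law once the anode charge and the moles of fuel oxidized are known. A minimal sketch under the assumption that all measured charge comes from fuel oxidation (the crossover corrections that the paper addresses are ignored); the current, time, and fuel amount are invented:

```python
F = 96485.0  # Faraday constant, C/mol

def n_value(current_a, time_s, mol_fuel):
    """Average electrons released per fuel molecule, n = Q / (F * mol_fuel),
    assuming all measured anode charge Q comes from fuel oxidation."""
    return current_a * time_s / (F * mol_fuel)

# e.g. 0.5 A drawn for 1 h while 2.5e-3 mol of ethanol is consumed:
print(n_value(0.5, 3600.0, 2.5e-3))   # ~7.5 of the 12 available electrons
```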

  6. Gender Variations in the Effects of Number of Organizational Memberships, Number of Social Networking Sites, and Grade-Point Average on Global Social Responsibility in Filipino University Students

    Directory of Open Access Journals (Sweden)

    Romeo B. Lee

    2016-02-01

The study seeks to estimate gender variations in the direct effects of (a) number of organizational memberships, (b) number of social networking sites (SNS), and (c) grade-point average (GPA) on global social responsibility (GSR); and in the indirect effects of (a) and of (b) through (c) on GSR. Cross-sectional survey data were drawn from questionnaire interviews involving 3,173 Filipino university students. Based on a path model, the three factors were tested to determine their inter-relationships and their relationships with GSR. The direct and total effects of the exogenous factors on the dependent variable are statistically significantly robust. The indirect effects of organizational memberships on GSR through GPA are also statistically significant, but the indirect effects of SNS on GSR through GPA are marginal. Men and women significantly differ only in terms of the total effects of their organizational memberships on GSR. The lack of broad gender variations in the effects of SNS, organizational memberships and GPA on GSR may be linked to the relatively homogenous characteristics and experiences of the university students interviewed. There is a need for more path models to better understand the predictors of GSR in local students.

  7. Gender Variations in the Effects of Number of Organizational Memberships, Number of Social Networking Sites, and Grade-Point Average on Global Social Responsibility in Filipino University Students

    Science.gov (United States)

    Lee, Romeo B.; Baring, Rito V.; Sta. Maria, Madelene A.

    2016-01-01

    The study seeks to estimate gender variations in the direct effects of (a) number of organizational memberships, (b) number of social networking sites (SNS), and (c) grade-point average (GPA) on global social responsibility (GSR); and in the indirect effects of (a) and of (b) through (c) on GSR. Cross-sectional survey data were drawn from questionnaire interviews involving 3,173 Filipino university students. Based on a path model, the three factors were tested to determine their inter-relationships and their relationships with GSR. The direct and total effects of the exogenous factors on the dependent variable are statistically significantly robust. The indirect effects of organizational memberships on GSR through GPA are also statistically significant, but the indirect effects of SNS on GSR through GPA are marginal. Men and women significantly differ only in terms of the total effects of their organizational memberships on GSR. The lack of broad gender variations in the effects of SNS, organizational memberships and GPA on GSR may be linked to the relatively homogenous characteristics and experiences of the university students interviewed. There is a need for more path models to better understand the predictors of GSR in local students. PMID:27247700

  8. Gender Variations in the Effects of Number of Organizational Memberships, Number of Social Networking Sites, and Grade-Point Average on Global Social Responsibility in Filipino University Students.

    Science.gov (United States)

    Lee, Romeo B; Baring, Rito V; Sta Maria, Madelene A

    2016-02-01

    The study seeks to estimate gender variations in the direct effects of (a) number of organizational memberships, (b) number of social networking sites (SNS), and (c) grade-point average (GPA) on global social responsibility (GSR); and in the indirect effects of (a) and of (b) through (c) on GSR. Cross-sectional survey data were drawn from questionnaire interviews involving 3,173 Filipino university students. Based on a path model, the three factors were tested to determine their inter-relationships and their relationships with GSR. The direct and total effects of the exogenous factors on the dependent variable are statistically significantly robust. The indirect effects of organizational memberships on GSR through GPA are also statistically significant, but the indirect effects of SNS on GSR through GPA are marginal. Men and women significantly differ only in terms of the total effects of their organizational memberships on GSR. The lack of broad gender variations in the effects of SNS, organizational memberships and GPA on GSR may be linked to the relatively homogenous characteristics and experiences of the university students interviewed. There is a need for more path models to better understand the predictors of GSR in local students.

  9. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs
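The configurational average at the heart of the method weights each sampled configuration by its Boltzmann factor. A minimal sketch; the energies, observable values, and temperature below are invented placeholders, not data from the study:

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant, eV/K

def boltzmann_average(energies, observables, T):
    """Configurational average <A> = sum_i A_i exp(-E_i / kT) / Z over an
    ensemble of configurations with energies E_i (eV) and property A_i."""
    E = np.asarray(energies, float)
    w = np.exp(-(E - E.min()) / (KB * T))   # shift by E.min() for stability
    return np.dot(w, observables) / w.sum()

# three hypothetical Cu arrangements with different coordination numbers
print(boltzmann_average([0.00, 0.05, 0.12], [4.0, 3.0, 5.0], T=750.0))
```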

  10. Diversity Leadership in Higher Education. ASHE Higher Education Report, Volume 32, Number 3

    Science.gov (United States)

    Aguirre, Adalberto, Jr., Ed.; Martinez, Ruben O., Ed.

    2006-01-01

    This monograph examines and discusses the context for diversity leadership roles and practices in higher education by using research and theoretical and applied literatures from a variety of fields, including the social sciences, business, and higher education. Framing the discussion on leadership in this monograph is the perspective that American…

  11. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed; structure averaging is also commonly performed in RNA secondary structure prediction [2].
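A minimal sketch of a Monte Carlo refinement of this flavor: a harmonic pseudo-energy pulls a Cα trace toward the averaged coordinates while a bond term restores physical consecutive-Cα distances. The force constants, step size, temperature, and test data are invented illustration values, not the authors' settings.

```python
import numpy as np

def refine_to_average(start, avg, bond=3.8, k_avg=1.0, k_bond=10.0,
                      steps=20000, step=0.2, T=1.0, seed=0):
    """Metropolis Monte Carlo driving a C-alpha trace toward the averaged
    coordinates (harmonic pseudo-energy) while keeping consecutive
    C-alpha distances near 3.8 A, avoiding the unphysical geometry of
    the raw average."""
    rng = np.random.default_rng(seed)

    def energy(c):
        d = np.linalg.norm(np.diff(c, axis=0), axis=1)
        return (k_avg * np.sum((c - avg) ** 2)
                + k_bond * np.sum((d - bond) ** 2))

    x = start.copy()
    e = energy(x)
    for _ in range(steps):
        i = rng.integers(len(x))
        trial = x.copy()
        trial[i] += rng.normal(scale=step, size=3)   # perturb one residue
        e_new = energy(trial)
        if e_new < e or rng.random() < np.exp(-(e_new - e) / T):
            x, e = trial, e_new                      # Metropolis acceptance
    return x

rng = np.random.default_rng(1)
avg = np.cumsum(rng.normal(size=(50, 3)), axis=0)    # stand-in averaged trace
out = refine_to_average(avg + rng.normal(0, 1, avg.shape), avg)
print(np.linalg.norm(np.diff(out, axis=0), axis=1).round(1))
```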

  12. Sedimentological regimes for turbidity currents: Depth-averaged theory

    Science.gov (United States)

    Halsey, Thomas C.; Kumar, Amit; Perillo, Mauricio M.

    2017-07-01

    Turbidity currents are one of the most significant means by which sediment is moved from the continents into the deep ocean; their properties are interesting both as elements of the global sediment cycle and due to their role in contributing to the formation of deep water oil and gas reservoirs. One of the simplest models of the dynamics of turbidity current flow was introduced three decades ago, and is based on depth-averaging of the fluid mechanical equations governing the turbulent gravity-driven flow of relatively dilute turbidity currents. We examine the sedimentological regimes of a simplified version of this model, focusing on the role of the Richardson number Ri [dimensionless inertia] and Rouse number Ro [dimensionless sedimentation velocity] in determining whether a current is net depositional or net erosional. We find that for large Rouse numbers, the currents are strongly net depositional due to the disappearance of local equilibria between erosion and deposition. At lower Rouse numbers, the Richardson number also plays a role in determining the degree of erosion versus deposition. The currents become more erosive at lower values of the product Ro × Ri, due to the effect of clear water entrainment. At higher values of this product, the turbulence becomes insufficient to maintain the sediment in suspension, as first pointed out by Knapp and Bagnold. We speculate on the potential for two-layer solutions in this insufficiently turbulent regime, which would comprise substantial bedload flow with an overlying turbidity current.
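For orientation, the two dimensionless groups are commonly written in the following bulk forms (the paper's exact normalizations may differ), where U is the depth-averaged velocity, h the current thickness, g' the reduced gravity set by the sediment concentration C, w_s the particle settling velocity, κ the von Kármán constant and u_* the shear velocity:

```latex
\mathrm{Ri} \;=\; \frac{g' h}{U^{2}}, \qquad
g' \;=\; g\,\frac{\rho_{s}-\rho_{w}}{\rho_{w}}\,C, \qquad
\mathrm{Ro} \;=\; \frac{w_{s}}{\kappa\, u_{*}}
```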

  13. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  14. Brain scaling in mammalian evolution as a consequence of concerted and mosaic changes in numbers of neurons and average neuronal cell size

    Directory of Open Access Journals (Sweden)

    Suzana eHerculano-Houzel

    2014-08-01

Enough species have now been subject to systematic quantitative analysis of the relationship between the morphology and cellular composition of their brain that patterns begin to emerge and shed light on the evolutionary path that led to mammalian brain diversity. Based on an analysis of the shared and clade-specific characteristics of 41 modern mammalian species in 6 clades, and in light of the phylogenetic relationships among them, here we propose that ancestral mammal brains were composed and scaled in their cellular composition like modern afrotherian and glire brains: with an addition of neurons that is accompanied by a decrease in neuronal density and very little modification in glial cell density, implying a significant increase in average neuronal cell size in larger brains, and the allocation of approximately 2 neurons in the cerebral cortex and 8 neurons in the cerebellum for every neuron allocated to the rest of brain. We also propose that in some clades the scaling of different brain structures has diverged away from the common ancestral layout through clade-specific (or clade-defining) changes in how average neuronal cell mass relates to numbers of neurons in each structure, and how numbers of neurons are differentially allocated to each structure relative to the number of neurons in the rest of brain. Thus, the evolutionary expansion of mammalian brains has involved both concerted and mosaic patterns of scaling across structures. This is, to our knowledge, the first mechanistic model that explains the generation of brains large and small in mammalian evolution, and it opens up new horizons for seeking the cellular pathways and genes involved in brain evolution.

  15. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
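In concrete terms, 8! = 40320, so the optimal expected cost is 620160/40320 ≈ 15.38 comparisons per permutation, just above the information-theoretic lower bound of log₂(8!) ≈ 15.30 comparisons; both figures are straightforward arithmetic from the stated result.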

  16. Reynolds-averaged Navier-Stokes investigation of high-lift low-pressure turbine blade aerodynamics at low Reynolds number

    Science.gov (United States)

    Arko, Bryan M.

Design trends for the low-pressure turbine (LPT) section of modern gas turbine engines include increasing the loading per airfoil, which promises a decreased airfoil count resulting in reduced manufacturing and operating costs. Accurate Reynolds-Averaged Navier-Stokes (RANS) predictions of separated boundary layers and transition to turbulence are needed, as the lack of an economical and reliable computational model has contributed to this high-lift concept not reaching its full potential. Presented here, in what is believed to be its first application to low-Re computations of high-lift linear cascade simulations, is the Abe-Kondoh-Nagano (AKN) linear low-Re two-equation turbulence model, which utilizes the Kolmogorov velocity scale for improved predictions of separated boundary layers. A second turbulence model investigated is the Kato-Launder modified version of the AKN, denoted MPAKN, which damps turbulent production in highly strained regions of flow. Fully laminar solutions have also been calculated in an effort to elucidate the transitional quality of the turbulence model solutions. Time-accurate simulations of three modern high-lift blades at a Reynolds number of 25,000 are compared to experimental data and higher-order computations in order to judge the accuracy of the results, where it is shown that RANS simulations with highly refined grids can produce both quantitatively and qualitatively similar separation behavior as found in experiments. In particular, the MPAKN model is shown to predict the correct boundary layer behavior for all three blades, and evidence of transition is found through inspection of the components of the Reynolds stress tensor, spectral analysis, and the turbulence production parameter. Unfortunately, definitively stating that transition is occurring remains an uncertain task, as similar evidence of the transition process is found in the laminar predictions. This reveals that boundary layer reattachment may be a result of laminar

  17. A new hybrid nonlinear congruential number generator based on higher functional power of logistic maps

    International Nuclear Information System (INIS)

    Cecen, Songul; Demirer, R. Murat; Bayrak, Coskun

    2009-01-01

    We propose a nonlinear congruential pseudorandom number generator consisting of summation of higher order composition of random logistic maps under certain congruential mappings. We change both bifurcation parameters of logistic maps in the interval of U=[3.5599,4) and coefficients of the polynomials in each higher order composition of terms up to degree d. This helped us to obtain a perfect random decorrelated generator which is infinite and aperiodic. It is observed from the simulation results that our new PRNG has good uniformity and power spectrum properties with very flat white noise characteristics. The results are interesting, new and may have applications in cryptography and in Monte Carlo simulations.

  18. Proton transport properties of poly(aspartic acid) with different average molecular weights

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Yuki, E-mail: ynagao@kuchem.kyoto-u.ac.j [Department of Mechanical Systems and Design, Graduate School of Engineering, Tohoku University, 6-6-01 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Imai, Yuzuru [Institute of Development, Aging and Cancer (IDAC), Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575 (Japan); Matsui, Jun [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan); Ogawa, Tomoyuki [Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, 6-6-05 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Miyashita, Tokuji [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan)

    2011-04-15

Research highlights: Seven polymers with different average molecular weights were synthesized. The proton conductivity depended on the number-average degree of polymerization. The difference of the proton conductivities was more than one order of magnitude. The number-average molecular weight contributed to the stability of the polymer. - Abstract: We synthesized seven partially protonated poly(aspartic acids)/sodium polyaspartates (P-Asp) with different average molecular weights to study their proton transport properties. The number-average degree of polymerization (DP) for each P-Asp was 30 (P-Asp30), 115 (P-Asp115), 140 (P-Asp140), 160 (P-Asp160), 185 (P-Asp185), 205 (P-Asp205), and 250 (P-Asp250). The proton conductivity depended on the number-average DP. The maximum and minimum proton conductivities under a relative humidity of 70% and 298 K were 1.7 × 10⁻³ S cm⁻¹ (P-Asp140) and 4.6 × 10⁻⁴ S cm⁻¹ (P-Asp250), respectively. Differential thermogravimetric analysis (TG-DTA) was carried out for each P-Asp. The results were classified into two categories. One exhibited two endothermic peaks between t = 270 and 300 °C, the other exhibited only one peak. The P-Asp group with two endothermic peaks exhibited high proton conductivity. The high proton conductivity is related to the stability of the polymer. The number-average molecular weight also contributed to the stability of the polymer.

19. Average formation number n̄(OH) of colloid-type indium hydroxide

    International Nuclear Information System (INIS)

    Stefanowicz, T.; Szent-Kirallyine Gajda, J.

    1983-01-01

Indium perchlorate in perchloric acid solution was titrated with sodium hydroxide solution to various pH values. Indium hydroxide colloid was removed by ultracentrifugation and the supernatant solution was titrated with base to neutral pH. The two-stage titration data were used to calculate the formation number of the indium hydroxide colloid, which was found to equal n̄(OH) = 2.8. (author)

  20. Predicting Lotto Numbers

    DEFF Research Database (Denmark)

    Jørgensen, Claus Bjørn; Suetens, Sigrid; Tyran, Jean-Robert

We investigate the "law of small numbers" using a unique panel data set on lotto gambling. Because we can track individual players over time, we can measure how they react to outcomes of recent lotto drawings. We can therefore test whether they behave as if they believe they can predict lotto numbers based on recent drawings. While most players pick the same set of numbers week after week without regard to numbers drawn or anything else, we find that those who do change act on average in the way predicted by the law of small numbers as formalized in recent behavioral theory. In particular, on average they move away from numbers that have recently been drawn, as suggested by the "gambler's fallacy", and move toward numbers that are on streak, i.e. have been drawn several weeks in a row, consistent with the "hot hand fallacy".

  1. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
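In its simplest form, Bayesian model averaging weights each model's prediction by its posterior probability. A minimal numerical sketch, assuming a uniform prior over models; the log evidences and predictions are invented values:

```python
import numpy as np

def model_average(log_evidences, predictions):
    """Bayesian model averaging: each model's prediction is weighted by
    its posterior probability p(m | data) ~ exp(log evidence), under a
    uniform model prior."""
    le = np.asarray(log_evidences, float)
    w = np.exp(le - le.max())               # subtract max for stability
    w /= w.sum()
    return w @ np.asarray(predictions, float), w

# two hypothetical models of the same data
prediction, weights = model_average([-10.2, -11.5], [0.8, 0.3])
print(weights, prediction)
```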

  2. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies

  3. Mental health care and average happiness: strong effect in developed nations.

    Science.gov (United States)

    Touburg, Giorgio; Veenhoven, Ruut

    2015-07-01

    Mental disorder is a main cause of unhappiness in modern society and investment in mental health care is therefore likely to add to average happiness. This prediction was checked in a comparison of 143 nations around 2005. Absolute investment in mental health care was measured using the per capita number of psychiatrists and psychologists working in mental health care. Relative investment was measured using the share of mental health care in the total health budget. Average happiness in nations was measured with responses to survey questions about life-satisfaction. Average happiness appeared to be higher in countries that invest more in mental health care, both absolutely and relative to investment in somatic medicine. A data split by level of development shows that this difference exists only among developed nations. Among these nations the link between mental health care and happiness is quite strong, both in an absolute sense and compared to other known societal determinants of happiness. The correlation between happiness and share of mental health care in the total health budget is twice as strong as the correlation between happiness and size of the health budget. A causal effect is likely, but cannot be proved in this cross-sectional analysis.

  4. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
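To make the distinction concrete, a sketch of the two operators for a field f over N realizations with local solid concentration c (the paper's exact definitions may differ):

```latex
\langle f \rangle \;=\; \frac{1}{N}\sum_{k=1}^{N} f_{k}
\quad\text{(phasic average)}, \qquad
\widetilde{f} \;=\; \frac{\sum_{k=1}^{N} c_{k}\, f_{k}}{\sum_{k=1}^{N} c_{k}}
\quad\text{(mass-weighted average)}
```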

  5. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  6. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees

  7. Warfarin maintenance dose in older patients: higher average dose and wider dose frequency distribution in patients of African ancestry than those of European ancestry.

    Science.gov (United States)

    Garwood, Candice L; Clemente, Jennifer L; Ibe, George N; Kandula, Vijay A; Curtis, Kristy D; Whittaker, Peter

    2010-06-15

Studies report that warfarin doses required to maintain therapeutic anticoagulation decrease with age; however, these studies almost exclusively enrolled patients of European ancestry. Consequently, universal application of dosing paradigms based on such evidence may be confounded because ethnicity also influences dose. Therefore, we determined if warfarin dose decreased with age in Americans of African ancestry, if older African and European ancestry patients required different doses, and if their daily dose frequency distributions differed. Our chart review examined 170 patients of African ancestry and 49 patients of European ancestry cared for in our anticoagulation clinic. We calculated the average weekly dose required for each stable, anticoagulated patient to maintain an international normalized ratio of 2.0 to 3.0, determined dose averages for age groups up to the >80-year-old group, and plotted dose as a function of age. The maintenance dose in patients of African ancestry decreased with age. Patients of African ancestry required higher average weekly doses than patients of European ancestry: 33% higher in the 70- to 79-year-old group (38.2 ± 1.9 vs. 28.8 ± 1.7 mg; P=0.006) and 52% in the >80-year-old group (33.2 ± 1.7 vs. 21.8 ± 3.8 mg; P=0.011). Therefore, 43% of older patients of African ancestry required daily doses >5 mg and hence would have been under-dosed using current starting-dose guidelines. The dose frequency distribution was wider for older patients of African ancestry compared to those of European ancestry. These data from patients of African ancestry indicate that strategies for initiating warfarin therapy based on studies of patients of European ancestry could result in insufficient anticoagulation and thereby potentially increase their thromboembolism risk. Copyright 2010 Elsevier Inc. All rights reserved.

  8. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  9. Average cross sections for the 252Cf neutron spectrum

    International Nuclear Information System (INIS)

    Dezso, Z.; Csikai, J.

    1977-01-01

A number of average cross sections have been measured for 252Cf neutrons in (n,γ), (n,p), (n,2n) and (n,α) reactions by the activation method, and for fission by a fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb. The data, as a function of target neutron number, increase up to about N=60, with minima near closed shells. The values lie between 0.3 mb and 113 mb. These cross sections decrease significantly with increasing threshold energy. The values are below 20 mb. The data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Z^(4/3)/A are shown. The results obtained are summarized in tables

  10. [Algorithm for taking into account the average annual background of air pollution in the assessment of health risks].

    Science.gov (United States)

    Fokin, M V

    2013-01-01

State Budgetary Educational Institution of Higher Professional Education "I.M. Sechenov First Moscow State Medical University" of the Ministry of Health care and Social Development, Moscow, Russian Federation. Assessing health risks from air pollution caused by emissions from industrial facilities without taking into account the average annual background level of air pollution does not comply with sanitary legislation. However, the Russian Federal Service for Hydrometeorology and Environmental Monitoring issues official background certificates only for the limited number of areas covered by full-program observations at stationary monitoring points. Approaches to accounting for the average annual background of air pollution when assessing health risks from exposure to industrial emissions are considered.

  11. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

      We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...... degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1) / 7.  ...

  12. Higher order net-proton number cumulants dependence on the centrality definition and other spurious effects

    Science.gov (United States)

    Sombun, S.; Steinheimer, J.; Herold, C.; Limphirat, A.; Yan, Y.; Bleicher, M.

    2018-02-01

We study the dependence of the normalized moments of the net-proton multiplicity distributions on the definition of centrality in relativistic nuclear collisions at a beam energy of √s_NN = 7.7 GeV. Using the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model as event generator, we find that the centrality definition has a large effect on the extracted cumulant ratios. Furthermore, we find that the finite efficiency for the determination of the centrality introduces an additional systematic uncertainty. Finally, we quantitatively investigate the effects of event pile-up and other possible spurious effects which may change the measured proton number. We find that pile-up alone is not sufficient to describe the data and show that a random double counting of events, adding significantly to the measured proton number, affects mainly the higher order cumulants in most central collisions.

  13. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  14. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...

  15. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of one point per 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
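The quoted figure is consistent with coherent averaging theory: averaging N sweeps improves the voltage signal-to-noise ratio by √N, so at the maximum of N = 2¹² = 4096 sweeps the gain is 20·log₁₀(√4096) = 20·log₁₀(64) ≈ 36 dB.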

  16. A time averaged background compensator for Geiger-Mueller counters

    International Nuclear Information System (INIS)

    Bhattacharya, R.C.; Ghosh, P.K.

    1983-01-01

    The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel providing time averaged compensation. The method suits portable instruments. (orig.)

  17. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups (European and Japanese) and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method consisted of averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques, there was no warping or filling-in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn, this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.

  18. Higher Education in Brazil and the policies for increasing the number of vacancies from Reuni: advances and controversies

    Directory of Open Access Journals (Sweden)

    Maria Célia Borges

    2012-01-01

    Full Text Available This paper presents a discussion of the policies to expand Higher Education, noting the influences of neoliberalism and explaining the contradictions in legislation and reforms at this level of education in Brazil after the 1990s. It questions the model of the New University with regard to the Brazilian reality and the poor investment available for such a reform. It calls attention to the danger of prioritizing an increase in the number of vacancies over the quality of teaching, which would amount to the scrapping of the public university. It highlights the contradictions of Reuni, with its improvised actions and the conditioning of funds on the achievement of goals. On one hand, it recognizes the increasing number of vacancies in Higher Education; on the other, it reaffirms that democratization of access requires universities with financial autonomy, well-structured courses with innovative curricula, qualified professors, adequate infrastructure, and high-quality teaching, with research aimed at the production of new knowledge, as well as university extension.

  19. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    International Nuclear Information System (INIS)

    Cooling, M P; Humphrey, V F; Wilkens, V

    2011-01-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.

  20. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    Science.gov (United States)

    Cooling, M. P.; Humphrey, V. F.; Wilkens, V.

    2011-02-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.
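
    As a rough illustration of what an area-averaging correction factor measures (a simplified sketch, not the KZK-based procedure of the paper): for an assumed radially symmetric pressure profile, the factor can be taken as the on-axis pressure divided by the pressure averaged over the hydrophone element. The Gaussian beam profile and its width below are invented.

    ```python
    import numpy as np

    def area_avg_correction(pressure_profile, element_radius_mm: float) -> float:
        """Ratio of on-axis pressure to the pressure averaged over a circular
        element -- a simplified stand-in for the correction factor."""
        n = 400
        dr = element_radius_mm / n
        r = (np.arange(n) + 0.5) * dr          # midpoint radii across the element
        ring_areas = 2.0 * np.pi * r * dr      # area of each thin annulus
        p_avg = np.sum(pressure_profile(r) * ring_areas) / (np.pi * element_radius_mm**2)
        return pressure_profile(0.0) / p_avg

    beam = lambda r: np.exp(-(r / 1.5) ** 2)   # assumed Gaussian profile, 1.5 mm scale
    for radius_mm in (0.1, 0.25):              # 0.2 mm and 0.5 mm diameter elements
        print(radius_mm, round(area_avg_correction(beam, radius_mm), 4))
    ```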

  1. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
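
    A small numeric sketch of the entropy lower bound mentioned above, under the stated reading: minimum average depth ≥ H(p) / log2 k for a k-valued information system. The probability vector is illustrative only.

    ```python
    import math

    def entropy_lower_bound(probs, k: int = 2) -> float:
        """Lower bound on the minimum average depth of a decision tree:
        entropy of the case distribution, in base-k units."""
        h = -sum(p * math.log2(p) for p in probs if p > 0.0)
        return h / math.log2(k)

    # Four cases, non-uniform probabilities, binary (k = 2) attributes:
    print(entropy_lower_bound([0.5, 0.25, 0.125, 0.125]))  # 1.75 queries on average
    ```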

  2. Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.

    Science.gov (United States)

    Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D

    2018-04-19

    The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013-June 2016. The ANN was constructed using temporal factors (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross-validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018 ± 0.002; validation r = 0.8899 ± 0.005; testing r = 0.8940 ± 0.006). We were able to successfully predict trauma and emergent operative volume, and acuity, using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10) and a mean ISS score of 15.99 (actual: 13.12); see figure 3. This may prove useful for predicting trauma needs across the system and for hospital administration when allocating limited resources. Level of evidence: III. Study type: Prognostic/Epidemiological.
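
    The study trained its network with Levenberg-Marquardt in its own tooling; as a rough stand-in, here is a sketch of the same architecture (10 logistic hidden units) with scikit-learn's MLPRegressor. The features and daily trauma counts below are fabricated for illustration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    # Stand-in features: [hour, day-of-week, daily high temp, precipitation flag]
    X = np.column_stack([
        rng.integers(0, 24, 1096),
        rng.integers(0, 7, 1096),
        rng.normal(20, 8, 1096),
        rng.integers(0, 2, 1096),
    ])
    y = rng.poisson(10, 1096).astype(float)   # stand-in daily trauma counts

    # 10 sigmoid ("logistic") hidden neurons, matching the described architecture.
    model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                         solver="lbfgs", max_iter=2000, random_state=0)
    model.fit(X[:900], y[:900])
    print(model.predict(X[900:905]))          # predicted daily trauma volume
    ```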

  3. Effects of non-unity Lewis number of gas-phase species in turbulent nonpremixed sooting flames

    KAUST Repository

    Attili, Antonio

    2016-02-13

    Turbulence statistics from two three-dimensional direct numerical simulations of planar n-heptane/air turbulent jets are compared to assess the effect of the gas-phase species diffusion model on flame dynamics and soot formation. The Reynolds number based on the initial jet width and velocity is around 15,000, corresponding to a Taylor scale Reynolds number in the range 100 ≤ Reλ ≤ 150. In one simulation, multicomponent transport based on a mixture-averaged approach is employed, while in the other the gas-phase species Lewis numbers are set equal to unity. The statistics of temperature and major species obtained with the mixture-averaged formulation are very similar to those in the unity Lewis number case. In both cases, the statistics of temperature are captured with remarkable accuracy by a laminar flamelet model with unity Lewis numbers. On the contrary, a flamelet with a mixture-averaged diffusion model, which corresponds to the model used in the multi-component diffusion three-dimensional DNS, produces significant differences with respect to the DNS results. The total mass of soot precursors decreases by 20-30% with the unity Lewis number approximation, and their distribution is more homogeneous in space and time. Due to the non-linearity of the soot growth rate with respect to the precursors' concentration, the soot mass yield decreases by a factor of two. Being strongly affected by coagulation, soot number density is not altered significantly if the unity Lewis number model is used rather than the mixture-averaged diffusion. The dominant role of turbulent transport over differential diffusion effects is expected to become more pronounced for higher Reynolds numbers. © 2016 The Combustion Institute.

  4. Effects of non-unity Lewis number of gas-phase species in turbulent nonpremixed sooting flames

    KAUST Repository

    Attili, Antonio; Bisetti, Fabrizio; Mueller, Michael E.; Pitsch, Heinz

    2016-01-01

    Turbulence statistics from two three-dimensional direct numerical simulations of planar n-heptane/air turbulent jets are compared to assess the effect of the gas-phase species diffusion model on flame dynamics and soot formation. The Reynolds number based on the initial jet width and velocity is around 15,000, corresponding to a Taylor scale Reynolds number in the range 100 ≤ Reλ ≤ 150. In one simulation, multicomponent transport based on a mixture-averaged approach is employed, while in the other the gas-phase species Lewis numbers are set equal to unity. The statistics of temperature and major species obtained with the mixture-averaged formulation are very similar to those in the unity Lewis number case. In both cases, the statistics of temperature are captured with remarkable accuracy by a laminar flamelet model with unity Lewis numbers. On the contrary, a flamelet with a mixture-averaged diffusion model, which corresponds to the model used in the multi-component diffusion three-dimensional DNS, produces significant differences with respect to the DNS results. The total mass of soot precursors decreases by 20-30% with the unity Lewis number approximation, and their distribution is more homogeneous in space and time. Due to the non-linearity of the soot growth rate with respect to the precursors' concentration, the soot mass yield decreases by a factor of two. Being strongly affected by coagulation, soot number density is not altered significantly if the unity Lewis number model is used rather than the mixture-averaged diffusion. The dominant role of turbulent transport over differential diffusion effects is expected to become more pronounced for higher Reynolds numbers. © 2016 The Combustion Institute.
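
    For reference, the species Lewis number contrasted in these two simulations is the ratio of thermal to mass diffusivity. A tiny sketch of the definition; the property values are illustrative, not taken from the DNS setup.

    ```python
    def lewis_number(thermal_conductivity, density, cp, mass_diffusivity) -> float:
        """Le_i = alpha / D_i = lambda / (rho * cp * D_i); setting Le = 1 recovers
        the unity-Lewis-number transport model used in the second simulation."""
        alpha = thermal_conductivity / (density * cp)
        return alpha / mass_diffusivity

    # Illustrative values only (SI units):
    print(lewis_number(thermal_conductivity=0.026, density=1.2, cp=1005.0,
                       mass_diffusivity=2.0e-5))  # ~1.08 for this made-up species
    ```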

  5. Life Science's Average Publishable Unit (APU) Has Increased over the Past Two Decades.

    Directory of Open Access Journals (Sweden)

    Radames J B Cordero

    Full Text Available Quantitative analysis of the scientific literature is important for evaluating the evolution and state of science. To study how the density of biological literature has changed over the past two decades we visually inspected 1464 research articles related only to the biological sciences from ten scholarly journals (with average Impact Factors, IF, ranging from 3.8 to 32.1). By scoring the number of data items (tables and figures), density of composite figures (labeled panels per figure or PPF), as well as the number of authors, pages and references per research publication we calculated an Average Publishable Unit or APU for 1993, 2003, and 2013. The data show an overall increase in the average ± SD number of data items from 1993 to 2013 of approximately 7±3 to 14±11 and PPF ratio of 2±1 to 4±2 per article, suggesting that the APU has doubled in size over the past two decades. As expected, the increase in data items per article is mainly in the form of supplemental material, constituting 0 to 80% of the data items per publication in 2013, depending on the journal. The changes in the average number of pages (approx. 8±3 to 10±3), references (approx. 44±18 to 56±24) and authors (approx. 5±3 to 8±9) per article are also presented and discussed. The average number of data items, figure density and authors per publication are correlated with the journal's average IF. The increasing APU size over time is important when considering the value of research articles for life scientists and publishers, as well as the implications of these increasing trends for the mechanisms and economics of scientific communication.

  6. Life Science's Average Publishable Unit (APU) Has Increased over the Past Two Decades.

    Science.gov (United States)

    Cordero, Radames J B; de León-Rodriguez, Carlos M; Alvarado-Torres, John K; Rodriguez, Ana R; Casadevall, Arturo

    2016-01-01

    Quantitative analysis of the scientific literature is important for evaluating the evolution and state of science. To study how the density of biological literature has changed over the past two decades we visually inspected 1464 research articles related only to the biological sciences from ten scholarly journals (with average Impact Factors, IF, ranging from 3.8 to 32.1). By scoring the number of data items (tables and figures), density of composite figures (labeled panels per figure or PPF), as well as the number of authors, pages and references per research publication we calculated an Average Publishable Unit or APU for 1993, 2003, and 2013. The data show an overall increase in the average ± SD number of data items from 1993 to 2013 of approximately 7±3 to 14±11 and PPF ratio of 2±1 to 4±2 per article, suggesting that the APU has doubled in size over the past two decades. As expected, the increase in data items per article is mainly in the form of supplemental material, constituting 0 to 80% of the data items per publication in 2013, depending on the journal. The changes in the average number of pages (approx. 8±3 to 10±3), references (approx. 44±18 to 56±24) and authors (approx. 5±3 to 8±9) per article are also presented and discussed. The average number of data items, figure density and authors per publication are correlated with the journal's average IF. The increasing APU size over time is important when considering the value of research articles for life scientists and publishers, as well as, the implications of these increasing trends in the mechanisms and economics of scientific communication.

  7. The weighted average cost of capital over the lifecycle of the firm: Is the overinvestment problem of mature firms intensified by a higher WACC?

    Directory of Open Access Journals (Sweden)

    Carlos S. Garcia

    2016-08-01

    Full Text Available Firm lifecycle theory predicts that the Weighted Average Cost of Capital (WACC) will tend to fall over the lifecycle of the firm (Mueller, 2003, p. 80-81). However, given that previous research finds that corporate governance deteriorates as firms get older (Mueller and Yun, 1998; Saravia, 2014), there is good reason to suspect that the opposite could be the case, that is, that the WACC is higher for older firms. Since our literature review indicates that no direct tests to clarify this question have been carried out until now, this paper aims to fill the gap by testing this prediction empirically. Our findings support the proposition that the WACC of younger firms is higher than that of mature firms. Thus, we find that the mature-firm overinvestment problem is not intensified by a higher cost of capital; on the contrary, our results suggest that mature firms manage to invest in negative net present value projects even though they have access to cheaper capital. This finding sheds new light on the magnitude of the corporate governance problems found in mature firms.
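
    For readers unfamiliar with the quantity tracked over the lifecycle, here is the standard textbook WACC computation (the numbers are illustrative and not from the study):

    ```python
    def wacc(equity, debt, cost_equity, cost_debt, tax_rate) -> float:
        """Weighted Average Cost of Capital: market-value weights on the
        cost of equity and the after-tax cost of debt."""
        v = equity + debt
        return (equity / v) * cost_equity + (debt / v) * cost_debt * (1.0 - tax_rate)

    # A young firm: expensive, equity-heavy financing.
    print(wacc(equity=80.0, debt=20.0, cost_equity=0.14, cost_debt=0.08, tax_rate=0.30))
    # A mature firm: cheaper capital and more debt capacity, as the paper finds.
    print(wacc(equity=60.0, debt=40.0, cost_equity=0.10, cost_debt=0.05, tax_rate=0.30))
    ```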

  8. Sunspot number recalibration: The ~1840–1920 anomaly in the observer normalization factors of the group sunspot number

    Directory of Open Access Journals (Sweden)

    Cliver Edward W.

    2017-01-01

    Full Text Available We analyze the normalization factors (k′-factors) used to scale secondary observers to the Royal Greenwich Observatory (RGO) reference series of the Hoyt & Schatten (1998a, 1998b) group sunspot number (GSN). A time series of these k′-factors exhibits an anomaly from 1841 to 1920, viz., the average k′-factor for all observers who began reporting groups from 1841 to 1883 is 1.075 vs. 1.431 for those who began from 1884 to 1920, with a progressive rise, on average, during the latter period. The 1883–1884 break between the two subintervals occurs precisely at the point where Hoyt and Schatten began to use a complex daisy-chaining method to scale observers to RGO. The 1841–1920 anomaly implies, implausibly, that the average sunspot observer who began from 1841 to 1883 was nearly as proficient at counting groups as mid-20th century RGO (for which k′ = 1.0 by definition) while observers beginning during the 1884–1920 period regressed in group counting capability relative to those from the earlier interval. Instead, as shown elsewhere and substantiated here, RGO group counts increased relative to those of other long-term observers from 1874 to ~1915. This apparent inhomogeneity in the RGO group count series is primarily responsible for the increase in k′-factors from 1884 to 1920 and the suppression, by 44% on average, of the Hoyt and Schatten GSN relative to the original Wolf sunspot number (WSN) before ~1885. Correcting for the early “learning curve” in the RGO reference series and minimizing the use of daisy-chaining rectifies the anomalous behavior of the k′-factor series. The resultant GSN time series (designated GSN*) is in reasonable agreement with the revised WSN (SN*; Clette & Lefèvre 2016) and the backbone-based group sunspot number (RGS; Svalgaard & Schatten 2016) but significantly higher than other recent reconstructions (Friedli, personal communication, 2016; Lockwood et al. 2014a, 2014b; Usoskin et al. 2016a). This result

  9. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  10. Data Point Averaging for Computational Fluid Dynamics Data

    Science.gov (United States)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
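
    A condensed sketch of the averaging step described (grouping CFD surface points into sub-areas and averaging each group); the point data and sub-area assignments below are invented for illustration.

    ```python
    import numpy as np

    def subarea_averages(values: np.ndarray, subarea_ids: np.ndarray) -> dict:
        """Average a flow parameter over the points belonging to each sub-area."""
        return {sid: values[subarea_ids == sid].mean()
                for sid in np.unique(subarea_ids)}

    rng = np.random.default_rng(4)
    heat_flux = rng.normal(50.0, 5.0, size=1000)   # per-point CFD values
    subareas = rng.integers(0, 8, size=1000)       # sub-area label for each point
    for sid, avg in subarea_averages(heat_flux, subareas).items():
        print(f"sub-area {sid}: average heat flux {avg:.1f}")
    ```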

  11. Advanced pulse oximeter signal processing technology compared to simple averaging. I. Effect on frequency of alarms in the operating room.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new signal processing technique (Oxismart, Nellcor, Inc., Pleasanton, CA) on the incidence of false pulse oximeter alarms in the operating room (OR). Prospective observational study. Nonuniversity hospital. 53 ASA physical status I, II, and III consecutive patients undergoing general anesthesia with tracheal intubation. In the OR we compared the number of alarms produced by a recently developed third-generation pulse oximeter (Nellcor Symphony N-3000) with the Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504). Three pulse oximeters were used simultaneously in each patient: a Nellcor pulse oximeter, a Criticare with the signal averaging time set at 3 seconds (Criticare-average-3s) and a similar unit with the signal averaging time set at 21 seconds (Criticare-average-21s). For each pulse oximeter, the number of false (artifact) alarms was counted. One false alarm was produced by the Nellcor (duration 55 sec) and one false alarm by the Criticare-average-21s monitor (5 sec). The incidence of false alarms was higher with Criticare-average-3s, which produced 20 false alarms in eight patients; the oximeter with Oxismart signal processing thus produced fewer false alarms than the Criticare monitor with the short averaging time, performing comparably to the Criticare monitor with the longer averaging time of 21 seconds.

  12. An experimental study of the effect of octane number higher than engine requirement on the engine performance and emissions

    Energy Technology Data Exchange (ETDEWEB)

    Sayin, Cenk; Kilicaslan, Ibrahim; Canakci, Mustafa; Ozsezen, Necati [Kocaeli Univ., Dept. of Mechanical Education, Izmit (Turkey)

    2005-06-01

    In this study, the effect of using higher-octane gasoline than the engine requires on performance and exhaust emissions was experimentally studied. The test engine chosen has a carburettor fuel system because 60% of the vehicles in Turkey are equipped with carburettors. The engine, which required 91-RON (Research Octane Number) gasoline, was tested using 95-RON and 91-RON. Results show that using an octane rating higher than the engine requires not only decreases engine performance but also increases exhaust emissions. (Author)

  13. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department

  14. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average work productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of average work productivity across the factors affecting it is carried out by means of the u-substitution method.

  15. Averages, Areas and Volumes; Cambridge Conference on School Mathematics Feasibility Study No. 45.

    Science.gov (United States)

    Cambridge Conference on School Mathematics, Newton, MA.

    Presented is an elementary approach to areas, volumes and other mathematical concepts usually treated in calculus. The approach is based on the idea of the average, and this concept is utilized throughout the report. In the beginning the average (arithmetic mean) of a set of numbers is considered, along with two properties of the average which often simplify…
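
    In the spirit of that approach (a sketch of the idea, not the report's own exposition): the area under a curve over [a, b] is the average height times the width, so averaging sampled function values approximates the area.

    ```python
    import math

    def area_from_average(f, a: float, b: float, n: int = 10_000) -> float:
        """Area under f on [a, b] as (average sampled height) * (width)."""
        avg_height = sum(f(a + (b - a) * (i + 0.5) / n) for i in range(n)) / n
        return avg_height * (b - a)

    print(area_from_average(lambda x: x * x, 0.0, 1.0))   # ~1/3
    print(area_from_average(math.sin, 0.0, math.pi))      # ~2
    ```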

  16. Predicting Performance on the National Athletic Trainers' Association Board of Certification Examination From Grade Point Average and Number of Clinical Hours.

    Science.gov (United States)

    Middlemas, David A.; Manning, James M.; Gazzillo, Linda M.; Young, John

    2001-06-01

    OBJECTIVE: To determine whether grade point average, hours of clinical education, or both are significant predictors of performance on the National Athletic Trainers' Association Board of Certification examination and whether curriculum and internship candidates' scores on the certification examination can be differentially predicted. DESIGN AND SETTING: Data collection forms and consent forms were mailed to the subjects to collect data for predictor variables. Subject scores on the certification examination were obtained from Columbia Assessment Services. SUBJECTS: A total of 270 first-time candidates for the April and June 1998 certification examinations. MEASUREMENTS: Grade point average, number of clinical hours completed, sex, route to certification eligibility (curriculum or internship), scores on each section of the certification examination, and pass/fail criteria for each section. RESULTS: We found no significant difference between the scores of men and women on any section of the examination. Scores for curriculum and internship candidates differed significantly on the written and practical sections of the examination but not on the simulation section. Grade point average was a significant predictor of scores on each section of the examination and on the examination as a whole. Clinical hours completed did not add a significant increment for any section but did add a significant increment for the examination overall. Although no significant difference was noted between curriculum and internship candidates in predicting scores on sections of the examination, a significant difference by route was found in predicting whether candidates would pass the examination as a whole (P = .047). The proportion of variance accounted for was less than R² = 0.0723 for any section of the examination and R² = 0.057 for the examination as a whole. CONCLUSIONS: Potential predictors of performance on the certification examination can be useful to athletic training educators in
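
    A compact sketch of this kind of prediction (ordinary least squares on GPA and clinical hours, not the study's exact analysis); the candidate data below are fabricated for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    gpa = rng.uniform(2.5, 4.0, 270)
    clinical_hours = rng.uniform(800, 1800, 270)
    # Fabricated exam scores, loosely tied to GPA as the study found:
    score = 40 + 12 * gpa + 0.002 * clinical_hours + rng.normal(0, 6, 270)

    X = np.column_stack([gpa, clinical_hours])
    model = LinearRegression().fit(X, score)
    print(model.coef_, model.score(X, score))   # per-predictor slopes and R^2
    print(model.predict([[3.4, 1200.0]]))       # predicted score for one candidate
    ```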

  17. Understanding coastal morphodynamic patterns from depth-averaged sediment concentration

    NARCIS (Netherlands)

    Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.

    This review highlights the important role of the depth-averaged sediment concentration (DASC) in understanding the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand ridges.

  18. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Co-Axial Supersonic Free-Jet Experiment

    Science.gov (United States)

    Baurle, R. A.; Edwards, J. R.

    2009-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The baseline value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was noted when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid simulation results showed the same trends as the baseline Reynolds-averaged predictions. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions are suggested as a remedy to this dilemma. Comparisons between resolved second-order turbulence statistics and their modeled Reynolds-averaged counterparts were also performed.

  19. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment

    Science.gov (United States)

    Baurle, Robert A.; Edwards, Jack R.

    2010-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomena under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to the choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure models.
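
    The sensitivity discussed here enters through the modeled turbulent scalar flux; a one-line sketch of the role the turbulent Schmidt number plays (values illustrative, not from the simulations):

    ```python
    def turbulent_diffusivity(nu_t: float, schmidt_t: float) -> float:
        """D_t = nu_t / Sc_t: a larger turbulent Schmidt number means slower
        modeled scalar mixing for the same eddy viscosity."""
        return nu_t / schmidt_t

    for sc_t in (0.5, 0.7, 1.0):    # a typical range explored in such studies
        print(sc_t, turbulent_diffusivity(nu_t=1.0e-3, schmidt_t=sc_t))
    ```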

  20. Variational coupling between q-number and c-number dynamics

    International Nuclear Information System (INIS)

    Amaral, C.M. do; Joffily, S.

    1984-01-01

    The time-dependent quantum variational principle is generalized to the case of hamiltonian operators having real parameters and their time derivatives. The resulting variational system is formed by a Schroedinger equation coupled to a system of Lagrange equations, where the Lagrangian is the average value of the parametrized hamiltonian operator. The dynamics that follows from the variational principle describes the interaction between a q-number sub-dynamics and a c-number sub-dynamics. In the (h/2π)^0-order W.K.B. approximation, the variational system reduces to a Hamilton-Jacobi-like equation coupled to a family of Lagrange equations. The formal features of the obtained variational system are appropriate for the description of adiabatic and non-adiabatic time-dependent q-number c-number interactions. (L.C.) [pt

  1. Analysis of the boron pile measurement of the average neutron yield per fission of 252Cf: (AWBA development program)

    International Nuclear Information System (INIS)

    Ullo, J.J.

    1977-08-01

    The Harwell Boron Pile measurement of the average number of prompt neutrons emitted per fission, ν̄_p, of 252Cf was analyzed in detail by a Monte Carlo method. From the calculated energy dependence of the neutron detection efficiency, a value of ν̄_p = 3.733 ± 0.022 was obtained. This value is 0.76 percent higher than the originally reported value of 3.705 ± 0.015. Possible causes for this increase are discussed. 3 figures, 6 tables

  2. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  3. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
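
    A toy sketch of the simplest level mentioned here, Maxwell constraint counting for a body-bar network: 6 DOF per rigid body, minus the 6 global rigid-body motions, minus at most one DOF per bar. The example network is made up.

    ```python
    def maxwell_count(n_bodies: int, n_bars: int) -> int:
        """Maxwell constraint counting: lower bound on internal degrees of
        freedom of a body-bar network (6 DOF per body, 6 global motions,
        each bar removes at most one DOF)."""
        return max(0, 6 * n_bodies - 6 - n_bars)

    # 10 rigid bodies joined by 40 bars: at least 14 internal DOF remain.
    print(maxwell_count(n_bodies=10, n_bars=40))
    ```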

  4. Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages

    Directory of Open Access Journals (Sweden)

    Maureen Fontaine

    2017-07-01

    Full Text Available The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices (trained-to-familiar, Experiment 1; famous voices, Experiment 2) are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In Experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several “speaker averages,” created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In Experiment 2, the same participants performed the same task on voice samples produced by familiar speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of averaging in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.

  5. Predicting Lotto Numbers

    DEFF Research Database (Denmark)

    Suetens, Sigrid; Galbo-Jørgensen, Claus B.; Tyran, Jean-Robert Karl

    2016-01-01

    We investigate the ‘law of small numbers’ using a data set on lotto gambling that allows us to measure players’ reactions to draws. While most players pick the same set of numbers week after week, we find that those who do change react on average as predicted by the law of small numbers as formalized in recent behavioral theory. In particular, players tend to bet less on numbers that have been drawn in the preceding week, as suggested by the ‘gambler’s fallacy’, and bet more on a number if it was frequently drawn in the recent past, consistent with the ‘hot-hand fallacy’.

  6. Average use of Alcohol and Binge Drinking in Pregnancy: Neuropsychological Effects at Age 5

    DEFF Research Database (Denmark)

    Kilburn, Tina R.

    Objectives The objective of this PhD was to examine the relation between low weekly average maternal alcohol consumption and ‘binge drinking’ (defined as intake of 5 or more drinks per occasion) during pregnancy and information processing time (IPT) in children aged five years. … A method … provided detailed information on maternal alcohol drinking patterns before and during pregnancy and other lifestyle factors. These women were categorized into groups by prenatal average alcohol intake and binge drinking (timing and number of episodes). At the age of five years the children of these women … and number of episodes) and between simple reaction time (SRT) and alcohol intake or binge drinking (timing and number of episodes) during pregnancy. Conclusion This was one of the first studies investigating IPT and prenatal average alcohol intake and binge drinking in early pregnancy. Daily prenatal…

  7. Generalized Bernoulli-Hurwitz numbers and the universal Bernoulli numbers

    International Nuclear Information System (INIS)

    Ônishi, Yoshihiro

    2011-01-01

    The three fundamental properties of the Bernoulli numbers, namely, the von Staudt-Clausen theorem, von Staudt's second theorem, and Kummer's original congruence, are generalized to new numbers that we call generalized Bernoulli-Hurwitz numbers. These are coefficients in the power series expansion of a higher-genus algebraic function with respect to a suitable variable. Our generalization differs strongly from previous works. Indeed, the order of the power of the modulus prime in our Kummer-type congruences is exactly the same as in the trigonometric function case (namely, Kummer's own congruence for the original Bernoulli numbers), and as in the elliptic function case (namely, H. Lang's extension for the Hurwitz numbers). However, in other past results on higher-genus algebraic functions, the modulus was at most half of its value in these classical cases. This contrast is clarified by investigating the analogue of the three properties above for the universal Bernoulli numbers. Bibliography: 34 titles.

  8. Generalized Bernoulli-Hurwitz numbers and the universal Bernoulli numbers

    Energy Technology Data Exchange (ETDEWEB)

    Onishi, Yoshihiro [Faculty of Education Human Sciences, University of Yamanashi, Takeda, Kofu (Japan)

    2011-10-31

    The three fundamental properties of the Bernoulli numbers, namely, the von Staudt-Clausen theorem, von Staudt's second theorem, and Kummer's original congruence, are generalized to new numbers that we call generalized Bernoulli-Hurwitz numbers. These are coefficients in the power series expansion of a higher-genus algebraic function with respect to a suitable variable. Our generalization differs strongly from previous works. Indeed, the order of the power of the modulus prime in our Kummer-type congruences is exactly the same as in the trigonometric function case (namely, Kummer's own congruence for the original Bernoulli numbers), and as in the elliptic function case (namely, H. Lang's extension for the Hurwitz numbers). However, in other past results on higher-genus algebraic functions, the modulus was at most half of its value in these classical cases. This contrast is clarified by investigating the analogue of the three properties above for the universal Bernoulli numbers. Bibliography: 34 titles.
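
    One of the three classical properties being generalized, the von Staudt-Clausen theorem, is easy to check numerically: B_n plus the sum of 1/p over primes p with (p − 1) | n is an integer for even n. A quick sketch using sympy (an illustration of the classical theorem only; the paper's generalized numbers are not computed here):

    ```python
    from sympy import Rational, bernoulli, isprime

    def von_staudt_clausen(n: int):
        """B_n plus the sum of 1/p over primes p with (p - 1) | n; the
        theorem says this is an integer for every even n."""
        correction = sum(Rational(1, p) for p in range(2, n + 2)
                         if isprime(p) and n % (p - 1) == 0)
        return bernoulli(n) + correction

    for n in (2, 4, 6, 8, 10, 12):
        print(n, von_staudt_clausen(n))   # an integer every time (here, always 1)
    ```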

  9. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time-averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  10. Can Higher Education Foster Economic Growth? Chicago Fed Letter. Number 229

    Science.gov (United States)

    Mattoon, Richard H.

    2006-01-01

    Not all observers agree that higher education and economic growth are obvious or necessary complements to each other. The controversy may be exacerbated because of the difficulty of measuring the exact contribution of colleges and universities to economic growth. Recognizing that a model based on local conditions and higher education's response…

  11. The Entrepreneurial Domains of American Higher Education. ASHE Higher Education Report, Volume 34, Number 5

    Science.gov (United States)

    Mars, Matthew M.; Metcalf, Amy Scott

    2009-01-01

    This volume draws on a diverse set of literatures to represent the various ways in which entrepreneurship is understood in and applied to higher education. It provides a platform for debate for those considering applications of entrepreneurial principles to academic research and practices. Using academic entrepreneurship in the United States as…

  12. Number and importance of somatic cells in goat’s milk

    Directory of Open Access Journals (Sweden)

    Lidija Kozačinski

    2001-04-01

    Full Text Available Goat’s milk samples were examined for mastitis using a stable procedure (California-mastitis test). 427 of the examined milk samples (46.82%) had a positive reaction from 1 to 3, while the other 485 samples (53.18%) had a negative reaction to the mastitis test, indicating that no illness of the mammary gland had occurred. The number of somatic cells, counted using a “Fossomatic” counter, was 1.3×10^6/ml on average. Comparing the results of the mastitis-test evaluation (CMT) with the number of somatic cells and the findings of mastitis agents in milk showed that a higher number of somatic cells is not the only indication of goat’s mammary gland illness. The mastitis test is a method that can rule out inflammation of the goat’s mammary gland, but every positive reaction should be confirmed or eliminated by bacteriological examination. Based on the results of this research, it has been shown that the limit for somatic cell number in goat's milk can be over 1 000 000/ml.

  13. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
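
    The averaging model whose parameters R-Average estimates is, in its basic form, a weighted average of stimulus scale values (including an initial impression). A minimal sketch of the prediction rule, in Python rather than the R implementation; the weights and scale values are invented.

    ```python
    def averaging_model(weights, scale_values, w0=1.0, s0=0.0) -> float:
        """Information Integration Theory averaging model: the response is the
        weighted average of scale values, including an initial impression (w0, s0)."""
        num = w0 * s0 + sum(w * s for w, s in zip(weights, scale_values))
        den = w0 + sum(weights)
        return num / den

    # Adding a mildly positive second cue lowers the response: the signature
    # of averaging (as opposed to adding) integration.
    print(averaging_model([2.0], [8.0]))             # ~5.33
    print(averaging_model([2.0, 1.0], [8.0, 4.0]))   # 5.0
    ```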

  14. Reynolds-Averaged Navier-Stokes Solutions to Flat Plate Film Cooling Scenarios

    Science.gov (United States)

    Johnson, Perry L.; Shyam, Vikram; Hah, Chunill

    2011-01-01

    The predictions of several Reynolds-averaged Navier-Stokes solutions for a baseline film cooling geometry are analyzed and compared with experimental data. The Fluent finite volume code was used to perform the computations with the realizable k-epsilon turbulence model. The film hole was angled at 35° to the crossflow with a Reynolds number of 17,400. Multiple length-to-diameter ratios (1.75 and 3.5) as well as momentum flux ratios (0.125 and 0.5) were simulated with various domains, boundary conditions, and grid refinements. The coolant-to-mainstream density ratio was maintained at 2.0 for all scenarios. Computational domain and boundary condition variations show the ability to reduce the computational cost as compared to previous studies. A number of grid refinement and coarsening variations are compared for further insights into the reduction of computational cost. Liberal refinement in the near-hole region is valuable, especially for higher momentum jets that tend to lift off and create a recirculating flow. A lack of proper refinement in the near-hole region can severely diminish the accuracy of the solution, even in the far region. The effects of momentum ratio and hole length-to-diameter ratio are also discussed.

  15. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Directory of Open Access Journals (Sweden)

    Luis C González

    Full Text Available Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  16. Higher-order differencing method with a multigrid approach for the solution of the incompressible flow equations at high Reynolds numbers

    International Nuclear Information System (INIS)

    Tzanos, C.P.

    1992-01-01

    A higher-order differencing method was recently proposed for the convection-diffusion equation, which even with a coarse mesh gives oscillation-free solutions that are far more accurate than those of the upwind scheme. In this paper, the performance of this method is investigated in conjunction with the performance of different iterative solvers for the solution of the Navier-Stokes equations in the vorticity-streamfunction formulation for incompressible flow at high Reynolds numbers. Flow in a square cavity with a moving lid was chosen as a model problem. Solvers that performed well at low Re numbers either failed to converge or had a computationally prohibitive convergence rate at high Re numbers. The additive correction method of Settari and Aziz and an iterative incomplete lower and upper (ILU) solver were used in a multigrid approach that performed well in the whole range of Re numbers considered (from 1000 to 10,000) and for uniform as well as nonuniform grids. At high Re numbers, point or line Gauss-Seidel solvers converged with uniform grids, but failed to converge with nonuniform grids

  17. Weather conditions influence the number of psychiatric emergency room patients

    Science.gov (United States)

    Brandl, Eva Janina; Lett, Tristram A.; Bakanidze, George; Heinz, Andreas; Bermpohl, Felix; Schouler-Ocak, Meryam

    2017-12-01

    The specific impact of weather factors on psychiatric disorders has been investigated in only a few studies, with inconsistent results. We hypothesized that meteorological conditions influence the number of cases presenting in a psychiatric emergency room as a measure of mental health conditions. We analyzed the number of patients consulting the emergency room (ER) of a psychiatric hospital in Berlin, Germany, between January 1, 2008, and December 31, 2014. A total of N = 22,672 cases were treated in the ER over the study period. Meteorological data were obtained from a publicly available database. Due to collinearity among the meteorological variables, we performed a principal component (PC) analysis. Association of PCs with the daily number of patients was analyzed with an autoregressive integrated moving average model. Delayed effects were investigated using Granger causal modeling. The daily number of patients in the ER was significantly higher in spring and summer compared to fall and winter, indicating that weather conditions influence the number of psychiatric patients consulting the emergency room. In particular, our data indicate lower patient numbers during very cold temperatures.

  18. Nuclear fuel management via fuel quality factor averaging

    International Nuclear Information System (INIS)

    Mingle, J.O.

    1978-01-01

    The numerical procedure of prime number averaging is applied to the fuel quality factor distribution of once- and twice-burned fuel in order to evolve a fuel management scheme. The resulting fuel shuffling arrangement produces a near-optimal flat power profile under both beginning-of-life and end-of-life conditions. The procedure is easily applied, requiring only the solution of linear algebraic equations. (author)

  19. Higher first Chern numbers in one-dimensional Bose-Fermi mixtures

    Science.gov (United States)

    Knakkergaard Nielsen, Kristian; Wu, Zhigang; Bruun, G. M.

    2018-02-01

    We propose to use a one-dimensional system consisting of identical fermions in a periodically driven lattice immersed in a Bose gas, to realise topological superfluid phases with Chern numbers larger than 1. The bosons mediate an attractive induced interaction between the fermions, and we derive a simple formula to analyse the topological properties of the resulting pairing. When the coherence length of the bosons is large compared to the lattice spacing and there is a significant next-nearest neighbour hopping for the fermions, the system can realise a superfluid with Chern number ±2. We show that this phase is stable in a large region of the phase diagram as a function of the filling fraction of the fermions and the coherence length of the bosons. Cold atomic gases offer the possibility to realise the proposed system using well-known experimental techniques.

  20. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    Matsukawa, Isamu

    2008-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  1. MCBS Highlights: Ownership and Average Premiums for Medicare Supplementary Insurance Policies

    Science.gov (United States)

    Chulis, George S.; Eppig, Franklin J.; Poisal, John A.

    1995-01-01

    This article describes private supplementary health insurance holdings and average premiums paid by Medicare enrollees. Data were collected as part of the 1992 Medicare Current Beneficiary Survey (MCBS). Data show the number of persons with insurance and average premiums paid by type of insurance held—individually purchased policies, employer-sponsored policies, or both. Distributions are shown for a variety of demographic, socioeconomic, and health status variables. Primary findings include: Seventy-eight percent of Medicare beneficiaries have private supplementary insurance; 25 percent of those with private insurance hold more than one policy. The average premium paid for private insurance in 1992 was $914. PMID:10153473

  2. Using autoregressive integrated moving average (ARIMA) models to predict and monitor the number of beds occupied during a SARS outbreak in a tertiary hospital in Singapore

    Directory of Open Access Journals (Sweden)

    Earnest Arul

    2005-05-01

    Background The main objective of this study is to apply autoregressive integrated moving average (ARIMA) models to make real-time predictions on the number of beds occupied in Tan Tock Seng Hospital, during the recent SARS outbreak. Methods This is a retrospective study design. Hospital admission and occupancy data for isolation beds was collected from Tan Tock Seng hospital for the period 14th March 2003 to 31st May 2003. The main outcome measure was daily number of isolation beds occupied by SARS patients. Among the covariates considered were daily number of people screened, daily number of people admitted (including observation, suspect and probable cases) and days from the most recent significant event discovery. We utilized the following strategy for the analysis. Firstly, we split the outbreak data into two. Data from 14th March to 21st April 2003 was used for model development. We used structural ARIMA models in an attempt to model the number of beds occupied. Estimation is via the maximum likelihood method using the Kalman filter. For the ARIMA model parameters, we considered the simplest parsimonious lowest order model. Results We found that the ARIMA (1,0,3) model was able to describe and predict the number of beds occupied during the SARS outbreak well. The mean absolute percentage error (MAPE) for the training set and validation set were 5.7% and 8.6% respectively, which we found was reasonable for use in the hospital setting. Furthermore, the model also provided three-day forecasts of the number of beds required. Total number of admissions and probable cases admitted on the previous day were also found to be independent prognostic factors of bed occupancy. Conclusion ARIMA models provide useful tools for administrators and clinicians in planning for real-time bed capacity during an outbreak of an infectious disease such as SARS. The model could well be used in planning for bed-capacity during outbreaks of other infectious diseases as well.

  3. Using autoregressive integrated moving average (ARIMA) models to predict and monitor the number of beds occupied during a SARS outbreak in a tertiary hospital in Singapore.

    Science.gov (United States)

    Earnest, Arul; Chen, Mark I; Ng, Donald; Sin, Leo Yee

    2005-05-11

    The main objective of this study is to apply autoregressive integrated moving average (ARIMA) models to make real-time predictions on the number of beds occupied in Tan Tock Seng Hospital, during the recent SARS outbreak. This is a retrospective study design. Hospital admission and occupancy data for isolation beds was collected from Tan Tock Seng hospital for the period 14th March 2003 to 31st May 2003. The main outcome measure was daily number of isolation beds occupied by SARS patients. Among the covariates considered were daily number of people screened, daily number of people admitted (including observation, suspect and probable cases) and days from the most recent significant event discovery. We utilized the following strategy for the analysis. Firstly, we split the outbreak data into two. Data from 14th March to 21st April 2003 was used for model development. We used structural ARIMA models in an attempt to model the number of beds occupied. Estimation is via the maximum likelihood method using the Kalman filter. For the ARIMA model parameters, we considered the simplest parsimonious lowest order model. We found that the ARIMA (1,0,3) model was able to describe and predict the number of beds occupied during the SARS outbreak well. The mean absolute percentage error (MAPE) for the training set and validation set were 5.7% and 8.6% respectively, which we found was reasonable for use in the hospital setting. Furthermore, the model also provided three-day forecasts of the number of beds required. Total number of admissions and probable cases admitted on the previous day were also found to be independent prognostic factors of bed occupancy. ARIMA models provide useful tools for administrators and clinicians in planning for real-time bed capacity during an outbreak of an infectious disease such as SARS. The model could well be used in planning for bed-capacity during outbreaks of other infectious diseases as well.
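
    As a rough illustration of the modelling step both records describe, the sketch below fits an ARIMA(1,0,3) model to a synthetic daily occupancy series and produces a three-day forecast. Only the (1,0,3) order comes from the abstract; the data, dates, and the MAPE computation are illustrative assumptions.

```python
# Minimal ARIMA(1,0,3) sketch with statsmodels: AR(1), no differencing, MA(3).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
days = pd.date_range("2003-03-14", "2003-04-21", freq="D")
# Synthetic stand-in for daily isolation-bed occupancy counts.
occupied = pd.Series(
    np.maximum(1, 30 + np.cumsum(rng.normal(0, 2, len(days)))).round(),
    index=days,
)

fitted = ARIMA(occupied, order=(1, 0, 3)).fit()
print(fitted.forecast(steps=3))                        # three-day-ahead forecast
mape = np.mean(np.abs(fitted.resid / occupied)) * 100  # in-sample MAPE
print(f"in-sample MAPE: {mape:.1f}%")
```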

  4. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
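
    The core idea, an averaging window that adapts to the analyte's migration velocity, can be sketched in a few lines. In the fragment below, the rule that later-migrating (lower-frequency) peaks get wider windows is an assumption for illustration; the published algorithm derives the window size from the measured migration time, and the function name is hypothetical.

```python
# Illustrative adaptive moving average: the smoothing window grows with
# migration time, so fast (high-frequency) early peaks are smoothed with a
# short window and slow (low-frequency) late peaks with a longer one.
import numpy as np

def adaptive_moving_average(signal, sampling_hz, base_window_s=1.0):
    """Smooth each sample with a window scaled by its migration time."""
    smoothed = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        t = max(i / sampling_hz, 1.0)                 # migration time, seconds
        half = int(base_window_s * np.sqrt(t) * sampling_hz / 2)
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        smoothed[i] = signal[lo:hi].mean()
    return smoothed
```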

  5. The relationship between career mobility and occupational expertise. A retrospective study among higher-level Dutch professionals in three age groups

    NARCIS (Netherlands)

    van der Heijden, Beatrice

    2003-01-01

    The present study investigates the relationship between two career-related variables and occupational expertise of higher-level employees from large working organisations in three different age groups. The factors in question are: total number of jobs that have been performed; and the average period

  6. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  7. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  8. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  9. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Science.gov (United States)

    2010-07-01

    ... concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported... percent benzene). i = Individual batch of gasoline produced at the refinery or imported during the applicable averaging period. n = Total number of batches of gasoline produced at the refinery or imported...
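
    The calculation the regulation describes is a batch-volume-weighted average. A minimal sketch, with invented batch volumes and concentrations:

```python
# Volume-weighted average benzene concentration over n batches:
# B_avg = sum(V_i * B_i) / sum(V_i). Values below are illustrative only.
volumes = [10_000, 25_000, 15_000]   # batch volumes V_i (e.g., gallons)
benzene = [0.55, 0.70, 0.62]         # batch benzene concentrations B_i (vol%)

b_avg = sum(v * b for v, b in zip(volumes, benzene)) / sum(volumes)
print(f"average benzene concentration: {b_avg:.3f} vol%")
```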

  10. Enhanced π-Back-Donation as a Way to Higher Coordination Numbers in d10 [M(NHC)n] Complexes: A DFT Study

    NARCIS (Netherlands)

    Nitsch, J.; Wolters, L.P.; Fonseca Guerra, C.; Bickelhaupt, F.M.; Steffen, A.

    2016-01-01

    We aim to understand the electronic factors determining the stability and coordination number of d10 transition-metal complexes bearing N-heterocyclic carbene (NHC) ligands, with a particular emphasis on higher coordinated species. In this DFT study on the formation and bonding of Group 9–12 d10

  11. Enhanced pi-Back-Donation as a Way to Higher Coordination Numbers in d10 [M(NHC)n] Complexes: A DFT Study

    NARCIS (Netherlands)

    Nitsch, J.S.; Wolters, L.P.; Fonseca Guerra, C.; Bickelhaupt, F.M.; Steffen, A.

    2017-01-01

    We aim to understand the electronic factors determining the stability and coordination number of d10 transition-metal complexes bearing N-heterocyclic carbene (NHC) ligands, with a particular emphasis on higher coordinated species. In this DFT study on the formation and bonding of Group 9–12 d10

  12. Effects of gradient encoding and number of signal averages on fractional anisotropy and fiber density index in vivo at 1.5 tesla.

    Science.gov (United States)

    Widjaja, E; Mahmoodabadi, S Z; Rea, D; Moineddin, R; Vidarsson, L; Nilsson, D

    2009-01-01

    Tensor estimation can be improved by increasing the number of gradient directions (NGD) or increasing the number of signal averages (NSA), but at a cost of increased scan time. To evaluate the effects of NGD and NSA on fractional anisotropy (FA) and fiber density index (FDI) in vivo. Ten healthy adults were scanned on a 1.5T system using nine different diffusion tensor sequences. Combinations of 7 NGD, 15 NGD, and 25 NGD with 1 NSA, 2 NSA, and 3 NSA were used, with scan times varying from 2 to 18 min. Regions of interest (ROIs) were placed in the internal capsules, middle cerebellar peduncles, and splenium of the corpus callosum, and FA and FDI were calculated. Analysis of variance was used to assess whether there was a difference in FA and FDI of different combinations of NGD and NSA. There was no significant difference in FA of different combinations of NGD and NSA of the ROIs (P>0.005). There was a significant difference in FDI between 7 NGD/1 NSA and 25 NGD/3 NSA in all three ROIs (P<0.005), whereas there was no significant difference in FDI between 15 NGD/3 NSA, 25 NGD/1 NSA, and 25 NGD/2 NSA and 25 NGD/3 NSA in all ROIs (P>0.005). We have not found any significant difference in FA with varying NGD and NSA in vivo in areas with relatively high anisotropy. However, lower NGD resulted in reduced FDI in vivo. With larger NGD, NSA has less influence on FDI. The optimal sequence among the nine sequences tested with the shortest scan time was 25 NGD/1 NSA.

  13. A Martian PFS average spectrum: Comparison with ISO SWS

    Science.gov (United States)

    Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.

    2005-08-01

    The evaluation of the planetary Fourier spectrometer performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393.]. First, the average conditions of the Mars atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO₂ are compared, from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm⁻¹ for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshooting on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm⁻¹ or better. A large number of narrow features, still to be identified, are discovered.

  14. Higher prices at Canadian gas pumps: international crude oil prices or local market concentration? An empirical investigation

    International Nuclear Information System (INIS)

    Anindya Sen

    2003-01-01

    There is little consensus on whether higher retail gasoline prices in Canada are the result of international crude oil price fluctuations or local market power exercised by large vertically-integrated firms. I find that although both increasing local market concentration and higher average monthly wholesale prices are positively and significantly associated with higher retail prices, wholesale prices are more important than local market concentration. Similarly, crude oil prices are more important than the number of local wholesalers in determining wholesale prices. These results suggest that movements in gasoline prices are largely the result of input price fluctuations rather than local market structure. (author)

  15. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and a standard error of estimate of 0.092. The data were also analyzed for a seasonal dependence and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed, which has a 0.91 coefficient of determination and a 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs

  16. Crystallographic extraction and averaging of data from small image areas

    NARCIS (Netherlands)

    Perkins, GA; Downing, KH; Glaeser, RM

    The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that

  17. Quantitative Developments in Turkish Higher Education since 1933

    Directory of Open Access Journals (Sweden)

    Aslı GÜNAY

    2011-01-01

    In this study, quantitative developments in Turkish higher education during the Republic period, from 1933, when the first university was established, to date, are demonstrated. In parallel with this purpose, first, the establishment dates of universities, the number of universities by year, and the number of universities established during the term of each president of the Turkish Council of Higher Education are listed. With universities having spread to all provinces as of 2008, the distribution of the number of universities across provinces is also given. The development of Turkish higher education over the years is then examined using several quantitative indicators: the number of students in higher education, the total number of academic staff as well as those with a PhD, the number of students per academic staff member, and higher education gross enrollment rates by year. Furthermore, for the largest provinces in Turkey (Ankara, İstanbul and İzmir), the number of universities, number of students in higher education and higher education gross enrollment rates are provided. The distribution of higher education students according to higher education institutions, programs and education types in 2011 is presented, as well as the distribution of academic staff according to higher education institutions and their academic positions. In addition, quantitative data about higher education bachelor and associate degrees (numbers of program types, programs, quotas and placed students) in 2010 is given. Finally, the position of Turkish higher education in the world with respect to the number of academic publications and the change in the number of academic publications per staff member by year are analyzed.

  18. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  19. Introduction to the method of average magnitude analysis and application to natural convection in cavities

    International Nuclear Information System (INIS)

    Lykoudis, P.S.

    1995-01-01

    The method of Average Magnitude Analysis is a mixture of the Integral Method and the Order of Magnitude Analysis. The paper shows how the differential equations of conservation for steady-state, laminar, boundary layer flows are converted to a system of algebraic equations, where the result is a sum of the order of magnitude of each term, multiplied by a weight coefficient. These coefficients are determined from integrals containing the assumed velocity and temperature profiles. The method is illustrated by applying it to the case of drag and heat transfer over an infinite flat plate. It is then applied to the case of natural convection over an infinite flat plate with and without the presence of a horizontal magnetic field, and subsequently to enclosures of aspect ratios of one or higher. The final correlation in this instance yields the Nusselt number as a function of the aspect ratio and the Rayleigh and Prandtl numbers. This correlation is tested against a wide range of small and large values of these parameters. 19 refs., 4 figs

  20. Fractional averaging of repetitive waveforms induced by self-imaging effects

    Science.gov (United States)

    Romero Cortés, Luis; Maram, Reza; Azaña, José

    2015-10-01

    We report the theoretical prediction and experimental observation of averaging of stochastic events with an equivalent result of calculating the arithmetic mean (or sum) of a rational number of realizations of the process under test, not necessarily limited to an integer record of realizations, as discrete statistical theory dictates. This concept is enabled by a passive amplification process, induced by self-imaging (Talbot) effects. In the specific implementation reported here, a combined spectral-temporal Talbot operation is shown to achieve undistorted, lossless repetition-rate division of a periodic train of noisy waveforms by a rational factor, leading to local amplification, and the associated averaging process, by the fractional rate-division factor.

  1. The use of averages and other summation quantities in the testing of evaluated fission product yield and decay data. Applications to ENDF/B(IV)

    International Nuclear Information System (INIS)

    Walker, W.H.

    1976-01-01

    Averages of some fission product properties can be obtained by multiplying the fission product yield for each fission product by the value of the property (e.g. mass, atomic number, mass defect) for that fission product and summing all significant contributions. These averages can be used to test the reliability of the yield set or provide useful data for reactor calculations. The report gives the derivation of these averages and discusses their application using the ENDF/B(IV) fission product library. The following quantities are treated here: the number of fission products per fission, Σ Y_i; the average mass number and the average number of neutrons per fission; the average atomic number of the stable fission products and the average number of β-decays per fission; the average mass defect of the stable fission products and the total energy release per fission; the average decay energy per fission (beta, gamma and anti-neutrino); the average β-decay energy per fission; individual and group-averaged delayed neutron emission; the total yield for each fission product element. Wherever it is meaningful to do so, a sum is subdivided into its light and heavy mass components. The most significant differences between calculated values based on ENDF/B(IV) and measurements are the β and γ decay energies for ²³⁵U thermal fission and delayed neutron yields for other fissile nuclides, most notably ²³⁸U. (author)
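
    The summation itself is simple: weight each fission product's property by its yield and add up the contributions. A minimal sketch, with a three-nuclide table whose numbers are purely illustrative:

```python
# Yield-weighted sum of a fission product property: sum_i Y_i * x_i.
# The tiny yield table below is illustrative, not ENDF/B(IV) data.
yields = {"Sr-90": 0.058, "Cs-137": 0.062, "I-131": 0.029}  # Y_i per fission
mass_number = {"Sr-90": 90, "Cs-137": 137, "I-131": 131}    # property x_i

weighted_sum = sum(yields[n] * mass_number[n] for n in yields)
print(f"contribution of this subset to the average mass number: {weighted_sum:.2f}")
print(f"sum of yields over this subset: {sum(yields.values()):.3f}")
```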

  2. Visualization of Radial Peripapillary Capillaries Using Optical Coherence Tomography Angiography: The Effect of Image Averaging.

    Directory of Open Access Journals (Sweden)

    Shelley Mo

    To assess the effect of image registration and averaging on the visualization and quantification of the radial peripapillary capillary (RPC) network on optical coherence tomography angiography (OCTA). Twenty-two healthy controls were imaged with a commercial OCTA system (AngioVue, Optovue, Inc.). Ten 10x10° scans of the optic disc were obtained, and the most superficial layer (a 50-μm slab extending from the inner limiting membrane) was extracted for analysis. Rigid registration was achieved using ImageJ, and averaging of each 2 to 10 frames was performed in five ~2x2° regions of interest (ROI) located 1° from the optic disc margin. The ROI were automatically skeletonized. Signal-to-noise ratio (SNR), number of endpoints and mean capillary length from the skeleton, capillary density, and mean intercapillary distance (ICD) were measured for the reference and each averaged ROI. Repeated measures analysis of variance was used to assess statistical significance. Three patients with primary open angle glaucoma were also imaged to compare RPC density to controls. Qualitatively, vessels appeared smoother and closer to histologic descriptions with increasing number of averaged frames. Quantitatively, number of endpoints decreased by 51%, and SNR, mean capillary length, capillary density, and ICD increased by 44%, 91%, 11%, and 4.5% from single frame to 10-frame averaged, respectively. The 10-frame averaged images from the glaucomatous eyes revealed decreased density correlating to visual field defects and retinal nerve fiber layer thinning. OCTA image registration and averaging is a viable and accessible method to enhance the visualization of RPCs, with significant improvements in image quality and RPC quantitative parameters. With this technique, we will be able to non-invasively and reliably study RPC involvement in diseases such as glaucoma.

  3. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the two approximate methods can be derived as approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion

  4. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  5. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated with a statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient a_s, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. To obtain a good fit of the beta stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.)

  6. The effect of the cranial bone CT numbers on the brain CT numbers

    Energy Technology Data Exchange (ETDEWEB)

    Fukuda, Hitoshi; Kobayashi, Shotai; Koide, Hiromi; Yamaguchi, Shuhei; Okada, Kazunori; Shimote, Koichi; Tsunematsu, Tokugoro (Shimane Medical Univ., Izumo (Japan))

    1989-06-01

    The effects of the cranial size and the computed tomography (CT) numbers of the cranial bone on those of the brain were studied in 70 subjects, aged from 30 to 94 years. The subjects had no histories of cerebrovascular accidents and showed no abnormalities in the central nervous system upon physical examinations and a CT scan. We measured the average attenuation values (CT numbers) of each elliptical region (165 pixels, 0.39 cm²) at the bilateral thalamus and at twelve areas of the deep white matter. Multiple regression analysis was used to assess the effects of age, cranial size, and cranial bone CT numbers on the brain CT numbers. The effect of the cranial bone CT numbers on the brain CT numbers was statistically significant. The brain CT numbers increased with the increase in the cranial bone CT numbers. There was, however, no significant correlation between brain CT numbers and cranial size. In measuring the brain CT numbers, it is desirable that consideration be given to the cranial bone CT numbers. (author).

  7. Are average and symmetric faces attractive to infants? Discrimination and looking preferences.

    Science.gov (United States)

    Rhodes, Gillian; Geddes, Keren; Jeffery, Linda; Dziurawiec, Suzanne; Clark, Alison

    2002-01-01

    Young infants prefer to look at faces that adults find attractive, suggesting a biological basis for some face preferences. However, the basis for infant preferences is not known. Adults find average and symmetric faces attractive. We examined whether 5-8-month-old infants discriminate between different levels of averageness and symmetry in faces, and whether they prefer to look at faces with higher levels of these traits. Each infant saw 24 pairs of female faces. Each pair consisted of two versions of the same face differing either in averageness (12 pairs) or symmetry (12 pairs). Data from the mothers confirmed that adults preferred the more average and more symmetric versions in each pair. The infants were sensitive to differences in both averageness and symmetry, but showed no looking preference for the more average or more symmetric versions. On the contrary, longest looks were significantly longer for the less average versions, and both longest looks and first looks were marginally longer for the less symmetric versions. Mean looking times were also longer for the less average and less symmetric versions, but those differences were not significant. We suggest that the infant looking behaviour may reflect a novelty preference rather than an aesthetic preference.

  8. Human Umbilical Cord Blood Serum Has Higher Potential in Inducing Proliferation of Fibroblast than Fetal Bovine Serum

    Directory of Open Access Journals (Sweden)

    Ferry Sandra

    2017-09-01

    Background: Cytokines and growth factors were reported to play an important role in stimulating fibroblast proliferation. In in vitro culture, fibroblasts are mostly cultured in medium containing fetal bovine serum (FBS). Human umbilical cord blood (hUCB) has been reported to have low immunogenic properties and potential in wound healing; therefore hUCB serum (hUCBS) could be a potential alternative and was investigated in the current study. Materials and Methods: Five hUCBs were collected from healthy volunteers with normal delivery procedures. hUCB was collected ex utero immediately from the umbilical vein in vacutainers and processed. NIH3T3 cells were cultured in DMEM with 10% FBS or 5-20% hUCBS for 48 hours. Cells were then quantified using the MTT assay. Protein concentrations of FBS and hUCBS were quantified using the Bradford assay. Results: NIH3T3 cell density grown in DMEM with 10% FBS was the lowest; NIH3T3 cell densities increased along with the increment of hUCBS concentrations. MTT results showed that the average number of NIH3T3 cells grown in DMEM with 10% FBS was 6,185±1,243, while the average numbers of NIH3T3 cells grown in DMEM with 5%, 10% and 20% hUCBS were 8,126±628, 9,685±313 and 12,200±304, respectively. The average number of NIH3T3 cells grown in DMEM with 5% hUCBS was significantly higher than the one with 10% FBS (p=0.000). Bradford results showed that the protein concentration of hUCBS was significantly higher than that of FBS (p=0.000). Conclusion: hUCBS induced a higher proliferation rate of NIH3T3 cells than FBS. Hence hUCBS could be suggested as an alternative to FBS in inducing fibroblast proliferation. Keywords: NIH3T3, fibroblast, UCB, serum, FBS, proliferation

  9. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  10. 40 CFR 80.825 - How is the refinery or importer annual average toxics value determined?

    Science.gov (United States)

    2010-07-01

    ... volume of applicable gasoline produced or imported in batch i. Ti = The toxics value of batch i. n = The number of batches of gasoline produced or imported during the averaging period. i = Individual batch of gasoline produced or imported during the averaging period. (b) The calculation specified in paragraph (a...

  11. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
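
    The averaging primitive underlying the measure is a weighted harmonic mean; the sketch below shows only that primitive and how asymmetric weights change the result. It is not the paper's full recursive definition of the GEN.

```python
# Weighted harmonic average: H = sum(w) / sum(w / v). Asymmetry arises
# when the two endpoints of an edge weight their neighbourhoods differently.
def weighted_harmonic_average(values, weights):
    """Weighted harmonic mean of positive values."""
    return sum(weights) / sum(w / v for w, v in zip(weights, values))

# Same "distances", different weightings -> different perceived closeness.
print(weighted_harmonic_average([1.0, 4.0], [3.0, 1.0]))  # ~1.23
print(weighted_harmonic_average([1.0, 4.0], [1.0, 3.0]))  # ~2.29
```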

  12. Top Grafting Performance of Some Cocoa (Theobroma cacao L. Clones as Affected by Scion Budwood Number

    Directory of Open Access Journals (Sweden)

    Fakhrusy Zakariyya

    2015-12-01

    Reducing budwood number is an efficient effort to overcome problems related to limited scion materials. The objective of this research was to study the effect of scion budwood number in some clones on the performance of grafted cocoa seedlings. The research was conducted at Kaliwining Research Station, Indonesian Coffee and Cocoa Research Institute, Jember, Indonesia, at an elevation of 48 m above sea level. The layout for this study used a factorial with 2 factors in a randomized complete block design, with four replications for every treatment. The first factor was clone type, namely MCC 02 and Sulawesi 1, whereas the second factor was number of grafted scion budwoods, namely one, two, and three grafted budwoods. There was no interaction between clone and number of scion budwoods for the variables of shoot length, stem girth, content of total chlorophyll, chlorophyll a, and chlorophyll b. Meanwhile, there was interaction for stomatal conductance and stomatal diffusion resistance. Clone significantly affected photosynthesis and stomatal diffusion resistance, while number of scion budwoods significantly affected the shoot length. Photosynthesis activity of MCC 02 was higher compared to Sulawesi 1. On average, stomatal diffusion resistance of Sulawesi 1 was higher than that of MCC 02. The shoot length of one grafted budwood was higher than that of two or three grafted budwoods.

  13. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have investigated whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 2)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

    The article proves that the so-called “Simpson's paradox” is a special case of the manifestation of the properties of the weighted average. In this case it always comes down to comparing two weighted averages, where the average of the larger variables is less than that of the smaller. The article demonstrates a method for analyzing the relative change of magnitudes of the type S = Σ_{i=1}^{k} x_i y_i, which gives an answer to the question: what is the reason for the weighted average of a few variables with higher values, to ...

  15. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 1)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

    The article proves that the so-called “Simpson's paradox” is a special case of the manifestation of the properties of the weighted average. In this case it always comes down to comparing two weighted averages, where the average of the larger variables is less than that of the smaller. The article demonstrates a method for analyzing the relative change of magnitudes of the type S = Σ_{i=1}^{k} x_i y_i, which gives an answer to the question: what is the reason for the weighted average of a few variables with higher values, to be...
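
    A small numeric example makes the effect both parts describe concrete: every group average in sample A exceeds its counterpart in sample B, yet the overall weighted average of A is lower because the weights differ. The numbers are invented.

```python
# Simpson's paradox via weighted averages: group-wise A beats B, but the
# overall weighted average of B beats A because of the weights.
groups_a = [(0.90, 10), (0.30, 90)]   # (group average, weight) in sample A
groups_b = [(0.85, 80), (0.25, 20)]   # same groups, different weights in B

def weighted_avg(pairs):
    return sum(v * w for v, w in pairs) / sum(w for _, w in pairs)

print(weighted_avg(groups_a))  # 0.36 -- lower overall
print(weighted_avg(groups_b))  # 0.73 -- higher overall
```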

  16. Case and partnership reproduction numbers for a curable sexually transmitted infection.

    Science.gov (United States)

    Heijne, Janneke C M; Herzog, Sereina A; Althaus, Christian L; Low, Nicola; Kretzschmar, Mirjam

    2013-08-21

    Sexually transmitted infections (STIs) are, by definition, transmitted between sexual partners. For curable STIs an infected index case can potentially re-infect the same partner multiple times. Thus, R0, the average number of secondary infections one typical infected individual will produce during his or her infectious period is not necessarily the same as the average number of secondary cases (infected persons). Here we introduce the new concept of the case reproduction number (Rc). In addition, we define the partnership reproduction number (Rp) as the average number of secondary partnerships consisting of two infected individuals one typical infected individual will produce over his or her infectious lifetime. Rp takes into account clearance and re-infection within partnerships, which results in a prolongation of the duration of the infectious period. The two new reproduction numbers were derived for a deterministic pair model with serial monogamous partnerships using infection parameters for Chlamydia trachomatis, an example of a curable STI. We showed that re-infection within partnerships means that curable STIs can be sustained endemically even when the average number of secondary cases a person produces during his or her infectious period is below one. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Copy number of the transposon, Pokey, in rDNA is positively correlated with rDNA copy number in Daphnia obtuse [corrected].

    Directory of Open Access Journals (Sweden)

    Kaitlynn LeRiche

    Full Text Available Pokey is a class II DNA transposon that inserts into 28S ribosomal RNA (rRNA genes and other genomic regions of species in the subgenus, Daphnia. Two divergent lineages, PokeyA and PokeyB have been identified. Recombination between misaligned rRNA genes changes their number and the number of Pokey elements. We used quantitative PCR (qPCR to estimate rRNA gene and Pokey number in isolates from natural populations of Daphnia obtusa, and in clonally-propagated mutation accumulation lines (MAL initiated from a single D. obtusa female. The change in direction and magnitude of Pokey and rRNA gene number did not show a consistent pattern across ∼ 87 generations in the MAL; however, Pokey and rRNA gene number changed in concert. PokeyA and 28S gene number were positively correlated in the isolates from both natural populations and the MAL. PokeyB number was much lower than PokeyA in both MAL and natural population isolates, and showed no correlation with 28S gene number. Preliminary analysis did not detect PokeyB outside rDNA in any isolates and detected only 0 to 4 copies of PokeyA outside rDNA indicating that Pokey may be primarily an rDNA element in D. obtusa. The recombination rate in this species is high and the average size of the rDNA locus is about twice as large as that in other Daphnia species such as D. pulicaria and D. pulex, which may have facilitated expansion of PokeyA to much higher numbers in D. obtusa rDNA than these other species.

  18. Evolutionary Pattern of N-Glycosylation Sequon Numbers in Eukaryotic ABC Protein Superfamilies

    Directory of Open Access Journals (Sweden)

    R. Shyama Prasad Rao

    2010-02-01

    Many proteins contain a large number of NXS/T sequences (where X is any amino acid except proline), which are the potential sites of asparagine (N) linked glycosylation. However, the patterns of occurrence of these N-glycosylation sequons in related proteins or groups of proteins, and their underlying causes, have largely been unexplored. We computed the actual and probabilistic occurrence of NXS/T sequons in ABC protein superfamilies from eight diverse eukaryotic organisms. The ABC proteins contained significantly higher NXS/T sequon numbers compared to the respective genome-wide average, but the sequon density was significantly lower owing to the increase in protein size and decrease in sequon-specific amino acids. However, mammalian ABC proteins have significantly higher sequon density, and both serine- and threonine-containing sequons (NXS and NXT) have been positively selected, in contrast with recent findings of only threonine-specific Darwinian selection of sequons in proteins. The occurrence of sequons was positively correlated with the frequency of sequon-specific amino acids and negatively correlated with proline and the NPS/T sequences. Further, the NPS/T sequences were significantly higher than expected in plant ABC proteins, which have the lowest number of NXS/T sequons. Accordingly, compared to overall proteins, N-glycosylation sequons in ABC protein superfamilies have a distinct pattern of occurrence, and the results are discussed in an evolutionary perspective.

  19. Treatments that generate higher number of adverse drug reactions and their symptoms

    Directory of Open Access Journals (Sweden)

    Lucía Fernández-López

    2015-12-01

    Objectives: Adverse drug reactions (ADRs) are an important cause of morbidity and mortality worldwide and generate high health costs. Therefore, the aims of this study were to determine the treatments which produce more ADRs in the general population and the main symptoms they generate. Methods: An observational, cross-sectional study consisting of a self-rated questionnaire was carried out. 510 patients were asked about the treatments, illnesses and ADRs they had suffered from. Results: 26.7% of patients had suffered from some ADR. Classifying patients according to the type of prescribed treatment and studying the number of ADRs that they had, we obtained significant differences (p ≤ 0.05) for treatments against arthrosis, anemia and nervous disorders (anxiety, depression, insomnia). Moreover, determining absolute frequencies of ADR appearance in each treatment, the highest frequencies were again for drugs against arthrosis (22.6% of patients treated for arthrosis suffered some ADR), anemia (14.28%), nervous disorders (13.44%) and also asthma (16%). Regarding the symptoms produced by ADRs, the most frequent were gastrointestinal (60% of patients who suffered an ADR had gastrointestinal symptoms) and nervous alterations (dizziness, headache, sleep disturbances, etc.) (24.6%). Conclusion: Therapeutic groups which most commonly produce ADRs are those for arthrosis, anemia, nervous disorders and asthma. In addition, the symptoms generated most frequently are gastrointestinal and nervous problems. This is in accordance with the usual side effects of the mentioned treatments. Health professionals should be informed about this, so that they are more alert to the possible emergence of an ADR with these treatments. They could also provide enough information to empower patients so that they can detect ADR events. This would facilitate ADR detection and would avoid serious consequences for both patients' health and health economics.

  20. Association of Air Pollution Exposures With High-Density Lipoprotein Cholesterol and Particle Number: The Multi-Ethnic Study of Atherosclerosis.

    Science.gov (United States)

    Bell, Griffith; Mora, Samia; Greenland, Philip; Tsai, Michael; Gill, Ed; Kaufman, Joel D

    2017-05-01

    The relationship between air pollution and cardiovascular disease may be explained by changes in high-density lipoprotein (HDL). We examined the cross-sectional relationship between air pollution and both HDL cholesterol and HDL particle number in the MESA Air study (Multi-Ethnic Study of Atherosclerosis Air Pollution). Study participants were 6654 white, black, Hispanic, and Chinese men and women aged 45 to 84 years. We estimated individual residential ambient fine particulate pollution exposure (PM2.5) and black carbon concentrations using a fine-scale likelihood-based spatiotemporal model and cohort-specific monitoring. Exposure periods were averaged to 12 months, 3 months, and 2 weeks prior to examination. HDL cholesterol and HDL particle number were measured in the year 2000 using the cholesterol oxidase method and nuclear magnetic resonance spectroscopy, respectively. We used multivariable linear regression to examine the relationship between air pollution exposure and HDL measures. A 0.7×10⁻⁶ m⁻¹ higher exposure to black carbon (a marker of traffic-related pollution) averaged over a 1-year period was significantly associated with a lower HDL cholesterol (-1.68 mg/dL; 95% confidence interval, -2.86 to -0.50) and approached significance with HDL particle number (-0.55 μmol/L; 95% confidence interval, -1.13 to 0.03). In the 3-month averaging time period, a 5 μg/m³ higher PM2.5 was associated with lower HDL particle number (-0.64 μmol/L; 95% confidence interval, -1.01 to -0.26), but not HDL cholesterol (-0.05 mg/dL; 95% confidence interval, -0.82 to 0.71). These data are consistent with the hypothesis that exposure to air pollution is adversely associated with measures of HDL. © 2017 American Heart Association, Inc.

  1. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which were calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods which are the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variant A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
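
    Of the better-performing methods, the Granger-Ramanathan family assigns weights by least squares. The sketch below shows the unconstrained least-squares flavour (regressing observed flows on the member simulations, without an intercept); the A/B/C variants differ in whether an intercept is included and whether the weights are constrained, and the data shapes here are illustrative.

```python
# Least-squares model-averaging weights a la Granger-Ramanathan:
# solve min_w || S w - q_obs ||^2 over the member simulations S.
import numpy as np

def granger_ramanathan_weights(simulations, observed):
    """simulations: (n_timesteps, n_models); observed: (n_timesteps,)."""
    w, *_ = np.linalg.lstsq(simulations, observed, rcond=None)
    return w

rng = np.random.default_rng(0)
truth = rng.gamma(2.0, 50.0, size=365)            # synthetic daily flows
sims = np.column_stack([truth + rng.normal(0, s, 365) for s in (5, 15, 30)])

w = granger_ramanathan_weights(sims, truth)
averaged = sims @ w                               # weighted multi-model hydrograph
print("weights:", np.round(w, 3))
```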

  2. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
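
    The stated identity is easy to verify numerically: with weights w1 and w2 and ratio r = w2/w1, the difference between the two weighted averages of x equals cov(x, r)/mean(r), with the covariance and mean taken under w1. The numbers below are arbitrary.

```python
# Numeric check of: avg_w2(x) - avg_w1(x) = cov_w1(x, r) / mean_w1(r),
# where r = w2 / w1.
import numpy as np

x  = np.array([1.0, 2.0, 3.0, 4.0])
w1 = np.array([4.0, 3.0, 2.0, 1.0])
w2 = np.array([1.0, 2.0, 3.0, 4.0])

avg1, avg2 = np.average(x, weights=w1), np.average(x, weights=w2)

r = w2 / w1
mean_r = np.average(r, weights=w1)
cov_xr = np.average((x - avg1) * (r - mean_r), weights=w1)

assert np.isclose(avg2 - avg1, cov_xr / mean_r)
print(avg2 - avg1, cov_xr / mean_r)   # both 1.0 for these numbers
```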

  3. Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock

    Science.gov (United States)

    Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.

    2001-01-01

    Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, it is observed that the probability distribution P̄(log E) of the wave field E is power-law, with the bar denoting this averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain a power-law spatially-averaged distribution P̄(log E), when the observed power-law variations of the mean and standard deviation of log E with position are combined with the lognormal statistics predicted by SGT at each location.

  4. Nonlinearity management in higher dimensions

    International Nuclear Information System (INIS)

    Kevrekidis, P G; Pelinovsky, D E; Stefanov, A

    2006-01-01

    In the present paper, we revisit nonlinearity management of the time-periodic nonlinear Schroedinger equation and the related averaging procedure. By means of rigorous estimates, we show that the averaged nonlinear Schroedinger equation does not blow up in the higher dimensional case so long as the corresponding solution remains smooth. In particular, we show that the H¹ norm remains bounded, in contrast with the usual blow-up mechanism for the focusing Schroedinger equation. This conclusion agrees with earlier works in the case of strong nonlinearity management but contradicts those in the case of weak nonlinearity management. The apparent discrepancy is explained by the divergence of the averaging procedure in the limit of weak nonlinearity management.

  5. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    Science.gov (United States)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

    We analyze the relations of the parameters in the moving average method to enhance the event detectability of a phase sensitive optical time domain reflectometer (OTDR). If the external events have a unique frequency of vibration, the control parameters of the moving average method should be optimized in order to detect these events efficiently. A phase sensitive OTDR was implemented with a pulsed light source, which is composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light receiving part, which has a photo-detector and a high speed data acquisition system. The moving average method is operated with three control parameters: the total number of raw traces, M; the number of traces averaged per window, N; and the step size of the moving window, n. The raw traces are obtained by the phase sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. The results show that if the event signal has a single frequency, optimal values of N and n exist for detecting the event efficiently.
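    A minimal sketch of the windowing just described, assuming the traces are already stacked into an array (the function name and toy sizes are illustrative, not from the paper):

```python
import numpy as np

def moving_average_traces(raw, N, n):
    """Average N consecutive traces, sliding the window by n traces each step.

    raw: array of shape (M, L) -- M raw OTDR traces of L samples each.
    Returns an array of shape (num_windows, L) of averaged traces.
    """
    M, L = raw.shape
    starts = range(0, M - N + 1, n)
    return np.array([raw[s:s + N].mean(axis=0) for s in starts])

# Example: M=1000 raw traces of 2048 samples; average N=50 at a time, step n=10
rng = np.random.default_rng(1)
traces = rng.normal(0, 1, size=(1000, 2048))
averaged = moving_average_traces(traces, N=50, n=10)
print(averaged.shape)  # (96, 2048)
```

    Larger N suppresses more noise but smears fast events, while n trades temporal resolution against computation, which is why the paper treats their choice as an optimization against the event's vibration frequency.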

  6. A supersymmetric matrix model: II. Exploring higher-fermion-number sectors

    CERN Document Server

    Veneziano, Gabriele

    2006-01-01

    Continuing our previous analysis of a supersymmetric quantum-mechanical matrix model, we study in detail the properties of its sectors with fermion number F=2 and 3. We confirm all previous expectations, modulo the appearance, at strong coupling, of two new bosonic ground states causing a further jump in Witten's index across a previously identified critical 't Hooft coupling λ_c. We are able to elucidate the origin of these new SUSY vacua by considering the λ → ∞ limit and a strong coupling expansion around it.

  7. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy, resulting in distinct phase-energy correlations, or chirps, on each bunch train, independently controlled by the choice of phase offset. The earlier trains are more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56 selected to compress all three bunch trains at the FEL, with higher order terms managed.

  8. The Pulsair 3000 tonometer--how many readings need to be taken to ensure accuracy of the average?

    Science.gov (United States)

    McCaghrey, G E; Matthews, F E

    2001-07-01

    Manufacturers of non-contact tonometers recommend that a number of readings are taken on each eye, and an average obtained. With the Keeler Pulsair 3000 it is advised to take four readings, and average these. This report analyses readings in 100 subjects, and compares the first reading, and the averages of the first two and first three readings with the "machine standard" of the average of four readings. It is found that, in the subject group investigated, the average of three readings is not different from the average of four in 95% of individuals, with equivalence defined as +/- 1.0 mmHg.

  9. Researches on Nutritional Behaviour in Romanian Black and White Primiparous Cows. Interruptions Number and their Duration in the Ration Consumption Time

    Directory of Open Access Journals (Sweden)

    Silvia Erina

    2012-10-01

    Full Text Available The study was carried out on 9 Romanian Black and White primiparous cows. The aim of this study was to determine some aspects of the nutritional behaviour of the cows. During the experiments, the following behaviour aspects were determined: the number of interruptions and their duration during feed consumption. Results showed that the administration order of the forages influenced the number of interruptions, which was 0.74 lower for hay in the fibrous-succulent order (O1). For silage, the interruption number was 0.42 higher in the fibrous-succulent order (O1). Between portion 1 (P1) and portion 3 (P3), a significant difference (p<0.05) was found for interruption duration during silage consumption, in favour of portion P1. Distinctly significant differences (p<0.01) were observed for the interruption number during silage consumption (0.95 higher in P1 than in P3) and for interruption duration (5.96 sec higher in P1 than in P3). Between P2 and P3, a significant difference (p<0.05) was observed for the number of interruptions during silage consumption and for the average interruption duration during beet consumption, in favour of portion P2. Regarding the number of feedings per portion, the differences were always higher in the second feeding (F2) than in the first feeding (F1).

  10. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Full Text Available Before 1998, the definition of wages in Poland did not include the value of social security contributions. The changed definition produced a higher level of reported wages but was expected not to influence take-home pay. Nevertheless, after a short period, the trend of average wages returned to its previous line. This effect is explained in terms of money illusion.

  11. Effect of land cover change on runoff curve number estimation in Iowa, 1832-2001

    Science.gov (United States)

    Wehmeyer, Loren L.; Weirich, Frank H.; Cuffney, Thomas F.

    2011-01-01

    Within the first few decades of European-descended settlers arriving in Iowa, much of the land cover across the state was transformed from prairie and forest to farmland, patches of forest, and urbanized areas. Land cover change over the subsequent 126 years was minor in comparison. Between 1832 and 1859, the General Land Office conducted a survey of the State of Iowa to aid in the disbursement of land. In 1875, an illustrated atlas of the State of Iowa was published, and in 2001, the US Geological Survey National Land Cover Dataset was compiled. Using these three data resources for classifying land cover, the hydrologic impact of the land cover change at three points in time over a period of 132+ years is presented in terms of the effect on the area-weighted average curve number, a term commonly used to predict peak runoff from rainstorms. In the four watersheds studied, the area-weighted average curve number associated with the first 30 years of settlement increased from 61.4 to 77.8. State-wide mapped forest area over this same period decreased 19%. Over the next 126 years, the area-weighted average curve number decreased to 76.7, despite an additional forest area reduction of 60%. This suggests that degradation of aquatic resources (plants, fish, invertebrates, and habitat) arising from hydrologic alteration was likely to have been much higher during the 30 years of initial settlement than in the subsequent period of 126 years in which land cover changes resulted primarily from deforestation and urbanization.
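    The area-weighted average curve number used above is a simple weighted mean over land-cover patches. A sketch with illustrative areas and curve numbers (not the paper's data):

```python
def area_weighted_cn(areas, curve_numbers):
    """Area-weighted average curve number for a watershed.

    areas: area of each land-cover patch (any consistent unit)
    curve_numbers: SCS curve number assigned to each patch
    """
    total = sum(areas)
    return sum(a * cn for a, cn in zip(areas, curve_numbers)) / total

# Hypothetical mixes: a prairie/forest mosaic vs. row-crop dominance
print(area_weighted_cn([70, 30], [58, 70]))         # -> 61.6
print(area_weighted_cn([85, 10, 5], [81, 70, 92]))  # -> 80.45
```

    A higher composite curve number means a larger fraction of rainfall leaves as runoff, which is the mechanism behind the hydrologic alteration the paper infers from the land-cover reconstructions.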

  12. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
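    The core DMA recursion can be sketched compactly. The forgetting-factor update below follows the general scheme DMA is known for; the function and numbers are illustrative, not the authors' implementation, and the dynamic Occam's window step is summarized in a comment.

```python
import numpy as np

def dma_update(weights, pred_likelihoods, alpha=0.99):
    """One step of the Dynamic Model Averaging weight recursion (a minimal sketch).

    weights: current model probabilities, shape (K,)
    pred_likelihoods: each model's one-step predictive density of the new datum
    alpha: forgetting factor; alpha=1 recovers static Bayesian model averaging
    """
    w = weights ** alpha       # prediction step: flatten weights (forgetting)
    w /= w.sum()
    w *= pred_likelihoods      # update step: reward models that predicted well
    return w / w.sum()
    # A dynamic Occam's window would now drop models whose weight falls below
    # a fraction of the largest weight, and re-admit them as weights evolve.

# Three candidate models; the second predicted the latest observation best
w = np.array([1 / 3, 1 / 3, 1 / 3])
w = dma_update(w, np.array([0.2, 0.9, 0.4]))
print(w.round(3))  # weight shifts toward the second model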

  13. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  14. CONSIDERATIONS REGARDING ROMANIAN HIGHER EDUCATION GRADUATES

    Directory of Open Access Journals (Sweden)

    Popovici (Barbulescu Adina

    2012-07-01

    Full Text Available The paper aims at analyzing the dynamics of Romanian higher education graduates in the 2006-2010 period, both for Romania as a whole and by development region. After highlighting the importance of human capital and its education, the paper analyzes the dynamics of Romanian higher education graduates in the targeted period at both levels. The conclusions reveal that, over the analysed period, the numbers of female and male higher education graduates, as well as the total number of higher education graduates, increased continuously at the whole-country level and registered an increasing trend in each of the eight development regions of Romania, with very few exceptions in some years and some regions. Therefore, the Romanian higher education system must correlate the number of graduates with the number of work places in the Romanian economy and take into account the necessities imposed by participation in international competition.

  15. Resident characterization of better-than- and worse-than-average clinical teaching.

    Science.gov (United States)

    Haydar, Bishr; Charnin, Jonathan; Voepel-Lewis, Terri; Baker, Keith

    2014-01-01

    Clinical teachers and trainees share a common view of what constitutes excellent clinical teaching, but associations between these behaviors and high teaching scores have not been established. This study used residents' written feedback to their clinical teachers to identify themes associated with above- or below-average teaching scores. All resident evaluations of their clinical supervisors in a single department were collected from January 1, 2007 until December 31, 2008. A mean teaching score assigned by each resident was calculated. Evaluations that were 20% higher or 15% lower than the resident's mean score were used. A subset of these evaluations was reviewed, generating a list of 28 themes for further study. Two researchers then independently coded the presence or absence of these themes in each evaluation. Interrater reliability of the themes and logistic regression were used to evaluate the predictive associations of the themes with above- or below-average evaluations. Five hundred twenty-seven above-average and 285 below-average evaluations were coded for the presence or absence of 15 positive themes and 13 negative themes, which were divided into four categories: teaching, supervision, interpersonal, and feedback. Thirteen of 15 positive themes correlated with above-average evaluations and nine had high interrater reliability (intraclass correlation coefficient >0.6). Twelve of 13 negative themes correlated with below-average evaluations, and all had high interrater reliability. On the basis of the themes identified from the above- and below-average clinical teaching evaluations submitted by anesthesia residents, the authors developed 13 recommendations for clinical teachers.

  16. Counting loop diagrams: computational complexity of higher-order amplitude evaluation

    International Nuclear Information System (INIS)

    Eijk, E. van; Kleiss, R.; Lazopoulos, A.

    2004-01-01

    We discuss the computational complexity of the perturbative evaluation of scattering amplitudes, both by the Caravaglios-Moretti algorithm and by direct evaluation of the individual diagrams. For a self-interacting scalar theory, we determine the complexity as a function of the number of external legs. We describe a method for obtaining the number of topologically inequivalent Feynman graphs containing closed loops, and apply this to 1- and 2-loop amplitudes. We also compute the number of graphs weighted by their symmetry factors, thus arriving at exact and asymptotic estimates for the average symmetry factor of diagrams. We present results for the asymptotic number of diagrams up to 10 loops, and prove that the average symmetry factor approaches unity as the number of external legs becomes large. (orig.)

  17. A RED modified weighted moving average for soft real-time application

    Directory of Open Access Journals (Sweden)

    Domanśka Joanna

    2014-09-01

    Full Text Available The popularity of TCP/IP has resulted in an increase in the usage of best-effort networks for real-time communication. Much effort has been spent to ensure quality of service for soft real-time traffic over IP networks. The Internet Engineering Task Force has proposed some architecture components, such as Active Queue Management (AQM). The paper investigates the influence of the weighted moving average on packet waiting time reduction for an AQM mechanism: the RED algorithm. The proposed method for computing the average queue length is based on a difference equation (a recursive equation). Depending on the particular optimality criterion, proper parameters of the modified weighted moving average function can be chosen. This change reduces the number of violations of timing constraints and allows better use of this mechanism for soft real-time transmissions. The optimization problem is solved through simulations performed in OMNeT++ and later verified experimentally on a Linux implementation.
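    For reference, the baseline that the paper modifies is the classic RED exponentially weighted moving average of the queue length. A sketch of that baseline recursion (the weight value shown is the conventional default, used here illustratively; the paper's modified difference equation is not reproduced):

```python
def red_average(prev_avg, queue_len, w=0.002):
    """Classic RED moving average of queue length: avg <- (1-w)*avg + w*q."""
    return (1.0 - w) * prev_avg + w * queue_len

avg = 0.0
for q in [0, 5, 12, 30, 28, 7]:  # instantaneous queue-length samples
    avg = red_average(avg, q)
print(round(avg, 4))
```

    The smaller the weight w, the more the average lags the instantaneous queue, which is precisely the latency-versus-stability trade-off the paper tunes for soft real-time traffic.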

  18. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
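    The Granger-Ramanathan weights referred to above are obtained by regressing observations on the member simulations. A minimal sketch with synthetic data (the unconstrained least-squares form shown is one GRA variant; others add a constant or constrain the weights):

```python
import numpy as np

def gra_weights(members, observed):
    """Granger-Ramanathan averaging: least-squares weights for ensemble members.

    members: array (T, K) of member simulations; observed: array (T,).
    """
    w, *_ = np.linalg.lstsq(members, observed, rcond=None)
    return w

rng = np.random.default_rng(2)
obs = rng.normal(10, 3, 200)
sims = np.column_stack([obs + rng.normal(0, s, 200) for s in (1, 2, 4)])
w = gra_weights(sims, obs)
print(w.round(3), w.sum().round(3))  # weights need not sum to one in this variant
```

    Because the weights come from a closed-form least-squares solve, GRA is far cheaper than sampling-based BMA, which is the computational-efficiency point made in the abstract.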

  19. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must take the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.

  20. Classical properties and semiclassical calculations in a spherical nuclear average potential

    International Nuclear Information System (INIS)

    Carbonell, J.; Brut, F.; Arvieu, R.; Touchard, J.

    1984-03-01

    We study the relation between the classical properties of an average nuclear potential and its spectral properties. We have drawn the energy-action surface of this potential and related its properties to the spectral ones in the framework of the EBK semiclassical method. We also describe a method allowing us to obtain the evolution of the spectrum with the mass number.

  1. Estimating marine aerosol particle volume and number from Maritime Aerosol Network data

    Directory of Open Access Journals (Sweden)

    A. M. Sayer

    2012-09-01

    Full Text Available As well as spectral aerosol optical depth (AOD), aerosol composition and concentration (number, volume, or mass) are of interest for a variety of applications. However, remote sensing of these quantities is more difficult than for AOD, as it is more sensitive to assumptions relating to aerosol composition. This study uses spectral AOD measured on Maritime Aerosol Network (MAN) cruises, with the additional constraint of a microphysical model for unpolluted maritime aerosol based on analysis of Aerosol Robotic Network (AERONET) inversions, to estimate these quantities over open ocean. When the MAN data are subset to those likely to be comprised of maritime aerosol, number and volume concentrations obtained are physically reasonable. Attempts to estimate surface concentration from columnar abundance, however, are shown to be limited by uncertainties in vertical distribution. Columnar AOD at 550 nm and aerosol number for unpolluted maritime cases are also compared with Moderate Resolution Imaging Spectroradiometer (MODIS) data, for both the present Collection 5.1 and forthcoming Collection 6. MODIS provides a best-fitting retrieval solution, as well as the average for several different solutions, with different aerosol microphysical models. The "average solution" MODIS dataset agrees more closely with MAN than the "best solution" dataset. Terra tends to retrieve lower aerosol number than MAN, and Aqua higher, linked with differences in the aerosol models commonly chosen. Collection 6 AOD is likely to agree more closely with MAN over open ocean than Collection 5.1. In situations where spectral AOD is measured accurately, and aerosol microphysical properties are reasonably well-constrained, estimates of aerosol number and volume using MAN or similar data would provide for a greater variety of potential comparisons with aerosol properties derived from satellite or chemistry transport model data. However, without accurate AOD data and prior knowledge of

  2. Limit cycles from a cubic reversible system via the third-order averaging method

    Directory of Open Access Journals (Sweden)

    Linping Peng

    2015-04-01

    Full Text Available This article concerns the bifurcation of limit cycles from a cubic integrable and non-Hamiltonian system. By using the averaging theory of the first and second orders, we show that under any small cubic homogeneous perturbation, at most two limit cycles bifurcate from the period annulus of the unperturbed system, and this upper bound is sharp. By using the averaging theory of the third order, we show that two is also the maximal number of limit cycles emerging from the period annulus of the unperturbed system.

  3. Exactly averaged equations for flow and transport in random media

    International Nuclear Information System (INIS)

    Shvidler, Mark; Karasaki, Kenzi

    2001-01-01

    It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood, for example, the convergence behavior and accuracy of truncated perturbation series; furthermore, the calculation of the high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact, general and sufficiently universal forms of averaged equations exist? If the answer is positive, the problem arises of constructing and analyzing these equations. There exist many publications related to these problems, oriented on different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method of finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of the conductivity, porosity and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases: 1. steady-state flow with sources in porous media with random conductivity; 2. transient flow with sources in compressible media with random conductivity and porosity; 3. non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversely isotropic, orthotropic), and we analyze the hypothesis on the structure of the non-local equations in the general case of stochastically homogeneous fields. (author)

  4. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
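    At the core of the linear-versus-logarithmic issue is that averaging log abundances yields a geometric rather than an arithmetic mean. A toy demonstration with idealized, noise-free lognormally distributed abundances (not the paper's system simulator):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical trace-gas volume mixing ratios with large natural variability
true_vmr = rng.lognormal(mean=np.log(5e-6), sigma=0.8, size=10_000)

linear_mean = true_vmr.mean()                 # arithmetic mean of abundances
log_mean = np.exp(np.log(true_vmr).mean())    # mean taken in log space

print(linear_mean, log_mean)  # the log-space mean sits systematically lower
```

    For a lognormal distribution the log-space mean equals the median, which undershoots the arithmetic mean by a factor exp(sigma^2/2); retrieval noise and prior weighting then modify this basic bias in the ways the abstract describes.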

  5. Correlation between Grade Point Averages and Student Evaluation of Teaching Scores: Taking a Closer Look

    Science.gov (United States)

    Griffin, Tyler J.; Hilton, John, III.; Plummer, Kenneth; Barret, Devynne

    2014-01-01

    One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations…

  6. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h ≫ 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging as t → 0) had been excited, tends asymptotically to the Friedmannian one as t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies above the Friedmannian one, and below it at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  7. SACALCCYL, Calculates the average solid angle subtended by a volume; SACALC2B, Calculates the average solid angle for source-detector geometries

    International Nuclear Information System (INIS)

    Whitcher, Ralph

    2007-01-01

    1 - Description of program or function: SACALC2B calculates the average solid angle subtended by a rectangular or circular detector window to a coaxial or non-coaxial rectangular, circular or point source, including where the source and detector planes are not parallel. SACALCCYL calculates the average solid angle subtended by a cylinder to a rectangular or circular source, plane or thick, at any location and orientation. This is needed, for example, in calculating the intrinsic gamma efficiency of a detector such as a GM tube. The program also calculates the number of hits on the cylinder side and on each end, and the average path length through the detector volume (assuming no scattering or absorption). Point sources can be modelled by using a circular source of zero radius. NEA-1688/03: Documentation has been updated (January 2006). 2 - Methods: The program uses a Monte Carlo method to calculate the average solid angle for source-detector geometries that are difficult to analyse by analytical methods. The values of solid angle are calculated to accuracies of typically better than 0.1%. The calculated values from the Monte Carlo method agree closely with those produced by polygon approximation and numerical integration by Gardner and Verghese, and others. 3 - Restrictions on the complexity of the problem: The program models a circular or rectangular detector in planes that are not necessarily coaxial, nor parallel. Point sources can be modelled by using a circular source of zero radius. The sources are assumed to be uniformly distributed. NEA-1688/04: In SACALCCYL, to avoid rounding errors, differences less than 1E-12 are assumed to be zero.
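    The Monte Carlo idea behind such codes is simple: sample isotropic directions from a source point and count the fraction that intersects the detector. A minimal sketch for a coaxial disk detector, assuming a point source (function and geometry are illustrative, not the SACALC source code):

```python
import numpy as np

def mc_solid_angle_disk(source, radius, z_plane, n=200_000, seed=0):
    """Monte Carlo estimate of the solid angle a disk subtends at a point.

    Rays are drawn isotropically from `source`; the solid angle is 4*pi times
    the fraction crossing the disk of given radius in the plane z = z_plane.
    """
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # isotropic unit directions
    dz = z_plane - source[2]
    going = u[:, 2] * dz > 0                        # rays heading toward the plane
    t = dz / u[going, 2]                            # ray parameter at the plane
    hits_xy = source[:2] + u[going, :2] * t[:, None]
    hits = (hits_xy ** 2).sum(axis=1) <= radius ** 2
    return 4 * np.pi * hits.sum() / n

# On-axis point source 10 cm below a disk of radius 5 cm, vs. the analytic value
omega = mc_solid_angle_disk(np.array([0.0, 0.0, 0.0]), 5.0, 10.0)
print(omega, 2 * np.pi * (1 - 10 / np.sqrt(125)))
```

    Averaging such point estimates over positions sampled uniformly across an extended source gives the average solid angle the programs report; the cylinder case differs only in the intersection test.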

  8. Quantum gravity unification via transfinite arithmetic and geometrical averaging

    International Nuclear Information System (INIS)

    El Naschie, M.S.

    2008-01-01

    In E-Infinity theory, we have not only infinitely many dimensions but also infinitely many fundamental forces. However, due to the hierarchical structure of ε(∞) spacetime, we have a finite expectation number for its dimensionality and likewise a finite expectation number for the corresponding interactions. Starting from the preceding fundamental principles and using the experimental findings as well as the theoretical values of the coupling constants of the electroweak and the strong forces, we present an extremely simple averaging procedure for determining the quantum gravity unification coupling constant with and without supersymmetry. The work draws heavily on previous results, in particular a paper by the Slovenian Prof. Marek-Crnjac [Marek-Crnjac L. On the unification of all fundamental forces in a fundamentally fuzzy Cantorian ε(∞) manifold and high energy physics. Chaos, Solitons and Fractals 2004;4:657-68].

  9. Optimized Database of Higher Education Management Using Data Warehouse

    Directory of Open Access Journals (Sweden)

    Spits Warnars

    2010-04-01

    Full Text Available The emergence of new higher education institutions has created competition in the higher education market, and a data warehouse can be used as an effective technology tool for increasing competitiveness in this market. A data warehouse produces reliable reports for the institution's high-level management in a short time, enabling faster and better decision making, not only on increasing the admission number of students, but also on the possibility of finding extraordinary, unconventional funds for the institution. The efficiency comparison was based on the length and number of processed records, total processed bytes, number of processed tables, time to run a query, and records produced on the OLTP database and the data warehouse. Efficiency percentages were measured by the formula for percentage increase, and the average efficiency percentage of 461,801.04% shows that using a data warehouse is more powerful and efficient than using an OLTP database. The data warehouse was modelled as a hypercube, built from the limited set of high-demand reports usually used by high-level management. Fields representing the constructive-merge loading are inserted in every fact and dimension table, and the ETL (Extraction, Transformation and Loading) process is run based on the old and new files.

  10. Assessing Mitochondrial DNA Variation and Copy Number in Lymphocytes of ~2,000 Sardinians Using Tailored Sequencing Analysis Tools.

    Directory of Open Access Journals (Sweden)

    Jun Ding

    2015-07-01

    Full Text Available DNA sequencing identifies common and rare genetic variants for association studies, but studies typically focus on variants in nuclear DNA and ignore the mitochondrial genome. In fact, analyzing variants in mitochondrial DNA (mtDNA) sequences presents special problems, which we resolve here with a general solution for the analysis of mtDNA in next-generation sequencing studies. The new program package comprises (1) an algorithm designed to identify mtDNA variants (i.e., homoplasmies and heteroplasmies), incorporating sequencing error rates at each base in a likelihood calculation and allowing allele fractions at a variant site to differ across individuals; and (2) an estimation of mtDNA copy number in a cell directly from whole-genome sequencing data. We also apply the methods to DNA sequence from lymphocytes of ~2,000 SardiNIA Project participants. As expected, mothers and offspring share all homoplasmies but a lesser proportion of heteroplasmies. Both homoplasmies and heteroplasmies show 5-fold higher transition/transversion ratios than variants in nuclear DNA. Also, heteroplasmy increases with age, though on average only ~1 heteroplasmy reaches the 4% level between ages 20 and 90. In addition, we find that mtDNA copy number averages ~110 copies/lymphocyte and is ~54% heritable, implying substantial genetic regulation of the level of mtDNA. Copy numbers also decrease modestly but significantly with age, and females on average have significantly more copies than males. The mtDNA copy numbers are significantly associated with waist circumference (p-value = 0.0031) and waist-hip ratio (p-value = 2.4×10⁻⁵), but not with body mass index, indicating an association with central fat distribution. To our knowledge, this is the largest population analysis to date of mtDNA dynamics, revealing the age-imposed increase in heteroplasmy, the relatively high heritability of copy number, and the association of copy number with metabolic traits.

  11. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong [...] approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
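    For background, the barycenter-of-quaternions approach discussed above amounts to the sketch below (a hypothetical helper, not the paper's code); the final renormalization is exactly the kind of correction the paper relates to least squares estimation:

```python
import numpy as np

def average_quaternions(quats):
    """Barycenter-style rotation average (a sketch of the common approach).

    quats: array (N, 4) of unit quaternions. Signs are first aligned (q and -q
    encode the same rotation), then the arithmetic mean is renormalized back
    onto the unit sphere, since the plain mean leaves the rotation manifold.
    """
    q = np.asarray(quats, dtype=float).copy()
    q[np.sum(q * q[0], axis=1) < 0] *= -1.0   # align hemispheres
    mean = q.mean(axis=0)
    return mean / np.linalg.norm(mean)

qs = np.array([[1, 0, 0, 0],
               [0.9962, 0, 0, 0.0872],   # ~10 degrees about z
               [-1, 0, 0, 0]])           # same rotation as the first, opposite sign
print(average_quaternions(qs))
```

    The Riemannian alternative instead averages in the tangent space of the rotation group (via logarithm and exponential maps), which avoids the renormalization step for widely spread rotations.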

  12. SU-E-T-614: Plan Averaging for Multi-Criteria Navigation of Step-And-Shoot IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Guo, M; Gao, H [Shanghai Jiao Tong University, Shanghai, Shanghai (China); Craft, D [Massachusetts General Hospital, Cambridge, MA (United States)

    2015-06-15

    Purpose: Step-and-shoot IMRT is fundamentally discrete in nature, while multi-criteria optimization (MCO) is fundamentally continuous: the MCO planning consists of continuous sliding across the Pareto surface (the set of plans which represent the tradeoffs between organ-at-risk doses and target doses). In order to achieve close to real-time dose display during this sliding, it is desired that averaged plans share many of the same apertures as the pre-computed plans, since dose computation for apertures generated on-the-fly would be expensive. We propose a method to ensure that neighboring plans on a Pareto surface share many apertures. Methods: Our baseline step-and-shoot sequencing method is that of K. Engel (a method which minimizes the number of segments while guaranteeing the minimum number of monitor units), which we customize to sequence a set of Pareto optimal plans simultaneously. We also add an error tolerance to study the relationship between the number of shared apertures, the total number of apertures needed, and the quality of the fluence map re-creation. Results: We run tests for a 2D Pareto surface trading off rectum and bladder dose versus target coverage for a clinical prostate case. We find that if we enforce exact fluence map recreation, we are not able to achieve much sharing of apertures across plans. The total number of apertures for all seven beams and 4 plans without sharing is 217. With sharing and a 2% error tolerance, this number is reduced to 158 (73%). Conclusion: With the proposed method, total number of apertures can be decreased by 42% (averaging) with no increment of total MU, when an error tolerance of 5% is allowed. With this large amount of sharing, dose computations for averaged plans which occur during Pareto navigation will be much faster, leading to a real-time what-you-see-is-what-you-get Pareto navigation experience. Minghao Guo and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000

  13. SU-E-T-614: Plan Averaging for Multi-Criteria Navigation of Step-And-Shoot IMRT

    International Nuclear Information System (INIS)

    Guo, M; Gao, H; Craft, D

    2015-01-01

    Purpose: Step-and-shoot IMRT is fundamentally discrete in nature, while multi-criteria optimization (MCO) is fundamentally continuous: the MCO planning consists of continuous sliding across the Pareto surface (the set of plans which represent the tradeoffs between organ-at-risk doses and target doses). In order to achieve close to real-time dose display during this sliding, it is desired that averaged plans share many of the same apertures as the pre-computed plans, since dose computation for apertures generated on-the-fly would be expensive. We propose a method to ensure that neighboring plans on a Pareto surface share many apertures. Methods: Our baseline step-and-shoot sequencing method is that of K. Engel (a method which minimizes the number of segments while guaranteeing the minimum number of monitor units), which we customize to sequence a set of Pareto optimal plans simultaneously. We also add an error tolerance to study the relationship between the number of shared apertures, the total number of apertures needed, and the quality of the fluence map re-creation. Results: We run tests for a 2D Pareto surface trading off rectum and bladder dose versus target coverage for a clinical prostate case. We find that if we enforce exact fluence map recreation, we are not able to achieve much sharing of apertures across plans. The total number of apertures for all seven beams and 4 plans without sharing is 217. With sharing and a 2% error tolerance, this number is reduced to 158 (73%). Conclusion: With the proposed method, total number of apertures can be decreased by 42% (averaging) with no increment of total MU, when an error tolerance of 5% is allowed. With this large amount of sharing, dose computations for averaged plans which occur during Pareto navigation will be much faster, leading to a real-time what-you-see-is-what-you-get Pareto navigation experience. Minghao Guo and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000

  14. Lattice Boltzmann analysis of effect of heating location and Rayleigh number on natural convection in partially heated open ended cavity

    Energy Technology Data Exchange (ETDEWEB)

    Gangawane, Krunal Madhukar; Bharti, Ram Prakash; Kumar, Surendra [Indian Institute of Technology Roorkee, Uttarakhand (India)

    2015-08-15

    Natural convection characteristics of a partially heated open ended square cavity have been investigated numerically by using an in-house computational flow solver based on the passive scalar thermal lattice Boltzmann method (PS-TLBM) with the D2Q9 (two-dimensional and nine-velocity link) lattice model. Part of the left wall of the cavity is heated isothermally at one of three different locations (bottom, middle or top) for a fixed heating length of half the characteristic length (H/2), while the right wall is open to ambient conditions. The other parts of the cavity are thermally insulated. In particular, the influences of the partial heating location and the Rayleigh number (10^3 ≤ Ra ≤ 10^6) in the laminar zone on the local and global natural convection characteristics (such as streamline, vorticity and isotherm contours; centerline variations of velocity and temperature; and local and average Nusselt numbers) have been presented and discussed for a fixed value of the Prandtl number (Pr = 0.71). The streamline patterns show qualitatively similar behaviour for all three heating cases and all Rayleigh numbers, except for the change in the recirculation zone, which is found to be largest for the middle heating case. Isotherm patterns shift towards the partially heated wall on increasing the Rayleigh number and/or shifting the heating location from bottom to top. Both the local and average Nusselt numbers, as anticipated, show a proportional increase with Rayleigh number. The cavity with the middle heating location shows a higher heat transfer rate than the top and bottom heating cases. Finally, the functional dependence of the average Nusselt number on the flow governing parameters is presented as a closure relationship for use in engineering practice and design.

  15. the impact of machine geometries on the average torque of dual ...

    African Journals Online (AJOL)

    HOD

    Keywords: average torque, dual start, machine geometry, optimal value, PM machines. [Remainder of record garbled in extraction; recoverable figure labels: torque (Nm) plotted against stator tooth width/stator slot pitch, number of rotor poles, and back-iron thickness (mm).]

  16. Advanced number theory with applications

    CERN Document Server

    Mollin, Richard A

    2009-01-01

    Algebraic Number Theory and Quadratic Fields Algebraic Number Fields The Gaussian Field Euclidean Quadratic Fields Applications of Unique Factorization Ideals The Arithmetic of Ideals in Quadratic Fields Dedekind Domains Application to Factoring Binary Quadratic Forms Basics Composition and the Form Class Group Applications via Ambiguity Genus Representation Equivalence Modulo p Diophantine Approximation Algebraic and Transcendental Numbers Transcendence Minkowski's Convex Body Theorem Arithmetic Functions The Euler-Maclaurin Summation Formula Average Orders The Riemann zeta-functionIntroduction to p-Adic AnalysisSolving Modulo pn Introduction to Valuations Non-Archimedean vs. Archimedean Valuations Representation of p-Adic NumbersDirichlet: Characters, Density, and Primes in Progression Dirichlet Characters Dirichlet's L-Function and Theorem Dirichlet DensityApplications to Diophantine Equations Lucas-Lehmer Theory Generalized Ramanujan-Nagell Equations Bachet's Equation The Fermat Equation Catalan and the A...

  17. Assimilation of time-averaged observations in a quasi-geostrophic atmospheric jet model

    Energy Technology Data Exchange (ETDEWEB)

    Huntley, Helga S. [University of Washington, Department of Applied Mathematics, Seattle, WA (United States); University of Delaware, School of Marine Science and Policy, Newark, DE (United States); Hakim, Gregory J. [University of Washington, Department of Atmospheric Sciences, Seattle, WA (United States)

    2010-11-15

    The problem of reconstructing past climates from a sparse network of noisy time-averaged observations is considered with a novel ensemble Kalman filter approach. Results for a sparse network of 100 idealized observations for a quasi-geostrophic model of a jet interacting with a mountain reveal that, for a wide range of observation averaging times, analysis errors are reduced by about 50% relative to the control case without assimilation. Results are robust to changes to observational error, the number of observations, and an imperfect model. Specifically, analysis errors are reduced relative to the control case for observations having errors up to three times the climatological variance for a fixed 100-station network, and for networks consisting of ten or more stations when observational errors are fixed at one-third the climatological variance. In the limit of small numbers of observations, station location becomes critically important, motivating an optimally determined network. A network of fifteen optimally determined observations reduces analysis errors by 30% relative to the control, as compared to 50% for a randomly chosen network of 100 observations. (orig.)
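    The assimilation step behind these results is an ensemble Kalman filter update in which the observation operator returns a time-averaged quantity. A generic stochastic-EnKF sketch follows (not the authors' code; the trivial observation operator here stands in for the model's time averaging over the observation window):

```python
import numpy as np

def enkf_update(ensemble, y_obs, obs_op, obs_var, rng):
    """Stochastic ensemble Kalman filter update (a generic sketch).

    ensemble: (N, d) state ensemble; y_obs: (m,) time-averaged observations;
    obs_op: maps a state to its simulated time-averaged observation.
    """
    N = ensemble.shape[0]
    hx = np.array([obs_op(x) for x in ensemble])   # (N, m) simulated obs
    x_mean, hx_mean = ensemble.mean(0), hx.mean(0)
    Xp, Hp = ensemble - x_mean, hx - hx_mean       # ensemble anomalies
    pxh = Xp.T @ Hp / (N - 1)                      # state-obs cross covariance
    phh = Hp.T @ Hp / (N - 1) + obs_var * np.eye(len(y_obs))
    K = pxh @ np.linalg.inv(phh)                   # Kalman gain
    perturbed = y_obs + rng.normal(0, np.sqrt(obs_var), size=hx.shape)
    return ensemble + (perturbed - hx) @ K.T

rng = np.random.default_rng(5)
ens = rng.normal(0, 1, size=(50, 8))               # 50 members, 8 state variables
updated = enkf_update(ens, np.array([0.3, -0.1]), lambda x: x[:2], 0.1, rng)
print(updated.mean(0)[:2])                         # posterior mean moves toward y_obs
```

    Longer observation averaging times smooth the simulated observations, weakening the ensemble correlations the gain is built from, which is why performance degrading gracefully over a wide range of averaging times is a notable result.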

  18. Review the number of accidents in Tehran over a two-year period and prediction of the number of events based on a time-series model

    Science.gov (United States)

    Teymuri, Ghulam Heidar; Sadeghian, Marzieh; Kangavari, Mehdi; Asghari, Mehdi; Madrese, Elham; Abbasinia, Marzieh; Ahmadnezhad, Iman; Gholizadeh, Yavar

    2013-01-01

    Background: One of the significant dangers that threaten people's lives is the increased risk of accidents. Annually, more than 1.3 million people die around the world as a result of accidents, and it has been estimated that approximately 300 deaths occur daily due to traffic accidents in the world, with more than 50% of that number being people who were not even passengers in the cars. The aim of this study was to examine traffic accidents in Tehran and forecast the number of future accidents using a time-series model. Methods: The study was a cross-sectional study that was conducted in 2011. The sample population was all traffic accidents that caused death and physical injuries in Tehran in 2010 and 2011, as registered in the Tehran Emergency ward. The present study used Minitab 15 software to provide a description of accidents in Tehran for the specified time period as well as those that occurred during April 2012. Results: The results indicated that the average number of daily traffic accidents in Tehran in 2010 was 187 with a standard deviation of 83.6. In 2011, there was an average of 180 daily traffic accidents with a standard deviation of 39.5. One-way analysis of variance indicated that the average number of accidents in the city differed between months of the year (p < 0.05); most accidents occurred in March, July, August, and September. Thus, more accidents occurred in the summer than in the other seasons. The number of accidents was predicted for April 2012 based on an autoregressive moving average (ARMA) model. The number of accidents displayed a seasonal trend. The prediction of the number of accidents in the city during April of 2012 indicated that a total of 4,459 accidents would occur, with a mean of 149 accidents per day during this period. Conclusion: The number of accidents in Tehran displayed a seasonal trend, and the number of accidents was different for different seasons of the year. PMID:26120405
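    An ARMA forecast of daily counts like the one above can be sketched in a few lines. The data below are hypothetical stand-ins for the registry series, and the (1,1) order is chosen only for illustration:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical daily accident counts with a yearly cycle; the study's
# registry data are not reproduced here.
rng = np.random.default_rng(4)
days = np.arange(730)
counts = 180 + 20 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 15, 730)

model = ARIMA(counts, order=(1, 0, 1))   # ARMA(1,1) on the level of the series
fit = model.fit()
print(fit.forecast(steps=30).mean())     # average predicted daily count, next 30 days
```

    With a pronounced seasonal pattern, a seasonal specification (e.g., SARIMA) would normally be preferred; the plain ARMA form is shown because that is the model class the abstract names.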

  19. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  20. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  1. Fourier analysis of spherically averaged momentum densities for some gaseous molecules

    International Nuclear Information System (INIS)

    Tossel, J.A.; Moore, J.H.

    1981-01-01

    The spherically averaged autocorrelation function, B(r), of the position-space wavefunction ψ(r) is calculated by numerical Fourier transformation from spherically averaged momentum densities, ρ(p), obtained from either theoretical wavefunctions or (e,2e) electron-impact ionization experiments. Inspection of B(r) for the π molecular orbitals of C4H6 established that autocorrelation function differences, ΔB(r), can be qualitatively related to bond lengths and numbers of bonding interactions. Differences between B(r) functions obtained from different approximate wavefunctions for a given orbital can be qualitatively understood in terms of wavefunction-difference maps, Δψ(r), for these orbitals. Comparison of the B(r) function for the 1a_u orbital of C4H6 obtained from (e,2e) momentum densities with that obtained from an ab initio SCF MO wavefunction shows differences consistent with expected correlation effects. Thus, B(r) appears to be a useful quantity for relating spherically averaged momentum distributions to position-space wavefunction differences. (orig.)

  2. The Chicken Soup Effect: The Role of Recreation and Intramural Participation in Boosting Freshman Grade Point Average

    Science.gov (United States)

    Gibbison, Godfrey A.; Henry, Tracyann L.; Perkins-Brown, Jayne

    2011-01-01

    Freshman grade point average, in particular first semester grade point average, is an important predictor of survival and eventual student success in college. As many institutions of higher learning are searching for ways to improve student success, one would hope that policies geared towards the success of freshmen have long term benefits…

  3. Artificial neural network optimisation for monthly average daily global solar radiation prediction

    International Nuclear Information System (INIS)

    Alsina, Emanuel Federico; Bortolini, Marco; Gamberi, Mauro; Regattieri, Alberto

    2016-01-01

    Highlights: • Prediction of the monthly average daily global solar radiation over Italy. • Multi-location Artificial Neural Network (ANN) model: 45 locations considered. • Optimal ANN configuration with 7 input climatologic/geographical parameters. • Statistical indicators: MAPE, NRMSE, MPBE. - Abstract: The availability of reliable climatologic data is essential for multiple purposes in a wide set of anthropic activities and operative sectors. Frequently, direct measures present spatial and temporal gaps, so that predictive approaches become of interest. This paper focuses on the prediction of the Monthly Average Daily Global Solar Radiation (MADGSR) over Italy using Artificial Neural Networks (ANNs). Data from 45 locations compose the multi-location ANN training and testing sets. For each location, 13 input parameters are considered, including the geographical coordinates and the monthly values of the most frequently adopted climatologic parameters. A subset of 17 locations is used for ANN training, while the testing step is against data from the remaining 28 locations. Furthermore, the Automatic Relevance Determination (ARD) method is used to identify the most relevant inputs for accurate MADGSR prediction. The best ANN configuration includes only 7 parameters, i.e. Top of Atmosphere (TOA) radiation, day length, number of rainy days and average rainfall, latitude and altitude. The correlation performances, expressed through statistical indicators such as the Mean Absolute Percentage Error (MAPE), range between 1.67% and 4.25%, depending on the number and type of the chosen inputs, representing a good solution compared to the current standards.

  4. Measuring average angular velocity with a smartphone magnetic field sensor

    Science.gov (United States)

    Pili, Unofre; Violanda, Renante

    2018-02-01

    The angular velocity of a spinning object is, by standard, measured using a device called a tachometer. However, by directly using it in a classroom setting, the activity is likely to appear less instructive and less engaging. Indeed, some alternative classroom-suitable methods for measuring angular velocity have been presented. In this paper, we present a further alternative that is smartphone-based, making use of the real-time magnetic field (simply called B-field in what follows) data gathering capability of the B-field sensor of the smartphone device as the timer for measuring the average rotational period and average angular velocity. The in-built B-field sensor in smartphones has already found a number of uses in undergraduate experimental physics. For instance, in elementary electrodynamics, it has been used to explore the well-known Biot-Savart law and in a measurement of the permeability of air.
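    The measurement idea reduces to timing the periodic B-field peaks produced as a magnet on the rotating object sweeps past the phone: ω = 2π/T̄, where T̄ is the average peak-to-peak interval. A sketch with synthetic sensor data (threshold and pulse shape are illustrative, not from the paper):

```python
import numpy as np

def average_angular_velocity(t, b_mag, threshold):
    """Average angular velocity from magnetometer data (a minimal sketch).

    Each rising crossing of `threshold` marks one revolution of the magnet.
    t: sample times (s); b_mag: magnetic field magnitude (uT).
    """
    above = b_mag > threshold
    rising = np.flatnonzero(~above[:-1] & above[1:]) + 1
    periods = np.diff(t[rising])        # one period per revolution
    return 2 * np.pi / periods.mean()

# Synthetic 2 Hz rotation sampled at 100 Hz: one field pulse every 0.5 s
t = np.arange(0, 5, 0.01)
b = 40 + 30 * np.exp(-((t % 0.5) - 0.25) ** 2 / 0.001)
print(average_angular_velocity(t, b, threshold=55))   # ~4*pi rad/s
```

    Averaging over many revolutions is what makes the phone's modest sampling rate sufficient, since timing jitter on individual peaks cancels in the mean period.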

  5. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    Full Text Available BACKGROUND: The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE: The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors' experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS: Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS: The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis of lower abilities of children with intelligence below average in terms of concentration, work pace, efficiency and perception.

  6. Numbers and other math ideas come alive

    CERN Document Server

    Pappas, Theoni

    2012-01-01

    Most people don't think about numbers, or take them for granted. For the average person numbers are looked upon as cold, clinical, inanimate objects. Math ideas are viewed as something to get a job done or a problem solved. Get ready for a big surprise with Numbers and Other Math Ideas Come Alive. Pappas explores mathematical ideas by looking behind the scenes of what numbers, points, lines, and other concepts are saying and thinking. In each story, properties and characteristics of math ideas are entertainingly uncovered and explained through the dialogues and actions of its math

  7. [Clinical analysis of percutaneous nephrolithotomy for staghorn calculi with different stone branch number].

    Science.gov (United States)

    Qi, Shi-yong; Zhang, Zhi-hong; Zhang, Chang-wen; Liu, Ran-lu; Shi, Qi-duo; Xu, Yong

    2013-12-01

    To investigate the impact of staghorn stone branch number on the outcomes of percutaneous nephrolithotomy (PNL). From January 2009 to January 2013, 371 patients with staghorn stones referred to our hospital for PNL were considered for this study. All calculi were imaged with CT 3-dimensional reconstruction (3-DR), and the computerized database of the patients was reviewed. Patients with congenital renal anomalies, such as horseshoe and ectopic kidneys, were excluded, as were borderline stones that branched into one major calyx only. From the 3-DR images, the number of stone branches extending into minor renal calices was recorded. With 3 as the branch breakdown between groups, the patients were divided into four groups. The number of percutaneous tracts, operative time, staged PNL, intra-operative blood loss, complications, stone clearance rate, and postoperative hospital days were compared. The 371 patients (386 renal units) underwent PNL successfully, including 144 single-tract PNL, 242 multi-tract PNL, and 97 staged PNL. The average operative time was (100 ± 50) minutes; the average intra-operative blood loss was (83 ± 67) ml. The stone clearance rates were 61.7% (3 days) and 79.5% (3 months). The postoperative hospital stay was (6.9 ± 3.4) days. A significantly higher ratio of multi-tract PNL (χ(2) = 212.220, P < 0.001) and staged PNL (χ(2) = 49.679, P < 0.001) was found for calculi with stone branch number ≥ 5. There was no statistically meaningful difference among the 4 groups based on the Clavien complication system (P = 0.460). The possibility of multi-tract and staged PNL, a lower rate of stone clearance, and a longer postoperative hospital stay increase for staghorn calculi with stone branch number more than 5.

  8. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
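
    A compressed sketch of one DMA update step with a dynamic Occam's window, assuming the usual forgetting-factor recursion; the function name and thresholds are illustrative, and the paper's full method (which also dynamically re-expands the retained model set) is not reproduced here.

        import numpy as np

        def dma_step(post, pred_lik, alpha=0.99, occam_ratio=0.01):
            # Forgetting step: flatten yesterday's model probabilities slightly
            prior = np.asarray(post, dtype=float) ** alpha
            prior /= prior.sum()
            # Bayes update with each model's predictive likelihood of y_t
            post_new = prior * np.asarray(pred_lik, dtype=float)
            post_new /= post_new.sum()
            # Dynamic Occam's window: drop models far below the current best
            keep = post_new >= occam_ratio * post_new.max()
            post_new = np.where(keep, post_new, 0.0)
            return post_new / post_new.sum(), keep

        probs = np.full(4, 0.25)                        # four candidate models
        probs, kept = dma_step(probs, [0.8, 0.3, 0.05, 0.0004])
        print(probs.round(3), kept)                     # the weakest model is dropped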

  9. Dose calculation for photon-emitting brachytherapy sources with average energy higher than 50 keV: report of the AAPM and ESTRO.

    Science.gov (United States)

    Perez-Calatayud, Jose; Ballester, Facundo; Das, Rupak K; Dewerd, Larry A; Ibbott, Geoffrey S; Meigooni, Ali S; Ouhib, Zoubir; Rivard, Mark J; Sloboda, Ron S; Williamson, Jeffrey F

    2012-05-01

    Recommendations of the American Association of Physicists in Medicine (AAPM) and the European Society for Radiotherapy and Oncology (ESTRO) on dose calculations for high-energy (average energy higher than 50 keV) photon-emitting brachytherapy sources are presented, including the physical characteristics of specific (192)Ir, (137)Cs, and (60)Co source models. This report has been prepared by the High Energy Brachytherapy Source Dosimetry (HEBD) Working Group. This report includes considerations in the application of the TG-43U1 formalism to high-energy photon-emitting sources with particular attention to phantom size effects, interpolation accuracy dependence on dose calculation grid size, and dosimetry parameter dependence on source active length. Consensus datasets for commercially available high-energy photon sources are provided, along with recommended methods for evaluating these datasets. Recommendations on dosimetry characterization methods, mainly using experimental procedures and Monte Carlo, are established and discussed. Also included are methodological recommendations on detector choice, detector energy response characterization and phantom materials, and measurement specification methodology. Uncertainty analyses are discussed and recommendations for high-energy sources without consensus datasets are given. Recommended consensus datasets for high-energy sources have been derived for sources that were commercially available as of January 2010. Data are presented according to the AAPM TG-43U1 formalism, with modified interpolation and extrapolation techniques of the AAPM TG-43U1S1 report for the 2D anisotropy function and radial dose function.

  10. Evolution of the Orszag-Tang vortex system in a compressible medium. I - Initial average subsonic flow

    Science.gov (United States)

    Dahlburg, R. B.; Picone, J. M.

    1989-01-01

    The results of fully compressible, Fourier collocation, numerical simulations of the Orszag-Tang vortex system are presented. The initial conditions for this system consist of a nonrandom, periodic field in which the magnetic and velocity fields contain X points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial, average Mach number of the flow. In these numerical simulations, this initial Mach number is varied from 0.2 to 0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. It is found that compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as the mass density and the nonsolenoidal flow field. These effects include (1) a retardation of growth of correlation between the magnetic field and the velocity field, (2) the emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.
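
    For orientation, the classic incompressible Orszag-Tang fields take the form sketched below. This is the textbook normalization, assumed rather than taken from the record; the paper's compressible runs additionally superpose a uniform background pressure set by the chosen average Mach number.

        import numpy as np

        n = 256
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        X, Y = np.meshgrid(x, x, indexing="ij")

        vx, vy = -np.sin(Y), np.sin(X)         # solenoidal velocity field
        bx, by = -np.sin(Y), np.sin(2.0 * X)   # magnetic field containing X points

        # Both fields are divergence-free by construction; the differing modal
        # structure of v and B along x is what drives the vortex dynamics.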

  11. Accessibility of higher education: the right to higher education in comparative approach

    OpenAIRE

    Pūraitė, Aurelija

    2011-01-01

    At present there is an unprecedented demand for and a great diversification in higher education, as well as an increased awareness of its vital importance for socio-cultural and economic development. The complexity of the right to education is especially at issue while discussing the right to higher education, which on a national level is non-compulsory, even though the number of people who have acquired higher education during the second half of the twentieth century has tripled. Therefore t...

  12. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  13. NSSEFF Designing New Higher Temperature Superconductors

    Science.gov (United States)

    2017-04-13

    AFRL-AFOSR-VA-TR-2017-0083: NSSEFF - Designing New Higher Temperature Superconductors. Meigan Aronson, The Research Foundation of State University of New York. Grant number FA9550-10-1-0191. Subject terms: temperature, superconductor. Abstract fragment: "...materials, identifying the most promising candidates."

  14. Average thermal stress in the Al+SiC composite due to its manufacturing process

    International Nuclear Information System (INIS)

    Miranda, Carlos A.J.; Libardi, Rosani M.P.; Marcelino, Sergio; Boari, Zoroastro M.

    2013-01-01

    The numerical analysis framework used to obtain the average thermal stress in the Al+SiC composite due to its manufacturing process is presented along with the obtained results. The aluminum and SiC powders are mixed at elevated temperature, while the composite is used at room temperature. A thermal stress state arises in the composite due to the different thermal expansion coefficients of the materials. Because of the particle size and the randomness of the SiC distribution, several sets of models were analyzed and a statistical procedure was used to evaluate the average stress state in the composite. In each model the particles' position, shape, and size are randomly generated considering a volumetric ratio (VR) between 20% and 25%, close to an actual composite. The obtained stress field is represented by a certain number of iso-stress curves, each one weighted by the area it represents. The influence of the following factors was systematically investigated: (a) the material behavior (linear vs. non-linear); (b) the carbide particle shape (circular vs. quadrilateral); (c) the number of iso-stress curves considered in each analysis; and (d) the model size (the number of particles). Each analyzed condition produced conclusions that guided the next step. Considering a confidence level of 95%, the average thermal stress value in the studied composite (20% ≤ VR ≤ 25%) is 175 MPa with a standard deviation of 10 MPa. Depending on its usage, this value should be taken into account when evaluating the material strength. (author)

  15. Divisibility patterns of natural numbers on a complex network.

    Science.gov (United States)

    Shekatkar, Snehal M; Bhagwat, Chandrasheel; Ambika, G

    2015-09-16

    Investigation of divisibility properties of natural numbers is one of the most important themes in the theory of numbers. Various tools have been developed over the centuries to discover and study the various patterns in the sequence of natural numbers in the context of divisibility. In the present paper, we study the divisibility of natural numbers using the framework of a growing complex network. In particular, using tools from the field of statistical inference, we show that the network is scale-free but has a non-stationary degree distribution. Along with this, we report a new kind of similarity pattern for the local clustering, which we call "stretching similarity", in this network. We also show that the various characteristics like average degree, global clustering coefficient and assortativity coefficient of the network vary smoothly with the size of the network. Using analytical arguments we estimate the asymptotic behavior of global clustering and average degree which is validated using numerical analysis.
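
    A small sketch of the basic construction, as an assumption of the simplest undirected variant (the paper studies a growing network, which this does not model): nodes are the integers 2..N and an edge joins a and b whenever a divides b.

        import networkx as nx

        def divisibility_network(N):
            G = nx.Graph()
            G.add_nodes_from(range(2, N + 1))
            for a in range(2, N // 2 + 1):
                for b in range(2 * a, N + 1, a):   # all proper multiples of a
                    G.add_edge(a, b)
            return G

        # Average degree and global clustering for growing network sizes
        for N in (100, 500, 2000):
            G = divisibility_network(N)
            avg_deg = 2 * G.number_of_edges() / G.number_of_nodes()
            print(N, round(avg_deg, 2), round(nx.average_clustering(G), 3))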

  16. Fighting for the Profession: A History of AFT Higher Education. Item Number 36-0701

    Science.gov (United States)

    American Federation of Teachers, 2003

    2003-01-01

    This document provides a history of the relationship between higher education faculty and the American Federation of Teachers (AFT). Highlights include the first AFT higher education local formed in 1918, the role played by the union in the expansion of the G.I. Bill following World War II, increased activism in the 1950s and 1960s to win…

  17. Fast integration using quasi-random numbers

    International Nuclear Information System (INIS)

    Bossert, J.; Feindt, M.; Kerzel, U.

    2006-01-01

    Quasi-random numbers are specially constructed series of numbers optimised to evenly sample a given s-dimensional volume. Using quasi-random numbers in numerical integration converges faster with a higher accuracy compared to the case of pseudo-random numbers. The basic properties of quasi-random numbers are introduced, various generators are discussed and the achieved gain is illustrated by examples

  18. Fast integration using quasi-random numbers

    Science.gov (United States)

    Bossert, J.; Feindt, M.; Kerzel, U.

    2006-04-01

    Quasi-random numbers are specially constructed series of numbers optimised to evenly sample a given s-dimensional volume. Using quasi-random numbers in numerical integration converges faster with a higher accuracy compared to the case of pseudo-random numbers. The basic properties of quasi-random numbers are introduced, various generators are discussed and the achieved gain is illustrated by examples.
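
    As an illustration of the convergence gain described in the two records above, here is a minimal comparison, assuming scipy's quasi-Monte Carlo module, of a plain Monte Carlo estimate against a scrambled Sobol estimate of a 5-dimensional integral with known value.

        import numpy as np
        from scipy.stats import qmc

        dim, n = 5, 2 ** 12
        exact = 0.5 ** dim                     # integral of prod(x_i) over [0, 1]^5

        # Pseudo-random (plain Monte Carlo) estimate
        x_mc = np.random.default_rng(0).random((n, dim))
        est_mc = np.prod(x_mc, axis=1).mean()

        # Quasi-random (scrambled Sobol) estimate on the same sample budget
        x_qmc = qmc.Sobol(d=dim, scramble=True, seed=0).random(n)
        est_qmc = np.prod(x_qmc, axis=1).mean()

        # The Sobol error is typically far smaller at equal n
        print(abs(est_mc - exact), abs(est_qmc - exact))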

  19. Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function

    Directory of Open Access Journals (Sweden)

    Christofer Toumazou

    2013-07-01

    Full Text Available A novel noise filtering algorithm based on the averaging Intrinsic Mode Function (aIMF), a derivative of Empirical Mode Decomposition (EMD), is proposed to remove white Gaussian noise from foreign currency exchange rates, which are nonlinear nonstationary time series signals. Noise patterns with different amplitudes and frequencies were randomly mixed into the five exchange rates. A number of filters, namely the Extended Kalman Filter (EKF), Wavelet Transform (WT), Particle Filter (PF) and the averaging Intrinsic Mode Function (aIMF) algorithm, were used to compare filtering and smoothing performance. The aIMF algorithm demonstrated the best noise reduction performance among these filters.

  20. Mean link versus average plaquette tadpoles in lattice NRQCD

    Science.gov (United States)

    Shakespeare, Norman H.; Trottier, Howard D.

    1999-03-01

    We compare mean-link and average plaquette tadpole renormalization schemes in the context of the quarkonium hyperfine splittings in lattice NRQCD. Simulations are done for the three quarkonium systems cc-bar, bc-bar, and bb-bar. The hyperfine splittings are computed both at leading and at next-to-leading order in the relativistic expansion. Results are obtained at a large number of lattice spacings. A number of features emerge, all of which favor tadpole renormalization using mean links. This includes much better scaling of the hyperfine splittings in the three quarkonium systems. We also find that relativistic corrections to the spin splittings are smaller with mean-link tadpoles, particularly for the cc-bar and bc-bar systems. We also see signs of a breakdown in the NRQCD expansion when the bare quark mass falls below about one in lattice units (with the bare quark masses turning out to be much larger with mean-link tadpoles).

  1. Mean link versus average plaquette tadpoles in lattice NRQCD

    International Nuclear Information System (INIS)

    Shakespeare, Norman H.; Trottier, Howard D.

    1999-01-01

    We compare mean-link and average plaquette tadpole renormalization schemes in the context of the quarkonium hyperfine splittings in lattice NRQCD. Simulations are done for the three quarkonium systems cc-bar, bc-bar, and bb-bar. The hyperfine splittings are computed both at leading and at next-to-leading order in the relativistic expansion. Results are obtained at a large number of lattice spacings. A number of features emerge, all of which favor tadpole renormalization using mean links. This includes much better scaling of the hyperfine splittings in the three quarkonium systems. We also find that relativistic corrections to the spin splittings are smaller with mean-link tadpoles, particularly for the cc-bar and bc-bar systems. We also see signs of a breakdown in the NRQCD expansion when the bare quark mass falls below about one in lattice units (with the bare quark masses turning out to be much larger with mean-link tadpoles)

  2. Mean link versus average plaquette tadpoles in lattice NRQCD

    Energy Technology Data Exchange (ETDEWEB)

    Shakespeare, Norman H.; Trottier, Howard D

    1999-03-01

    We compare mean-link and average plaquette tadpole renormalization schemes in the context of the quarkonium hyperfine splittings in lattice NRQCD. Simulations are done for the three quarkonium systems cc-bar, bc-bar, and bb-bar. The hyperfine splittings are computed both at leading and at next-to-leading order in the relativistic expansion. Results are obtained at a large number of lattice spacings. A number of features emerge, all of which favor tadpole renormalization using mean links. This includes much better scaling of the hyperfine splittings in the three quarkonium systems. We also find that relativistic corrections to the spin splittings are smaller with mean-link tadpoles, particularly for the cc-bar and bc-bar systems. We also see signs of a breakdown in the NRQCD expansion when the bare quark mass falls below about one in lattice units (with the bare quark masses turning out to be much larger with mean-link tadpoles)

  3. Vegetable oil and fat viscosity forecast models based on iodine number and saponification number

    International Nuclear Information System (INIS)

    Toscano, G.; Riva, G.; Foppa Pedretti, E.; Duca, D.

    2012-01-01

    Vegetable oils and fats can be considered an important renewable source for energy production. There are many applications where these biofuels are used directly in engines. However, the use of pure vegetable oils causes some problems as a consequence of their chemical and physical characteristics. Viscosity is one of the most important parameters affecting several physical and mechanical processes in engine operation. The determination of this parameter at different temperatures is important to characterize the behavior of vegetable oils and fats. In this work we investigated the effects of two analytical chemical parameters (iodine number and saponification number), and forecasting models are proposed. -- Highlights: ► Vegetable oil and fat viscosity is predicted by a mathematical model based on the saponification number and iodine number. ► Unsaturated vegetable oils with small fatty acid molecules have lower viscosity values. ► The proposed models show an average error lower than 12%

  4. Higher-order RANS turbulence models for separated flows

    Data.gov (United States)

    National Aeronautics and Space Administration — Higher-order Reynolds-averaged Navier-Stokes (RANS) models are developed to overcome the shortcomings of second-moment RANS models in predicting separated flows....

  5. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  6. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example using 168Er data. 19 figures, 2 tables

  7. STAT3 polymorphism and Helicobacter pylori CagA strains with higher number of EPIYA-C segments independently increase the risk of gastric cancer

    International Nuclear Information System (INIS)

    Rocha, Gifone A; Rocha, Andreia MC; Gomes, Adriana D; Faria, César LL Jr; Melo, Fabrício F; Batista, Sérgio A; Fernandes, Viviane C; Almeida, Nathálie BF; Teixeira, Kádima N; Brito, Kátia S; Queiroz, Dulciene Maria Magalhães

    2015-01-01

    Because to date there is no available study on STAT3 polymorphism and gastric cancer in Western populations and taking into account that Helicobacter pylori CagA EPIYA-C segment deregulates SHP-2/ERK-JAK/STAT3 pathways, we evaluated whether the two variables are independently associated with gastric cancer. We included 1048 subjects: H. pylori-positive patients with gastric carcinoma (n = 232) and with gastritis (n = 275) and 541 blood donors. Data were analyzed using logistic regression model. The rs744166 polymorphic G allele (p = 0.01; OR = 1.76; 95 % CI = 1.44-2.70), and CagA-positive (OR = 12.80; 95 % CI = 5.58-19.86) status were independently associated with gastric cancer in comparison with blood donors. The rs744166 polymorphism (p = 0.001; OR = 1.64; 95 % CI = 1.16-2.31) and infection with H. pylori CagA-positive strains possessing higher number of EPIYA-C segments (p = 0.001; OR = 2.28; 95 % CI = 1.41-3.68) were independently associated with gastric cancer in comparison with gastritis. The association was stronger when host and bacterium genotypes were combined (p < 0.001; OR = 3.01; 95 % CI = 2.29-3.98). When stimulated with LPS (lipopolysaccharide) or Pam3Cys, peripheral mononuclear cells of healthy carriers of the rs744166 GG and AG genotypes expressed higher levels of STAT3 mRNA than those carrying AA genotype (p = 0.04 for both). The nuclear expression of phosphorylated p-STAT3 protein was significantly higher in the antral gastric tissue of carriers of rs744166 GG genotype than in carriers of AG and AA genotypes. Our study provides evidence that STAT3 rs744166 G allele and infection with CagA-positive H. pylori with higher number of EPIYA-C segments are independent risk factors for gastric cancer. The odds ratio of having gastric cancer was greater when bacterium and host high risk genotypes were combined

  8. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  9. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates. It has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling

  10. Frame average optimization of cine-mode EPID images used for routine clinical in vivo patient dose verification of VMAT deliveries

    Energy Technology Data Exchange (ETDEWEB)

    McCowan, P. M., E-mail: pmccowan@cancercare.mb.ca [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2, Canada and Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); McCurdy, B. M. C. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Department of Radiology, University of Manitoba, 820 Sherbrook Street, Winnipeg, Manitoba R3A 1R9 (Canada)

    2016-01-15

    Purpose: The in vivo 3D dose delivered to a patient during volumetric modulated arc therapy (VMAT) delivery can be calculated using electronic portal imaging device (EPID) images. These images must be acquired in cine-mode (i.e., “movie” mode) in order to capture the time-dependent delivery information. The angle subtended by each cine-mode EPID image during an arc can be changed via the frame averaging number selected within the image acquisition software. A large frame average number will decrease the EPID’s angular resolution and will result in a decrease in the accuracy of the dose information contained within each image. Alternatively, less EPID images acquired per delivery will decrease the overall 3D patient dose calculation time, which is appealing for large-scale clinical implementation. Therefore, the purpose of this study was to determine the optimal frame average value per EPID image, defined as the highest frame averaging that can be used without an appreciable loss in 3D dose reconstruction accuracy for VMAT treatments. Methods: Six different VMAT plans and six different SBRT-VMAT plans were delivered to an anthropomorphic phantom. Delivery was carried out on a Varian 2300ix model linear accelerator (Linac) equipped with an aS1000 EPID running at a frame acquisition rate of 7.5 Hz. An additional PC was set up at the Linac console area, equipped with specialized frame-grabber hardware and software packages allowing continuous acquisition of all EPID frames during delivery. Frames were averaged into “frame-averaged” EPID images using MATLAB. Each frame-averaged data set was used to calculate the in vivo dose to the patient and then compared to the single EPID frame in vivo dose calculation (the single frame calculation represents the highest possible angular resolution per EPID image). A mean percentage dose difference of low dose (<20% prescription dose) and high dose regions (>80% prescription dose) was calculated for each frame averaged
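
    A minimal sketch of the frame-averaging step itself, with illustrative shapes and no connection to the authors' MATLAB code: consecutive cine frames are grouped and averaged, trading angular resolution per image for fewer images to process.

        import numpy as np

        def frame_average(frames, n_avg):
            # frames: (n_frames, rows, cols); a trailing incomplete group is discarded
            n_groups = frames.shape[0] // n_avg
            trimmed = frames[: n_groups * n_avg]
            return trimmed.reshape(n_groups, n_avg, *frames.shape[1:]).mean(axis=1)

        # At 7.5 frames/s, averaging 3 frames gives one image per 0.4 s; at a
        # gantry speed of 6 deg/s each image then subtends about 2.4 degrees.
        frames = np.random.rand(45, 96, 128).astype(np.float32)  # illustrative
        print(frame_average(frames, 3).shape)                    # (15, 96, 128)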

  11. Frame average optimization of cine-mode EPID images used for routine clinical in vivo patient dose verification of VMAT deliveries

    International Nuclear Information System (INIS)

    McCowan, P. M.; McCurdy, B. M. C.

    2016-01-01

    Purpose: The in vivo 3D dose delivered to a patient during volumetric modulated arc therapy (VMAT) delivery can be calculated using electronic portal imaging device (EPID) images. These images must be acquired in cine-mode (i.e., “movie” mode) in order to capture the time-dependent delivery information. The angle subtended by each cine-mode EPID image during an arc can be changed via the frame averaging number selected within the image acquisition software. A large frame average number will decrease the EPID’s angular resolution and will result in a decrease in the accuracy of the dose information contained within each image. Alternatively, less EPID images acquired per delivery will decrease the overall 3D patient dose calculation time, which is appealing for large-scale clinical implementation. Therefore, the purpose of this study was to determine the optimal frame average value per EPID image, defined as the highest frame averaging that can be used without an appreciable loss in 3D dose reconstruction accuracy for VMAT treatments. Methods: Six different VMAT plans and six different SBRT-VMAT plans were delivered to an anthropomorphic phantom. Delivery was carried out on a Varian 2300ix model linear accelerator (Linac) equipped with an aS1000 EPID running at a frame acquisition rate of 7.5 Hz. An additional PC was set up at the Linac console area, equipped with specialized frame-grabber hardware and software packages allowing continuous acquisition of all EPID frames during delivery. Frames were averaged into “frame-averaged” EPID images using MATLAB. Each frame-averaged data set was used to calculate the in vivo dose to the patient and then compared to the single EPID frame in vivo dose calculation (the single frame calculation represents the highest possible angular resolution per EPID image). A mean percentage dose difference of low dose (<20% prescription dose) and high dose regions (>80% prescription dose) was calculated for each frame averaged scenario for each plan. The authors defined their

  12. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  13. Position-Dependent Dynamics Explain Pore-Averaged Diffusion in Strongly Attractive Adsorptive Systems.

    Science.gov (United States)

    Krekelberg, William P; Siderius, Daniel W; Shen, Vincent K; Truskett, Thomas M; Errington, Jeffrey R

    2017-12-12

    Using molecular simulations, we investigate the relationship between the pore-averaged and position-dependent self-diffusivity of a fluid adsorbed in a strongly attractive pore as a function of loading. Previous work (Krekelberg, W. P.; Siderius, D. W.; Shen, V. K.; Truskett, T. M.; Errington, J. R. Connection between thermodynamics and dynamics of simple fluids in highly attractive pores. Langmuir 2013, 29, 14527-14535, doi: 10.1021/la4037327) established that pore-averaged self-diffusivity in the multilayer adsorption regime, where the fluid exhibits a dense film at the pore surface and a lower density interior pore region, is nearly constant as a function of loading. Here we show that this puzzling behavior can be understood in terms of how loading affects the fraction of particles that reside in the film and interior pore regions as well as their distinct dynamics. Specifically, the insensitivity of pore-averaged diffusivity to loading arises from the approximate cancellation of two factors: an increase in the fraction of particles in the higher diffusivity interior pore region with loading and a corresponding decrease in the particle diffusivity in that region. We also find that the position-dependent self-diffusivities scale with the position-dependent density. We present a model for predicting the pore-average self-diffusivity based on the position-dependent self-diffusivity, which captures the unusual characteristics of pore-averaged self-diffusivity in strongly attractive pores over several orders of magnitude.

  14. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    Science.gov (United States)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-07-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamline and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of flow coefficient with the values of Reynolds number. With a maximum deviation of only  ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics.
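
    For context, a generic averaging-Pitot flow computation, assuming the standard square-root relation between differential pressure and velocity; the coefficient K below stands for the probe's calibrated flow coefficient discussed in the record, and all numbers are illustrative.

        import math

        def volumetric_flow(delta_p, rho, duct_area, k_flow):
            # delta_p: averaged differential pressure (Pa); rho: density (kg/m^3)
            # k_flow: dimensionless flow coefficient from calibration
            velocity = k_flow * math.sqrt(2.0 * delta_p / rho)
            return duct_area * velocity            # m^3/s

        # 50 Pa in air (1.2 kg/m^3), 0.3 m x 0.3 m duct, K = 0.8
        print(volumetric_flow(50.0, 1.2, 0.09, 0.8))   # ~0.66 m^3/s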

  15. The Marketing of Higher Education.

    Science.gov (United States)

    Brooker, George; Noble, Michael

    1985-01-01

    Formal college and university marketing programs are challenging to develop and implement because of the complexity of the marketing mix, the perceived inappropriateness of a traditional marketing officer, the number of diverse groups with input, the uniqueness of higher education institutions, and the difficulty in identifying higher education…

  16. Impact of connected vehicle guidance information on network-wide average travel time

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2016-12-01

    Full Text Available With the emergence of connected vehicle technologies, the potential positive impact of connected vehicle guidance on mobility has become a research hotspot through data exchange among vehicles, infrastructure, and mobile devices. This study focuses on micro-modeling and quantitatively evaluating the impact of connected vehicle guidance on network-wide travel time by introducing various affecting factors. To evaluate the benefits of connected vehicle guidance, a simulation architecture based on one engine is proposed, representing the connected vehicle-enabled virtual world, and a connected vehicle route guidance scenario is established through the development of a communication agent and intelligent transportation systems agents using the connected vehicle application programming interface, considering communication properties such as path loss and transmission power. The impact of connected vehicle guidance on network-wide travel time is analyzed by comparison with non-connected vehicle guidance in response to different market penetration rates, following rates, and congestion levels. The simulation results show that average network-wide travel time with connected vehicle guidance is significantly reduced: average network-wide travel time without connected vehicle guidance is 42.23% higher than with it, and average travel time variability (represented by the coefficient of variation) increases as travel time increases. Other vital findings include that a higher penetration rate and following rate generate bigger savings in average network-wide travel time. The savings in average network-wide travel time increase from 17% to 38% according to congestion level, and the savings in average travel time under more serious congestion show a more obvious improvement for the same penetration rate or following rate.

  17. Understanding of thermo-gravimetric analysis to calculate number of addends in multifunctional hemi-ortho ester derivatives of fullerenol

    International Nuclear Information System (INIS)

    Singh, Rachana; Goswami, Thakohari

    2011-01-01

    Test results for the applicability of the existing thermo-gravimetric analysis (TGA) technique to ascertain the average number of exohedral chemical attachments in a new class of fullerene dyads, consisting of multiple hemi-ortho esters on fullerenol, are presented. Although the method is readily applicable to higher fullerenols, homogeneous-phase products yield a lower calculated number of addends, whereas hetero-phase products indicate a higher value. The lower value is attributed either to the overlapping of thermal events or to substituent effects, while the higher value reflects the contribution of the tetrabutylammonium hydroxide (TBAH) impurity used as a phase transfer catalyst (PTC) in heterogeneous-phase reactions. The presence of the TBAH impurity was recognized through thermo-gravimetry mass spectrometry (TG-MS) measurements. Appropriate modifications of the test method to arrive at accurate and precise values of x (total mass contribution due to addends only) and y (total mass contribution due to fullerene plus char yield) are also reported. The successful use of two further techniques, viz., electrospray ionization mass spectrometry (ESI-MS) and X-ray photoelectron spectroscopy (XPS), supplements the above results. The influence of fullerene and of the different substituents on the thermal behavior of the dyads is also assessed.
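
    A back-of-the-envelope version of the addend count, assuming the addends volatilize while the fullerene core survives as residue, so that the average addend number follows from the mole ratio of the two mass contributions x and y defined in the record; the molar masses below are illustrative assumptions, not the paper's values.

        def average_addend_number(x, y, m_addend, m_core=720.6):
            # x: mass contribution of addends only (wt%)
            # y: mass contribution of fullerene core plus char yield (wt%)
            # m_addend, m_core: molar masses (g/mol); 720.6 ~ C60
            return (x / m_addend) / (y / m_core)

        # Illustrative: 35 wt% addends of 150 g/mol on a 65 wt% C60 core
        print(average_addend_number(35.0, 65.0, 150.0))   # ~2.6 addends per cage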

  18. The effect of tip speed ratio on a vertical axis wind turbine at high Reynolds numbers

    Science.gov (United States)

    Parker, Colin M.; Leftwich, Megan C.

    2016-05-01

    This work visualizes the flow surrounding a scaled model vertical axis wind turbine at realistic operating conditions. The model closely matches geometric and dynamic properties—tip speed ratio and Reynolds number—of a full-size turbine. The flow is visualized using particle image velocimetry (PIV) in the midplane upstream, around, and after (up to 4 turbine diameters downstream) the turbine, as well as in a vertical plane behind the turbine. Time-averaged results show an asymmetric wake behind the turbine, regardless of tip speed ratio, with a larger velocity deficit for a higher tip speed ratio. For the higher tip speed ratio, an area of averaged flow reversal is present with a maximum reverse flow of -0.04U_∞. Phase-averaged vorticity fields—achieved by syncing the PIV system with the rotation of the turbine—show distinct structures forming from each turbine blade. Distinct differences were found between tip speed ratios of 0.9, 1.3, and 2.2 in when during the cycle structures are shed into the wake—switching from two pairs to a single pair of vortices being shed—and in how they convect into the wake: at the middle tip speed ratio the vortices convect downstream inside the wake, while at the high tip speed ratio the pair is shed into the shear layer of the wake. Finally, results show that the wake structure is much more sensitive to changes in tip speed ratio than to changes in Reynolds number.

  19. Unit Reynolds number, Mach number and pressure gradient effects on laminar-turbulent transition in two-dimensional boundary layers

    Science.gov (United States)

    Risius, Steffen; Costantini, Marco; Koch, Stefan; Hein, Stefan; Klein, Christian

    2018-05-01

    The influence of unit Reynolds number (Re_1 = 17.5×10^6-80×10^6 m^-1), Mach number (M = 0.35-0.77) and incompressible shape factor (H_12 = 2.50-2.66) on laminar-turbulent boundary layer transition was systematically investigated in the Cryogenic Ludwieg-Tube Göttingen (DNW-KRG). For this investigation the existing two-dimensional wind tunnel model, PaLASTra, which offers a quasi-uniform streamwise pressure gradient, was modified to reduce the size of the flow separation region at its trailing edge. The streamwise temperature distribution and the location of laminar-turbulent transition were measured by means of temperature-sensitive paint (TSP) with a higher accuracy than attained in earlier measurements. It was found that for the modified PaLASTra model the transition Reynolds number (Re_tr) exhibits a linear dependence on the pressure gradient, characterized by H_12. Due to this linear relation it was possible to quantify the so-called 'unit Reynolds number effect', which is an increase of Re_tr with Re_1. By a systematic variation of M, Re_1 and H_12 in combination with a spectral analysis of freestream disturbances, a stabilizing effect of compressibility on boundary layer transition, as predicted by linear stability theory, was detected ('Mach number effect'). Furthermore, two expressions were derived which can be used to calculate the transition Reynolds number as a function of the amplitude of total pressure fluctuations, Re_1 and H_12. To determine critical N-factors, the measured transition locations were correlated with amplification rates, calculated by incompressible and compressible linear stability theory. By taking into account the spectral level of total pressure fluctuations at the frequency of the most amplified Tollmien-Schlichting wave at the transition location, the scatter in the determined critical N-factors was reduced. Furthermore, the receptivity coefficient's dependence on the incidence angle of acoustic waves was used to

  20. Average concentrations of FSH and LH in seminal plasma as determined by radioimmunoassay

    International Nuclear Information System (INIS)

    Milbradt, R.; Linzbach, P.; Feller, H.

    1979-01-01

    In 322 males, 25 to 50 years of age, levels of LH and FSH were determined in seminal plasma by radioimmunoassay. Average values of 0.78 ng/ml and 3.95 ng/ml were found for FSH and LH, respectively. Sperm count and motility were not related to FSH levels, but were related to LH levels. A high count of spermatozoa corresponded to a high concentration of LH, and normal motility was associated with higher levels of LH compared to the levels associated with asthenozoospermia. With respect to the sperm count of an individual patient or the average patient, it is suggested that the FSH/LH ratio would be more meaningful than the LH level alone. (orig.) [de

  1. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  2. Optimal numerical methods for determining the orientation averages of single-scattering properties of atmospheric ice crystals

    International Nuclear Information System (INIS)

    Um, Junshik; McFarquhar, Greg M.

    2013-01-01

    The optimal orientation averaging scheme (regular lattice grid scheme or quasi Monte Carlo (QMC) method), the minimum number of orientations, and the corresponding computing time required to calculate the average single-scattering properties (i.e., asymmetry parameter (g), single-scattering albedo (ω o ), extinction efficiency (Q ext ), scattering efficiency (Q sca ), absorption efficiency (Q abs ), and scattering phase function at scattering angles of 90° (P 11 (90°)), and 180° (P 11 (180°))) within a predefined accuracy level (i.e., 1.0%) were determined for four different nonspherical atmospheric ice crystal models (Gaussian random sphere, droxtal, budding Bucky ball, and column) with maximum dimension D=10μm using the Amsterdam discrete dipole approximation at λ=0.55, 3.78, and 11.0μm. The QMC required fewer orientations and less computing time than the lattice grid. The calculations of P 11 (90°) and P 11 (180°) required more orientations than the calculations of integrated scattering properties (i.e., g, ω o , Q ext , Q sca , and Q abs ) regardless of the orientation average scheme. The fewest orientations were required for calculating g and ω o . The minimum number of orientations and the corresponding computing time for single-scattering calculations decreased with an increase of wavelength, whereas they increased with the surface-area ratio that defines particle nonsphericity. -- Highlights: •The number of orientations required to calculate the average single-scattering properties of nonspherical ice crystals is investigated. •Single-scattering properties of ice crystals are calculated using ADDA. •Quasi Monte Carlo method is more efficient than lattice grid method for scattering calculations. •Single-scattering properties of ice crystals depend on a newly defined parameter called surface area ratio

  3. Particle number size distributions in urban air before and after volatilisation

    Directory of Open Access Journals (Sweden)

    W. Birmili

    2010-05-01

    Full Text Available Aerosol particle number size distributions (size range 0.003-10 μm) in the urban atmosphere of Augsburg (Germany) were examined with respect to the governing anthropogenic sources and meteorological factors. The two-year average particle number concentration between November 2004 and November 2006 was 12 200 cm^-3, i.e. similar to previous observations in other European cities. A seasonal analysis yielded twice the total particle number concentration in winter as compared to summer, as a consequence of more frequent inversion situations and enhanced particulate emissions. The diurnal variations of particle number were shaped by a remarkable maximum in the morning during the peak traffic hours. After a mid-day decrease along with the onset of vertical mixing, an evening concentration maximum could frequently be observed, suggesting a re-stratification of the urban atmosphere. Overall, the mixed layer height turned out to be the most influential meteorological parameter on the particle size distribution. Its influence was even greater than that of the geographical origin of the prevailing synoptic-scale air mass.

    Size distributions below 0.8 μm were also measured downstream of a thermodenuder (temperature: 300 °C), allowing the volume concentration of non-volatile compounds to be retrieved. The balance of particle number upstream and downstream of the thermodenuder suggests that practically all particles >12 nm contain a non-volatile core, while additional nucleation of particles smaller than 6 nm could be observed after the thermodenuder as an interfering artifact of the method. The good correlation between the non-volatile volume concentration and an independent measurement of the aerosol absorption coefficient (R^2 = 0.9) suggests a close correspondence of the refractory and light-absorbing particle fractions. Using the "summation method", an average diameter ratio of particles before and after volatilisation could

  4. Experimental study of pitching and plunging airfoils at low Reynolds numbers

    Energy Technology Data Exchange (ETDEWEB)

    Baik, Yeon Sik; Bernal, Luis P. [University of Michigan, Department of Aerospace Engineering, Ann Arbor, MI (United States)

    2012-12-15

    Measurements of the unsteady flow structure and force time history of pitching and plunging SD7003 and flat plate airfoils at low Reynolds numbers are presented. The airfoils were pitched and plunged in the effective angle of attack range of 2.4°-13.6° (shallow-stall kinematics) and -6° to 22° (deep-stall kinematics). The shallow-stall kinematics results for the SD7003 airfoil show attached flow and laminar-to-turbulent transition at low effective angle of attack during the down stroke motion, while the flat plate model exhibits leading edge separation. Strong Re-number effects were found for the SD7003 airfoil, which produced an approximately 25% increase in the peak lift coefficient at Re = 10,000 compared to higher Re flows. The flat plate airfoil showed reduced Re effects due to leading edge separation at the sharper leading edge, and the measured peak lift coefficient was higher than that predicted by unsteady potential flow theory. The deep-stall kinematics resulted in leading edge separation that led to formation of a large leading edge vortex (LEV) and a small trailing edge vortex (TEV) for both airfoils. The measured peak lift coefficient was significantly higher (~50%) than that for the shallow-stall kinematics. The effect of airfoil shape on lift force was greater than the Re effect. Turbulence statistics were measured as a function of phase using ensemble averages. The results show anisotropic turbulence for the LEV and isotropic turbulence for the TEV. Comparison of unsteady potential flow theory with the experimental data showed better agreement by using the quasi-steady approximation, or setting C(k) = 1 in Theodorsen theory, for leading edge-separated flows. (orig.)

  5. A new mathematical process for the calculation of average forms of teeth.

    Science.gov (United States)

    Mehl, A; Blanz, V; Hickel, R

    2005-12-01

    Qualitative visual inspections and linear metric measurements have been predominant methods for describing the morphology of teeth. No quantitative formulation exists for the description of dental features. The aim of this study was to determine and validate a mathematical process for calculation of the average form of first maxillary molars, including the general occlusal features. Stone replicas of 174 caries-free first maxillary molar crowns from young patients ranging from 6 to 9 years of age were measured 3-dimensionally with a laser scanning system at a resolution of approximately 100,000 points. Then, the average tooth was computed, which captured the common features of the molar's surface quantitatively. This new method adapts algorithms both from computer science and neuroscience to detect and associate the same features and same surface points (correspondences) between 1 reference tooth and all other teeth. In this study, the method was tested for 7 different reference teeth. The algorithm does not involve any prior knowledge about teeth and their features. Irrespective of the reference tooth used, the procedure yielded average teeth that showed nearly no differences (less than +/-30 microm). This approach provides a valid quantitative process for calculating 3-dimensional (3D) averages of occlusal surfaces of teeth even in the event of a high number of digitized surface points. Additionally, because this process detects and assigns point-wise feature correspondences between all library teeth, it may also serve as a basis for a more substantiated principal component analysis evaluating the main natural shape deviations from the 3D average.
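
    Once point-wise correspondences are established, the averaging step itself is simple; the sketch below assumes that step is already done (it is the hard part of the paper's method) and merely averages corresponded surfaces. Shapes and names are illustrative.

        import numpy as np

        def average_form(surfaces):
            # surfaces: (n_teeth, n_points, 3); point i of every tooth refers to
            # the same anatomical feature, so a plain mean gives the average form
            return np.mean(surfaces, axis=0)

        scans = np.random.rand(10, 1000, 3).astype(np.float32)  # illustrative
        print(average_form(scans).shape)                        # (1000, 3)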

  6. Study of parameters and entrainment of a jet in cross-flow arrangement with transition at two low Reynolds numbers

    Energy Technology Data Exchange (ETDEWEB)

    Cardenas, Camilo [Karlsruhe Institute of Technology, Institute for Chemical Technology and Polymer Chemistry, Karlsruhe (Germany); Convenio Andres Bello, Instituto Internacional de Investigaciones Educativas para la Integracion, La Paz (Bolivia); Denev, Jordan A.; Bockhorn, Henning [Karlsruhe Institute of Technology, Engler-Bunte-Institute, Combustion Division, Karlsruhe (Germany); Suntz, Rainer [Karlsruhe Institute of Technology, Institute for Chemical Technology and Polymer Chemistry, Karlsruhe (Germany)

    2012-10-15

    Investigation of the mixing process is one of the main issues in chemical engineering and combustion and the configuration of a jet into a cross-flow (JCF) is often employed for this purpose. Experimental data are gained for the symmetry plane in a JCF-arrangement of an air flow using a combination of particle image velocimetry (PIV) with laser-induced fluorescence (LIF). The experimental data with thoroughly measured boundary conditions are complemented with direct numerical simulations, which are based on idealized boundary conditions. Two similar cases are studied with a fixed jet-to-cross-flow velocity ratio of 3.5 and variable cross-flow Reynolds numbers equal to 4,120 and 8,240; in both cases the jet issues from the pipe at laminar conditions. This leads to a laminar-to-turbulent transition, which depends on the Reynolds number and occurs quicker for the case with higher Reynolds number in both experiments and simulations as well. It was found that the Reynolds number only slightly affects the jet trajectory, which in the case with the higher Reynolds number is slightly deeper. It is attributed to the changed boundary layer shape of the cross-flow. Leeward streamlines bend toward the jet and are responsible for the strong entrainment of cross-flow fluid into the jet. Velocity components are compared for the two Reynolds numbers at the leeward side at positions where strongest entrainment is present and a pressure minimum near the jet trajectory is found. The numerical simulations showed that entrainment is higher for the case with the higher Reynolds number. The latter is attributed to the earlier transition in this case. Fluid entrainment of the jet in cross-flow is more than twice stronger than for a similar flow of a jet issuing into a co-flowing stream. This comparison is made along the trajectory of the two jets at a distance of 5.5 jet diameters downstream and is based on the results from the direct numerical simulations and recently published

  7. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  8. Exposure to fine particulate, black carbon, and particle number concentration in transportation microenvironments

    Science.gov (United States)

    Morales Betancourt, R.; Galvis, B.; Balachandran, S.; Ramos-Bonilla, J. P.; Sarmiento, O. L.; Gallo-Murcia, S. M.; Contreras, Y.

    2017-05-01

    This research determined intake dose of fine particulate matter (PM2.5), equivalent black carbon (eBC), and number of sub-micron particles (Np) for commuters in Bogotá, Colombia. Doses were estimated through measurements of exposure concentration, a surrogate of physical activity, as well as travel times and speeds. Impacts of travel mode, traffic load, and street configuration on dose and exposure were explored. Three road segments were selected because of their different traffic loads and composition, and dissimilar street configuration. The transport modes considered include active modes (walking and cycling) and motorized modes (bus, car, taxi, and motorcycle). Measurements were performed simultaneously in the available modes at each road segment. High average eBC concentrations were observed throughout the campaign, ranging from 20 to 120 μg m^-3. Commuters in motorized modes experienced significantly higher exposure concentrations than pedestrians and bicyclists. The highest average concentrations of PM2.5, eBC, and Np were measured inside the city's Bus Rapid Transit (BRT) system vehicles. Pedestrians and bicycle users in an open street configuration were exposed to the lowest average concentrations of PM2.5 and eBC, six times lower than those experienced by commuters using the BRT in the same street segment. Pedestrians experienced the highest particulate matter intake dose in the road segments studied, despite being exposed to lower concentrations than commuters in motorized modes. Average potential doses of PM2.5 and eBC per unit length traveled were nearly three times higher for pedestrians in a street canyon configuration compared to commuters in public transport. Slower travel speed and elevated inhalation rates dominate PM dose for pedestrians. The presence of dedicated bike lanes on sidewalks has a significant impact on reducing the exposure concentration for bicyclists compared to those riding in mixed traffic lanes. This study proposes a simple
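
    The dose construction described here is commonly written as concentration times inhalation rate times duration; a minimal sketch with illustrative numbers (not the study's data) shows why pedestrians can receive the largest dose despite lower concentrations.

        def intake_dose(concentration, inhalation_rate, travel_time):
            # concentration in ug/m^3, inhalation_rate in m^3/h, travel_time in h
            return concentration * inhalation_rate * travel_time   # inhaled mass, ug

        # Same segment, illustrative values: walking is slower and breathing harder
        print(intake_dose(60.0, 1.6, 0.40))   # pedestrian: ~38 ug
        print(intake_dose(90.0, 0.5, 0.12))   # bus rider:  ~5.4 ug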

  9. [Evaluation of the influence of humidity and temperature on the drug stability by initial average rate experiment].

    Science.gov (United States)

    He, Ning; Sun, Hechun; Dai, Miaomiao

    2014-05-01

    To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, extent of drug degradation, number of humidity and temperature settings, humidity and temperature range, and average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from the classical isothermal experiment at constant humidity. The estimates were more accurate and precise when controlling the extent of drug degradation, changing the humidity and temperature range, or setting the average temperature closer to room temperature. Compared with isothermal experiments at constant humidity, the proposed method saves time, labor, and materials.
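
    The kinetic-parameter estimation sketched above reduces to a least-squares fit of measured initial rates. The following Python fragment assumes an Arrhenius-type rate law with a linear humidity term, ln k = ln A - Ea/(RT) + b*RH; this functional form and all numbers are assumptions of the sketch, not necessarily the model used in the study:

        import numpy as np

        # Hypothetical initial average rates k measured at (T [K], RH [%]) settings.
        T = np.array([313.0, 323.0, 333.0, 313.0, 323.0, 333.0])
        RH = np.array([60.0, 60.0, 60.0, 75.0, 75.0, 75.0])
        k = np.array([1.2e-4, 3.1e-4, 7.4e-4, 2.0e-4, 5.0e-4, 1.2e-3])

        R = 8.314  # gas constant, J/(mol K)
        # Design matrix for ln k = lnA * 1 + Ea * (-1/(R*T)) + b * RH
        X = np.column_stack([np.ones_like(T), -1.0 / (R * T), RH])
        (lnA, Ea, b), *_ = np.linalg.lstsq(X, np.log(k), rcond=None)
        print(f"Ea = {Ea / 1000:.1f} kJ/mol, ln A = {lnA:.2f}, humidity slope b = {b:.3f}")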

  10. Applications of ordered weighted averaging (OWA) operators in environmental problems

    Directory of Open Access Journals (Sweden)

    Carlos Llopis-Albert

    2017-04-01

    Full Text Available This paper presents an application of a prioritized weighted aggregation operator based on ordered weighted averaging (OWA) to deal with stakeholders' constructive participation in water resources projects. Stakeholders have different degrees of acceptance or preference regarding the measures and policies to be carried out, which lead to different environmental and socio-economic outcomes, and hence to different levels of stakeholders' satisfaction. The methodology establishes a prioritization relationship upon the stakeholders, whose preferences are aggregated by means of weights depending on the satisfaction of the higher-priority policy maker. The methodology has been successfully applied to a Public Participation Project (PPP) in watershed management, thus obtaining efficient environmental measures in conflict resolution problems under actors' preference uncertainties.

  11. Interferometric control of the photon-number distribution

    Directory of Open Access Journals (Sweden)

    H. Esat Kondakci

    2017-07-01

    Full Text Available We demonstrate deterministic control over the photon-number distribution by interfering two coherent beams within a disordered photonic lattice. By sweeping a relative phase between two equal-amplitude coherent fields with Poissonian statistics that excite adjacent sites in a lattice endowed with disorder-immune chiral symmetry, we measure an output photon-number distribution that changes periodically between super-thermal and sub-thermal photon statistics upon ensemble averaging. Thus, the photon-bunching level is controlled interferometrically at a fixed mean photon-number by gradually activating the excitation symmetry of the chiral-mode pairs with structured coherent illumination and without modifying the disorder level of the random system itself.

  12. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    A concept for determining average LET (linear energy transfer) values, i.e. the ordinary moments of LET in the absorbed dose distribution, for ionizing radiation of any kind and any spectrum (even unknown ones), is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values, one can obtain coefficients of the expression which can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on the LET of the radiation it is not necessary to know the dose distribution, but only a number of parameters of the distribution, i.e. the LET moments. (author)

  13. Competitiveness - higher education

    Directory of Open Access Journals (Sweden)

    Labas Istvan

    2016-03-01

    Full Text Available The involvement of the European Union plays an important role in the areas of education and training equally. The member states are responsible for organizing and operating their education and training systems themselves, and EU policy is aimed at supporting the efforts of member states and trying to find solutions for the common challenges which appear. The key to making our future maximally sustainable lies in education: a highly qualified workforce is the key to the development, advancement and innovation of the world. Nowadays, the competitiveness of higher education institutions has become more and more appreciated in the national economy. In recent years, the frameworks of operation of higher education systems have gone through a total transformation. The number of applying students is continuously decreasing in some European countries; therefore, only those institutions able to minimize the loss of students can "survive" this shortfall. In this process, the factors forming the competitiveness of these budgetary institutions play an important role from the point of view of survival. The more competitive a higher education institution is, the greater the chance that students will want to continue their studies there, and thus the greater the institution's chance of survival, compared to those lagging behind in the competition. The aim of our treatise is to present the current situation and main data of EU higher education and to examine the performance of higher education: to what extent it fulfils the strategy for smart, sustainable and inclusive growth worded in the framework of the Europe 2020 programme. The treatise is based on analysis of statistical data.

  14. HIGHER PARENTAL PERCEPTIONS OF WEALTH ASSOCIATED WITH THE BIRTH OF MORE SONS IN AN AUSTRALIAN POPULATION.

    Science.gov (United States)

    Behie, A M; O'Donnell, M H

    2017-09-20

    Many industrialized nations are currently experiencing a decline in the average secondary sex ratio (SSR), resulting in fewer boys being born relative to girls. While many potential factors may explain the decline in the birth of males relative to females, most studies support the idea that male offspring are produced less often when environmental conditions are poor, owing to males being more susceptible to loss in harsh environments. This study investigates the maternal factors that are associated with the sex of offspring in a cohort of the Australian population. It found that greater parental perceptions of wealth were significantly associated with an increase in the number of sons produced. These results suggest that male offspring are born in increased numbers to women with higher available resources, which may reflect the fact that male offspring are more vulnerable in poor environments.

  15. Podoplanin-positive cancer-associated fibroblast recruitment within cancer stroma is associated with a higher number of single nucleotide variants in cancer cells in lung adenocarcinoma.

    Science.gov (United States)

    Nakasone, Shoko; Mimaki, Sachiyo; Ichikawa, Tomohiro; Aokage, Keiju; Miyoshi, Tomohiro; Sugano, Masato; Kojima, Motohiro; Fujii, Satoshi; Kuwata, Takeshi; Ochiai, Atsushi; Tsuboi, Masahiro; Goto, Koichi; Tsuchihara, Katsuya; Ishii, Genichiro

    2018-05-01

    Podoplanin-positive cancer-associated fibroblasts (CAFs) play an essential role in tumor progression. However, it is still unclear whether specific genomic alterations of cancer cells are required to recruit podoplanin-positive CAFs. The aim of this study was to investigate the relationship between the mutation status of lung adenocarcinoma cells and the presence of podoplanin-positive CAFs. Ninety-seven lung adenocarcinomas for which whole exome sequencing data were available were enrolled. First, we analyzed the clinicopathological features of the cases and then evaluated the relationship between genetic features of cancer cells (major driver mutations and the number of single nucleotide variants, SNVs) and the presence of podoplanin-positive CAFs. The presence of podoplanin-positive CAFs was associated with smoking history, solid predominant subtype, and lymph node metastasis. We could not find any significant correlations between major genetic mutations (EGFR, KRAS, TP53, MET, ERBB2, BRAF, and PIK3CA) in cancer cells and the presence of podoplanin-positive CAFs. However, cases with podoplanin-positive CAFs had a significantly higher number of SNVs in cancer cells than the podoplanin-negative CAF cases (median 84 vs 37, respectively; p = 0.001). This was also detected in a non-smoker subgroup (p = 0.037). Multivariate analyses revealed that the number of SNVs in cancer cells was the only statistically significant independent predictor of the presence of podoplanin-positive CAFs (p = 0.044). In lung adenocarcinoma, the presence of podoplanin-positive CAFs was associated with higher numbers of SNVs in cancer cells, suggesting a relationship between the accumulation of SNVs in cancer cells and the generation of a tumor-promoting microenvironment.

  16. Multisite study of particle number concentrations in urban air.

    Science.gov (United States)

    Harrison, Roy M; Jones, Alan M

    2005-08-15

    Particle number concentration data are reported from a total of eight urban site locations in the United Kingdom. Of these, six are central urban background sites, while one is an urban street canyon (Marylebone Road) and another is influenced by both a motorway and a steelworks (Port Talbot). The concentrations are generally of a similar order to those reported in the literature, although higher than those in some of the other studies. The highest concentrations are at the Marylebone Road site and the lowest at the Port Talbot site. The central urban background locations lie somewhere in between, with concentrations typically around 20 000 cm(-3). A seasonal pattern affects all sites, with highest concentrations in the winter months and lowest concentrations in the summer. Data from all sites show a diurnal variation with a morning rush hour peak typical of an anthropogenic pollutant. When the dilution effects of windspeed are accounted for, the data show little directionality at the central urban background sites, indicating the influence of sources from all directions, as might be expected if the major source were road traffic. At the London Marylebone Road site there is high directionality driven by the air circulation in the street canyon, and at the Port Talbot site different diurnal patterns are seen for particle number count and PM10, influenced by emissions from road traffic (particle number count) and the steelworks (PM10) and local meteorological factors. Hourly particle number concentrations are generally only weakly correlated to NO(x) and PM10, with the former showing a slightly closer relationship. Correlations between daily average particle number count and PM10 were also weak. Episodes of high PM10 concentration in summer typically show low particle number concentrations, consistent with transport of accumulation mode secondary aerosol, while winter episodes are frequently associated with high PM10 and particle number count arising from poor dispersion of

  17. The Effect of Overweight Status on Total and Metastatic Number of Harvested Lymph Nodes During Colorectal Surgery

    Directory of Open Access Journals (Sweden)

    Sezgin Zeren

    2016-03-01

    Full Text Available Objective: The aim of this study is to evaluate the relationship between higher body mass index (BMI) and the total or metastatic numbers of harvested lymph nodes in patients who underwent surgery for colorectal cancer. Methods: Between March 2014 and January 2016, a total of 71 patients who underwent laparoscopic or conventional surgery for colorectal cancer were evaluated retrospectively. Data on age, gender, BMI, surgical procedure, tumor localization, postoperative mortality status, and total numbers of harvested and metastatic lymph nodes were collected. Patients with a BMI of 24.9 kg/m2 or lower were classified as normal (Group 1) and patients with a BMI of 25 kg/m2 or over as overweight (Group 2). Afterwards, the parameters between groups and the effect of higher BMI were analyzed. Results: The mean age of the patients was 64.5 ± 14 years. The average BMI value was 22.3 kg/m2 in group 1 and 27.0 kg/m2 in group 2. According to tumor localization, the transverse colon was the rarest region for both groups. The common regions for tumor localization in group 1 were the right colon, sigmoid colon and rectum; in group 2 they were the rectum, right colon and sigmoid colon. There was no difference between groups in postoperative mortality rates (p > 0.05). The mean total number of harvested lymph nodes was 14 in group 1 and 12 in group 2. There was no relationship between BMI and tumor diameter or the total or metastatic number of harvested lymph nodes. Conclusion: Higher BMI values do not affect the number of excised total or metastatic lymph nodes or tumor diameters. Therefore, surgeons should not hesitate to dissect an adequate number of lymph nodes during cancer surgery in overweight patients.

  18. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  19. Glycogen with short average chain length enhances bacterial durability

    Science.gov (United States)

    Wang, Liang; Wise, Michael J.

    2011-09-01

    Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability, that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call that a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology will be assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.

  20. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we have incorporated aperture-averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE performance of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture-averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  1. Trends In Funding Higher Education In Romania And EU

    Directory of Open Access Journals (Sweden)

    Raluca Mariana Dragoescu

    2014-05-01

    Full Text Available Education is one of the determinants of economic growth in any state, and education funding is thus a very important aspect of public policy. In this article we present the general principles of funding higher education in Romania and how it has evolved over the last decade, stressing that public higher education has been consistently underfunded. We also present an overview of the evolution of the main statistical indicators that characterize higher education in Romania, the number of universities and faculties, the number of students and the number of teachers, revealing discrepancies between their evolution and the evolution of funding. We compared the funding of higher education in Romania and the EU countries, highlighting the fact that Romania should pay special attention to higher education to achieve the performance of other EU member countries.

  2. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  3. A spatially-averaged mathematical model of kidney branching morphogenesis

    KAUST Repository

    Zubkov, V.S.

    2015-08-01

    © 2015 Published by Elsevier Ltd. Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially-averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependent ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rate of tip cells, mesenchyme cells, or component cell population exit rate). Our results predict that the developing kidney responds differently to loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.

  4. A spatially-averaged mathematical model of kidney branching morphogenesis

    KAUST Repository

    Zubkov, V.S.; Combes, A.N.; Short, K.M.; Lefevre, J.; Hamilton, N.A.; Smyth, I.M.; Little, M.H.; Byrne, H.M.

    2015-01-01

    © 2015 Published by Elsevier Ltd. Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially-averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependent ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rate of tip cells, mesenchyme cells, or component cell population exit rate). Our results predict that the developing kidney responds differently to loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.
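
    The branching rule described above lends itself to a compact numerical sketch. This Python fragment (all rates and thresholds are placeholders, not the paper's fitted values) grows the tip and mesenchymal populations exponentially, doubles the tip number whenever the cells-per-tip threshold is crossed, and stops branching once the mesenchyme falls below its critical size:

        def branch_count(r_tip=0.8, r_mes=0.5, d_mes=0.6, cells_per_tip=100.0,
                         mes_critical=500.0, t_end=20.0, dt=1e-3):
            tips, epithelium, mesenchyme = 1, 50.0, 5e4
            t = 0.0
            while t < t_end:
                epithelium += dt * r_tip * epithelium            # tip-cell growth
                mesenchyme += dt * (r_mes - d_mes) * mesenchyme  # net mesenchyme change
                if mesenchyme > mes_critical and epithelium / tips > cells_per_tip:
                    tips *= 2                                    # symmetric branching
                t += dt
            return tips

        print("final branch number:", branch_count())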

  5. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  6. On monogamy of non-locality and macroscopic averages: examples and preliminary results

    Directory of Open Access Journals (Sweden)

    Rui Soares Barbosa

    2014-12-01

    Full Text Available We explore a connection between monogamy of non-locality and a weak macroscopic locality condition: the locality of the average behaviour. These are revealed by our analysis as being two sides of the same coin. Moreover, we exhibit a structural reason for both in the case of Bell-type multipartite scenarios, shedding light on but also generalising the results in the literature [Ramanathan et al., Phys. Rev. Lett. 107, 060405 (2011); Pawlowski & Brukner, Phys. Rev. Lett. 102, 030403 (2009)]. More specifically, we show that, provided the number of particles in each site is large enough compared to the number of allowed measurement settings, and whatever the microscopic state of the system, the macroscopic average behaviour is local realistic, or equivalently, general multipartite monogamy relations hold. This result relies on a classical mathematical theorem by Vorob'ev [Theory Probab. Appl. 7(2), 147-163 (1962)] about extending compatible families of probability distributions defined on the faces of a simplicial complex – in the language of the sheaf-theoretic framework of Abramsky & Brandenburger [New J. Phys. 13, 113036 (2011)], such families correspond to no-signalling empirical models, and the existence of an extension corresponds to locality or non-contextuality. Since Vorob'ev's theorem depends solely on the structure of the simplicial complex, which encodes the compatibility of the measurements, and not on the specific probability distributions (i.e. the empirical models), our result about monogamy relations and locality of macroscopic averages holds not just for quantum theory, but for any empirical model satisfying the no-signalling condition. In this extended abstract, we illustrate our approach by working out a couple of examples, which convey the intuition behind our analysis while keeping the discussion at an elementary level.

  7. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    International Nuclear Information System (INIS)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-01-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamline and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of flow coefficient with the values of Reynolds number. With a maximum deviation of only  ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics. (paper)
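
    For context, an averaging Pitot tube infers volumetric flow from its measured differential pressure through the standard relation Q = K A sqrt(2 Δp / ρ), where K is the flow coefficient that the study calibrates against Reynolds number (stable to ±3% here). A minimal Python sketch with placeholder geometry and coefficient values:

        import math

        def volumetric_flow(dp_pa, rho=1.2, duct_area_m2=0.09, k_flow=0.62):
            # Standard averaging-Pitot relation Q = K * A * sqrt(2 * dp / rho).
            # k_flow and duct_area_m2 are illustrative placeholders, not the
            # calibrated values from the paper.
            return k_flow * duct_area_m2 * math.sqrt(2.0 * dp_pa / rho)

        print(f"Q = {volumetric_flow(150.0):.3f} m^3/s")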

  8. Numerical simulations of turbulent heat transfer in a channel at Prandtl numbers higher than 100

    International Nuclear Information System (INIS)

    Bergant, R.; Tiselj, I.

    2005-01-01

    In recent years, many attempts have been made to extend turbulent heat transfer studies in a channel from low to high Prandtl numbers, based on a very accurate pseudo-spectral code for direct numerical simulation (DNS). DNS resolves all the length and time scales of the velocity and temperature fields, which differ when the Prandtl number is not equal to 1. DNS can be used at low Reynolds numbers (here Re_τ = 150). A very similar approach as for Pr=5.4 was used for the numerical simulations at Pr=100 and Pr=200. Comparison was made with temperature fields computed on a nine-times finer numerical grid, however without damping of the highest Fourier coefficients. The results for the mean temperature profiles show no differences larger than the statistical uncertainties (∼1%), while slightly larger differences are seen for the temperature fluctuations. (author)

  9. Modeling and forecasting monthly movement of annual average solar insolation based on the least-squares Fourier-model

    International Nuclear Information System (INIS)

    Yang, Zong-Chang

    2014-01-01

    Highlights: • Introduces a finite Fourier-series model for evaluating the monthly movement of annual average solar insolation. • Presents a forecast method for predicting its movement based on the Fourier-series model extended in the least-squares sense. • Shows that its movement is well described by a low number of harmonics, approximately a 6-term Fourier series. • Predicts its movement best with fewer than six Fourier terms. - Abstract: Solar insolation is one of the most important measurement parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in engineering, science and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation and is extended in the least-squares sense for forecasting. The conventional Fourier analysis, which is the most common analysis method in the frequency domain, cannot be directly applied for prediction. Incorporated with the least-squares method, the introduced Fourier-series model is extended to predict the movement. The extended Fourier-series forecasting model obtains its optimum Fourier coefficients in the least-squares sense based on previous monthly movements. The proposed method is applied to experiments and yields satisfactory results for different cities (states). It is indicated that the monthly movement of annual average solar insolation is well described by a low number of harmonics, approximately a 6-term Fourier series, and the extended Fourier forecasting model predicts it best with fewer than six Fourier terms.
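
    The least-squares Fourier fit described above can be sketched directly in Python: build a design matrix of harmonics of the 12-month period, solve for the coefficients in the least-squares sense, and evaluate the same series at future months to forecast. The synthetic data and the six-harmonic choice are illustrative:

        import numpy as np

        months = np.arange(48)  # four years of monthly averages (synthetic)
        y = 5.0 + 1.5 * np.sin(2 * np.pi * months / 12.0) + 0.1 * np.random.randn(48)

        def design(t, n_harmonics=6):
            # Columns: constant term, then cos/sin pairs for each harmonic of 12 months
            cols = [np.ones_like(t, dtype=float)]
            for k in range(1, n_harmonics + 1):
                cols.append(np.cos(2 * np.pi * k * t / 12.0))
                cols.append(np.sin(2 * np.pi * k * t / 12.0))
            return np.column_stack(cols)

        coef, *_ = np.linalg.lstsq(design(months), y, rcond=None)  # least-squares fit
        forecast = design(np.arange(48, 60)) @ coef                # one-year forecast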

  10. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  11. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  12. Local and average structure of Mn- and La-substituted BiFeO3

    Science.gov (United States)

    Jiang, Bo; Selbach, Sverre M.

    2017-06-01

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions.

  13. Averaged differential expression for the discovery of biomarkers in the blood of patients with prostate cancer.

    Directory of Open Access Journals (Sweden)

    V Uma Bai

    Full Text Available The identification of a blood-based diagnostic marker is a goal in many areas of medicine, including the early diagnosis of prostate cancer. We describe the use of averaged differential display as an efficient mechanism for biomarker discovery in whole blood RNA. The process of averaging reduces the problem of clinical heterogeneity while simultaneously minimizing sample handling. RNA was isolated from the blood of prostate cancer patients and healthy controls. Samples were pooled and subjected to the averaged differential display process. Transcripts present at different levels between patients and controls were purified and sequenced for identification. Transcript levels in the blood of prostate cancer patients and controls were verified by quantitative RT-PCR. Means were compared using a t-test and a receiver operating characteristic (ROC) curve was generated. The Ring finger protein 19A (RNF19A) transcript was identified as having higher levels in prostate cancer patients compared to healthy men through the averaged differential display process. Quantitative RT-PCR analysis confirmed a more than 2-fold higher level of RNF19A mRNA in the blood of patients with prostate cancer than in healthy controls (p = 0.0066). The accuracy of distinguishing cancer patients from healthy men using RNF19A mRNA levels in blood, as determined by the area under the ROC curve, was 0.727. Averaged differential display offers a simplified approach for the comprehensive screening of body fluids, such as blood, to identify biomarkers in patients with prostate cancer. Furthermore, this proof-of-concept study warrants further analysis of RNF19A as a clinically relevant biomarker for prostate cancer detection.
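
    The verification step reported above (fold change by qRT-PCR, accuracy by the area under the ROC curve) can be sketched as follows in Python; the expression values are illustrative, not the study's data:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        patients = np.array([2.4, 3.1, 1.9, 2.8, 2.2])  # relative RNF19A mRNA, cancer
        controls = np.array([1.0, 1.3, 0.8, 1.1, 0.9])  # relative RNF19A mRNA, healthy

        fold_change = patients.mean() / controls.mean()         # ~2-fold expected
        labels = np.r_[np.ones_like(patients), np.zeros_like(controls)]
        auc = roc_auc_score(labels, np.r_[patients, controls])  # discrimination accuracy
        print(f"fold change = {fold_change:.2f}, AUC = {auc:.3f}")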

  14. Convergence and approximate calculation of average degree under different network sizes for decreasing random birth-and-death networks

    Science.gov (United States)

    Long, Yin; Zhang, Xiao-Jun; Wang, Kui

    2018-05-01

    In this paper, the convergence and approximate calculation of the average degree under different network sizes for decreasing random birth-and-death networks (RBDNs) are studied. First, we find and demonstrate that the average degree converges in the form of a power law. Meanwhile, we discover that the ratios of later terms to earlier terms of the convergent remainder are independent of the network link number for large network sizes, and we theoretically prove that the limit of this ratio is a constant. Moreover, since it is difficult to calculate the analytical solution of the average degree for large network sizes, we adopt a numerical method to obtain an approximate expression for the average degree. Finally, simulations are presented to verify our theoretical results.

  15. Effects of parental number and duration of the breeding period on the effective population size and genetic diversity of a captive population of the endangered Tokyo bitterling Tanakia tanago (Teleostei: Cyprinidae).

    Science.gov (United States)

    Kubota, Hitoshi; Watanabe, Katsutoshi

    2012-01-01

    The maintenance of genetic diversity is one of the chief concerns in the captive breeding of endangered species. Using microsatellite and mtDNA markers, we examined the effects of two key variables (parental number and duration of the breeding period) on the effective population size (N(e)) and genetic diversity of offspring in an experimental breeding program for the endangered Tokyo bitterling, Tanakia tanago. Average heterozygosity and number of alleles of offspring estimated from microsatellite data increased with parental number in a breeding aquarium, and exhibited higher values for the long breeding period treatment (9 weeks) compared with the short breeding period (3 weeks). Haplotype diversity in mtDNA of offspring decreased with the reduction in parental number, and this tendency was greater for the short breeding period treatment. Genetic estimates of N(e) obtained with two single-sample estimation methods were consistently higher for the long breeding period treatment with the same number of parental fish. Average N(e)/N ratios ranged from 0.5 to 1.4, and were especially high in the long breeding period with small and medium parental number treatments. Our results suggest that the spawning intervals of females and alternative mating behaviors of males influence the effective size and genetic diversity of offspring in bitterling. To maintain the genetic diversity of captive T. tanago, we recommend that captive breeding programs be conducted for a sufficiently long period with an optimal level of parental density, as well as using an adequate number of parents. © 2011 Wiley Periodicals, Inc.

  16. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
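
    Two of the hourly-value types compared above are simple to construct from 1-min data; a Python sketch with synthetic data:

        import numpy as np

        rng = np.random.default_rng(0)
        minute_data = np.cumsum(rng.normal(size=24 * 60))  # one day of 1-min values

        spot = minute_data[::60]                           # instantaneous "spot" sample each hour
        boxcar = minute_data.reshape(24, 60).mean(axis=1)  # simple 1-h "boxcar" average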

  17. SU-E-T-174: Evaluation of the Optimal Intensity Modulated Radiation Therapy Plans Done On the Maximum and Average Intensity Projection CTs

    Energy Technology Data Exchange (ETDEWEB)

    Jurkovic, I [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States); Stathakis, S; Li, Y; Patel, A; Vincent, J; Papanikolaou, N; Mavroidis, P [Cancer Therapy and Research Center University of Texas Health Sciences Center at San Antonio, San Antonio, TX (United States)

    2014-06-01

    Purpose: To determine the difference in coverage between plans done on average intensity projection and maximum intensity projection CT data sets for lung patients and to establish correlations between different factors influencing the coverage. Methods: For six lung cancer patients, the maximum and average intensity projections (MIP and AIP) were obtained from the 10 equal-duration phases of the respiratory cycle in their 4DCT datasets. The MIP and AIP datasets had three GTVs delineated (GTVaip — delineated on AIP, GTVmip — delineated on MIP and GTVfus — delineated on each of the 10 phases and summed up). From each GTV, planning target volumes (PTV) were then created by adding additional margins. For each of the PTVs an IMRT plan was developed on the AIP dataset. The plans were then copied to the MIP data set and recalculated. Results: The effective depths in the AIP cases were significantly smaller than in the MIP cases (p < 0.001). The Pearson correlation coefficient of r = 0.839 indicates a strong positive linear relationship between the average percentage difference in effective depths and the average PTV coverage on the MIP data set. The V20Gy of the involved lung depends on the PTV coverage. The relationship between the PTVaip mean CT number difference and the PTVaip coverage on the MIP data set gives r = 0.830. When the plans are produced on MIP and copied to AIP, r equals −0.756. Conclusion: The correlation between the AIP and MIP data sets indicates that the selection of the data set for developing the treatment plan affects the final outcome (cases with a high average percentage difference in effective depths between AIP and MIP should be calculated on AIP). The percentage of the lung volume receiving a higher dose depends on how well the PTV is covered, regardless of on which data set the plan is done.

  18. Sequence diversity and copy number variation of Mutator-like transposases in wheat

    Directory of Open Access Journals (Sweden)

    Nobuaki Asakura

    2008-01-01

    Full Text Available Partial transposase-coding sequences of Mutator-like elements (MULEs) were isolated from a wild einkorn wheat, Triticum urartu, by degenerate PCR. The isolated sequences were classified into a MuDR or Class I clade and divided into two distinct subclasses (subclass I and subclass II). The average pair-wise identity between members of both subclasses was 58.8% at the nucleotide sequence level. Sequence diversity of subclass I was larger than that of subclass II. DNA gel blot analysis showed that subclass I was present as low copy number elements in the genomes of all Triticum and Aegilops accessions surveyed, while subclass II was present as high copy number elements. These two subclasses seemed incapable of recognizing each other for transposition. The number of copies of subclass II elements was much higher in Aegilops with the S, Sl and D genomes and polyploid Triticum species than in diploid Triticum with the A genome, indicating that active transposition occurred in the S, Sl and D genomes before polyploidization. DNA gel blot analysis of six species selected from three subfamilies of Poaceae demonstrated that only the tribe Triticeae possessed both subclasses. These results suggest that the differentiation of these two subclasses occurred before or immediately after the establishment of the tribe Triticeae.

  19. Work-related stress in midlife is associated with higher number of mobility limitation in older age-results from the FLAME study.

    Science.gov (United States)

    Kulmala, Jenni; Hinrichs, Timo; Törmäkangas, Timo; von Bonsdorff, Mikaela B; von Bonsdorff, Monika E; Nygård, Clas-Håkan; Klockars, Matti; Seitsamo, Jorma; Ilmarinen, Juhani; Rantanen, Taina

    2014-01-01

    The aim of this study is to investigate whether work-related stress symptoms in midlife are associated with the number of mobility limitations during three decades from midlife to late life. Data for the study come from the Finnish Longitudinal Study of Municipal Employees (FLAME). The study includes a total of 5429 public sector employees aged 44-58 years at baseline who had information available on work-related stress symptoms in 1981 and 1985 and on the mobility limitation score during the subsequent 28-year follow-up. Four midlife work-related stress profiles were identified: negative reactions to work and depressiveness, perceived decrease in cognition, sleep disturbances, and somatic symptoms. People with a high number of stress symptoms in both 1981 and 1985 were categorized as having constant stress. The number of self-reported mobility limitations was computed based on an eight-item list of mobility tasks presented to the participants in 1992, 1997, and 2009. Data were analyzed using joint Poisson regression models. The study showed that, depending on the stress profile, persons suffering from constant stress in midlife had a 30-70% higher risk of having one more mobility limitation during the following 28 years compared to persons without stress, after adjusting for mortality, several lifestyle factors, and chronic conditions. A less pronounced risk increase (20-40%) was observed for persons with occasional symptoms. The study suggests that effective interventions aiming to reduce work-related stress should focus on both primary and secondary prevention.

  20. Average size of random polygons with fixed knot topology.

    Science.gov (United States)

    Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo

    2003-07-01

    We have evaluated by numerical simulation the average size R(K) of random polygons of fixed knot topology K = ∅, 3_1, 3_1♯4_1, and we have confirmed the scaling law R^2(K) ~ N^{2ν(K)} for the number N of polygonal nodes in a wide range, N = 100-2200. The best fit gives 2ν(K) ≈ 1.11-1.16, with good fitting curves in the whole range of N. The estimate of 2ν(K) is consistent with the exponent of self-avoiding polygons. In a limited range of N (N ≳ 600), however, we have another fit with 2ν(K) ≈ 1.01-1.07, which is close to the exponent of random polygons.
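
    The quoted exponents come from fitting the scaling law R^2(K) ~ N^{2ν(K)} on a log-log scale; a Python sketch with synthetic (N, R^2) pairs:

        import numpy as np

        N = np.array([100, 200, 400, 800, 1600, 2200], dtype=float)
        R2 = 0.3 * N**1.14                       # synthetic data with 2ν = 1.14

        slope, intercept = np.polyfit(np.log(N), np.log(R2), 1)
        print(f"estimated 2ν = {slope:.3f}")     # recovers ~1.14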

  1. Year-ahead prediction of US landfalling hurricane numbers: intense hurricanes

    OpenAIRE

    Khare, Shree; Jewson, Stephen

    2005-01-01

    We continue with our program to derive simple practical methods that can be used to predict the number of US landfalling hurricanes a year in advance. We repeat an earlier study, but for a slightly different definition of landfalling hurricanes, and for intense hurricanes only. We find that the averaging lengths needed for optimal predictions of numbers of intense hurricanes are longer than those needed for optimal predictions of numbers of hurricanes of all strengths.

  2. Estimate of average glandular dose (AGD) in national clinics of mammography

    International Nuclear Information System (INIS)

    Mora, Patricia; Segura, Helena

    2004-01-01

    Breast cancer represents the second cause of death by cancer in the female population of our country. Specialized equipment for obtaining mammographic images grows more capable every day and its use increases daily. The quality of the radiographic study is linked to the dose that this intrinsically radiosensitive tissue receives from ionizing radiation. The present work constitutes the first national study to quantify average glandular doses and to relate them to diagnostic quality and to international recommendations. (Author)

  3. Characteristics of the higher education system

    NARCIS (Netherlands)

    Jongbloed, Benjamin W.A.; Sijgers, Irene; Hammer, Matthijs; ter Horst, Wolf; Nieuwenhuis, Paul; van der Sijde, Peter

    2005-01-01

    This chapter presents an overview of the main characteristics of the higher education system in the Netherlands. Section 2.1 presents some key facts about the system as a whole (types of institutions, number of students, degrees). Section 2.2 discusses the different types of higher education

  4. New technique for number-plate recognition

    Science.gov (United States)

    Guo, Jie; Shi, Peng-Fei

    2001-09-01

    This paper presents an alternative algorithm for number-plate recognition. The algorithm consists of three modules: a number-plate location module, a character segmentation module and a character recognition module. The number-plate location module extracts the number plate from the detected car image by analyzing color and texture properties. Unlike most license-plate location methods, the algorithm places fewer restrictions on car size, car position in the image and image background. The character segmentation module applies a connected-region algorithm both to eliminate noise points and to segment characters; touching characters and broken characters can be processed correctly. The character recognition module recognizes characters with an HHIC (Hierarchical Hybrid Integrated Classifier). The system has been tested with 100 images obtained from crossroads, parking lots, etc., where the cars have different sizes, positions, backgrounds and illumination. The successful recognition rate is about 92%, and the average processing time is 1.2 seconds.
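
    The connected-region step of the segmentation module can be sketched with standard Python tools; the size thresholds below are assumptions for illustration, not the paper's values:

        import numpy as np
        from scipy import ndimage

        def segment_characters(binary_plate, min_pixels=30, max_pixels=2000):
            # Label connected components in a binarized plate image, reject
            # regions too small or too large to be characters (noise removal),
            # and return the surviving crops in left-to-right order.
            labels, n = ndimage.label(binary_plate)
            chars = []
            for i in range(1, n + 1):
                ys, xs = np.nonzero(labels == i)
                if min_pixels <= ys.size <= max_pixels:
                    chars.append((xs.min(), binary_plate[ys.min():ys.max() + 1,
                                                         xs.min():xs.max() + 1]))
            chars.sort(key=lambda c: c[0])
            return [crop for _, crop in chars]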

  5. Academically Ambitious and Relevant Higher Education Research: The Legacy of the Consortium of Higher Education Researchers

    Science.gov (United States)

    Teichler, Ulrich

    2013-01-01

    The Consortium of Higher Education Researchers (CHER) was founded in 1988 to stimulate international communication and collaboration of higher education researchers. A need was felt to offset the isolation of the small numbers of scholars in this area of expertise in many countries, as well as the isolation of individual disciplines addressing…

  6. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
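
    In symbols, the asymptotic equality described above can be rendered as follows (a hedged LaTeX sketch: B denotes the covariance operator of the Gaussian measure, α the small scaling parameter, and the normalization and error term are assumptions of this rendering):

        \int_{H} f(\phi)\, \mathrm{d}\mu_{\alpha B}(\phi)
            = f(0) + \frac{\alpha}{2}\, \operatorname{Tr}\bigl[ B\, f''(0) \bigr] + o(\alpha),
            \qquad \alpha \to 0 .

    The trace on the right-hand side has the same form as the von Neumann average Tr[ρ A], which is what permits the 'dequantization' reading of quantum averages as limits of classical Gaussian ones.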

  7. XY model with higher-order exchange.

    Science.gov (United States)

    Žukovič, Milan; Kalagov, Georgii

    2017-08-01

    An XY model, generalized by inclusion of up to an infinite number of higher-order pairwise interactions with an exponentially decreasing strength, is studied by spin-wave theory and Monte Carlo simulations. At low temperatures the model displays a quasi-long-range-order phase characterized by an algebraically decaying correlation function with the exponent η=T/[2πJ(p,α)], nonlinearly dependent on the parameters p and α that control the number of the higher-order terms and the decay rate of their intensity, respectively. At higher temperatures the system shows a crossover from the continuous Berezinskii-Kosterlitz-Thouless to the first-order transition for the parameter values corresponding to a highly nonlinear shape of the potential well. The role of topological excitations (vortices) in changing the nature of the transition is discussed.
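
    Written out, the quasi-long-range order quoted above corresponds to the standard algebraic decay of the spin pair correlation (the correlation form is the usual one for BKT-type phases; only the exponent is specific to this model):

        G(r) = \langle \cos(\theta_i - \theta_{i+r}) \rangle \sim r^{-\eta},
        \qquad \eta = \frac{T}{2\pi J(p,\alpha)} .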

  8. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

  9. Gender Gaps in High School GPA and ACT Scores: High School Grade Point Average and ACT Test Score by Subject and Gender. Information Brief 2014-12

    Science.gov (United States)

    ACT, Inc., 2014

    2014-01-01

    Female students who graduated from high school in 2013 averaged higher grades than their male counterparts in all subjects, but male graduates earned higher scores on the math and science sections of the ACT. This information brief looks at high school grade point average and ACT test score by subject and gender

  10. Criticality evaluation of BWR MOX fuel transport packages using average Pu content

    International Nuclear Information System (INIS)

    Mattera, C.; Martinotti, B.

    2004-01-01

    Currently in France, criticality studies in transport configurations for Boiling Water Reactor Mixed Oxide fuel assemblies are based on the conservative hypothesis that all rods (Mixed Oxide (Uranium and Plutonium), Uranium Oxide, and Uranium and Gadolinium Oxide rods) are Mixed Oxide rods with the same Plutonium content, corresponding to the maximum value. In that way, the real heterogeneous mapping of the assembly is masked and covered by a homogeneous Plutonium-content assembly enriched at the maximum value. As this calculation hypothesis is extremely conservative, COGEMA LOGISTICS has studied a new calculation method based on the average Plutonium content in the criticality studies. The use of the average Plutonium content instead of the real Plutonium-content profiles provides the highest reactivity value, which makes it globally conservative. This method can be applied to all Boiling Water Reactor Mixed Oxide complete fuel assemblies of type 8 x 8, 9 x 9 and 10 x 10 whose Plutonium content by mass weight does not exceed 15%; it provides advantages which are discussed in our approach. With this new method, for the same package reactivity, the Pu content allowed in the package design approval can be higher. The COGEMA LOGISTICS new method allows, at the design stage, optimisation of the basket, materials or geometry for higher payload, keeping the same reactivity.

  11. Association between average daily gain, faecal dry matter content and concentration of Lawsonia intracellularis in faeces

    DEFF Research Database (Denmark)

    Pedersen, Ken Steen; Skrubel, Rikke; Stege, Helle

    2012-01-01

    Background The objective of this study was to investigate the association between average daily gain and the number of Lawsonia intracellularis bacteria in faeces of growing pigs with different levels of diarrhoea. Methods A longitudinal field study (n = 150 pigs) was performed in a Danish herd f...

  12. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  13. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain....
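    The flavour of the construction can be illustrated with a toy monotone chain (a drastically simplified sketch, not the authors' estimator: the update rule, state space and function f are all invented). Two chains started at the extremes of the partial order and driven by shared randomness stay ordered, so ergodic averages of a monotone f along them sandwich the average of any trajectory started in between:

```python
import random

K = 20  # state space {0, ..., K}

def update(x, u):
    """Monotone update: move up if u is small, down otherwise."""
    step = 1 if u < 0.45 else -1
    return max(0, min(K, x + step))

def sandwich_means(f, n_steps=100_000, seed=1):
    rng = random.Random(seed)
    lo, hi = 0, K          # dominating chains start at the extremes
    sum_lo = sum_hi = 0.0
    for _ in range(n_steps):
        u = rng.random()   # shared randomness preserves lo <= hi
        lo, hi = update(lo, u), update(hi, u)
        sum_lo += f(lo)
        sum_hi += f(hi)
    return sum_lo / n_steps, sum_hi / n_steps

lower, upper = sandwich_means(f=lambda x: x)
print(f"ergodic average of f bounded by [{lower:.3f}, {upper:.3f}]")
```

    No burn-in needs to be chosen: the gap between the two bounding averages itself reports how far the chains are from agreement.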

  14. Application of method of volume averaging coupled with time resolved PIV to determine transport characteristics of turbulent flows in porous bed

    Science.gov (United States)

    Patil, Vishal; Liburdy, James

    2012-11-01

    Turbulent porous media flows are encountered in catalytic bed reactors and heat exchangers. Dispersion and mixing properties of these flows play an essential role in efficiency and performance. In an effort to understand these flows, pore scale time resolved PIV measurements in a refractive index matched porous bed were made. Pore Reynolds numbers, based on hydraulic diameter and pore average velocity, were varied from 400-4000. Jet-like flows and recirculation regions associated with large scale structures were found to exist. Coherent vortical structures which convect at approximately 0.8 times the pore average velocity were identified. These different flow regions exhibited different turbulent characteristics and hence contributed unequally to global transport properties of the bed. The heterogeneity present within a pore and also from pore to pore can be accounted for in estimating transport properties using the method of volume averaging. Eddy viscosity maps and mean velocity field maps, both obtained from PIV measurements, along with the method of volume averaging were used to predict the dispersion tensor versus Reynolds number. Asymptotic values of dispersion compare well to existing correlations. The role of molecular diffusion was explored by varying the Schmidt number and molecular diffusion was found to play an important role in tracer transport, especially in recirculation regions. Funding by NSF grant 0933857, Particulate and Multiphase Processing.

  15. Potential of high-average-power solid state lasers

    International Nuclear Information System (INIS)

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-01-01

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels

  16. Phase-Averaged Method Applied to Periodic Flow Between Shrouded Corotating Disks

    Directory of Open Access Journals (Sweden)

    Shen-Chun Wu

    2003-01-01

    Full Text Available This study investigates the coherent flow fields between corotating disks in a cylindrical enclosure. By using two laser velocimeters and a phase-averaged technique, the vortical structures of the flow could be reconstructed and their dynamic behavior was observed. The experimental results reveal clearly that the flow field between the disks is composed of three distinct regions: an inner region near the hub, an outer region, and a shroud boundary layer region. The outer region is distinguished by the presence of large vortical structures. The number of vortical structures corresponds to the normalized frequency of the flow.
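    Phase averaging itself reduces to binning instantaneous samples by the phase of a periodic reference signal and averaging within each bin; a minimal sketch with synthetic data (the signal, bin count and sampling rate are assumptions, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time series: a 50 Hz coherent velocity signal plus noise.
n_samples, n_bins = 20_000, 16
t = np.arange(n_samples) * 1e-3               # time stamps [s]
phase = (2 * np.pi * 50.0 * t) % (2 * np.pi)  # phase of the reference signal
u = np.sin(phase) + 0.3 * rng.standard_normal(n_samples)

# Phase averaging: bin the samples by phase and average within each bin;
# the result approximates the coherent (periodic) part of the signal,
# and u minus its phase average is the random (turbulent) residue.
bins = (phase / (2 * np.pi) * n_bins).astype(int)
phase_avg = np.array([u[bins == b].mean() for b in range(n_bins)])
print(phase_avg.round(2))
```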

  17. Intuitive numbers guide decisions

    Directory of Open Access Journals (Sweden)

    Ellen Peters

    2008-12-01

    Full Text Available Measuring reaction times to number comparisons is thought to reveal a processing stage in elementary numerical cognition linked to internal, imprecise representations of number magnitudes. These intuitive representations of the mental number line have been demonstrated across species and human development but have been little explored in decision making. This paper develops and tests hypotheses about the influence of such evolutionarily ancient, intuitive numbers on human decisions. We demonstrate that individuals with more precise mental-number-line representations are higher in numeracy (number skills), consistent with previous research with children. Individuals with more precise representations (compared to those with less precise representations) also were more likely to choose larger, later amounts over smaller, immediate amounts, particularly with a larger proportional difference between the two monetary outcomes. In addition, they were more likely to choose an option with a larger proportional but smaller absolute difference compared to those with less precise representations. These results are consistent with intuitive number representations underlying: (a) perceived differences between numbers, (b) the extent to which proportional differences are weighed in decisions, and, ultimately, (c) the valuation of decision options. Human decision processes involving numbers important to health and financial matters may be rooted in elementary, biological processes shared with other species.

  18. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    Science.gov (United States)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
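    A compact illustration of the flavour of such a procedure (a plain multialternative sequential likelihood-ratio test with a fixed threshold, not Fishman's exact construction; the distributions, threshold and stopping rule are assumptions):

```python
import math
import random

# Three simple hypotheses about the mean of Gaussian observations.
MEANS = [-1.0, 0.0, 1.0]
SIGMA = 1.0
THRESHOLD = math.log(1.0 / 0.01)  # stop when the leading log-odds exceed this

def loglik(x, mu):
    return -0.5 * ((x - mu) / SIGMA) ** 2

def sequential_test(sample, max_n=10_000):
    """Observe one sample at a time; stop when one hypothesis dominates."""
    ll = [0.0] * len(MEANS)
    for n in range(1, max_n + 1):
        x = sample()
        ll = [l + loglik(x, mu) for l, mu in zip(ll, MEANS)]
        best = max(range(len(MEANS)), key=lambda i: ll[i])
        # Compare the leading hypothesis against its strongest rival.
        rival = max(ll[i] for i in range(len(MEANS)) if i != best)
        if ll[best] - rival > THRESHOLD:
            return best, n  # decision and number of observations used
    return None, max_n

random.seed(3)
decision, n_obs = sequential_test(lambda: random.gauss(1.0, SIGMA))
print(f"accepted hypothesis {decision} after {n_obs} observations")
```

    The number of observations used is random: easy cases stop early, which is the source of the average-length savings over a fixed-length test.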

  19. [CHALLENGING THE OPTIMAL NUMBER OF RETRIEVED OOCYTES AND ITS IMPACT ON PREGNANCY AND LIVE BIRTH RATES IN IVF/ICSI CYCLES].

    Science.gov (United States)

    Blais, Idit; Lahav-Baratz, Shirly; Koifman, Mara; Wiener-Megnazi, Zofnat; Auslender, Ron; Dirnfeld, Martha

    2015-06-01

    Large numbers of retrieved oocytes are associated with higher chances of embryo cryopreservation. However, the process entailed exposes women to an increased risk of ovarian hyperstimulation syndrome. Furthermore, mild ovarian stimulation protocols are more patient-friendly and have fewer adverse effects. Only limited reports exist on the significance of the number of retrieved oocytes achieved in a single stimulation cycle. Our aim was to investigate the optimal number of retrieved oocytes to achieve pregnancy and live birth. This retrospective analysis included 1590 IVF cycles. Oocyte maturation, fertilization and cleavage, as well as pregnancy and live birth rates, were analyzed according to the number of retrieved oocytes. Oocyte maturation, fertilization and cleavage rates were lower in cycles with more than 10 retrieved oocytes compared with other groups. Live birth rates were highest when the number of retrieved oocytes was 11-15. Retrieval of more than 15 oocytes was not associated with a significant increase in the chances of conception and birth. The better oocyte quality with 10 or fewer oocytes retrieved could be the result of a possible interference with natural selection, or the minimized exposure of growing follicles to the potentially negative effects of ovarian stimulation. Although the average number of available embryos was higher when more than 10 oocytes were retrieved, retrieval of more than 15 oocytes did not improve IVF outcome in terms of pregnancy and delivery rates. Analysis of 1590 IVF cycles, including the frozen-thawed transfers, shows that the best outcomes were achieved with an optimal number of 11-15 oocytes.

  20. Quantitative metagenomic analyses based on average genome size normalization

    DEFF Research Database (Denmark)

    Frank, Jeremy Alexander; Sørensen, Søren Johannes

    2011-01-01

    provide not just a census of the community members but direct information on metabolic capabilities and potential interactions among community members. Here we introduce a method for the quantitative characterization and comparison of microbial communities based on the normalization of metagenomic data...... marine sources using both conventional small-subunit (SSU) rRNA gene analyses and our quantitative method to calculate the proportion of genomes in each sample that are capable of a particular metabolic trait. With both environments, to determine what proportion of each community they make up and how......). These analyses demonstrate how genome proportionality compares to SSU rRNA gene relative abundance and how factors such as average genome size and SSU rRNA gene copy number affect sampling probability and therefore both types of community analysis....
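    The normalization idea can be sketched as follows: if larger genomes are sampled proportionally more often, dividing a gene family's read count by the count expected for a universal single-copy gene corrects for average genome size. A toy version with made-up counts (not the authors' pipeline):

```python
# Toy average-genome-size style normalization: convert raw read counts
# for a trait gene into "fraction of genomes carrying the trait".
# All numbers are illustrative.

samples = {
    #            reads hitting   reads hitting a universal
    #            the trait gene  single-copy marker gene
    "sample_A": {"trait": 1_200, "single_copy": 8_000},
    "sample_B": {"trait": 1_200, "single_copy": 3_000},
}

for name, counts in samples.items():
    # Each genome contributes exactly one single-copy marker, so the
    # marker count estimates the number of genomes sampled; the ratio
    # is then genomes-with-trait per genome, independent of genome size.
    fraction = counts["trait"] / counts["single_copy"]
    print(f"{name}: ~{100 * fraction:.0f}% of genomes carry the trait")
```

    Note how the same raw trait count corresponds to very different community proportions once the sampling depth in genomes is accounted for.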

  1. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  2. Evidence for a Higher Number of Species of Odontotermes (Isoptera) than Currently Known from Peninsular Malaysia from Mitochondrial DNA Phylogenies

    Science.gov (United States)

    Cheng, Shawn; Kirton, Laurence G.; Panandam, Jothi M.; Siraj, Siti S.; Ng, Kevin Kit-Siong; Tan, Soon-Guan

    2011-01-01

    Termites of the genus Odontotermes are important decomposers in the Old World tropics and are sometimes important pests of crops, timber and trees. The species within the genus often have overlapping size ranges and are difficult to differentiate based on morphology. As a result, the taxonomy of Odontotermes in Peninsular Malaysia has not been adequately worked out. In this study, we examined the phylogeny of 40 samples of Odontotermes from Peninsular Malaysia using two mitochondrial DNA regions, that is, the 16S ribosomal RNA and cytochrome oxidase subunit I genes, to aid in elucidating the number of species in the peninsula. Phylogenies were reconstructed from the individual gene and combined gene data sets using parsimony and likelihood criteria. The phylogenies supported the presence of up to eleven species in Peninsular Malaysia, which were identified as O. escherichi, O. hainanensis, O. javanicus, O. longignathus, O. malaccensis, O. oblongatus, O. paraoblongatus, O. sarawakensis, and three possibly new species. Additionally, some of our taxa are thought to comprise a complex of two or more species. The number of species found in this study using DNA methods was more than the initial nine species thought to occur in Peninsular Malaysia. The support values for the clades and morphology of the soldiers provided further evidence for the existence of eleven or more species. Higher resolution genetic markers such as microsatellites would be required to confirm the presence of cryptic species in some taxa. PMID:21687629

  3. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  4. Fourier analysis in combinatorial number theory

    International Nuclear Information System (INIS)

    Shkredov, Il'ya D

    2010-01-01

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  5. Fourier analysis in combinatorial number theory

    Energy Technology Data Exchange (ETDEWEB)

    Shkredov, Il'ya D [M. V. Lomonosov Moscow State University, Moscow (Russian Federation)

    2010-09-16

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  6. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We verify the model's outcome with examples and with simulation results obtained using the NS2 simulator.
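    A minimal sketch of a weighted-fair-share computation of the kind the abstract describes (a standard max-min style iteration; the flow names, weights and rates below are invented, and the paper's exact model is not reproduced):

```python
# Iterative WFQ-style bandwidth assignment: each active flow gets a
# share of the link proportional to its weight, but never more than
# its input rate; freed capacity is redistributed among the others.

def wfq_bandwidth(link_speed, flows):
    """flows: dict name -> (weight, input_rate); returns name -> rate."""
    alloc = {}
    remaining = dict(flows)
    capacity = link_speed
    while remaining:
        total_w = sum(w for w, _ in remaining.values())
        # Flows whose demand is below their fair share are capped.
        capped = {n: (w, r) for n, (w, r) in remaining.items()
                  if r <= capacity * w / total_w}
        if not capped:
            for n, (w, r) in remaining.items():
                alloc[n] = capacity * w / total_w
            break
        for n, (w, r) in capped.items():
            alloc[n] = r
            capacity -= r
            del remaining[n]
    return alloc

print(wfq_bandwidth(100.0, {"voice": (4, 10.0),
                            "video": (4, 60.0),
                            "data":  (2, 80.0)}))
```

    Each pass either caps at least one flow at its input rate or distributes the remaining capacity proportionally to the weights, so the iteration terminates.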

  7. Modern Higher Education Students within a Non-Traditional Higher Education Space: Not Fitting In, Often Falling Out

    Science.gov (United States)

    Mc Taggart, Breda

    2016-01-01

    A growing number of studies are focusing on the "fit" between the higher education student and the educational institution. These studies show that a lack of fit between the two generates anxiety, ultimately acting as a barrier to student learning. Research involving 23 higher education students attending a dual-sector further and higher…

  8. Acute costs and predictors of higher treatment costs of trauma in New South Wales, Australia.

    Science.gov (United States)

    Curtis, Kate; Lam, Mary; Mitchell, Rebecca; Black, Deborah; Taylor, Colman; Dickson, Cara; Jan, Stephen; Palmer, Cameron S; Langcake, Mary; Myburgh, John

    2014-01-01

    Accurate economic data are fundamental for improving current funding models and ultimately for promoting the efficient delivery of services. The financial burden of a high trauma casemix to designated trauma centres in Australia has not been previously determined, and there is some evidence that the episode funding model used in Australia results in the underfunding of trauma. To describe the costs of acute trauma admissions in trauma centres, identify predictors of higher treatment costs and cost variance in New South Wales (NSW), Australia. Data linkage of admitted trauma patient and financial data provided by 12 Level 1 NSW trauma centres for the 08/09 financial year was performed. Demographic and injury details and injury scores were obtained from trauma registries. Individual patient general ledger costs (actual trauma patient costs), Australian Refined Diagnostic Related Group (AR-DRG) costs and state-wide average costs (which form the basis of funding) were obtained. The actual costs incurred by the hospitals were then compared with the state-wide AR-DRG average costs. Multivariable multiple linear regression was used to identify predictors of costs. There were 17,522 patients; the average cost per patient was $10,603 and the median was $4628 (interquartile range: $2179-10,148). The actual costs incurred by trauma centres were on average $134 per bed day above the AR-DRG-determined costs. Falls, road trauma and violence were the highest causes of total cost. Motorcyclists and pedestrians had higher median costs than motor vehicle occupants. As a result of greater numbers, patients with minor injury had total costs comparable to those generated by patients with severe injury. However, the median cost of severely injured patients was nearly four times greater. The count of body regions injured, sex, length of stay, serious traumatic brain injury and admission to the Intensive Care Unit were significantly associated with increased costs (p<0.001). This

  9. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    does not contain information, but it has the disadvantage of nearly doubling the number of model parameters to be estimated. Second, the BMA procedure is run with group mean wind power as the response variable instead of group mean wind speed. This also solves the problem with longer consecutive periods without information in the input data, but it leaves the power curve to also be estimated from the data. [1] Raftery, A. E., et al. (2005). Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review, 133, 1155-1174. [2]Revheim, P. P. and H. G. Beyer (2013). Using Bayesian Model Averaging for wind farm group forecasts. EWEA Wind Power Forecasting Technology Workshop,Rotterdam, 4-5 December 2013. [3]Sloughter, J. M., T. Gneiting and A. E. Raftery (2010). Probabilistic Wind Speed Forecasting Using Ensembles and Bayesian Model Averaging. Journal of the American Statistical Association, Vol. 105, No. 489, 25-35
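    As a sketch of the basic BMA combination step from the Raftery et al. (2005) reference cited above (the weights, member forecasts and within-member variance are invented placeholders; the group-mean wind-power setup of the abstract is not reproduced):

```python
import numpy as np

# BMA combines K ensemble member forecasts into one predictive mean:
# the weights reflect each member's historical skill and sum to one.
weights = np.array([0.5, 0.3, 0.2])      # assumed, e.g. trained via EM
forecasts = np.array([12.1, 14.8, 9.6])  # member wind-power forecasts [MW]

bma_mean = float(weights @ forecasts)
print(f"BMA forecast: {bma_mean:.2f} MW")

# The BMA predictive variance adds the between-member spread to the
# (assumed) within-member variance sigma2 of each component PDF.
sigma2 = 4.0
bma_var = float(weights @ (forecasts - bma_mean) ** 2) + sigma2
print(f"BMA predictive variance: {bma_var:.2f} MW^2")
```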

  10. Average BER analysis of SCM-based free-space optical systems by considering the effect of IM3 with OSSB signals under turbulence channels.

    Science.gov (United States)

    Lim, Wansu; Cho, Tae-Sik; Yun, Changho; Kim, Kiseon

    2009-11-09

    In this paper, we derive the average bit error rate (BER) of subcarrier multiplexing (SCM)-based free space optics (FSO) systems using a dual-drive Mach-Zehnder modulator (DD-MZM) for optical single-sideband (OSSB) signals under atmospheric turbulence channels. In particular, we consider the third-order intermodulation (IM3), a significant performance degradation factor, in the case of high input signal power systems. The derived average BER, as a function of the input signal power and the scintillation index, is employed to determine the optimum number of SCM users when designing FSO systems. For instance, when the number of users doubles, the input signal power decreases by almost 2 dBm under the log-normal and exponential turbulence channels at a given average BER.

  11. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  12. Neural activation patterns and connectivity in visual attention during Number and Non-number processing: An ERP study using the Ishihara pseudoisochromatic plates.

    Science.gov (United States)

    Al-Marri, Faraj; Reza, Faruque; Begum, Tahamina; Hitam, Wan Hazabbah Wan; Jin, Goh Khean; Xiang, Jing

    2017-10-25

    Visual cognitive function is important for building up executive function in daily life. Perception of visual Number form (e.g., an Arabic digit) and numerosity (the magnitude of the Number) is of interest to cognitive neuroscientists. Neural correlates and the functional measurement of Number representations are complex occurrences when their semantic categories are assimilated with other concepts of shape and colour. Colour perception can be processed further to modulate visual cognition. The Ishihara pseudoisochromatic plates are one of the best and most common screening tools for basic red-green colour vision testing. However, there is a lack of study of visual cognitive function assessment using these pseudoisochromatic plates. We recruited 25 healthy normal trichromat volunteers and extended these studies using a 128-sensor net to record event-related EEG. Subjects were asked to respond by pressing numbered buttons when they saw the Number and Non-number plates of the Ishihara colour vision test. Amplitudes and latencies of the N100 and P300 event-related potential (ERP) components were analysed from 19 electrode sites in the international 10-20 system. A brain topographic map, cortical activation patterns and Granger causality (effective connectivity) were analysed from 128 electrode sites. The absence of major significant differences in the N100 ERP components between stimuli indicates that early selective attention processing was similar for Number and Non-number plate stimuli, but Non-number plate stimuli evoked significantly higher amplitudes and longer latencies of the P300 ERP component, with slower reaction times, than Number plate stimuli, implying that more attentional load was allocated in Non-number plate processing. A different pattern of asymmetric scalp voltage map was noticed for P300 components, with higher intensity in the left hemisphere for Number plate tasks and higher intensity in the right hemisphere for Non-number plate tasks. Asymmetric cortical activation

  13. The Effectiveness of Korean Number Naming on Insight into Numbers in Dutch Students with Mild Intellectual Disabilities

    Science.gov (United States)

    Van Luit, Johannes E. H.; Van der Molen, Mariet J.

    2011-01-01

    Background: Children from Asian countries score higher on early years' arithmetic tests than children from Europe or the United States of America. An explanation for these differences may be the way numbers are named. A clear ten-structure like in the Korean language method leads to a better insight into numbers and arithmetic skills. This…

  14. Type number and rigidity of fibred surfaces

    International Nuclear Information System (INIS)

    Markov, P E

    2001-01-01

    Infinitesimal l-th order bendings, 1 ≤ l ≤ ∞, of higher-dimensional surfaces are considered in higher-dimensional flat spaces (for l = ∞ an infinitesimal bending is assumed to be an analytic bending). In terms of the Allendoerfer type number, criteria are established for the (r,l)-rigidity (in the terminology of Sabitov) of such surfaces. In particular, an (r,l)-infinitesimal analogue of the classical theorem of Allendoerfer on the unbendability of surfaces with type number ≥ 3 is proved, and the class of (r,l)-rigid fibred surfaces is distinguished.

  15. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...
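    For reference, the harmonic mean of two quadrature variances takes the standard form (a textbook definition, not a formula quoted from the paper):

```latex
\[
  \bar V_{\mathrm{harm}}
  \;=\; \left( \tfrac{1}{2}\left( \tfrac{1}{V_1} + \tfrac{1}{V_2} \right) \right)^{-1}
  \;=\; \frac{2 V_1 V_2}{V_1 + V_2}
\]
```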

  16. Grade Point Average System of Assessment: the Implementation Peculiarities in Russia

    Directory of Open Access Journals (Sweden)

    B. A. Sazonov

    2012-01-01

    Full Text Available The paper analyzes the specificity, as well as flaws and faults of implementing the Grade Point Average (GPA system of students’ personal assessment in Russian higher schools. Nowadays, the above system is regarded as the basic functional element of educational process organization at the world’s leading universities. The author summarizes the foreign experience and demonstrates the advantages of the GPA system in comparison with the traditional domestic scale of assessment: full records of student’s assessment, objectivity, activation of responsibility for the results achieved, and self-control motivation. The standard GPA model is demonstrated, its application systemizing both the Russian and European requirements to the higher school graduates. The author suggests his own version of the assessment scale estimating and comparing the quality of education in Russian universities and worldwide. The research findings can be of interest to the specialists in the sphere of quality measurement and educational management. 
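    The underlying arithmetic of a GPA is simply a credit-weighted average of grade points; a minimal sketch on an assumed 4-point scale with an invented transcript:

```python
# Credit-weighted Grade Point Average on an assumed 4-point scale.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

courses = [  # (grade, credit hours) -- illustrative transcript
    ("A", 4), ("B", 3), ("A", 2), ("C", 3),
]

quality_points = sum(GRADE_POINTS[g] * h for g, h in courses)
credit_hours = sum(h for _, h in courses)
gpa = quality_points / credit_hours
print(f"GPA = {gpa:.2f}")  # (4*4 + 3*3 + 4*2 + 2*3) / 12 = 3.25
```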

  17. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.
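    The aperture averaging factor mentioned above is conventionally defined as the ratio of the irradiance flux variance over an aperture of diameter D to that of a point receiver (a standard definition rather than a result of this paper):

```latex
\[
  A(D) \;=\; \frac{\sigma_I^2(D)}{\sigma_I^2(0)} \;\le\; 1
\]
```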

  18. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  19. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  20. Taxonomic status of the Bemisia tabaci complex (Hemiptera: Aleyrodidae) and reassessment of the number of its constituent species.

    Directory of Open Access Journals (Sweden)

    Wonhoon Lee

    Full Text Available Bemisia tabaci (Hemiptera: Aleyrodidae) is one of the most important insect pests in the world. In the present study, the taxonomic status of B. tabaci and the number of species composing the B. tabaci complex were determined based on 1059 COI sequences of B. tabaci and 509 COI sequences of 153 hemipteran species. The genetic divergence within B. tabaci was conspicuously higher (on average, 11.1%) than the interspecific genetic divergence within the respective genera of the 153 species (on average, 6.5%). This result indicates that B. tabaci is composed of multiple species that may belong to different genera or subfamilies. A phylogenetic tree constructed based on 212 COI sequences without duplications revealed that the B. tabaci complex is composed of a total of 31 putative species, including a new species, JpL. However, genetic divergence within six species (Asia II 1, Asia II 7, Australia, Mediterranean, New World, and Sub Saharan Africa 1) was higher than 3.5%, which has been used as a threshold of species boundaries within the B. tabaci complex. These results suggest that it is necessary to increase the threshold for species boundaries up to 4% to distinguish the constituent species in the B. tabaci complex.

  1. The calculation of average error probability in a digital fibre optical communication system

    Science.gov (United States)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity

  2. Higher surgical training opportunities in the general hospital setting; getting the balance right.

    Science.gov (United States)

    Robertson, I; Traynor, O; Khan, W; Waldron, R; Barry, K

    2013-12-01

    The general hospital can play an important role in training of higher surgical trainees (HSTs) in Ireland and abroad. Training opportunities in such a setting have not been closely analysed to date. The aim of this study was to quantify operative exposure for HSTs over a 5-year period in a single institution. Analysis of electronic training logbooks (over a 5-year period, 2007-2012) was performed for general surgery trainees on the higher surgical training programme in Ireland. The most commonly performed adult and paediatric procedures per trainee, per year were analysed. Standard general surgery operations such as herniae (average 58, range 32-86) and cholecystectomy (average 60, range 49-72) ranked highly in each logbook. The most frequently performed emergency operations were appendicectomy (average 45, range 33-53) and laparotomy for acute abdomen (average 48, range 10-79). Paediatric surgical experience included appendicectomy, circumcision, orchidopexy and hernia/hydrocoele repair. Overall, the procedure most commonly performed in the adult setting was endoscopy, with each trainee recording an average of 116 (range 98-132) oesophagogastroduodenoscopies and 284 (range 227-354) colonoscopies. General hospitals continue to play a major role in the training of higher surgical trainees. Analysis of the electronic logbooks over a 5-year period reveals the high volume of procedures available to trainees in a non-specialist centre. Such training opportunities are invaluable in the context of changing work practices and limited resources.

  3. Predicting top-of-atmosphere radiance for arbitrary viewing geometries from the visible to thermal infrared: generalization to arbitrary average scene temperatures

    Science.gov (United States)

    Florio, Christopher J.; Cota, Steve A.; Gaffney, Stephanie K.

    2010-08-01

    In a companion paper presented at this conference we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) may be used in conjunction with a limited number of runs of AFRL's MODTRAN4 radiative transfer code, to quickly predict the top-of-atmosphere (TOA) radiance received in the visible through midwave IR (MWIR) by an earth viewing sensor, for any arbitrary combination of solar and sensor elevation angles. The method is particularly useful for large-scale scene simulations where each pixel could have a unique value of reflectance/emissivity and temperature, making the run-time required for direct prediction via MODTRAN4 prohibitive. In order to be self-consistent, the method described requires an atmospheric model (defined, at a minimum, as a set of vertical temperature, pressure and water vapor profiles) that is consistent with the average scene temperature. MODTRAN4 provides only six model atmospheres, ranging from sub-arctic winter to tropical conditions - too few to cover with sufficient temperature resolution the full range of average scene temperatures that might be of interest. Model atmospheres consistent with intermediate temperature values can be difficult to come by, and in any event, their use would be too cumbersome for use in trade studies involving a large number of average scene temperatures. In this paper we describe and assess a method for predicting TOA radiance for any arbitrary average scene temperature, starting from only a limited number of model atmospheres.
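    One simple way to realize "arbitrary average scene temperature" from a handful of model atmospheres is to interpolate between the bracketing models; a hypothetical sketch (the temperature grid and radiance table are invented, and this is not necessarily the authors' algorithm):

```python
import numpy as np

# TOA radiance precomputed with MODTRAN-like runs for a few model
# atmospheres, indexed by their average scene temperature [K].
model_temps = np.array([257.0, 272.0, 288.0, 294.0, 300.0])  # assumed grid
toa_radiance = np.array([2.1, 3.0, 4.4, 5.1, 5.9])           # assumed values

def radiance_at(scene_temp):
    """Linearly interpolate TOA radiance between the bracketing models."""
    return float(np.interp(scene_temp, model_temps, toa_radiance))

print(radiance_at(283.0))  # a scene temperature between two models
```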

  4. The relationship between limit of Dysphagia and average volume per swallow in patients with Parkinson's disease.

    Science.gov (United States)

    Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes

    2014-08-01

    The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesize there is a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing the 100 ml of water drunk by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significantly moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.

  5. Improving Publication: Advice for Busy Higher Education Academics

    Science.gov (United States)

    Gibbs, Anita

    2016-01-01

    A major challenge for higher education academics is to research and publish when faced with substantial teaching responsibilities, higher student numbers, and higher output expectations. The focus of this piece is to encourage publication more generally by educators, and to build publication capacity, which academic developers can facilitate. The…

  6. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  7. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate among the various processes in studies of plume dispersion.
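    The Turner power-law adjustment mentioned above relates concentrations at two averaging times through a small exponent; schematically (the exponent range is the commonly quoted value in the dispersion literature, an assumption here rather than a number from this paper):

```latex
\[
  \frac{\chi_{t}}{\chi_{15\,\mathrm{min}}}
  \;=\; \left( \frac{15\,\mathrm{min}}{t} \right)^{p},
  \qquad p \approx 0.17\text{--}0.2
\]
```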

  8. Identifying students with dyslexia in higher education

    NARCIS (Netherlands)

    Tops, Wim; Callens, Maaike; Lammertyn, Jan; Van Hees, Valerie; Brysbaert, Marc

    2012-01-01

    An increasing number of students with dyslexia enter higher education. As a result, there is a growing need for standardized diagnosis. Previous research has suggested that a small number of tests may suffice to reliably assess students with dyslexia, but these studies were based on post hoc

  9. Noise-free high-efficiency photon-number-resolving detectors

    International Nuclear Information System (INIS)

    Rosenberg, Danna; Lita, Adriana E.; Miller, Aaron J.; Nam, Sae Woo

    2005-01-01

    High-efficiency optical detectors that can determine the number of photons in a pulse of monochromatic light have applications in a variety of physics studies, including post-selection-based entanglement protocols for linear optics quantum computing and experiments that simultaneously close the detection and communication loopholes of Bell's inequalities. Here we report on our demonstration of fiber-coupled, noise-free, photon-number-resolving transition-edge sensors with 88% efficiency at 1550 nm. The efficiency of these sensors could be made even higher at any wavelength in the visible and near-infrared spectrum without resulting in a higher dark-count rate or degraded photon-number resolution

  10. Assessing the Resolution Adaptability of the Zhang-McFarlane Cumulus Parameterization With Spatial and Temporal Averaging: RESOLUTION ADAPTABILITY OF ZM SCHEME

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Yuxing [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing China; Fan, Jiwen [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xiao, Heng [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Zhang, Guang J. [Scripps Institution of Oceanography, University of California, San Diego CA USA; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xu, Kuan-Man [NASA Langley Research Center, Hampton VA USA; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Gustafson, William I. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA

    2017-11-01

    Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic, since it requires cumulus schemes to adapt to higher resolutions than they were originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and the time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.
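    The temporal averaging step can be pictured as a running mean of the large-scale CAPE tendency over the preceding model time steps; a schematic sketch (the window length and tendency values are invented; this is not the WRF implementation):

```python
from collections import deque

class TemporalAverager:
    """Running mean of the large-scale CAPE tendency over a fixed window."""
    def __init__(self, n_steps):
        self.buf = deque(maxlen=n_steps)

    def update(self, cape_tendency):
        self.buf.append(cape_tendency)
        return sum(self.buf) / len(self.buf)

# Feed the averaged tendency to the cumulus scheme instead of the
# instantaneous one, damping grid-scale noise at high resolution.
avg = TemporalAverager(n_steps=6)               # e.g. 6 time steps (assumed)
for tend in [120.0, -40.0, 95.0, 10.0, 60.0]:   # illustrative tendencies
    smoothed = avg.update(tend)
print(f"averaged CAPE tendency: {smoothed:.1f}")
```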

  11. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū_P, the average, Ū, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposes a method for determination of the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak, k_PPV,kVp, and the average, k_PPV,Uav, conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated - according to the proposed method - PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
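    In code form, the proposed conversion is just the meter reading times a calibration coefficient times a ripple- and voltage-dependent conversion factor; a schematic sketch with placeholder numbers (the regression coefficients are published in the paper but not reproduced here, so the values below are stand-ins):

```python
def ppv_from_reading(reading_kv, calib_coeff, k_ppv):
    """Convert a kV-meter reading (average or average-peak voltage)
    to practical peak voltage: PPV = reading * N * k.

    calib_coeff: the meter's calibration coefficient for its own quantity.
    k_ppv: conversion factor for the given tube voltage and ripple, taken
           from the paper's regression equations (placeholder value here).
    """
    return reading_kv * calib_coeff * k_ppv

# Illustrative numbers only -- not the paper's tabulated values.
reading = 78.4   # kV-meter reading of average-peak voltage [kV]
n_cal = 1.012    # calibration coefficient (assumed)
k = 0.985        # k_PPV for this voltage/ripple (assumed)
print(f"PPV ~= {ppv_from_reading(reading, n_cal, k):.1f} kV")
```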

  12. The Uses of Institutional Culture: Strengthening Identification and Building Brand Equity in Higher Education. ASHE Higher Education Report, Volume 31, Number 2

    Science.gov (United States)

    Toma, J. Douglas, Ed.; Dubrow, Greg, Ed.; Hartley, Matthew, Ed.

    2005-01-01

    Institutional culture matters in higher education, and universities and colleges commonly express the need to strengthen their culture. A strong culture is perceived, correctly so, to engender a needed sense of connectedness between and among the varied constituents associated with a campus. Linking organizational culture and social cohesion is…

  13. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Role of positive definite matrices: in diffusion tensor imaging, 3 × 3 positive definite matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 positive definite matrices model stress tensors; and in machine learning, n × n positive definite matrices occur as kernel matrices.
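    One standard way to average two positive definite matrices while staying inside the positive definite cone is the matrix geometric mean; a small SciPy sketch offered as illustration (the formula is classical and not specific to this record):

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def geometric_mean(a, b):
    """Geometric mean of two positive definite matrices:
    A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)."""
    a_half = sqrtm(a)
    a_half_inv = inv(a_half)
    middle = sqrtm(a_half_inv @ b @ a_half_inv)
    return a_half @ middle @ a_half

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
G = geometric_mean(A, B)
print(np.round(G.real, 3))  # symmetric positive definite "average" of A, B
```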

  14. First Worldwide Proficiency Study on Variable-Number Tandem-Repeat Typing of Mycobacterium tuberculosis Complex Strains

    Science.gov (United States)

    de Beer, Jessica L.; Kremer, Kristin; Ködmön, Csaba; Supply, Philip

    2012-01-01

    Although variable-number tandem-repeat (VNTR) typing has gained recognition as the new standard for the DNA fingerprinting of Mycobacterium tuberculosis complex (MTBC) isolates, external quality control programs have not yet been developed. Therefore, we organized the first multicenter proficiency study on 24-locus VNTR typing. Sets of 30 DNAs of MTBC strains, including 10 duplicate DNA samples, were distributed among 37 participating laboratories in 30 different countries worldwide. Twenty-four laboratories used an in-house-adapted method with fragment sizing by gel electrophoresis or an automated DNA analyzer, nine laboratories used a commercially available kit, and four laboratories used other methods. The intra- and interlaboratory reproducibilities of VNTR typing varied from 0% to 100%, with averages of 72% and 60%, respectively. Twenty of the 37 laboratories failed to amplify particular VNTR loci; if these missing results were ignored, the number of laboratories with 100% interlaboratory reproducibility increased from 1 to 5. The average interlaboratory reproducibility of VNTR typing using a commercial kit was better (88%) than that of in-house-adapted methods using a DNA analyzer (70%) or gel electrophoresis (50%). Eleven laboratories using in-house-adapted manual typing or automated typing scored inter- and intralaboratory reproducibilities of 80% or higher, which suggests that these approaches can be used in a reliable way. In conclusion, this first multicenter study has documented the worldwide quality of VNTR typing of MTBC strains and highlights the importance of international quality control to improve genotyping in the future. PMID:22170917

  15. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...
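    The regulation's text is truncated here, but the basic arithmetic of the average-monthly-wage method is total creditable earnings over the computation base years divided by the number of months in those years; a heavily simplified sketch (the eligibility rules, year selection and rounding provisions of the actual regulation are not reproduced, and all figures are invented):

```python
# Average monthly wage (AMW), schematically: creditable earnings in the
# computation base years divided by the months in those years.
# Illustrative figures only; see 20 CFR 404.221 for the actual rules.

earnings_by_year = {1972: 6_000, 1973: 6_600, 1974: 7_200}  # assumed wages
months = 12 * len(earnings_by_year)

amw = sum(earnings_by_year.values()) / months
print(f"average monthly wage ~= ${amw:,.0f}")  # regulation specifies rounding
```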

  16. Averaged emission factors for the Hungarian car fleet

    Energy Technology Data Exchange (ETDEWEB)

    Haszpra, L. [Inst. for Atmospheric Physics, Budapest (Hungary); Szilagyi, I. [Central Research Inst. for Chemistry, Budapest (Hungary)

    1995-12-31

    The vehicular emission of non-methane hydrocarbon (NMHC) is one of the largest anthropogenic sources of NMHC in Hungary and in most of the industrialized countries. Non-methane hydrocarbon plays a key role in the formation of photochemical air pollution, usually characterized by the ozone concentration, which seriously endangers the environment and human health. The ozone-forming potential of the different NMHCs differs significantly from compound to compound, while the NMHC composition of the car exhaust is influenced by the fuel and engine type, the technical condition of the vehicle, vehicle speed and several other factors. In Hungary the majority of the cars are still of Eastern European origin. They represent the technological standard of the 1970s, although there have been changes recently. Due to the long-term economic decline in Hungary the average age of the cars was about 9 years in 1990 and reached 10 years by 1993. The condition of the majority of the cars is poor. In addition, almost one third (31.2%) of the cars are equipped with two-stroke engines, which emit less NO_x but much more hydrocarbon. The number of cars equipped with a catalytic converter was negligible in 1990 and has been increasing slowly only recently. As a consequence of these facts the traffic emission in Hungary may differ from that measured in or estimated for the Western European countries, and the differences should be taken into account in air pollution models. For the estimation of the average emission of the Hungarian car fleet a one-day roadway tunnel experiment was performed in the downtown of Budapest in summer, 1991. (orig.)

  17. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  18. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  19. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  20. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  1. Average bond energies between boron and elements of the fourth, fifth, sixth, and seventh groups of the periodic table

    Science.gov (United States)

    Altshuller, Aubrey P

    1955-01-01

    The average bond energies D(gm)(B-Z) for boron-containing molecules have been calculated by the Pauling geometric-mean equation. These calculated bond energies are compared with the average bond energies D(exp)(B-Z) obtained from experimental data. The higher values of D(exp)(B-Z) in comparison with D(gm)(B-Z) when Z is an element in the fifth, sixth, or seventh periodic group may be attributed to resonance stabilization or double-bond character.
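
    For reference, the geometric-mean relation the abstract invokes is conventionally written as below. This is a sketch from the general literature, not from this report: the 30 kcal/mol electronegativity constant is Pauling's commonly quoted value for the geometric-mean postulate and should be treated as an assumption here.

```latex
% Pauling geometric-mean bond-energy relation (sketch; the 30 kcal/mol
% electronegativity constant is an assumption, not taken from this report):
\[
  D_{gm}(\mathrm{B{-}Z}) \;=\; \sqrt{D(\mathrm{B{-}B})\,D(\mathrm{Z{-}Z})}
  \;+\; 30\,(\chi_{\mathrm{B}}-\chi_{\mathrm{Z}})^{2}\ \text{kcal/mol}
\]
% On this reading, the excess D_exp(B-Z) - D_gm(B-Z) reported in the
% abstract is the part attributed to resonance stabilization or
% double-bond character.
```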

  2. Child Poverty Higher and More Persistent in Rural America. National Issue Brief Number 97

    Science.gov (United States)

    Schaefer, Andrew; Mattingly, Marybeth; Johnson, Kenneth M.

    2016-01-01

    The negative consequences of growing up in a poor family are well known. Poor children are less likely to have timely immunizations, have lower academic achievement, are generally less engaged in school activities, and face higher delinquency rates in adolescent years. Each of these has adverse impacts on their health, earnings, and family status…

  3. Experimental study of average void fraction in low-flow subcooled boiling

    International Nuclear Information System (INIS)

    Sun Qi; Wang Xiaojun; Xi Zhao; Zhao Hua; Yang Ruichang

    2005-01-01

    The subcooled void fraction at low flow and medium pressure was investigated using a high-temperature, high-pressure single-sensor optical probe. The average void fraction was then obtained by integrating the local void fraction over the cross-section. The experimental data were compared with a previously proposed void fraction model; the predictions of this model agree with the data quite well. Comparisons of the Saha and Levy models with the low-flow subcooled data show that the Saha model distinctly overestimates the experimental data, and the Levy model also gives somewhat high predictions, although it performs better than the Saha model. (author)
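
    The cross-sectional averaging step is a plain area-weighted integral. A minimal sketch follows, assuming an axisymmetric local void fraction profile α(r) measured at discrete radii; the profile and pipe radius below are invented for illustration, not the paper's data.

```python
import numpy as np

# Area-averaged void fraction for an axisymmetric profile alpha(r) in a pipe:
#   <alpha> = (2 / R^2) * integral_0^R alpha(r) * r dr
# The radial profile here is a made-up example, not the paper's data.
R = 0.01                               # pipe radius [m]
r = np.linspace(0.0, R, 50)            # measurement radii
alpha = 0.3 * (1.0 - (r / R) ** 2)     # hypothetical local void fraction

f = alpha * r                          # integrand alpha(r) * r
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))  # trapezoid rule
avg_alpha = 2.0 / R**2 * integral
print(f"area-averaged void fraction = {avg_alpha:.3f}")  # ~0.150
```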

  4. Just Imaginary: Delimiting Social Inclusion in Higher Education

    Science.gov (United States)

    Gale, Trevor; Hodge, Steven

    2014-01-01

    This paper explores the notion of a "just imaginary" for social inclusion in higher education. It responds to the current strategy of OECD nations to expand higher education and increase graduate numbers, as a way of securing a competitive advantage in the global knowledge economy. The Australian higher education system provides the case…

  5. Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN

    Science.gov (United States)

    Quinlan, Jesse; McDaniel, James; Baurle, Robert A.

    2013-01-01

    Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.

  6. Socio-demographic predictors and average annual rates of caesarean section in Bangladesh between 2004 and 2014.

    Directory of Open Access Journals (Sweden)

    Md Nuruzzaman Khan

    Globally the rates of caesarean section (CS) have steadily increased in recent decades. This rise is not fully accounted for by increases in clinical factors which indicate the need for CS. We investigated the socio-demographic predictors of CS and the average annual rates of CS in Bangladesh between 2004 and 2014. Data were derived from four waves of the nationally representative Bangladesh Demographic and Health Survey (BDHS) conducted between 2004 and 2014. Rate of change analysis was used to calculate the average annual rate of increase in CS from 2004 to 2014, by socio-demographic categories. Multi-level logistic regression was used to identify the socio-demographic predictors of CS in a cross-sectional analysis of the 2014 BDHS data. CS rates increased from 3.5% in 2004 to 23% in 2014. The average annual rate of increase in CS was higher among women of advanced maternal age (≥35 years), urban areas, and relatively high socio-economic status; with higher education, and who regularly accessed antenatal services. The multi-level logistic regression model indicated that lower (≤19) and advanced maternal age (≥35), urban location, relatively high socio-economic status, higher education, birth of few children (≤2), antenatal healthcare visits, and overweight or obesity were the key factors associated with increased utilization of CS. Underweight was a protective factor for CS. The use of CS has increased considerably in Bangladesh over the survey years. This rising trend and the risk of having CS vary significantly across regions and socio-economic status. Very high use of CS among women of relatively high socio-economic status and the substantial urban-rural difference call for public awareness and practice guideline enforcement aimed at optimizing the use of CS.

  7. Socio-demographic predictors and average annual rates of caesarean section in Bangladesh between 2004 and 2014.

    Science.gov (United States)

    Khan, Md Nuruzzaman; Islam, M Mofizul; Shariff, Asma Ahmad; Alam, Md Mahmudul; Rahman, Md Mostafizur

    2017-01-01

    Globally the rates of caesarean section (CS) have steadily increased in recent decades. This rise is not fully accounted for by increases in clinical factors which indicate the need for CS. We investigated the socio-demographic predictors of CS and the average annual rates of CS in Bangladesh between 2004 and 2014. Data were derived from four waves of nationally representative Bangladesh Demographic and Health Survey (BDHS) conducted between 2004 and 2014. Rate of change analysis was used to calculate the average annual rate of increase in CS from 2004 to 2014, by socio-demographic categories. Multi-level logistic regression was used to identify the socio-demographic predictors of CS in a cross-sectional analysis of the 2014 BDHS data. CS rates increased from 3.5% in 2004 to 23% in 2014. The average annual rate of increase in CS was higher among women of advanced maternal age (≥35 years), urban areas, and relatively high socio-economic status; with higher education, and who regularly accessed antenatal services. The multi-level logistic regression model indicated that lower (≤19) and advanced maternal age (≥35), urban location, relatively high socio-economic status, higher education, birth of few children (≤2), antenatal healthcare visits, overweight or obese were the key factors associated with increased utilization of CS. Underweight was a protective factor for CS. The use of CS has increased considerably in Bangladesh over the survey years. This rising trend and the risk of having CS vary significantly across regions and socio-economic status. Very high use of CS among women of relatively high socio-economic status and substantial urban-rural difference call for public awareness and practice guideline enforcement aimed at optimizing the use of CS.

  8. Nuclear fragmentation and the number of particle tracks in tissue

    International Nuclear Information System (INIS)

    Ponomarev, A. L.; Cucinotta, F. A.

    2006-01-01

    For high energy nuclei, the number of particle tracks per cell is modified by local nuclear reactions that occur, with large fluctuations expected for heavy ion tracks. Cells near the interaction site of a reaction will experience a much higher number of tracks than estimated by the average fluence. Two types of reaction products are possible and occur in coincidence: projectile fragments, which generally have smaller charge and similar velocity to that of the projectile, and target fragments, which are produced at low velocity from the fragmentation of the nuclei of water atoms or other cellular constituents. In order to understand the role of fragmentation in biological damage, a new model of human tissue irradiated by heavy ions was developed. A box of the tissue is modelled with periodic boundary conditions imposed, which extrapolates the technique to macroscopic volumes of tissue. The cross sections for projectile and target fragmentation products are taken from the quantum multiple scattering fragmentation code previously developed at NASA Johnson Space Center. Statistics of fragmentation pathways occurring in a cell monolayer, as well as in a small volume of 10 x 10 x 10 cells, are given. A discussion of approaches to extend the model to describe spatial distributions of inactivated or other cell damage types, as well as highly organised tissues of multiple cell types, is presented. (authors)
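
    As context for the "number of tracks per cell" statistics, the baseline in the absence of nuclear reactions is Poisson counting at the average fluence, which the local fragmentation events described above then perturb. A minimal sketch of that baseline follows; all parameter values are invented, and this is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline track counting: with no nuclear reactions, the number of primary
# particle tracks crossing a cell is Poisson with mean = fluence * area.
# Values below are illustrative only, not from the paper.
fluence = 2.0e7          # particles per cm^2
cell_area = 1.0e-6       # projected cell area, cm^2 (~100 um^2)
mean_tracks = fluence * cell_area

tracks = rng.poisson(mean_tracks, size=10_000)   # 10^4 simulated cells
print(mean_tracks, tracks.mean(), tracks.var())  # mean ~ variance ~ 20
```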

  9. Measurement of the average polarization of b baryons in hadronic $Z^0$ decays

    CERN Document Server

    Abbiendi, G.; Alexander, G.; Allison, John; Altekamp, N.; Anderson, K.J.; Anderson, S.; Arcelli, S.; Asai, S.; Ashby, S.F.; Axen, D.; Azuelos, G.; Ball, A.H.; Barberio, E.; Barlow, Roger J.; Bartoldus, R.; Batley, J.R.; Baumann, S.; Bechtluft, J.; Behnke, T.; Bell, Kenneth Watson; Bella, G.; Bellerive, A.; Bentvelsen, S.; Bethke, S.; Betts, S.; Biebel, O.; Biguzzi, A.; Bird, S.D.; Blobel, V.; Bloodworth, I.J.; Bobinski, M.; Bock, P.; Bohme, J.; Bonacorsi, D.; Boutemeur, M.; Braibant, S.; Bright-Thomas, P.; Brigliadori, L.; Brown, Robert M.; Burckhart, H.J.; Burgard, C.; Burgin, R.; Capiluppi, P.; Carnegie, R.K.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Chrisman, D.; Ciocca, C.; Clarke, P.E.L.; Clay, E.; Cohen, I.; Conboy, J.E.; Cooke, O.C.; Couyoumtzelis, C.; Coxe, R.L.; Cuffiani, M.; Dado, S.; Dallavalle, G.Marco; Davis, R.; De Jong, S.; del Pozo, L.A.; De Roeck, A.; Desch, K.; Dienes, B.; Dixit, M.S.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Eatough, D.; Estabrooks, P.G.; Etzion, E.; Evans, H.G.; Fabbri, F.; Fanti, M.; Faust, A.A.; Fiedler, F.; Fierro, M.; Fleck, I.; Folman, R.; Furtjes, A.; Futyan, D.I.; Gagnon, P.; Gary, J.W.; Gascon, J.; Gascon-Shotkin, S.M.; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Gibson, V.; Gibson, W.R.; Gingrich, D.M.; Glenzinski, D.; Goldberg, J.; Gorn, W.; Grandi, C.; Gross, E.; Grunhaus, J.; Gruwe, M.; Hanson, G.G.; Hansroul, M.; Hapke, M.; Harder, K.; Hargrove, C.K.; Hartmann, C.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Herndon, M.; Herten, G.; Heuer, R.D.; Hildreth, M.D.; Hill, J.C.; Hillier, S.J.; Hobson, P.R.; Hocker, James Andrew; Homer, R.J.; Honma, A.K.; Horvath, D.; Hossain, K.R.; Howard, R.; Huntemeyer, P.; Igo-Kemenes, P.; Imrie, D.C.; Ishii, K.; Jacob, F.R.; Jawahery, A.; Jeremie, H.; Jimack, M.; Jones, C.R.; Jovanovic, P.; Junk, T.R.; Karlen, D.; Kartvelishvili, V.; Kawagoe, K.; Kawamoto, T.; Kayal, P.I.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Klier, A.; Kluth, S.; Kobayashi, T.; Kobel, M.; Koetke, D.S.; Kokott, T.P.; Kolrep, M.; Komamiya, S.; Kowalewski, Robert V.; Kress, T.; Krieger, P.; von Krogh, J.; Kuhl, T.; Kyberd, P.; Lafferty, G.D.; Lanske, D.; Lauber, J.; Lautenschlager, S.R.; Lawson, I.; Layter, J.G.; Lazic, D.; Lee, A.M.; Lellouch, D.; Letts, J.; Levinson, L.; Liebisch, R.; List, B.; Littlewood, C.; Lloyd, A.W.; Lloyd, S.L.; Loebinger, F.K.; Long, G.D.; Losty, M.J.; Ludwig, J.; Lui, D.; Macchiolo, A.; Macpherson, A.; Mader, W.; Mannelli, M.; Marcellini, S.; Markopoulos, C.; Martin, A.J.; Martin, J.P.; Martinez, G.; Mashimo, T.; Mattig, Peter; McDonald, W.John; McKenna, J.; Mckigney, E.A.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menke, S.; Merritt, F.S.; Mes, H.; Meyer, J.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Mir, R.; Mohr, W.; Montanari, A.; Mori, T.; Nagai, K.; Nakamura, I.; Neal, H.A.; Nellen, B.; Nisius, R.; O'Neale, S.W.; Oakham, F.G.; Odorici, F.; Ogren, H.O.; Oreglia, M.J.; Orito, S.; Palinkas, J.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Patt, J.; Perez-Ochoa, R.; Petzold, S.; Pfeifenschneider, P.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poffenberger, P.; Polok, J.; Przybycien, M.; Rembser, C.; Rick, H.; Robertson, S.; Robins, S.A.; Rodning, N.; Roney, J.M.; Roscoe, K.; Rossi, A.M.; Rozen, Y.; Runge, K.; Runolfsson, O.; Rust, D.R.; Sachs, K.; Saeki, T.; Sahr, O.; Sang, W.M.; Sarkisian, E.K.G.; Sbarra, C.; Schaile, A.D.; Schaile, O.; Scharf, F.; Scharff-Hansen, P.; Schieck, J.; Schmitt, B.; Schmitt, S.; Schoning, A.; 
Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Shepherd-Themistocleous, C.H.; Sherwood, P.; Siroli, G.P.; Sittler, A.; Skuja, A.; Smith, A.M.; Snow, G.A.; Sobie, R.; Soldner-Rembold, S.; Sproston, M.; Stahl, A.; Stephens, K.; Steuerer, J.; Stoll, K.; Strom, David M.; Strohmer, R.; Surrow, B.; Talbot, S.D.; Tanaka, S.; Taras, P.; Tarem, S.; Teuscher, R.; Thiergen, M.; Thomson, M.A.; von Torne, E.; Torrence, E.; Towers, S.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turcot, A.S.; Turner-Watson, M.F.; Van Kooten, Rick J.; Vannerem, P.; Verzocchi, M.; Voss, H.; Wackerle, F.; Wagner, A.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wermes, N.; White, J.S.; Wilson, G.W.; Wilson, J.A.; Wyatt, T.R.; Yamashita, S.; Yekutieli, G.; Zacek, V.; Zer-Zion, D.

    1998-01-01

    In the Standard Model, b quarks produced in e^+e^- annihilation at the Z^0 peak have a large average longitudinal polarization of -0.94. Some fraction of this polarization is expected to be transferred to b-flavored baryons during hadronization. The average longitudinal polarization of weakly decaying b baryons, <P_L>, is measured in approximately 4.3 million hadronic Z^0 decays collected with the OPAL detector between 1990 and 1995 at LEP. Those b baryons that decay semileptonically and produce a \Lambda baryon are identified through the correlation of the baryon number of the \Lambda and the electric charge of the lepton. In this semileptonic decay, the ratio of the neutrino energy to the lepton energy is a sensitive polarization observable. The neutrino energy is estimated using missing energy measurements. From a fit to the distribution of this ratio, the value <P_L> = -0.56^{+0.20}_{-0.13} +/- 0.09 is obtained, where the first error is statistical and the second systematic.

  10. Phenomenological Study of Empowering Women Senior Leaders in Higher Education

    Science.gov (United States)

    Cselenszky, Mila P.

    2012-01-01

    The number of women in senior administrative and leadership roles in higher education is minimal compared to the number of women in higher education jobs in general. This phenomenological study explored pathways women took to advance in their careers and barriers that prevent more women from gaining senior administrative and leadership roles.…

  11. Time-averaged molluscan death assemblages: Palimpsests of richness, snapshots of abundance

    Science.gov (United States)

    Kidwell, Susan M.

    2002-09-01

    Field tests that compare living communities to associated dead remains are the primary means of estimating the reliability of biological information in the fossil record; such tests also provide insights into the dynamics of skeletal accumulation. Contrary to expectations, molluscan death assemblages capture a strong signal of living species' rank-order abundances. This finding, combined with independent evidence for exponential postmortem destruction of dead cohorts, argues that, although the species richness of a death assemblage may be a time-averaged palimpsest of the habitat (molluscan death assemblages contain, on average, ˜25% more species than any single census of the local live community, after sample-size standardization), species' relative-abundance data from the same assemblage probably constitute a much higher-acuity record dominated by the most recent dead cohorts (e.g., from the past few hundred years or so, rather than the several thousand years recorded by the total assemblage and usually taken as the acuity of species-richness information). The pervasive excess species richness of molluscan death assemblages requires further analysis and modeling to discriminate among possible sources. However, time averaging alone cannot be responsible unless rare species (species with low rates of dead-shell production) are collectively more durable (have longer taphonomic half-lives) than abundant species. Species richness and abundance data thus appear to present fundamentally different taphonomic qualities for paleobiological analysis. Relative-abundance information is more snapshot-like and thus taphonomically more straightforward than expected, especially compared to the complex origins of dead-species richness.

  12. Higher number of pentosidine cross-links induced by ribose does not alter tissue stiffness of cancellous bone

    Energy Technology Data Exchange (ETDEWEB)

    Willems, Nop M.B.K., E-mail: n.willems@acta.nl [Dept. of Orthodontics, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and VU University, Gustav Mahlerlaan 3004, 1081 LA Amsterdam (Netherlands); Dept. of Oral Cell Biology and Functional Anatomy, MOVE Research Institute, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and VU University, Gustav Mahlerlaan 3004, 1081 LA Amsterdam (Netherlands); Langenbach, Geerling E.J. [Dept. of Oral Cell Biology and Functional Anatomy, MOVE Research Institute, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and VU University, Gustav Mahlerlaan 3004, 1081 LA Amsterdam (Netherlands); Stoop, Reinout [Dept. of Metabolic Health Research, TNO, P.O. Box 2215, 2301 CE Leiden (Netherlands); Toonder, Jaap M.J. den [Dept. of Mechanical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Mulder, Lars [Dept. of Biomedical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Zentner, Andrej [Dept. of Orthodontics, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and VU University, Gustav Mahlerlaan 3004, 1081 LA Amsterdam (Netherlands); Everts, Vincent [Dept. of Oral Cell Biology and Functional Anatomy, MOVE Research Institute, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and VU University, Gustav Mahlerlaan 3004, 1081 LA Amsterdam (Netherlands)

    2014-09-01

    The role of mature collagen cross-links, pentosidine (Pen) cross-links in particular, in the micromechanical properties of cancellous bone is unknown. The aim of this study was to examine the effects of nonenzymatic glycation on tissue stiffness of demineralized and non-demineralized cancellous bone. A total of 60 bone samples were derived from mandibular condyles of six pigs and assigned to either control or experimental groups. Experimental handling included incubation in phosphate buffered saline alone or with 0.2 M ribose at 37 °C for 15 days and, in some of the samples, subsequent complete demineralization of the sample surface using 8% EDTA. Before and after experimental handling, bone microarchitecture and tissue mineral density were examined by means of microcomputed tomography. After experimental handling, the collagen content and the numbers of Pen, hydroxylysylpyridinoline (HP), and lysylpyridinoline (LP) cross-links were estimated using HPLC, and tissue stiffness was assessed by means of nanoindentation. Ribose treatment caused an up to 300-fold increase in the number of Pen cross-links compared to nonribose-incubated controls, but did not affect the numbers of HP and LP cross-links. This increase in the number of Pen cross-links had no influence on the tissue stiffness of either demineralized or non-demineralized bone samples. These findings suggest that Pen cross-links do not play a significant role in bone tissue stiffness.

  13. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... Title 47 (Telecommunication), Federal Communications Commission (continued), Safety and Special Radio Services, Stations in the Maritime Services, Standards for Computing Public Coast Station VHF Coverage. § 80.759 Average terrain elevation. (a)(1) Draw radials...

  14. Higher education reform: getting the incentives right

    NARCIS (Netherlands)

    Canton, Erik; Venniker, Richard; Jongbloed, Benjamin W.A.; Koelman, Jos; Koelman, Jos; van der Meer, Peter; van der Meer, Peter; Vossensteyn, Johan J.

    2001-01-01

    This study is a joint effort by the Netherlands Bureau for Economic Policy Analysis (CPB) and the Center for Higher Education Policy Studies. It analyses a number of 'best practices' concerning the design of financial incentives operating at the system level of higher education. In Chapter 1,

  15. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  16. Human-experienced temperature changes exceed global average climate changes for all income groups

    Science.gov (United States)

    Hsiang, S. M.; Parshall, L.

    2009-12-01

    Global climate change alters local climates everywhere. Many climate change impacts, such as those affecting health, agriculture and labor productivity, depend on these local climatic changes, not global mean change. Traditional, spatially averaged climate change estimates are strongly influenced by the response of icecaps and oceans, providing limited information on human-experienced climatic changes. If used improperly by decision-makers, these estimates distort estimated costs of climate change. We overlay the IPCC’s 20 GCM simulations on the global population distribution to estimate local climatic changes experienced by the world population in the 21st century. The A1B scenario leads to a well-known rise in global average surface temperature of +2.0°C between the periods 2011-2030 and 2080-2099. Projected on the global population distribution in 2000, the median human will experience an annual average rise of +2.3°C (4.1°F) and the average human will experience a rise of +2.4°C (4.3°F). Less than 1% of the population will experience changes smaller than +1.0°C (1.8°F), while 25% and 10% of the population will experience changes greater than +2.9°C (5.2°F) and +3.5°C (6.2°F) respectively. 67% of the world population experiences temperature changes greater than the area-weighted average change of +2.0°C (3.6°F). Using two approaches to characterize the spatial distribution of income, we show that the wealthiest, middle and poorest thirds of the global population experience similar changes, with no group dominating the global average. Calculations for precipitation indicate that there is little change in average precipitation, but redistributions of precipitation occur in all income groups. These results suggest that economists and policy-makers using spatially averaged estimates of climate change to approximate local changes will systematically and significantly underestimate the impacts of climate change on the 21st century population.
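
    The paper's central computation is a population weighting of gridded temperature change rather than an area weighting. A minimal sketch of both averages and the population-weighted median follows; the arrays are invented stand-ins for the GCM ensemble and gridded population, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D stand-ins for gridded fields (the real inputs would be
# GCM-projected warming and gridded population); values are illustrative.
n_cells = 1000
dT = rng.normal(2.0, 0.8, n_cells)       # local warming per grid cell, deg C
area = np.ones(n_cells)                  # equal-area cells for simplicity
pop = rng.lognormal(0.0, 1.5, n_cells)   # skewed population distribution

area_avg = np.average(dT, weights=area)  # "global mean" style estimate
pop_avg = np.average(dT, weights=pop)    # change the average person feels

# Population-weighted median: sort by dT, find 50% of cumulative population.
order = np.argsort(dT)
cum = np.cumsum(pop[order]) / pop.sum()
pop_median = dT[order][np.searchsorted(cum, 0.5)]

print(f"area mean: {area_avg:.2f}  population mean: {pop_avg:.2f}  "
      f"population median: {pop_median:.2f}")
```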

  17. Term-time Employment and Student Attainment in Higher Education

    Directory of Open Access Journals (Sweden)

    Cath Dennis

    2018-04-01

    The number of UK full-time university students engaging in term-time employment (TTE) is rising. Students engaging in TTE have previously been found to achieve less well academically than those who do not. This study aimed to explore patterns of TTE and academic achievement of undergraduates at a large UK higher education institution. Self-reported TTE hours were matched to attainment data for 1304 undergraduate students in levels 1-4 of study (SQCF levels 7-10). The majority of students in TTE (71%, n=621) reported undertaking TTE to cover essential living expenses. Compared to students not undertaking TTE, attainment was significantly better at low levels of TTE (1-10 hours), and only significantly worse when TTE was >30 hours/week. This pattern was magnified when job type was taken into account – students employed in skilled roles for ≤10 hours/week on average attained grades 7% higher than those not in TTE; students working >10 hours/week in unskilled positions showed a mean 1.6% lower grade. The impact of ‘academic potential’ (measured via incoming UCAS tariff) was accounted for in the model. The finding that students engaging in some categories of TTE achieve better academic outcomes than their non-employed peers is worthy of further investigation. This study is unable to provide direct evidence of possible causation, but would tentatively suggest that students may benefit from taking on 10 or fewer hours of TTE per week.

  18. Gauge-fixing ambiguity and monopole number

    International Nuclear Information System (INIS)

    Hioki, S.; Miyamura, O.

    1991-01-01

    Gauge-fixing ambiguities of lattice SU(2) QCD are studied in the maximally abelian and unitary gauges. In the former, we find local maxima of a gauge-fixing function which may correspond to Gribov copies. There is a definite anti-correlation between the number of monopoles and the value of the function. Errors of measured quantities coming from the ambiguity are found to be less than inherent dispersion in the ensemble average. No ambiguity is found in the unitary gauges. (orig.)

  19. Do higher-priced generic medicines enjoy a competitive advantage under reference pricing?

    Science.gov (United States)

    Puig-Junoy, Jaume

    2012-11-01

    In many countries with generic reference pricing, generic producers and distributors compete by means of undisclosed discounts offered to pharmacies in order to reduce acquisition costs and to induce them to dispense their generic to patients in preference over others. The objective of this article is to test the hypothesis that under prevailing reference pricing systems for generic medicines, those medicines sold at a higher consumer price may enjoy a competitive advantage. Real transaction prices for 179 generic medicines acquired by pharmacies in Spain have been used to calculate the discount rate on acquisition versus reimbursed costs to pharmacies. Two empirical hypotheses are tested: the discount rate at which pharmacies acquire generic medicines is higher for those pharmaceutical presentations for which there are more generic competitors; and, the discount rate at which pharmacies acquire generic medicines is higher for those pharmaceutical forms for which the consumer price has declined less in relation to the consumer price of the brand drug before generic entry (higher-priced generic medicines). An average discount rate of 39.3% on acquisition versus reimbursed costs to pharmacies has been observed. The magnitude of the discount positively depends on the number of competitors in the market. The higher the ratio of the consumer price of the generic to that of the brand drug prior to generic entry (i.e. the smaller the price reduction of the generic in relation to the brand drug), the larger the discount rate. Under reference pricing there is intense price competition among generic firms in the form of unusually high discounts to pharmacies on official ex-factory prices reimbursed to pharmacies. However, this effect is highly distorting because it favours those medicines with a higher relative price in relation to the brand price before generic entry.
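
    The discount rate used throughout the study is simple arithmetic on the two prices. A one-function sketch follows; the example figures are invented and only chosen to reproduce the 39.3% average the abstract reports.

```python
# Discount rate on pharmacy acquisition cost vs. reimbursed cost
# (illustrative numbers; the study reports a 39.3% average across
# 179 generic medicines).
def discount_rate(reimbursed: float, acquisition: float) -> float:
    return (reimbursed - acquisition) / reimbursed

print(f"{discount_rate(10.0, 6.07):.1%}")  # -> 39.3%
```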

  20. An average-based accounting approach to capital asset investments: The case of project finance

    OpenAIRE

    Carlo Alberto Magni

    2014-01-01

    Literature and textbooks on capital budgeting endorse Net Present Value (NPV) and generally treat accounting rates of return as not being reliable tools. This paper shows that accounting numbers can be reconciled with NPV and fruitfully employed in real-life applications. Focusing on project finance transactions, an Average Return On Investment (AROI) is drawn from the pro forma financial statements, obtained as the ratio of aggregate income to aggregate book value. It is shown that such a me...
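
    A sketch of the average return on investment as the abstract defines it, the ratio of aggregate income to aggregate book value; the symbols and timing convention are chosen here for illustration and are not taken from the paper.

```latex
% AROI as the ratio of aggregate income to aggregate book value
% (notation assumed for illustration, not taken from the paper):
\[
  \mathrm{AROI} \;=\; \frac{\sum_{t=1}^{n} I_t}{\sum_{t=1}^{n} B_{t-1}}
\]
% where I_t is the period-t income and B_{t-1} the beginning-of-period
% book value, both drawn from the pro forma financial statements.
```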

  1. Minimizing the Standard Deviation of Spatially Averaged Surface Cross-Sectional Data from the Dual-Frequency Precipitation Radar

    Science.gov (United States)

    Meneghini, Robert; Kim, Hyokyung

    2016-01-01

    For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from the measurements of the normalized surface cross section, σ0, in the presence and absence of precipitation. In one implementation, the mean rain-free σ0 estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the global precipitation measurement satellite, the nominal table consists of the statistics of the rain-free σ0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step that cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
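
    The stepwise table construction is a greedy region-growing loop: starting from one grid cell, repeatedly annex whichever neighboring cell yields the smallest variance of the pooled samples, until the minimum sample count is met. A small sketch of that loop on a synthetic grid follows; the grid size, threshold, and data are all invented, not the DPR data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: each 0.25 x 0.25 deg cell holds some rain-free sigma0
# samples (counts and values invented for illustration).
grid = {(i, j): rng.normal(8.0, 1.0, rng.integers(1, 40)).tolist()
        for i in range(8) for j in range(8)}
MIN_SAMPLES = 100

def grow_region(start):
    """Greedily annex the neighbor cell that minimizes pooled variance."""
    region, samples = {start}, list(grid[start])
    while len(samples) < MIN_SAMPLES:
        frontier = {(i + di, j + dj) for (i, j) in region
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))} - region
        frontier = {c for c in frontier if c in grid}
        if not frontier:          # ran off the grid before reaching the quota
            break
        best = min(frontier, key=lambda c: np.var(samples + grid[c]))
        region.add(best)
        samples += grid[best]
    return region, np.mean(samples), np.std(samples)

region, mean_s0, std_s0 = grow_region((4, 4))
print(len(region), f"mean={mean_s0:.2f}", f"std={std_s0:.2f}")
```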

  2. Event-Triggered Distributed Average Consensus Over Directed Digital Networks With Limited Communication Bandwidth.

    Science.gov (United States)

    Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan

    2016-12-01

    In this paper, we consider the event-triggered distributed average-consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange finite-bit binary symbolic data sequences with its neighboring agents at each time step, owing to digital communication channels with energy constraints. A novel event-triggered dynamic encoder and decoder for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which none of the quantizers in the network ever saturates. The convergence rate of consensus is explicitly characterized; it is related to the scale of the network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain, and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, distributed average consensus can always be achieved for any directed digital network containing a spanning tree, with an exponential convergence rate, based on merely one-bit information exchange between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of the presented protocol and the correctness of the theoretical results.
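
    For orientation, the update underneath this protocol (before the paper's event-triggering, encoding, and quantization machinery is added) is the standard discrete-time average-consensus iteration. A minimal baseline sketch follows on a weight-balanced directed ring, the simplest digraph containing a spanning tree; the topology, gain, and initial states are invented, and this is not the paper's algorithm.

```python
import numpy as np

# Plain average-consensus iteration x(k+1) = x(k) - eps * L x(k) on a
# weight-balanced directed ring. Illustrative baseline only; the paper's
# protocol layers event-triggered 1-bit encoding/decoding on top of this.
n, eps, steps = 6, 0.4, 60
A = np.roll(np.eye(n), 1, axis=1)   # directed ring: agent i listens to i+1
L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
x = np.array([3.0, -1.0, 4.0, 0.5, 2.0, -2.5])
target = x.mean()                   # a balanced digraph preserves the mean

for _ in range(steps):
    x = x - eps * (L @ x)

print(target, x.round(3))           # all states converge to the average 1.0
```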

  3. Zonally averaged chemical-dynamical model of the lower thermosphere

    International Nuclear Information System (INIS)

    Kasting, J.F.; Roble, R.G.

    1981-01-01

    A zonally averaged numerical model of the thermosphere is used to examine the coupling between neutral composition (including N2, O2 and O), temperature, and winds at solstice for solar minimum conditions. The meridional circulation forced by solar heating results in a summer-to-winter flow, with a winter enhancement in atomic oxygen density that is a factor of about 1.8 greater than in the summer hemisphere at 160 km. The O2 and N2 variations are associated with a latitudinal gradient in total number density, which is required to achieve pressure balance in the presence of large zonal jets. Latitudinal profiles of OI (5577 Å) green-line emission intensity are calculated using both the Chapman and Barth mechanisms. The composition of the lower thermosphere is shown to be strongly influenced by circulation patterns initiated in the stratosphere and lower mesosphere, below the lower boundary used in the model.

  4. The effect of Reynolds number on the propulsive efficiency of a biomorphic pulsed-jet underwater vehicle

    International Nuclear Information System (INIS)

    Moslemi, Ali A; Krueger, Paul S

    2011-01-01

    The effect of Reynolds number on the propulsive efficiency of pulsed-jet propulsion was studied experimentally on a self-propelled, pulsed-jet underwater vehicle, dubbed Robosquid due to the similarity of its propulsion system with that of squid. Robosquid was tested for jet slug length-to-diameter ratios (L/D) in the range 2-6 and dimensionless frequency (St_L) in the range 0.2-0.6 in a glycerin-water mixture. Digital particle image velocimetry was used to measure the impulse and energy of jet pulses from the velocity and vorticity fields of the jet flow, to calculate the pulsed-jet propulsive efficiency and compare it with an equivalent steady-jet system. Robosquid's Reynolds number (Re), based on average vehicle velocity and vehicle diameter, ranged between 37 and 60. The current results for propulsive efficiency were compared to previously published results in water, where Re ranged between 1300 and 2700. The results showed that the average propulsive efficiency decreased by 26% as the average Re decreased from 2000 to 50, while the ratio of pulsed-jet to steady-jet efficiency (η_P/η_P,ss) increased by up to 0.15 (26%) as Re decreased over the same range and for similar pulsing conditions. The improved η_P/η_P,ss at lower Re suggests that pulsed-jet propulsion can be used as an efficient propulsion system for millimeter-scale propulsion applications. The Re = 37-60 conditions in the present investigation showed a reduced dependence of η_P and η_P/η_P,ss on L/D compared to higher-Re results. This may be due to the lack of clearly observed vortex ring pinch-off as L/D increased in this Re regime.

  5. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  6. Glass Ceiling for Women in Higher Education.

    Science.gov (United States)

    Schedler, Petra; Glastra, Folke; Hake, Barry

    2003-01-01

    Discusses the place of women in higher education in the Netherlands. Suggests that it is not a question of numbers but of orientation, field, and the glass ceiling. Asserts that despite some improvement, higher education may be one of the last bastions against the recognition of women's worth. (Contains 42 references.) (JOW)

  7. Asian Women in Higher Education: Shared Communities

    Science.gov (United States)

    Bhopal, Kalwant

    2010-01-01

    More Asian women are entering higher education in the UK than ever before, and the number looks likely to rise. Their engagement with higher education reflects widespread changes in the attitudes and cultural expectations of their various communities, as awareness grows of the greater long-term value associated with continuing in education. Today…

  8. Peer Learning in Specialist Higher Music Education

    Science.gov (United States)

    Hanken, Ingrid Maria

    2016-01-01

    Research on peer learning in higher education indicates that learning from and together with peers can benefit students in a number of ways. Within higher music education in Western, classical music, however, the master-apprentice tradition with its dominant one-to-one mode of tuition focuses predominantly on knowledge transmission from teacher to…

  9. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    Science.gov (United States)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacements are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between the time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by $\langle\overline{\delta^2}\rangle \sim 2 D_\nu\, t^{\beta}\Delta^{\nu-\beta}$, where $t$ is the total measurement time and $\Delta$ is the lag time. Here $\nu$ is the anomalous diffusion exponent obtained from ensemble-averaged measurements $\langle x^2\rangle \sim t^{\nu}$, while $\beta \ge -1$ marks the growth or decline of the kinetic energy $\langle v^2\rangle \sim t^{\beta}$. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant $D_\nu$. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, $\beta = 0$, the time scalings of $\langle\overline{\delta^2}\rangle$ and $\langle x^2\rangle$ are identical; however, the time-averaged transport coefficient $D_\nu$ is not identical to the corresponding ensemble-averaged diffusion constant.

  10. Values of average daily gain of swine posted to commercial hybrids on pork in youth phase depending on the type

    Directory of Open Access Journals (Sweden)

    Diana Marin

    2013-10-01

    Values of average daily gain are calculated as the ratio of total weight gain to the total number of feeding days. For the four commercial hybrids under intensive exploitation it was observed, according to the test applied, that there were no statistically significant differences in average daily gain between the hybrids, but the lowest values of this index were recorded in hybrid B (with Large White as terminal boar).

  11. Cognitive Capitalism: Economic Freedom Moderates the Effects of Intellectual and Average Classes on Economic Productivity.

    Science.gov (United States)

    Coyle, Thomas R; Rindermann, Heiner; Hancock, Dale

    2016-10-01

    Cognitive ability stimulates economic productivity. However, the effects of cognitive ability may be stronger in free and open economies, where competition rewards merit and achievement. To test this hypothesis, ability levels of intellectual classes (top 5%) and average classes (country averages) were estimated using international student assessments (Programme for International Student Assessment; Trends in International Mathematics and Science Study; and Progress in International Reading Literacy Study) (N = 99 countries). The ability levels were correlated with indicators of economic freedom (Fraser Institute), scientific achievement (patent rates), innovation (Global Innovation Index), competitiveness (Global Competitiveness Index), and wealth (gross domestic product). Ability levels of intellectual and average classes strongly predicted all economic criteria. In addition, economic freedom moderated the effects of cognitive ability (for both classes), with stronger effects at higher levels of freedom. Effects were particularly robust for scientific achievements when the full range of freedom was analyzed. The results support cognitive capitalism theory: cognitive ability stimulates economic productivity, and its effects are enhanced by economic freedom.
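
    Statistically, "economic freedom moderates the effect of cognitive ability" amounts to an interaction term in a regression of productivity on ability, freedom, and their product. A self-contained sketch with simulated data follows; every coefficient and variable is invented, and the sample size merely mirrors the 99-country sample.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated moderation: the effect of ability on productivity grows with
# economic freedom (all numbers invented; n mirrors the ~99-country sample).
n = 99
ability = rng.normal(0.0, 1.0, n)          # standardized cognitive ability
freedom = rng.normal(0.0, 1.0, n)          # standardized economic freedom
productivity = (0.5 * ability + 0.3 * freedom
                + 0.4 * ability * freedom  # true interaction = 0.4
                + rng.normal(0.0, 0.5, n))

X = np.column_stack([np.ones(n), ability, freedom, ability * freedom])
beta, *_ = np.linalg.lstsq(X, productivity, rcond=None)
print("intercept, ability, freedom, interaction:", beta.round(2))
# A positive interaction coefficient means a stronger ability effect at
# higher freedom, which is the moderation pattern the paper reports.
```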

  12. Dielectronic recombination of P5+ and Cl7+ in configuration-average, LS-coupling, and intermediate-coupling approximations

    International Nuclear Information System (INIS)

    Badnell, N.R.; Pindzola, M.S.

    1989-01-01

    We have calculated dielectronic recombination cross sections and rate coefficients for the Ne-like ions P 5+ and Cl 7+ in configuration-average, LS-coupling, and intermediate-coupling approximations. Autoionization into excited states reduces the cross sections and rate coefficients by substantial amounts in all three methods. There is only rough agreement between the configuration-average cross-section results and the corresponding intermediate-coupling results. There is good agreement, however, between the LS-coupling cross-section results and the corresponding intermediate-coupling results. The LS-coupling and intermediate-coupling rate coefficients agree to better than 5%, while the configuration-average rate coefficients are about 30% higher than the other two coupling methods. External electric field effects, as calculated in the configuration-average approximation, are found to be relatively small for the cross sections and completely negligible for the rate coefficients. Finally, the general formula of Burgess was found to overestimate the rate coefficients by roughly a factor of 5, mainly due to the neglect of autoionization into excited states

  13. Univariate Lp and ℓp Averaging, 0 < p < 1, in Polynomial Time by Utilization of Statistical Structure

    Directory of Open Access Journals (Sweden)

    John E. Lavery

    2012-10-01

    We present evidence that one can calculate generically combinatorially expensive Lp and ℓp averages, 0 < p < 1, in polynomial time by restricting the data to come from a wide class of statistical distributions. Our approach differs from the approaches in the previous literature, which are based on a priori sparsity requirements or on accepting a local minimum as a replacement for a global minimum. The functionals by which Lp averages are calculated are not convex but are radially monotonic, and the functionals by which ℓp averages are calculated are nearly so, which are the keys to solvability in polynomial time. Analytical results for symmetric, radially monotonic univariate distributions are presented. An algorithm for univariate ℓp averaging is presented. Computational results for a Gaussian distribution, a class of symmetric heavy-tailed distributions and a class of asymmetric heavy-tailed distributions are presented. Many phenomena in human-based areas are increasingly known to be represented by data that have large numbers of outliers and belong to very heavy-tailed distributions. When tails of distributions are so heavy that even medians (L1 and ℓ1 averages) do not exist, one needs to consider using ℓp minimization principles with 0 < p < 1.
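
    A useful elementary fact behind univariate ℓp averaging for 0 < p < 1: the objective Σᵢ|x − aᵢ|ᵖ is concave between consecutive data points, so a global minimizer always sits at one of the data points, and an exact brute-force search is O(n²). That observation is mine, not quoted from the paper, whose contribution is a polynomial-time method exploiting statistical structure; the sketch below is only a baseline for comparison.

```python
import numpy as np

def lp_average(a: np.ndarray, p: float) -> float:
    """Exact univariate l_p average for 0 < p < 1 by brute force.

    For p < 1 the objective sum(|x - a_i|^p) is concave between data
    points, so some data point attains the global minimum (O(n^2) check).
    """
    assert 0.0 < p < 1.0
    costs = [np.sum(np.abs(a - x) ** p) for x in a]
    return float(a[int(np.argmin(costs))])

data = np.array([0.9, 1.0, 1.1, 1.05, 50.0])   # heavy outlier at 50
print(lp_average(data, 0.5))                   # -> 1.05, stays in the cluster
```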

  14. Use of performance curves in estimating number of procedures required to achieve proficiency in coronary angiography

    DEFF Research Database (Denmark)

    Räder, Sune B E W; Jørgensen, Erik; Bech, Bo

    2011-01-01

    Background: Current guidelines in cardiology training programs recommend 100-300 coronary angiography procedures for certification. We aimed to assess the number of procedures needed to reach sufficient proficiency. Methods: Procedure time, fluoroscopy time, dose area product (DAP), and contrast... Results: The number of procedures needed for trainees to reach recommended reference levels was estimated as 226 and 353, for DAP and use of contrast media, respectively. After 300 procedures, trainees' procedure time, fluoroscopy time, DAP, and contrast media volume were significantly higher compared with experts' performance, P < .001 for all parameters. To approach the experts' level of DAP and contrast media use, trainees need 394 and 588 procedures, respectively. Performance curves showed large individual differences in the development of competence. Conclusion: On average, trainees needed 300 procedures to reach sufficient level...

  15. Overview of Commercial Building Partnerships in Higher Education

    Energy Technology Data Exchange (ETDEWEB)

    Schatz, Glenn [Energy Efficiency and Renewable Energy (EERE), Washington, DC (United States)

    2013-03-01

    Higher education uses less energy per square foot than most commercial building sectors. However, higher education campuses house energy-intensive laboratories and data centers that may consume more energy than this average; laboratories, in particular, are disproportionately represented in the higher education sector. The Commercial Building Partnership (CBP), a public/private, cost-shared program sponsored by the U.S. Department of Energy (DOE), paired selected commercial building owners and operators with representatives of DOE, its national laboratories, and private-sector technical experts. These teams explored energy-saving measures across building systems, including some considered too costly or technologically challenging, and used advanced energy modeling to achieve peak whole-building performance. Modeling results were then included in new construction or retrofit designs to achieve significant energy reductions.

  16. Threats to the Human Capacity of Regional Higher Education Institutions

    Directory of Open Access Journals (Sweden)

    Evgeny Valentinovich Romanov

    2018-03-01

    In recent years, the sphere of science and education in Russia has undergone significant reforms. However, the existing framework guiding the development of higher education contradicts the Strategy of Scientific and Technological Development of Russia. These contradictions concern the conditions for building an integral system of personnel reserve and recruitment, which is necessary for the scientific and technological development of the country. The change of the funding model and the transition to two-tier higher education contribute to the outflow of talented youth to the cities where branded universities are concentrated. This creates threats to the human capacity of regional higher education institutions (both in terms of staffing numbers and of personnel reserve). The decreasing number of students funded from the federal budget and the existing system of per capita funding are further threats to regional higher education institutions. These threats can result in a permanent reduction of academic teaching staff and a potential decline in the quality of education due to increasing teachers' workloads. The transition to the two-tier model of university education has changed the approach to evaluating the efficiency of scientific research. The number of publications in journals indexed in the Web of Science and Scopus has increased, but the patent activity of the leading higher education institutions has decreased many times over. The ratio of the number of articles to the number of granted patents in the leading Russian universities significantly exceeds the same indicator for the leading foreign universities. This can be regarded as «brain drain». Furthermore, this fact explains why the share of income from the results of intellectual activity in the total income of most Russian universities is close to zero. Regional higher education institutions need

  17. Time-resolved study of the electron temperature and number density of argon metastable atoms in argon-based dielectric barrier discharges

    Science.gov (United States)

    Desjardins, E.; Laurent, M.; Durocher-Jean, A.; Laroche, G.; Gherardi, N.; Naudé, N.; Stafford, L.

    2018-01-01

    A combination of optical emission spectroscopy and collisional-radiative modelling is used to determine the time-resolved electron temperature (assuming a Maxwellian electron energy distribution function) and the number density of Ar 1s states in atmospheric-pressure Ar-based dielectric barrier discharges in the presence of either NH3 or ethyl lactate. In both cases, T_e values were higher early in the discharge cycle (around 0.8 eV), decreased down to about 0.35 eV with the rise of the discharge current, and then remained fairly constant during discharge extinction. The opposite behaviour was observed for the Ar 1s states, with cycle-averaged values in the 10^17 m^-3 range. Based on these findings, a link was established between the discharge ionization kinetics (and thus the electron temperature) and the number density of the Ar 1s states.

  18. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  19. The effect of above average weight gains on the incidence of radiographic bone aberrations and epiphysitis in growing horses

    International Nuclear Information System (INIS)

    Thompson, K.N.; Jackson, S.G.; Rooney, J.R.

    1988-01-01

    The relationship between body weight gain and the onset of bone aberrations (e.g. epiphysitis) is described. A model was derived which described the increase in transverse epiphyseal width, and the major factor found to affect epiphyseal width was average daily gain in body weight. In addition, a radiographic examination of the epiphyseal areas showed a larger number of bone aberrations in groups gaining weight at an above-average rate. Thus, a rapid increase in body weight can be suggested as a significant factor in the onset of epiphysitis

  20. Relationship between the species-representative phenotype and intraspecific variation in Ranunculaceae floral organ and Asteraceae flower numbers.

    Science.gov (United States)

    Kitazawa, Miho S; Fujimoto, Koichi

    2016-04-01

    Phenotypic variation in floral morphologies contributes to speciation by testing various morphologies that might have higher adaptivity, leading eventually to phylogenetic diversity. Species diversity has been recognized, however, by modal morphologies where the variation is averaged out, so little is known about the relationship between the variation and the diversity. We analysed quantitatively the intraspecific variation of the organ numbers within flowers of Ranunculaceae, a family which branched near the monocot-eudicot separation, and the numbers of flowers within the capitula of Asteraceae, one of the most diverse families of eudicots. We used four elementary statistical quantities: mean, standard deviation (s.d.), degree of symmetry (skewness) and steepness (kurtosis). While these four quantities vary among populations, we found a common relationship between the s.d. and the mean number of petals and sepals in Ranunculaceae and the mean number of flowers per capitulum in Asteraceae. The s.d. is equal to the square root of the difference between the mean and a species-specific number, showing robustness: for example, 3 in Ficaria sepals, 5 in Ranunculus petals and Anemone tepals, and 13 in Farfugium ray florets. This square-root relationship was not applicable to Eranthis petals, which show little correlation between the s.d. and mean, or to the stamens and carpels of Ranunculaceae, whose s.d. is proportional to the mean. The specific values found in the square-root relationship provide a novel way to find the species-representative phenotype among varied morphologies. The representative phenotype is, in most cases, unique to the species or genus level, despite intraspecific differences of average phenotype among populations. The type of variation shown by the statistical quantities indicates not only the robustness of the morphologies but also how flowering plants changed during evolution among representative phenotypes that eventually led to phylogenetic diversification.
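
    In symbols, the square-root relationship reported for petals, sepals, tepals and ray florets reads as follows; the notation is chosen here for illustration, and the worked numbers are made up.

```latex
% Square-root relation between the s.d. and mean of organ/flower counts
% (symbols chosen for illustration; n_0 is the species-specific number,
% e.g. 3, 5 or 13 in the cases listed above):
\[
  \sigma \;=\; \sqrt{\mu - n_0}
\]
% Example (made-up figure): a Ranunculus population averaging mu = 5.4
% petals would be predicted to have sigma = sqrt(5.4 - 5) ~ 0.63.
```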

  1. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
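
    As a concrete illustration of frequentist model averaging (our sketch, not the paper's simulation), the snippet below fits nested polynomial regressions, forms Akaike weights from AIC, and averages the point predictions; the data-generating process and candidate set are assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 60
        x = rng.uniform(-1, 1, n)
        y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, 0.3, n)   # assumed true model

        x_new = 0.3                        # point at which to predict
        aics, preds = [], []
        for degree in range(4):            # candidate models: polynomials of degree 0..3
            X = np.vander(x, degree + 1, increasing=True)
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            rss = ((y - X @ beta) ** 2).sum()
            k = degree + 2                 # regression coefficients + error variance
            aics.append(n * np.log(rss / n) + 2 * k)
            preds.append(float(np.vander([x_new], degree + 1, increasing=True) @ beta))

        aics = np.array(aics)
        w = np.exp(-0.5 * (aics - aics.min()))
        w /= w.sum()                       # Akaike weights
        print("weights:", np.round(w, 3))
        print("model-averaged prediction:", float(w @ np.array(preds)))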

  2. Veterans' Mental Health in Higher Education Settings: Services and Clinician Education Needs.

    Science.gov (United States)

    Niv, Noosha; Bennett, Lauren

    2017-06-01

    Utilization of the GI Bill and attendance at higher education institutions among student veterans have significantly increased since passage of the Post-9/11 GI Bill. Campus counseling centers should be prepared to meet the mental health needs of student veterans. This study identified the mental health resources and services that colleges provide student veterans and the education needs of clinical staff regarding how to serve student veterans. Directors of mental health services from 80 California colleges completed a semistructured phone interview. Few schools track the number, demographic characteristics, or presenting needs of student veterans who utilize campus mental health services, or offer priority access or special mental health services for veterans. Directors wanted their centers to receive education on an average of 5.8 veteran-related mental health topics, and preferred workshops and lectures to handouts and online training. Significant training needs exist among clinical staff of campus mental health services to meet the needs of student veterans.

  3. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield, pertaining to both Auger and Coster-Kronig transitions, is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for the 3 subshell yields in most cases of inner-shell ionization.

  4. Two-mode bosonic quantum metrology with number fluctuations

    Science.gov (United States)

    De Pasquale, Antonella; Facchi, Paolo; Florio, Giuseppe; Giovannetti, Vittorio; Matsuoka, Koji; Yuasa, Kazuya

    2015-10-01

    We search for the optimal quantum pure states of identical bosonic particles for applications in quantum metrology, in particular in the estimation of a single parameter for the generic two-mode interferometric setup. We consider the general case in which the total number of particles fluctuates around an average N with variance ΔN². By recasting the problem in the framework of classical probability, we clarify the maximal accuracy attainable and show that it is always larger than the one reachable with a fixed number of particles (i.e., ΔN = 0). In particular, for larger fluctuations, the error in the estimation diminishes proportionally to 1/ΔN, below the Heisenberg-like scaling 1/N. We also clarify the best input state, which is a quasi-NOON state for a generic setup and, for some special cases, a two-mode Schrödinger-cat state with a vacuum component. In addition, we search for the best state within the class of pure Gaussian states with a given average N, which is revealed to be a product state (with no entanglement) with a squeezed vacuum in one mode and the vacuum in the other.
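
    One hedged way to see the claimed scaling (our reasoning, not the paper's derivation) is through the quantum Cramér-Rao bound: if the quantum Fisher information for the phase can grow like the second moment of the particle number, then

        \Delta\varphi \gtrsim \frac{1}{\sqrt{F_Q}} \sim \frac{1}{\sqrt{\langle \hat{N}^2 \rangle}} = \frac{1}{\sqrt{N^2 + \Delta N^2}},

    which reduces to the Heisenberg-like scaling 1/N for a fixed particle number and to 1/ΔN when the fluctuations dominate.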

  5. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  6. CT number of the fatty liver

    International Nuclear Information System (INIS)

    Maeda, Hiroko; Kawai, Takeshi; Kanasaki, Yoshiki; Akagi, Hiroaki

    1981-01-01

    This report examines the CT numbers and CT images of eight cases of fatty liver. Five of these cases showed a reversal of the densities of the liver and vessels; in these cases, the diagnosis of fatty liver was feasible. In the other cases, the diagnosis was possible only by comparing the CT numbers of the liver and spleen, because the CT number of normal liver is higher than that of the spleen. Examining the correlation between CT number and the specific gravities of blood, normal saline, distilled water, mayonnaise, edible oil, ethyl alcohol and lard, we observed a linear relationship between CT number and specific gravity. We therefore think that fatty liver can be diagnosed, and the degree of fatty infiltration estimated, from the CT numbers of the liver and spleen. (author)

  7. Higher education institutions, regional labour markets and population development

    OpenAIRE

    Stambøl, Lasse Sigbjørn

    2011-01-01

    An important motivation for establishing and developing higher education institutions across regions is to improve and restructure regional labour markets toward jobs requiring higher education, to help maintain the regional settlement patterns of the population in general, and to increase the supply of labour with higher education in particular. This paper gives a short description of the Norwegian regional higher education institution system, followed by analyses of the impact of higher education insti...

  8. Higher order perturbation theory applied to radiative transfer in non-plane-parallel media

    International Nuclear Information System (INIS)

    Box, M.A.; Polonsky, I.N.; Davis, A.B.

    2003-01-01

    Radiative transfer in non-plane-parallel media is a very challenging problem, which is currently the subject of concerted efforts to develop computational techniques which may be used to tackle different tasks. In this paper we develop the full formalism for another such technique, based on radiative perturbation theory. With this approach, one starts with a plane-parallel 'base model', for which many solution techniques exist, and treats the horizontal variability as a perturbation. We show that, under the most logical assumption as to the base model, the first-order perturbation term is zero for domain-averaged radiation quantities, so it is necessary to go to higher-order terms. This requires the computation of the Green's function. While this task is by no means simple, once the various pieces have been assembled they may be re-used for any number of perturbations, that is, any horizontal variations.
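
    Schematically (our notation, not the paper's), writing the horizontal variability as a perturbation of strength \lambda about the plane-parallel base model, a domain-averaged radiation quantity R expands as

        R(\lambda) = R_0 + \lambda R_1 + \lambda^2 R_2 + \cdots, \qquad R_1 = 0,

    so the leading correction is the second-order term R_2, whose evaluation requires the Green's function of the base problem.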

  9. Analysis of Known Linear Distributed Average Consensus Algorithms on Cycles and Paths

    Directory of Open Access Journals (Sweden)

    Jesús Gutiérrez-Gutiérrez

    2018-03-01

    Full Text Available In this paper, we compare six known linear distributed average consensus algorithms on a sensor network in terms of convergence time (and therefore, in terms of the number of transmissions required. The selected network topologies for the analysis (comparison are the cycle and the path. Specifically, in the present paper, we compute closed-form expressions for the convergence time of four known deterministic algorithms and closed-form bounds for the convergence time of two known randomized algorithms on cycles and paths. Moreover, we also compute a closed-form expression for the convergence time of the fastest deterministic algorithm considered on grids.
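
    For intuition, here is a minimal sketch (ours, not one of the six algorithms analysed in the paper) of linear distributed average consensus on a cycle of n nodes: each round, every node replaces its value by the average of itself and its two neighbours, and all values converge to the global mean.

        import numpy as np

        n = 20
        rng = np.random.default_rng(3)
        x = rng.uniform(0, 10, n)            # initial sensor readings
        target = x.mean()

        # Doubly stochastic weight matrix for a cycle: 1/3 to self and each neighbour
        W = np.zeros((n, n))
        for i in range(n):
            W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

        rounds = 0
        while np.max(np.abs(x - target)) > 1e-6:
            x = W @ x                        # one synchronous round of averaging
            rounds += 1
        print(f"converged to {target:.4f} in {rounds} rounds")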

  10. Low Nephron Number and Its Clinical Consequences

    Directory of Open Access Journals (Sweden)

    Valerie A. Luyckx

    2011-10-01

    Full Text Available It was proposed decades ago that developmental programming of the kidney impacts an individual’s risk for hypertension and renal disease in later life. Low birth weight is the strongest current clinical surrogate marker for an adverse intrauterine environment and, based on animal and human studies, is associated with a low nephron number. Other clinical correlates of low nephron number include female gender, short adult stature, small kidney size, and prematurity. Low nephron number in Caucasian and Australian Aboriginal subjects has been shown to be associated with higher blood pressures, and, conversely, hypertension is less prevalent in individuals with higher nephron numbers. In addition to nephron number, other programmed factors associated with the increased risk of hypertension include salt sensitivity, altered expression of renal sodium transporters, altered vascular reactivity, and sympathetic nervous system overactivity. Glomerular volume is universally found to vary inversely with nephron number, suggesting a degree of compensatory hypertrophy and hyperfunction in the setting of a low nephron number. This adaptation may become overwhelmed in the setting of superimposed renal insults, e.g. diabetes mellitus or rapid catch-up growth, leading to the vicious cycle of on-going hyperfiltration, proteinuria, nephron loss and progressive renal functional decline. Many millions of babies are born with low birth weight every year, and hypertension and renal disease prevalences are increasing around the globe. At present, little can be done clinically to augment nephron number; therefore adequate prenatal care and careful postnatal nutrition are crucial to optimize an individual’s nephron number during development and potentially to stem the tide of the growing cardiovascular and renal disease epidemics worldwide.

  11. Technical Note: Modification of the standard gain correction algorithm to compensate for the number of used reference flat frames in detector performance studies

    International Nuclear Information System (INIS)

    Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.

    2011-01-01

    Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed pattern noise (FPN) to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used for the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the noise propagated from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames used on the NPS and DQE values and appropriately modifies the gain correction algorithm to compensate for this effect. Results: It is shown that, using the suggested gain correction algorithm, a minimum number of reference flat frames (i.e., down to one frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the method presented in the study (a) leads to the maximum DQE value that one would obtain with the conventional method and a very large number of frames, and (b) has been compared to an independent gain correction method based on the subtraction of flat-field images, leading to identical DQE values. They believe this provides robust validation of the proposed method.
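
    A minimal sketch of the conventional gain (flat-field) correction that the note builds on; array sizes, signal levels, and the 16-frame reference are our assumptions, and the paper's actual compensation for the residual noise of the average flat is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(4)
        gain = 1 + 0.05 * rng.standard_normal((64, 64))   # fixed per-pixel gain map (FPN)

        def acquire(n_frames, mean_signal=1000.0):
            """Simulate flat-field frames: Poisson photon noise times the gain map."""
            return gain * rng.poisson(mean_signal, (n_frames, 64, 64))

        flat_avg = acquire(16).mean(axis=0)   # average reference flat (16 frames)
        raw = acquire(1)[0]                   # flat image under test

        # Standard gain correction: normalize by the average flat.  The corrected
        # image still carries noise propagated from flat_avg, which is why the
        # result depends on the number of reference frames used.
        corrected = raw / flat_avg * flat_avg.mean()
        print("raw std:      ", raw.std())
        print("corrected std:", corrected.std())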

  12. Total Path Length and Number of Terminal Nodes for Decision Trees

    KAUST Repository

    Hussain, Shahid

    2014-01-01

    This paper presents a new tool for the study of relationships between total path length (average depth) and number of terminal nodes for decision trees. These relationships are important from the point of view of optimization of decision trees.

  13. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  14. Choices in higher education: Majoring in and changing from the sciences

    Science.gov (United States)

    Minear, Nancy Ann

    This dissertation addresses patterns of retention of undergraduate science, engineering and mathematics (SEM) students, with special attention paid to female and under-represented minority students. As such, the study is focused on issues related to academic discipline and institutional retention, rather than the retention of students in the overall system of higher education. While previous retention studies have little to say about rates of retention that are specific to the sciences (or any other specific area of study) or employ models that rely on students' performance at the college level, this work addresses both points by identifying the post-secondary academic performance characteristics of persisters and non-persisters in the sciences by gender, ethnicity and matriculating major, as well as identifying introductory SEM course requirements that prevent students from persisting in science majors. A secondary goal of investigating the usefulness of institutional records for retention research is addressed. Models produced for the entire population and selected subpopulations consistently classified higher-performing (both SEM and non-SEM grade point averages) students into Bachelor of Science categories using the number of Introductory Chemistry courses attempted at the university. For lower-performing students, those with more introductory chemistry courses were classified as changing majors out of the sciences, and in general as completing a Bachelor of Arts degree. Performance in gatekeeper courses as a predictor of terminal academic status was limited to Introductory Physics for a small number of cases. Performance in Introductory Calculus and Introductory Chemistry was not consistently utilized as a predictor variable. The models produced for various subpopulations (women, ethnic groups and matriculation

  15. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  16. Design of a high average-power FEL driven by an existing 20 MV electrostatic-accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Kimel, I.; Elias, L.R. [Univ. of Central Florida, Orlando, FL (United States)

    1995-12-31

    There are some important applications where high average-power radiation is required; two examples are industrial machining and space power-beaming. Unfortunately, to date no FEL has been able to demonstrate more than 10 Watts of average power. To remedy this situation we started a program geared towards the development of high average-power FELs. As a first step we are building, in our CREOL laboratory, a compact FEL which will generate close to 1 kW in CW operation. As the next step we are also engaged in the design of a much higher average-power system based on a 20 MV electrostatic accelerator. This FEL will be capable of operating CW with a power output of 60 kW. The idea is to perform a high-power demonstration using the existing 20 MV electrostatic accelerator at the Tandar facility in Buenos Aires. This machine has been dedicated to accelerating heavy ions for experiments and applications in nuclear and atomic physics; the adaptations required to use it to accelerate electrons will be described. An important aspect of the design of the 20 MV system is the electron-beam optics through almost 30 meters of accelerating and decelerating tubes as well as the undulator. Of equal importance is a careful design of the long resonator, with mirrors able to withstand high power loading and with proper heat-dissipation features.

  17. Combining Service and Learning in Higher Education

    National Research Council Canada - National Science Library

    Gray, Maryann

    1999-01-01

    .... Hundreds of college and university presidents, most of the major higher education associations, and a number of highly influential scholars actively support the development of service-learning...

  18. Dutch higher education and Chinese students in the Netherlands

    NARCIS (Netherlands)

    Hong, T.M.; Pieke, F.N.; Steehouder, L.; Veldhuizen, van J.L.

    2017-01-01

    The number of Chinese students in the Dutch higher education sector has grown rapidly. In 2014 the number of Chinese BA and MA students reached 4638, or about 7 percent of the population of international students in the Netherlands. The number of formally employed PhD students in that year was 427.

  19. Cryogenic wind tunnel technology. A way to measurement at higher Reynolds numbers

    Science.gov (United States)

    Beck, J. W.

    1984-01-01

    The goals, design, problems, and value of cryogenic transonic wind tunnels being developed in Europe are discussed. The disadvantages inherent in low-Reynolds-number (Re) wind tunnel simulations of aircraft flight at high Re are reviewed, and the cryogenic tunnel is shown to be the most practical method to achieve high Re. The design proposed for the European Transonic Wind tunnel (ETW) is presented: parameters include cross section = 4 sq m, operating pressure = 5 bar, temperature = 110-120 K, maximum Re = 40 × 10⁶, liquid N2 consumption = 40,000 metric tons/year, and power = 39.5 MW. The smaller Cologne subsonic tunnel being adapted to cryogenic use for preliminary studies is described. Problems of configuration, materials, and liquid N2 evaporation and handling, and the research underway to solve them, are outlined. The benefits to be gained by the construction of these costly installations are seen more in applied aerodynamics than in basic research in fluid physics. The need for parallel development of both high-Re tunnels and computers capable of performing high-Re numerical analysis is stressed.

  20. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  1. Analysis and Design of Improved Weighted Average Current Control Strategy for LCL-Type Grid-Connected Inverters

    DEFF Research Database (Denmark)

    Han, Yang; Li, Zipeng; Yang, Ping

    2017-01-01

    The LCL grid-connected inverter has the ability to attenuate high-frequency current harmonics. However, the inherent resonance of the LCL filter affects the system stability significantly. To damp the resonance effect, dual-loop current control can be used to stabilize the system. The grid current plus capacitor current feedback scheme is widely used for its better transient response and high robustness against grid impedance variations, while the weighted average current (WAC) feedback scheme is capable of providing a wider bandwidth at higher frequencies but shows poor stability...

  2. In situ formation and spatial variability of particle number concentration in a European megacity

    Science.gov (United States)

    Pikridas, M.; Sciare, J.; Freutel, F.; Crumeyrolle, S.; von der Weiden-Reinmüller, S.-L.; Borbon, A.; Schwarzenboeck, A.; Merkel, M.; Crippa, M.; Kostenidou, E.; Psichoudaki, M.; Hildebrandt, L.; Engelhart, G. J.; Petäjä, T.; Prévôt, A. S. H.; Drewnick, F.; Baltensperger, U.; Wiedensohler, A.; Kulmala, M.; Beekmann, M.; Pandis, S. N.

    2015-09-01

    Ambient particle number size distributions were measured in Paris, France, during summer (1-31 July 2009) and winter (15 January to 15 February 2010) at three fixed ground sites and using two mobile laboratories and one airplane. The campaigns were part of the Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation (MEGAPOLI) project. New particle formation (NPF) was observed only during summer on approximately 50 % of the campaign days, assisted by the low condensation sink (about 10.7 ± 5.9 × 10⁻³ s⁻¹). NPF events inside the Paris plume were also observed at 600 m altitude onboard an aircraft simultaneously with regional events identified on the ground. Increased particle number concentrations were measured aloft also outside of the Paris plume at the same altitude, and were attributed to NPF. The Paris plume was identified, based on increased particle number and black carbon concentration, up to 200 km away from the Paris center during summer. The number concentration of particles with diameters exceeding 2.5 nm measured on the surface at the Paris center was on average 6.9 ± 8.7 × 10⁴ and 12.1 ± 8.6 × 10⁴ cm⁻³ during summer and winter, respectively, and was found to decrease exponentially with distance from Paris. However, further than 30 km from the city center, the particle number concentration at the surface was similar during both campaigns. During summer, one suburban site in the NE was not significantly affected by Paris emissions due to higher background number concentrations, while the particle number concentration at the second suburban site in the SW increased by a factor of 3 when it was downwind of Paris.

  3. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method to the quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of the vibration amplitude, and reduces the complexity of quantifying the data compared with the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated by measuring the out-of-plane vibration amplitude of a small-scale specimen using a time average microscopic TV holography system.
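
    A minimal 1-D sketch of HT-based phase extraction (the chirp signal below is our toy example; real time-average holography yields Bessel-modulated fringes that need more care): scipy's hilbert returns the analytic signal, whose angle recovers the fringe phase from a single pattern.

        import numpy as np
        from scipy.signal import hilbert

        x = np.linspace(0, 1, 2000)
        phase_true = 12 * np.pi * x**2            # assumed spatially varying phase
        fringes = np.cos(phase_true)              # single recorded fringe pattern

        analytic = hilbert(fringes)               # cos(phi) -> approx. exp(i*phi)
        phase = np.unwrap(np.angle(analytic))     # recovered continuous phase

        err = np.abs(phase - phase_true)[100:-100]  # ignore edge effects
        print(f"max interior phase error: {err.max():.3f} rad")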

  4. From conventional averages to individual dose painting in radiotherapy for human tumors: challenge to non-uniformity

    International Nuclear Information System (INIS)

    Maciejewski, B.; Rodney Withers, H.

    2004-01-01

    Current clinical trials and reports on outcomes after radiation therapy (e.g. breast, head and neck, prostate) reveal many limitations of conventional techniques and dose-fractionation schedules, and of 'average' conclusions, in clinical practice. Even after decades of evolution of radiation therapy we still do not know how to optimize treatment for the individual patient, and have only 'averages' and ill-defined 'probabilities' to guide treatment prescription. There is wide clinical and biological heterogeneity within the groups of patients recruited into clinical trials, with a few-fold variation in tumour volume within one stage of disease. Basic radiobiological guidelines concerning average cell killing of uniformly distributed and equally radiosensitive tumour cells arose from elegant but idealistic in vitro experiments and seem to be of uncertain validity. Therefore, we are confronted with more dilemmas than dogmas. Nonlinearity and inhomogeneity of human tumour pattern and response to irradiation are discussed. The purpose of this paper is to present and discuss various aspects of non-uniform, tumour-cell-targeted radiotherapy using conformal and dose-intensity-modulated techniques. (author)

  5. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  6. Nodal O(h4)-superconvergence in 3D by averaging piecewise linear, bilinear, and trilinear FE approximations

    Czech Academy of Sciences Publication Activity Database

    Hannukainen, A.; Korotov, S.; Křížek, Michal

    2010-01-01

    Vol. 28, No. 1 (2010), pp. 1-10 ISSN 0254-9409 R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords: higher order error estimates * tetrahedral and prismatic elements * superconvergence * averaging operators Subject RIV: BA - General Mathematics Impact factor: 0.760, year: 2010 http://www.jstor.org/stable/43693564

  7. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article "An average salary: approaches to the index determination" is devoted to studying various methods of calculating this index, both those used by the official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research is laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section "Socio-economic indexes: living standards of the population", as well as scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. In the course of the research, the following methods were used: analytical, statistical, calculated-mathematical and graphical. The main result of the research is an option for supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in forming separate material indexes for the different categories of employees in enterprises or organizations who are mainly engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of the working conditions in a wide range of organizations, where an employee is often forced, in addition to the main position, to fulfil additional job duties. As a result, it is common for the average salary at an enterprise to be difficult to assess objectively, because it is built up from multiple rates per staff member. In other words, the average salary of

  8. 7 CFR 1437.11 - Average market price and payment factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average market price and payment factors. 1437.11... ASSISTANCE PROGRAM General Provisions § 1437.11 Average market price and payment factors. (a) An average... average market price by the applicable payment factor (i.e., harvested, unharvested, or prevented planting...

  9. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has, however, been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases.

  10. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  11. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
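
    A minimal illustration of the classical randomized gossip scheme the paper starts from (our sketch; the reinforcement-learning variants are not shown): at each step one random node averages with a neighbour, which preserves the sum and drives all values to the mean.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 30
        x = rng.uniform(0, 100, n)                # node values on a ring network
        target = x.mean()

        steps = 0
        while x.max() - x.min() > 1e-6:
            i = rng.integers(n)                   # wake a random node
            j = (i + 1) % n                       # gossip with its ring neighbour
            x[i] = x[j] = (x[i] + x[j]) / 2       # pairwise averaging preserves the sum
            steps += 1
        print(f"all nodes within 1e-6 of {target:.4f} after {steps} exchanges")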

  12. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  13. OrthoANI: An improved algorithm and software for calculating average nucleotide identity.

    Science.gov (United States)

    Lee, Imchang; Ouk Kim, Yeong; Park, Sang-Cheol; Chun, Jongsik

    2016-02-01

    Species demarcation in Bacteria and Archaea is mainly based on overall genome relatedness, which serves as a framework for modern microbiology. Current practice for obtaining these measures between two strains is shifting from experimentally determined similarity, obtained by DNA-DNA hybridization (DDH), to genome-sequence-based similarity. Average nucleotide identity (ANI) is a simple algorithm that mimics DDH. Like DDH, ANI values between two genome sequences may differ when the reciprocal calculations are compared. We compared 63 690 pairs of genome sequences and found that the differences in reciprocal ANI values can be significant, exceeding 1 % in some cases. To resolve this lack of symmetry, a new algorithm, named OrthoANI, was developed to accommodate the concept of orthology: both genome sequences are fragmented, and only orthologous fragment pairs are taken into consideration when calculating nucleotide identities. OrthoANI is highly correlated with ANI (using BLASTn), the former showing approximately 0.1 % higher values than the latter. In conclusion, OrthoANI provides a more robust and faster means of calculating average nucleotide identity for taxonomic purposes. The standalone software tools are freely available at http://www.ezbiocloud.net/sw/oat.
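
    A toy sketch of the fragment-and-reciprocal-best-hit idea behind OrthoANI (heavily simplified by us: real OrthoANI uses ~1020-bp fragments and BLASTn alignments, whereas difflib's ratio is only a crude stand-in for alignment identity):

        from difflib import SequenceMatcher

        def fragments(seq, size=20):
            return [seq[i:i + size] for i in range(0, len(seq) - size + 1, size)]

        def identity(a, b):
            return SequenceMatcher(None, a, b).ratio()   # stand-in for BLASTn identity

        def ortho_ani(g1, g2, size=20):
            f1, f2 = fragments(g1, size), fragments(g2, size)
            best12 = {i: max(range(len(f2)), key=lambda j: identity(f1[i], f2[j]))
                      for i in range(len(f1))}
            best21 = {j: max(range(len(f1)), key=lambda i: identity(f1[i], f2[j]))
                      for j in range(len(f2))}
            # keep only reciprocal best pairs ("orthologous" fragments)
            pairs = [(i, j) for i, j in best12.items() if best21[j] == i]
            return sum(identity(f1[i], f2[j]) for i, j in pairs) / len(pairs)

        a = "atcgtacgatcgattcgagctagctagcatcggatcgatta" * 3
        b = a.replace("atc", "att")                      # mutated copy as second "genome"
        print(f"toy OrthoANI-like identity: {ortho_ani(a, b):.3f}")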

  14. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  15. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  16. How to pass higher English colour

    CERN Document Server

    Bridges, Ann

    2009-01-01

    How to Pass is the Number 1 revision series for Scottish qualifications across the three examination levels of Standard Grade, Intermediate and Higher! Second editions of the books present all of the material in full colour for the first time.

  17. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
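
    For reference, a common form of the Hargreaves model for estimating reference evapotranspiration from temperature and extraterrestrial radiation (coefficients as usually cited in the literature; whether the study used exactly this variant is our assumption):

        import numpy as np

        def hargreaves_et0(tmax, tmin, ra):
            """Reference evapotranspiration (mm/day), Hargreaves-Samani form.

            tmax, tmin : daily maximum/minimum air temperature (deg C)
            ra         : extraterrestrial radiation in evaporation-equivalent mm/day
            """
            tmean = (tmax + tmin) / 2
            return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

        # Illustrative numbers for a warm, clear day
        print(f"ET0 = {hargreaves_et0(tmax=31.0, tmin=19.0, ra=15.0):.2f} mm/day")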

  18. USA: Economics, Politics, Ideology, Number 6, June 1978.

    Science.gov (United States)

    1978-07-26

    great practical interest. Calculations of this kind involve great difficulties and require the use of detailed econometric models. ... municipal services, however, could have given work to 98,000 persons. Huge military expenditures result in higher taxes, a larger national debt ... (in constant prices) national income per capita. Since the average annual rate of population growth in the United States during the postwar period

  19. Prospective memory deficits in illicit polydrug users are associated with the average long-term typical dose of ecstasy typically consumed in a single session.

    Science.gov (United States)

    Gallagher, Denis T; Hadjiefthyvoulou, Florentia; Fisk, John E; Montgomery, Catharine; Robinson, Sarita J; Judge, Jeannie

    2014-01-01

    Neuroimaging evidence suggests that ecstasy-related reductions in SERT densities relate more closely to the number of tablets typically consumed per session than to estimated total lifetime use. To better understand the basis of drug-related deficits in prospective memory (p.m.), we explored the association between p.m. and average long-term typical dose and long-term frequency of use. Study 1: Sixty-five ecstasy/polydrug users and 85 nonecstasy users completed an event-based, a short-term time-based and a long-term time-based p.m. task. Study 2: Study 1 data were merged with outcomes on the same p.m. measures from a previous study, creating a combined sample of 103 ecstasy/polydrug users, 38 cannabis-only users, and 65 nonusers of illicit drugs. Study 1: Ecstasy/polydrug users had significant impairments on all p.m. outcomes compared with nonecstasy users. Study 2: Ecstasy/polydrug users were impaired in event-based p.m. compared with both other groups, and in long-term time-based p.m. compared with nonusers of illicit drugs. Both drug-using groups did worse on the short-term time-based p.m. task compared with nonusers. A higher long-term average typical dose of ecstasy was associated with poorer performance on the event-based and short-term time-based p.m. tasks, and accounted for unique variance in the two p.m. measures over and above the variance associated with cannabis and cocaine use. The typical ecstasy dose consumed in a single session is an important predictor of p.m. impairments, with higher doses reflecting increasing tolerance and giving rise to greater p.m. impairment.

  20. Colorectal cancer screening for average-risk North Americans: an economic evaluation.

    Directory of Open Access Journals (Sweden)

    Steven J Heitman

    Full Text Available BACKGROUND: Colorectal cancer (CRC) fulfills the World Health Organization criteria for mass screening, but screening uptake is low in most countries. CRC screening is resource intensive, and it is unclear if an optimal strategy exists. The objective of this study was to perform an economic evaluation of CRC screening in average-risk North American individuals considering all relevant screening modalities and current CRC treatment costs. METHODS AND FINDINGS: An incremental cost-utility analysis using a Markov model was performed comparing guaiac-based fecal occult blood test (FOBT) or fecal immunochemical test (FIT) annually, fecal DNA every 3 years, flexible sigmoidoscopy or computed tomographic colonography every 5 years, and colonoscopy every 10 years. All strategies were also compared to a no-screening natural history arm. Given that different FIT assays and collection methods have been previously tested, three distinct FIT testing strategies were considered, on the basis of studies that have reported "low," "mid," and "high" test performance characteristics for detecting adenomas and CRC. Adenoma and CRC prevalence rates were based on a recent systematic review, whereas screening adherence, test performance, and CRC treatment costs were based on publicly available data. The outcome measures included lifetime costs, number of cancers, cancer-related deaths, quality-adjusted life-years gained, and incremental cost-utility ratios. Sensitivity and scenario analyses were performed. Annual FIT, assuming mid-range testing characteristics, was more effective and less costly compared to all strategies (including no screening) except FIT-high. Among the lifetimes of 100,000 average-risk patients, the number of cancers could be reduced from 4,857 to 1,393 [corrected] and the number of CRC deaths from 1,782 [corrected] to 457, while saving CAN$68 per person. Although screening patients with FIT became more expensive than a strategy of no screening when the
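
    A skeletal Markov cohort model of the kind used in such cost-utility analyses (all transition probabilities, costs, and utilities below are made-up placeholders, not the study's calibrated inputs):

        import numpy as np

        # States: 0 = well, 1 = CRC, 2 = dead.  Annual cycles, illustrative numbers.
        def run_cohort(p_crc, years=40, discount=0.03):
            P = np.array([[1 - p_crc - 0.01, p_crc, 0.01],
                          [0.00,             0.85,  0.15],
                          [0.00,             0.00,  1.00]])
            cost = np.array([0.0, 25_000.0, 0.0])    # annual cost per state (CAN$)
            qaly = np.array([1.0, 0.7, 0.0])         # utility per state
            state = np.array([1.0, 0.0, 0.0])        # cohort starts in "well"
            total_cost = total_qaly = 0.0
            for t in range(years):
                d = 1 / (1 + discount) ** t
                total_cost += d * (state @ cost)
                total_qaly += d * (state @ qaly)
                state = state @ P
            return total_cost, total_qaly

        c0, q0 = run_cohort(p_crc=0.004)             # no screening
        c1, q1 = run_cohort(p_crc=0.002)             # screening halves incidence (assumed)
        c1 += 300.0                                  # assumed lifetime screening cost
        print(f"incremental cost: {c1 - c0:,.0f} CAN$, incremental QALYs: {q1 - q0:.4f}")
        print(f"ICER: {(c1 - c0) / (q1 - q0):,.0f} CAN$/QALY (negative = cost-saving)")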

  1. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  2. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982-1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975-1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated by the averaged statistics.

  3. The basic reproduction number (R0) of measles: a systematic review.

    Science.gov (United States)

    Guerra, Fiona M; Bolotin, Shelly; Lim, Gillian; Heffernan, Jane; Deeks, Shelley L; Li, Ye; Crowcroft, Natasha S

    2017-12-01

    The basic reproduction number, R nought (R0), is defined as the average number of secondary cases of an infectious disease arising from a typical case in a totally susceptible population, and can be estimated in populations if pre-existing immunity can be accounted for in the calculation. R0 determines the herd immunity threshold and therefore the immunisation coverage required to achieve elimination of an infectious disease. As R0 increases, higher immunisation coverage is required to achieve herd immunity. In July, 2010, a panel of experts convened by WHO concluded that measles can and should be eradicated. Despite the existence of an effective vaccine, regions have had varying success in measles control, in part because measles is one of the most contagious infections. For measles, R0 is often cited to be 12-18, which means that each person with measles would, on average, infect 12-18 other people in a totally susceptible population. We did a systematic review to find studies reporting rigorous estimates and determinants of measles R0. Studies were included if they were a primary source of R0, addressed pre-existing immunity, and accounted for pre-existing immunity in their calculation of R0. A search of key databases was done in January, 2015, and repeated in November, 2016, and yielded 10 883 unique citations. After screening for relevancy and quality, 18 studies met inclusion criteria, providing 58 R0 estimates. We calculated median measles R0 values stratified by key covariates. We found that R0 estimates vary more than the often cited range of 12-18. Our results highlight the importance of countries calculating R0 using locally derived data or, if this is not possible, using parameter estimates from similar settings. Additional data and agreed review methods are needed to strengthen the evidence base for measles elimination modelling. Copyright © 2017 Elsevier Ltd. All rights reserved.
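
    The link between R0 and the immunisation coverage needed for herd immunity is the standard threshold relation (a textbook formula, not a result of this review): with a perfect vaccine, the critical coverage is 1 - 1/R0.

        # Herd immunity threshold for the commonly cited measles R0 range
        for r0 in (12, 15, 18):
            print(f"R0 = {r0}: critical immunisation coverage = {1 - 1 / r0:.1%}")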

  4. ORLIB: a computer code that produces one-energy group, time- and spatially-averaged neutron cross sections

    International Nuclear Information System (INIS)

    Blink, J.A.; Dye, R.E.; Kimlinger, J.R.

    1981-12-01

    Calculation of neutron activation of proposed fusion reactors requires a library of neutron-activation cross sections. One such library is ACTL, which is being updated and expanded by Howerton. If the energy-dependent neutron flux is also known as a function of location and time, the buildup and decay of activation products can be calculated. In practice, hand calculation is impractical without energy-averaged cross sections because of the large number of energy groups. A widely used activation computer code, ORIGEN2, also requires energy-averaged cross sections. Accordingly, we wrote the ORLIB code to collapse the ACTL library, using the flux as a weighting function. The ORLIB code runs on the LLNL Cray computer network. We have also modified ORIGEN2 to accept the expanded activation libraries produced by ORLIB
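
    The core operation of such a collapse is flux weighting (a sketch of the general technique; ORLIB's group structures and data formats are not reproduced): the one-group cross section is sigma_bar = sum_g(sigma_g * phi_g) / sum_g(phi_g).

        import numpy as np

        # Multigroup activation cross sections (barns) and group fluxes (n/cm^2/s);
        # the five-group structure and all values are illustrative.
        sigma = np.array([2.10, 0.85, 0.30, 0.12, 0.05])
        phi = np.array([1e12, 5e12, 2e13, 8e13, 3e14])

        sigma_bar = (sigma * phi).sum() / phi.sum()    # flux-weighted collapse
        print(f"one-group cross section: {sigma_bar:.4f} b")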

  5. Local and average structure of Mn- and La-substituted BiFeO{sub 3}

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Bo; Selbach, Sverre M., E-mail: selbach@ntnu.no

    2017-06-15

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT-relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is captured neither in average-structure space group models nor in DFT calculations with artificial long-range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local- and average-structure-sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions. - Graphical abstract: The experimental and simulated partial pair distribution functions (PDF) for BiFeO3, BiFe0.875Mn0.125O3, BiFe0.75Mn0.25O3 and Bi0.9La0.1FeO3.

  6. Developmental patterns of privatization in higher education

    DEFF Research Database (Denmark)

    Jamshidi, Laleh; Arasteh, Hamidreza; NavehEbrahim, Abdolrahim

    2012-01-01

    In most developing countries, as the young population increases in number and, consequently, the demand for higher education rises, governments cannot respond to all demands. Accordingly, they develop private higher education sectors as an alternative solution. In developed countries, some moving...... : Indonesia, Malaysia, and Kenya. After a short outline of theoretical foundations, this study provides more in-depth explanations of the principal and common effective factors.

  7. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  8. The Love of Large Numbers: A Popularity Bias in Consumer Choice.

    Science.gov (United States)

    Powell, Derek; Yu, Jingqi; DeWolf, Melissa; Holyoak, Keith J

    2017-10-01

    Social learning, the ability to learn from observing the decisions of other people and the outcomes of those decisions, is fundamental to human evolutionary and cultural success. The Internet now provides social evidence on an unprecedented scale. However, properly utilizing this evidence requires a capacity for statistical inference. We examined how people's interpretation of online review scores is influenced by the number of reviews, a potential indicator both of an item's popularity and of the precision of the average review score. Our task was designed to pit statistical information against social information. We modeled the behavior of an "intuitive statistician" using empirical prior information from millions of reviews posted on Amazon.com and then compared the model's predictions with the behavior of experimental participants. Under certain conditions, people preferred a product with more reviews to one with fewer reviews even though the statistical model indicated that the latter was likely to be of higher quality than the former. Overall, participants' judgments suggested that they failed to make meaningful statistical inferences.

  9. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
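
    A simplified version of the radial-averaging step (our reconstruction of the general idea with one ray centre at the centroid; the paper's alignment axis and two overlapping arcs are omitted):

        import numpy as np

        def radial_profile(outline_xy, n_rays=72):
            """Mean radius of a closed outline on n equiangular rays from its centroid."""
            xy = np.asarray(outline_xy, dtype=float)
            d = xy - xy.mean(axis=0)
            ang = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
            r = np.hypot(d[:, 0], d[:, 1])
            bins = (ang / (2 * np.pi) * n_rays).astype(int)
            return np.array([r[bins == k].mean() if np.any(bins == k) else np.nan
                             for k in range(n_rays)])

        # Two illustrative noisy elliptical "footprints" of similar length
        t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
        rng = np.random.default_rng(6)
        feet = [np.column_stack((130 * np.cos(t), 50 * np.sin(t))) + rng.normal(0, 1, (400, 2))
                for _ in range(2)]

        profile = np.nanmean([radial_profile(f) for f in feet], axis=0)
        print("average radius on first 5 rays:", np.round(profile[:5], 1))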

  10. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  11. CD3+/CD16+CD56+ cell numbers in peripheral blood are correlated with higher tumor burden in patients with diffuse large B-cell lymphoma

    Directory of Open Access Journals (Sweden)

    Anna Twardosz

    2011-04-01

    Full Text Available Diffuse large B-cell lymphoma is the commonest histological type of malignant lymphoma, and remains incurable in many cases. Developing more efficient immunotherapy strategies will require better understanding of the disorders of immune responses in cancer patients. NKT (natural killer-like T) cells were originally described as a unique population of T cells with the co-expression of NK cell markers. Apart from their role in protecting against microbial pathogens and controlling autoimmune diseases, NKT cells have recently been revealed as one of the key players in the immune responses against tumors. The objective of this study was to evaluate the frequency of CD3+/CD16+CD56+ cells in the peripheral blood of 28 diffuse large B-cell lymphoma (DLBCL) patients in correlation with clinical and laboratory parameters. Median percentages of CD3+/CD16+CD56+ cells were significantly lower in patients with DLBCL compared to healthy donors (7.37% vs. 9.01%, p = 0.01; 4.60% vs. 5.81%, p = 0.03), although there were no differences in absolute counts. The frequency and the absolute numbers of CD3+/CD16+CD56+ cells were lower in advanced clinical stages than in earlier ones. The median percentage of CD3+/CD16+CD56+ cells in patients in Ann Arbor stages 1-2 was 5.55% vs. 3.15% in stages 3-4 (p = 0.02), with median absolute counts of 0.26 G/L vs. 0.41 G/L, respectively (p = 0.02). The percentage and absolute numbers of CD3+/CD16+CD56+ cells were significantly higher in DLBCL patients without B-symptoms compared to the patients with B-symptoms (5.51% vs. 2.46%, p = 0.04; 0.21 G/L vs. 0.44 G/L, p = 0.04). The percentage of CD3+/CD16+CD56+ cells correlated adversely with serum lactate dehydrogenase (R = -445; p < 0.05), which might influence the NKT count. These figures suggest a relationship between higher tumor burden and more aggressive disease and decreased NKT numbers. But it remains to be explained whether low NKT cell counts in the peripheral blood of patients with DLBCL are the result

  12. Mean-field theory of spin-glasses with finite coordination number

    Science.gov (United States)

    Kanter, I.; Sompolinsky, H.

    1987-01-01

    The mean-field theory of dilute spin-glasses is studied in the limit where the average coordination number is finite. The zero-temperature phase diagram is calculated and the relationship between the spin-glass phase and the percolation transition is discussed. The present formalism is applicable also to graph optimization problems.

  13. The Demise of Higher Education Performance Funding Systems in Three States. CCRC Brief. Number 41

    Science.gov (United States)

    Dougherty, Kevin J.; Natow, Rebecca S.

    2009-01-01

    Performance funding in higher education ties state funding directly to institutional performance on specific indicators, such as rates of retention, graduation, and job placement. One of the great puzzles about performance funding is that it has been both popular and unstable. Between 1979 and 2007, 26 states enacted it, but 14 of those states…

  14. Effects of Video-Based and Applied Problems on the Procedural Math Skills of Average- and Low-Achieving Adolescents.

    Science.gov (United States)

    Bottge, Brian A.; Heinrichs, Mary; Chan, Shih-Yi; Mehta, Zara Dee; Watson, Elizabeth

    2003-01-01

    This study examined effects of video-based, anchored instruction and applied problems on the ability of 11 low-achieving (LA) and 26 average-achieving (AA) eighth graders to solve computation and word problems. Performance for both groups was higher during anchored instruction than during baseline, but no differences were found between instruction…

  15. Accreditation and Expansion in Danish Higher Education

    DEFF Research Database (Denmark)

    Rasmussen, Palle

    2014-01-01

    During the last decade, an accreditation system for higher education has been introduced in Denmark. Accreditation partly represents continuity from an earlier evaluation system, but it is also part of a government policy to increasingly define higher education institutions as market actors… The attempts of universities to increase their student enrolments have combined with the logic of accreditation to produce an increasing number of higher education degrees, often overlapping in content. Students’ scope for choice has been widened, but the basis for and the consequences of choice have become…

  16. High average daily intake of PCDD/Fs and serum levels in residents living near a deserted factory producing pentachlorophenol (PCP) in Taiwan: Influence of contaminated fish consumption

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C.C. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Research Center of Environmental Trace Toxic Substances, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Lin, W.T. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Liao, P.C. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Research Center of Environmental Trace Toxic Substances, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Su, H.J. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Research Center of Environmental Trace Toxic Substances, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Chen, H.L. [Department of Industrial Safety and Health, Hung Kuang University, Taichung, 34 Chung Chie Rd. Sha Lu, Taichung 433, Taiwan (China)]. E-mail: hsiulin@sunrise.hk.edu.tw

    2006-05-15

    An abandoned pentachlorophenol plant and the nearby area in southern Taiwan were heavily contaminated by dioxins, impurities formed in the PCP production process. The investigation showed that the average serum PCDD/F level of residents living in the nearby area (62.5 pg WHO-TEQ/g lipid) was higher than that of those living in non-polluted areas (22.5 and 18.2 pg WHO-TEQ/g lipid) (P < 0.05). In biota samples, the average PCDD/F level of milkfish in the sea reservoir (28.3 pg WHO-TEQ/g) was higher than in the nearby fish farm (0.15 pg WHO-TEQ/g), and tilapia and shrimp showed a similar trend. The average daily PCDD/F intake of 38% of participants was higher than the 4 pg WHO-TEQ/kg/day suggested by the World Health Organization. Serum PCDD/F was positively associated with average daily intake (ADI) after adjustment for age, sex, BMI, and smoking status. In addition, a prospective cohort study is suggested to determine the long-term health effects on the people living near the factory. - Inhabitants living near a deserted PCP factory are exposed to high PCDD/F levels.

  17. The north–south divide in the Italian higher education system

    DEFF Research Database (Denmark)

    Abramo, Giovanni; D’Angelo, Ciriaco Andrea; Rosati, Francesco

    2016-01-01

    This work examines whether the macroeconomic divide between northern and southern Italy is also present at the level of higher education. The analysis confirms that the research performance in the sciences of the professors in the south is on average less than that of the professors in the north...

  18. Obesity-related eating behaviors are associated with higher food energy density and higher consumption of sugary and alcoholic beverages: a cross-sectional study.

    Directory of Open Access Journals (Sweden)

    Maritza Muñoz-Pareja

    Full Text Available Obesity-related eating behaviors (OREB) are associated with higher energy intake. Total energy intake can be decomposed into the following constituents: food portion size, food energy density, the number of eating occasions, and the energy intake from energy-rich beverages. To our knowledge this is the first study to examine the association between the OREB and these energy components. Data were taken from a cross-sectional study conducted in 2008-2010 among 11,546 individuals representative of the Spanish population aged ≥ 18 years. Information was obtained on the following 8 self-reported OREB: not planning how much to eat before sitting down, eating precooked/canned food or snacks bought at vending machines or at fast-food restaurants, not choosing low-energy foods, not removing visible fat from meat or skin from chicken, and eating while watching TV. Usual diet was assessed with a validated diet history. Analyses were performed with linear regression with adjustment for main confounders. Compared to individuals with ≤ 1 OREB, those with ≥ 5 OREB had a higher food energy density (β 0.10; 95% CI 0.08, 0.12 kcal/g/day; p-trend<0.001) and a higher consumption of sugary drinks (β 7; 95% CI -7, 20 ml/day; p-trend<0.05) and of alcoholic beverages (β 24; 95% CI 10, 38 ml/day; p-trend<0.001). Specifically, a higher number of OREB was associated with higher intake of dairy products and red meat, and with lower consumption of fresh fruit, oily fish and white meat. No association was found between the number of OREB and food portion size or the number of eating occasions. OREB were associated with higher food energy density and higher consumption of sugary and alcoholic beverages. Avoiding OREB may prove difficult because they are firmly socially rooted, but these results may nevertheless serve to palliate the undesirable effects of OREB by reducing the associated energy intake.

  19. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure…

  20. What Do s- and p-Wave Neutron Average Radiative Widths Reveal

    Energy Technology Data Exchange (ETDEWEB)

    Mughabghab, S.F.

    2010-04-30

    A first observation of two resonance-like structures at mass numbers 92 and 112 in the average capture widths of the p-wave neutron resonances relative to the s-wave component is interpreted in terms of a spin-orbit splitting of the 3p single-particle state into P{sub 3/2} and P{sub 1/2} components at the neutron separation energy. A third structure at about A = 124, which is not correlated with the 3p-wave neutron strength function, is possibly due to the Pygmy Dipole Resonance. Five significant results emerge from this investigation: (i) the strength of the spin-orbit potential of the optical model is determined as 5.7 {+-} 0.5 MeV, (ii) non-statistical effects dominate the p-wave neutron capture in the mass region A = 85 - 130, (iii) the background magnitude of the p-wave average capture width relative to that of the s-wave is determined as 0.50 {+-} 0.05, which is accounted for quantitatively in terms of the generalized Fermi liquid model of Mughabghab and Dunford, (iv) the p-wave resonances are partially decoupled from the giant dipole resonance (GDR), and (v) gamma-ray transitions, enhanced over the predictions of the GDR, are observed in the {sup 90}Zr - {sup 98}Mo and Sn-Ba regions.

  1. An Exponentially Weighted Moving Average Control Chart for Bernoulli Data

    DEFF Research Database (Denmark)

    Spliid, Henrik

    2010-01-01

    We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services, provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1-p). A novel exponentially weighted moving average (EWMA) control chart intended for surveillance of the probability of failure, p, is described. The chart is based on counting the number of non-failures produced between failures in combination with a variance-stabilizing transformation. The distribution function of the transformation is given and its limit for small values of p is derived. Control of high yield processes is discussed and the chart is shown to perform very well in comparison with both the most common alternative EWMA chart and the CUSUM chart. The construction and the use of the proposed EWMA chart…
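
    As a rough illustration of the chart's mechanics, the sketch below tracks the run lengths between failures and smooths a transformed version of them exponentially. The record does not give the paper's actual variance-stabilizing transformation, so the square-root transform and the parameter values here are placeholder assumptions only.

```python
import numpy as np

def ewma_run_length_chart(outcomes, lam=0.1, p0=0.01):
    """outcomes: iterable of 0 (success) / 1 (failure).
    Returns the EWMA path of transformed counts of successes between failures."""
    counts, run = [], 0
    for y in outcomes:
        if y == 1:
            counts.append(run)   # successes since the previous failure
            run = 0
        else:
            run += 1
    z = np.sqrt(1.0 / p0 - 1.0)  # start at the transformed in-control mean (assumption)
    path = []
    for x in counts:
        z = lam * np.sqrt(x) + (1.0 - lam) * z   # placeholder transform g(x) = sqrt(x)
        path.append(z)
    return np.array(path)

# Demo on simulated high-yield data: p = 0.01, 10,000 units.
rng = np.random.default_rng(0)
stats = ewma_run_length_chart(rng.random(10_000) < 0.01)
print(stats[:5])
```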

  2. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  3. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  4. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  5. Robust Determinants of Growth in Asian Developing Economies: A Bayesian Panel Data Model Averaging Approach

    OpenAIRE

    LEON-GONZALEZ, Roberto; VINAYAGATHASAN, Thanabalasingam

    2013-01-01

    This paper investigates the determinants of growth in the Asian developing economies. We use Bayesian model averaging (BMA) in the context of a dynamic panel data growth regression to overcome the uncertainty over the choice of control variables. In addition, we use a Bayesian algorithm to analyze a large number of competing models. Among the explanatory variables, we include a non-linear function of inflation that allows for threshold effects. We use an unbalanced panel data set of 27 Asian ...

  6. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  7. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding of categorization practices in design through a case study of the virtual community Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer in disregarding marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  8. Effects of donor fibroblast cell type and transferred cloned embryo number on the efficiency of pig cloning.

    Science.gov (United States)

    Li, Zicong; Shi, Junsong; Liu, Dewu; Zhou, Rong; Zeng, Haiyu; Zhou, Xiu; Mai, Ranbiao; Zeng, Shaofen; Luo, Lvhua; Yu, Wanxian; Zhang, Shouquan; Wu, Zhenfang

    2013-02-01

    Currently, cloning efficiency in pigs is very low. Donor cell type and the number of cloned embryos transferred to an individual surrogate are two major factors that affect the success rate of somatic cell nuclear transfer (SCNT) in pigs. This study aimed to compare the influence of different donor fibroblast cell types and different transferred embryo numbers on recipients' pregnancy rate and delivery rate, the average number of total clones born, clones born alive and clones born healthy per litter, and the birth rate of healthy clones (= total number of healthy cloned piglets born / total number of transferred cloned embryos). Three types of donor fibroblasts were tested in large-scale production of cloned pigs, including fetal fibroblasts (FFBs) from four genetically similar Western swine breeds of Pietrain (P), Duroc (D), Landrace (L), and Yorkshire (Y), which are referred to as P,D,L,Y-FFBs, adult fibroblasts (AFBs) from the same four breeds, which are designated P,D,L,Y-AFBs, and AFBs from a Chinese pig breed of Laiwu (LW), which are referred to as LW-AFBs. Within each donor fibroblast cell type group, five transferred cloned embryo number groups were tested. In each embryo number group, 150-199, 200-249, 250-299, 300-349, or 350-450 cloned embryos were transferred to each individual recipient sow. For the entire experiment, 92,005 cloned embryos were generated from nearly 115,000 matured oocytes and transferred to 328 recipients; in total, 488 cloned piglets were produced. The results showed that the mean number of clones born healthy per litter resulting from the transfer of embryos cloned from LW-AFBs (2.53 ± 0.34) was similar to that associated with P,D,L,Y-FFBs (2.72 ± 0.29), but significantly higher than that resulting from P,D,L,Y-AFBs (1.47 ± 0.18). Use of LW-AFBs as donor cells for SCNT resulted in a significantly higher pregnancy rate (72.00% vs. 59.30% and 48.11%) and delivery rate (60.00% vs. 45.93% and 35.85%) for cloned embryo recipients, and a

  9. Trends in International Trade in Higher Education: Implications and Options for Developing Countries. Education Working Paper Series, Number 6

    Science.gov (United States)

    Bashir, Sajitha

    2007-01-01

    This paper analyzes the trends, underlying factors and implications of the trade in higher education services. The term "trade in higher education" refers to the purchase of higher education services from a foreign country using domestic resources. The objectives of this paper are to provide policy makers in developing countries, World Bank staff,…

  10. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.

  11. Accounting for Institutional Variation in Expected Returns to Higher Education

    Science.gov (United States)

    Dorius, Shawn F.; Tandberg, David A.; Cram, Bridgette

    2017-01-01

    This study leverages human capital theory to identify the correlates of expected returns on investment in higher education at the level of institutions. We use estimates of average ROI in post-secondary education among more than 400 baccalaureate degree-conferring colleges and universities to understand the correlates of a relatively new…

  12. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  13. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  14. Serpent-COREDAX analysis of CANDU-6 time-average model

    Energy Technology Data Exchange (ETDEWEB)

    Motalab, M.A.; Cho, B.; Kim, W.; Cho, N.Z.; Kim, Y., E-mail: yongheekim@kaist.ac.kr [Korea Advanced Inst. of Science and Technology (KAIST), Dept. of Nuclear and Quantum Engineering Daejeon (Korea, Republic of)

    2015-07-01

    COREDAX-2 is a nuclear core analysis nodal code that has adopted the Analytic Function Expansion Nodal (AFEN) methodology developed in Korea. The AFEN method outperforms other conventional nodal methods in terms of accuracy. To evaluate the possibility of CANDU-type core analysis using COREDAX-2, a time-average analysis code system was developed. The two-group homogenized cross-sections were calculated using the Monte Carlo code Serpent2. A stand-alone time-average module was developed to determine the time-average burnup distribution in the core for a given fuel management strategy. The coupled Serpent-COREDAX-2 calculation converges to an equilibrium time-average model for the CANDU-6 core. (author)

  15. Decreased Number of Self-Paced Saccades in Post-Concussion Syndrome Associated with Higher Symptom Burden and Reduced White Matter Integrity.

    Science.gov (United States)

    Taghdiri, Foad; Chung, Jonathan; Irwin, Samantha; Multani, Namita; Tarazi, Apameh; Ebraheem, Ahmed; Khodadadi, Mozghan; Goswami, Ruma; Wennberg, Richard; Mikulis, David; Green, Robin; Davis, Karen; Tator, Charles; Eizenman, Moshe; Tartaglia, Maria Carmela

    2018-03-01

    The aim of this study was to examine the potential utility of self-paced saccadic eye movements as a marker of post-concussion syndrome (PCS) and for monitoring recovery from PCS. Fifty-nine persistently symptomatic participants with at least two concussions performed the self-paced saccade (SPS) task. We evaluated the relationships between the number of SPSs and 1) the number of self-reported concussion symptoms, and 2) the integrity of major white matter (WM) tracts (as measured by fractional anisotropy [FA] and mean diffusivity) that are directly or indirectly involved in saccadic eye movements and often affected by concussion. These tracts included the uncinate fasciculus (UF), the cingulum (Cg) and its three subcomponents (subgenual, retrosplenial, and parahippocampal), the superior longitudinal fasciculus, and the corpus callosum. Mediation analyses were carried out to examine whether specific WM tracts (left UF and left subgenual Cg) mediated the relationship between the number of SPSs and 1) the interval from last concussion or 2) the total number of self-reported symptoms. The number of SPSs was negatively correlated with the total number of self-reported symptoms (r = -0.419, p = 0.026). The number of SPSs was positively correlated with the FA of the left UF and left Cg (r = 0.421, p = 0.013 and r = 0.452, p = 0.008, respectively). The FA of the subgenual subcomponent of the left Cg partially mediated the relationship between the total number of symptoms and the number of SPSs, while the FA of the left UF mediated the relationship between the interval from last concussion and the number of SPSs. In conclusion, SPS testing, as a fast and objective assessment, may reflect symptom burden in patients with PCS. In addition, since the number of SPSs is associated with the integrity of some WM tracts, it may be useful as a diagnostic biomarker in patients with PCS.

  16. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R/sub E/ in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B/sub z/ as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B-bar/sub z/ = 3.γ) than near midnight (B-bar/sub z/ = 1.8γ). The tail field projected onto the solar magnetospheric equatorial plane deviates from the x axis, due to flaring and solar wind aberration, by an angle α = -0.9 Y/sub SM/ - 2.7, where Y/sub SM/ is in earth radii and α is in degrees. After removing these effects, the B/sub y/ component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B/sub y/ component of the tail field is on average 0.5γ greater than during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between the northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  17. Coping Strategies Applied to Comprehend Multistep Arithmetic Word Problems by Students with Above-Average Numeracy Skills and Below-Average Reading Skills

    Science.gov (United States)

    Nortvedt, Guri A.

    2011-01-01

    This article discusses how 13-year-old students with above-average numeracy skills and below-average reading skills cope with comprehending word problems. Compared to other students who are proficient in numeracy and are skilled readers, these students are more disadvantaged when solving single-step and multistep arithmetic word problems. The…

  18. On the motion of non-spherical particles at high Reynolds number

    DEFF Research Database (Denmark)

    Mandø, Matthias; Rosendahl, Lasse

    2010-01-01

    This paper contains a critical review of available methodology for dealing with the motion of non-spherical particles at higher Reynolds numbers in the Eulerian-Lagrangian methodology for dispersed flow. First, an account of the various attempts to classify the various shapes and the efforts… motion it is necessary to account for the non-coincidence between the center of pressure and the center of gravity, which is a direct consequence of the inertial pressure forces associated with particles at high Reynolds number flow. Extensions for non-spherical particles at higher Reynolds numbers are far…

  19. Vibrations in force-and-mass disordered alloys in the average local-information transfer approximation. Application to Al-Ag

    International Nuclear Information System (INIS)

    Czachor, A.

    1979-01-01

    The configuration-averaged displacement-displacement Green's function, derived in the locator-based approximation accounting for average transfer of information on local coupling and mass, has been applied to study the force- and mass-disorder-induced modifications of phonon dispersion relations in substitutional alloys of cubic structures. In this approach the translational invariance condition is obeyed whereas damping is neglected. The force disorder was found to lead to additional splitting of phonon curves besides that due to mass disorder, even in the small impurity-concentration case; at larger concentrations the number of splits (frequency gaps) should be even greater. The use of a quasi-locator in the Green's function derivation allows one to partly reconcile the present results with those of the average t-matrix approximation. The experimentally observed splitting in the [100]T phonon dispersion curve for Al-Ag alloys has been interpreted in terms of the above theory and of a quasi-mass of heavy impurity atoms. (Author)

  20. Trends in size classified particle number concentration in subtropical Brisbane, Australia, based on a 5 year study

    Science.gov (United States)

    Mejía, J. F.; Wraith, D.; Mengersen, K.; Morawska, L.

    Particle number size distribution data in the range from 0.015 to 0.630 μm were collected over a 5-year period in the central business district (CBD) of Brisbane, Australia. Particle size distribution was summarised by total number concentration and number median diameter (NMD) as well as the number concentration of the 0.015-0.030 (N15-30), 0.030-0.050 (N30-50), 0.050-0.100 (N50-100), 0.100-0.300 (N100-300) and 0.300-0.630 (N300-630) μm size classes. Morning (6:00-10:00) and afternoon (16:00-19:00) measurements, the former representing fresh traffic emissions (based on the local meteorological conditions) and the latter well-mixed emissions from the CBD, during weekdays were extracted and the respective monthly mean values were estimated for time series analysis. For all size fractions, average morning concentrations were about 1.5 times higher than afternoon concentrations, whereas NMD did not vary between the morning and afternoon. The trend and seasonal components were extracted through weighted linear regression models, using the monthly variance as weights. Only the morning measurements exhibited significant trends. During this time of the day, total particle number increased by 105.7% and the increase was greater for larger particles, resulting in a shift in NMD by 7.9%. Although no seasonal component was detected, the evidence against it remained weak due to the limitations of the database.
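
    A sketch of the trend-extraction step: weighted linear regression of monthly means on time. The abstract says monthly variances were used as weights; the conventional reading (assumed here) is inverse-variance weighting, so that noisier months count less. Data below are synthetic placeholders.

```python
import numpy as np

def weighted_linear_trend(t, y, var):
    """Fit y = b0 + b1*t by weighted least squares with weights 1/var.
    Returns (b0, b1); b1 is the trend per unit of t."""
    w = 1.0 / np.asarray(var, dtype=float)
    X = np.column_stack([np.ones_like(t, dtype=float), t])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Hypothetical monthly series: 60 months of measurements with varying precision.
rng = np.random.default_rng(2)
t = np.arange(60, dtype=float)
var = rng.uniform(0.5, 2.0, 60)
y = 10.0 + 0.05 * t + rng.normal(0.0, np.sqrt(var))
print(weighted_linear_trend(t, y, var))   # b1 should come out near 0.05
```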

  1. Gustatory Dysfunction and Decreased Number of Fungiform Taste Buds in Patients With Chronic Otitis Media With Cholesteatoma.

    Science.gov (United States)

    Saito, Takehisa; Ito, Tetsufumi; Ito, Yumi; Yamada, Takechiyo; Okamoto, Masayuki; Manabe, Yasuhiro

    2016-09-01

    To compare the number of fungiform taste buds among patients with chronic otitis media (COM), those with pars flaccida retraction type cholesteatoma, and those with pars tensa retraction type cholesteatoma, in combination with gustatory function. Thirty-seven patients with COM, 22 patients with pars flaccida retraction type cholesteatoma, and 17 patients with pars tensa retraction type cholesteatoma were included. An average of 10 fungiform papillae (FP) per patient in the midlateral region of the tongue were observed by confocal laser scanning microscopy in vivo, and the average number of taste buds was counted. Just before the observation of FP, electrogustometry (EGM) was performed to evaluate gustatory function. A significant decrease in the average number of fungiform taste buds and a significant elevation of EGM thresholds were found in the pars tensa retraction type cholesteatoma group but not in the COM or pars flaccida type cholesteatoma group. It is suggested that some neurotoxic cytokines produced by cholesteatoma tissue might affect the morphology of the chorda tympani nerve (CTN), resulting in a decreased number of fungiform taste buds and an elevated EGM threshold in patients with pars tensa retraction type cholesteatoma. © The Author(s) 2016.

  2. Less Physician Practice Competition Is Associated With Higher Prices Paid For Common Procedures.

    Science.gov (United States)

    Austin, Daniel R; Baker, Laurence C

    2015-10-01

    Concentration among physician groups has been steadily increasing, which may affect prices for physician services. We assessed the relationship in 2010 between physician competition and prices paid by private preferred provider organizations for fifteen common, high-cost procedures to understand whether higher concentration of physician practices and accompanying increased market power were associated with higher prices for services. Using county-level measures of the concentration of physician practices and county average prices, and statistically controlling for a range of other regional characteristics, we found that physician practice concentration and prices were significantly associated for twelve of the fifteen procedures we studied. For these procedures, counties with the highest average physician concentrations had prices 8-26 percent higher than prices in the lowest counties. We concluded that physician competition is frequently associated with prices. Policies that would influence physician practice organization should take this into consideration. Project HOPE—The People-to-People Health Foundation, Inc.

  3. Theorem Proving In Higher Order Logics

    Science.gov (United States)

    Carreno, Victor A. (Editor); Munoz, Cesar A.; Tahar, Sofiene

    2002-01-01

    The TPHOLs International Conference serves as a venue for the presentation of work in theorem proving in higher-order logics and related areas in deduction, formal specification, software and hardware verification, and other applications. Fourteen papers were submitted to Track B (Work in Progress), which are included in this volume. Authors of Track B papers gave short introductory talks that were followed by an open poster session. The FCM 2002 Workshop aimed to bring together researchers working on the formalisation of continuous mathematics in theorem proving systems with those needing such libraries for their applications. Many of the major higher-order theorem proving systems now have a formalisation of the real numbers and various levels of real analysis support. This work is of interest in a number of application areas, such as formal methods development for hardware and software applications and computer-supported mathematics. The FCM 2002 Workshop consisted of three papers, presented by their authors at the workshop venue, and one invited talk.

  4. Estimating reproduction numbers for adults and children from case data

    Science.gov (United States)

    Glass, K.; Mercer, G. N.; Nishiura, H.; McBryde, E. S.; Becker, N. G.

    2011-01-01

    We present a method for estimating reproduction numbers for adults and children from daily onset data, using pandemic influenza A(H1N1) data as a case study. We investigate the impact of different underlying transmission assumptions on our estimates, and identify that asymmetric reproduction matrices are often appropriate. Under-reporting of cases can bias estimates of the reproduction numbers if reporting rates are not equal across the two age groups. However, we demonstrate that the estimate of the higher reproduction number is robust to disproportionate data-thinning. Applying the method to 2009 pandemic influenza H1N1 data from Japan, we demonstrate that the reproduction number for children was considerably higher than that of adults, and that our estimates are insensitive to our choice of reproduction matrix. PMID:21345858
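
    A compact way to see how group-specific reproduction numbers combine is the next-generation matrix: entry K[i, j] is the mean number of new cases in group i generated by one case in group j, and the overall reproduction number is the dominant eigenvalue of K. The sketch below uses purely illustrative numbers, not the paper's estimates.

```python
import numpy as np

# Hypothetical 2x2 (asymmetric) reproduction matrix: index 0 = children, 1 = adults.
K = np.array([[1.8, 0.4],
              [0.5, 0.9]])

overall_R = max(abs(np.linalg.eigvals(K)))   # dominant eigenvalue of the matrix
per_group = K.sum(axis=0)                    # total cases generated per infected child/adult
print(f"overall R = {overall_R:.2f}, "
      f"children = {per_group[0]:.2f}, adults = {per_group[1]:.2f}")
```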

  5. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), um-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  6. History of the theory of numbers

    CERN Document Server

    Dickson, Leonard Eugene

    2005-01-01

    The three-volume series History of the Theory of Numbers is the work of the distinguished mathematician Leonard Eugene Dickson, who taught at the University of Chicago for four decades and is celebrated for his many contributions to number theory and group theory. This second volume in the series, which is suitable for upper-level undergraduates and graduate students, is devoted to the subject of diophantine analysis. It can be read independently of the preceding volume, which explores divisibility and primality, and volume III, which examines quadratic and higher forms.Featured topics include

  7. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).

  8. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
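
    The recursion itself is short: each new periodogram is blended into the running PSD estimate with weight alpha, so the effective time constant is roughly one over alpha segments. A minimal sketch follows; the segment length and alpha are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def exp_avg_psd(x, seg_len=256, alpha=0.1):
    """Exponentially averaged periodogram: S_k = (1 - alpha) * S_{k-1} + alpha * P_k."""
    S = None
    for start in range(0, len(x) - seg_len + 1, seg_len):
        seg = x[start:start + seg_len]
        P = np.abs(np.fft.rfft(seg)) ** 2 / seg_len    # raw periodogram of this segment
        S = P if S is None else (1.0 - alpha) * S + alpha * P
    return S

rng = np.random.default_rng(3)
print(exp_avg_psd(rng.normal(size=65536))[:4])   # white noise: roughly flat PSD estimate
```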

  9. Development of Portable Automatic Number Plate Recognition System on Android Mobile Phone

    Science.gov (United States)

    Mutholib, Abdul; Gunawan, Teddy S.; Chebil, Jalel; Kartiwi, Mira

    2013-12-01

    The Automatic Number Plate Recognition (ANPR) system plays a central role in various access control and security applications, such as tracking of stolen vehicles, traffic violations (speed traps) and parking management systems. In this paper, a portable ANPR implemented on an Android mobile phone is presented. The main challenges in a mobile application include higher coding efficiency, reduced computational complexity, and improved flexibility. Significant efforts are being made to find a suitable and adaptive algorithm for implementation of ANPR on a mobile phone. An ANPR system for a mobile phone needs to be optimized due to its limited CPU and memory resources, while offering the ability to geo-tag captured images using GPS coordinates and to access an online database to store the vehicle's information. In this paper, the design of a portable ANPR on an Android mobile phone is described as follows. First, a graphical user interface (GUI) for capturing images using the built-in camera was developed to acquire Malaysian vehicle plate numbers. Second, preprocessing of the raw image was done using contrast enhancement. Next, character segmentation using fixed pitch and optical character recognition (OCR) using a neural network were utilized to extract texts and numbers. Both character segmentation and OCR used the Tesseract library from Google Inc. The proposed portable ANPR algorithm was implemented and simulated using the Android SDK on a computer. Based on the experimental results, the proposed system can effectively recognize license plate numbers at a 90.86% rate. The required processing time to recognize a license plate is only 2 seconds on average. This result is considered good in comparison with results obtained from previous systems processed on a desktop PC, which ranged from 91.59% to 98% recognition rate and 0.284 to 1.5 seconds recognition time.
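
    The described pipeline (capture, contrast enhancement, segmentation, Tesseract OCR) maps naturally onto desktop prototyping tools. Below is a minimal sketch using OpenCV and the pytesseract binding rather than the Android Tesseract library; the Otsu thresholding step and the OCR settings are assumptions, not the paper's exact configuration.

```python
import cv2
import pytesseract

def read_plate(image_path):
    """Rough ANPR prototype: grayscale -> contrast enhancement -> binarize -> OCR."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.equalizeHist(img)                                   # contrast enhancement
    _, img = cv2.threshold(img, 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # binarization (assumed step)
    cfg = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(img, config=cfg).strip()

# Usage (hypothetical file): print(read_plate("plate.jpg"))
```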

  10. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion… method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
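
    The statistical core of such a fit is generalized least squares with the full covariance matrix of the averaged data points, rather than a diagonal one. The sketch below shows only that standard estimator for a linear-in-parameters model; the WLS-ICE method itself, including its covariance estimation, lives in the authors' software, and all numbers here are illustrative assumptions.

```python
import numpy as np

def gls_fit(X, y, C):
    """Generalized least squares: beta = (X^T C^-1 X)^-1 X^T C^-1 y.
    X: (n, p) design matrix; y: (n,) averaged data; C: (n, n) covariance of y."""
    Ci = np.linalg.inv(C)
    return np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)

# Hypothetical example: fit <x^2>(t) = 2*D*t, i.e. a single parameter D.
t = np.linspace(0.1, 1.0, 10)
X = (2.0 * t)[:, None]
C = 0.01 * 0.5 ** np.abs(np.subtract.outer(np.arange(10), np.arange(10)))  # correlated errors
y = 2.0 * 0.3 * t                                                          # D = 0.3, noiseless
print(gls_fit(X, y, C))   # -> approximately [0.3]
```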

  11. Determination of averaged axisymmetric flow surfaces according to results obtained by numerical simulation of flow in turbomachinery

    Directory of Open Access Journals (Sweden)

    Bogdanović-Jovanović Jasmina B.

    2012-01-01

    Full Text Available Given the increasing worldwide need for energy saving, the design process of turbomachinery, as an essential part of thermal and hydro-energy systems, moves in the direction of increasing efficiency. Therefore, the optimization of turbomachinery design strongly affects the energy efficiency of the entire system. In the blade profiling process, the model of axisymmetric fluid flow is commonly used in technical practice, even though this model suits only profile cascades with an infinite number of infinitely thin blades. The actual flow in turbomachinery profile cascades is not axisymmetric, and it can be fictively reduced to an axisymmetric flow by averaging the flow parameters in the blade passages according to the circular coordinate. Using numerical simulations of flow in turbomachinery runners, operating parameters can be preliminarily determined. Furthermore, using the numerically obtained flow parameters in the blade passages, averaged axisymmetric flow surfaces in blade profile cascades can also be determined. The method of determining averaged flow parameters and averaged meridian streamlines is presented in this paper, using the integral continuity equation for averaged flow parameters. With the results thus obtained, a designer can compare the averaged flow surfaces with axisymmetric flow surfaces, as well as the specific work of elementary stages used in the blade design procedure. Numerical simulations of flow in an exemplary axial-flow pump, used as a part of a thermal power plant cooling system, were performed using Ansys CFX. [Projekat Ministarstva nauke Republike Srbije, br. TR33040: Revitalization of existing and designing new micro and mini hydropower plants (from 100 kW to 1000 kW) in the territory of South and Southeast Serbia]
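
    A bare-bones version of the circumferential averaging step: given flow quantities sampled on an (r, z, θ) grid from a CFD solution, average over θ, optionally weighted (for example by the local mass flux, in the spirit of the integral continuity equation). This is only an illustration of the averaging itself, not the paper's full method; all grid sizes and field names are hypothetical.

```python
import numpy as np

def circumferential_average(q, weight=None, axis=-1):
    """Average flow quantity q over the circular coordinate theta (last axis by default).
    If weight (e.g. local axial mass flux rho*c_z) is given, a weighted average is used."""
    if weight is None:
        return q.mean(axis=axis)
    return (q * weight).sum(axis=axis) / weight.sum(axis=axis)

# Hypothetical grid: 40 radial x 80 axial x 64 circumferential samples.
rng = np.random.default_rng(4)
c_m = rng.normal(5.0, 0.5, (40, 80, 64))     # meridional velocity samples
flux = rng.uniform(0.9, 1.1, (40, 80, 64))   # weighting field, e.g. rho * c_z
print(circumferential_average(c_m, flux).shape)   # -> (40, 80)
```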

  12. Construction of a voxel model from CT images with density derived from CT numbers

    International Nuclear Information System (INIS)

    Cheng Mengyun; Zeng Qin; Cao Ruifen; Li Gui; Zheng Huaqing; Huang Shanqing; Song Gang; Wu Yican

    2010-01-01

    The voxel models representing human anatomy have been developed to calculate dose distribution in the human body, while density is the most important physical property of a voxel model. Traditionally, when creating the Monte Carlo input files, the average tissue parameters recommended in ICRP reports were used to assign each voxel in the existing voxel models. However, as each tissue consists of many voxels that differ in their densities, the method of assigning average tissue parameters does not take account of this voxel-to-voxel discrepancy and cannot represent human anatomy faithfully. To represent human anatomy more faithfully, a method was implemented to assign each voxel a density derived from its CT number. In order to compare with the traditional method, we constructed two models from the same cadaver specimen data set. A CT-based pelvic voxel model, called the Pelvis-CT model, was constructed, the densities of which were derived from the CT numbers. A color photograph-based pelvic voxel model, called the Pelvis-Photo model, was also constructed, the densities of which were taken from ICRP Publications. The CT images and color photographs were obtained from the same female cadaver specimen. The Pelvis-CT and Pelvis-Photo models were ported into the Monte Carlo code MCNP to calculate the conversion coefficients from kerma free-in-air to absorbed dose for external monoenergetic photon beams with energies of 0.1, 1 and 10 MeV under anterior-posterior (AP) geometries. The results were compared with those given in ICRP Publication 74. Differences of up to 50% were observed between the conversion coefficients of the Pelvis-CT and Pelvis-Photo models; moreover, the discrepancies decreased for photon beams with higher energies. The overall trend of the conversion coefficients of the Pelvis-CT model agreed well with that of the ICRP Publication 74 data. (author)
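
    The per-voxel assignment reduces to a calibration curve from CT number (Hounsfield units) to mass density, applied voxel by voxel. The sketch below uses a piecewise-linear calibration with made-up anchor points; real calibrations are scanner-specific, and the values here are assumptions only.

```python
import numpy as np

# Assumed calibration anchors: (HU, density in g/cm^3) for air, water, soft tissue, bone.
HU_PTS  = np.array([-1000.0,    0.0,   60.0, 1500.0])
RHO_PTS = np.array([ 0.00121,   1.0,   1.06,   1.85])

def voxel_density(ct_numbers):
    """Map every voxel's CT number to its own density by linear interpolation."""
    return np.interp(ct_numbers, HU_PTS, RHO_PTS)

# Example: a tiny 2x2x2 "image"; each voxel gets an individual density.
ct = np.array([[[-1000, 0], [30, 50]], [[400, 900], [1200, 1500]]], dtype=float)
print(voxel_density(ct))
```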

  13. Construction of a voxel model from CT images with density derived from CT numbers

    International Nuclear Information System (INIS)

    Cheng Mengyun; Zeng Qin; Cao Ruifen; Li Gui; Zheng Huaqing; Huang Shanqing; Song Gang; Wu Yican

    2011-01-01

    The voxel models representing human anatomy have been developed to calculate dose distribution in the human body, while the density and elemental composition are the most important physical properties of a voxel model. Usually, when creating the Monte Carlo input files, the average tissue densities recommended in ICRP Publications were used to assign each voxel in the existing voxel models. As each tissue consists of many voxels with different densities, the conventional method of average tissue densities failed to take account of the voxel-to-voxel discrepancy, and therefore could not represent human anatomy faithfully. To represent human anatomy more faithfully, a method was implemented to assign each voxel a density derived from its CT number. In order to compare with the traditional method, we constructed two models from the same cadaver specimen dataset. A CT-based pelvic voxel model called the Pelvis-CT model was constructed, the densities of which were derived from the CT numbers. A color photograph-based pelvic voxel model called the Pelvis-Photo model was also constructed, the densities of which were taken from ICRP Publications. The CT images and the color photographs were obtained from the same female cadaver specimen. The Pelvis-CT and Pelvis-Photo models were both ported into the Monte Carlo code MCNP to calculate the conversion coefficients from kerma free-in-air to absorbed dose for external monoenergetic photon beams with energies of 0.1, 1 and 10 MeV under anterior-posterior (AP) geometry. The results were compared with those given in ICRP Publication 74. Differences of up to 50% were observed between the conversion coefficients of the Pelvis-CT and Pelvis-Photo models; moreover, the discrepancies decreased for photon beams with higher energies. The overall trend of the conversion coefficients of the Pelvis-CT model agreed well with that of the ICRP Publication 74 data. (author)

  14. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
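
    The normalization described above is compact to write down. Under Beer-Lambert attenuation, the water-induced optical thickness of a pixel is proportional to its water content, so dividing by the thickness at full saturation yields a relative saturation per pixel. The sketch below assumes dry, partially saturated, and fully saturated transmission images are available and ignores the beam-hardening and scattering corrections the authors apply; all symbols are assumptions.

```python
import numpy as np

def relative_saturation(I, I_dry, I_sat):
    """Pixelwise relative saturation from neutron transmission images (2-D arrays)."""
    tau = -np.log(I / I_dry)           # water optical thickness at this matric potential
    tau_sat = -np.log(I_sat / I_dry)   # water optical thickness at full saturation
    return np.clip(tau / tau_sat, 0.0, 1.0)

# Averaging the resulting pixel values then gives one point on the average
# retention curve for each imposed matric potential.
```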

  15. Measurement of average radon gas concentration at workplaces

    International Nuclear Information System (INIS)

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

    In this paper, results of measurements of average radon gas concentration at workplaces (schools, kindergartens, and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailing pond). Consequently, workplaces where considerable seasonal changes of radon concentration are expected should be measured for 12 months. If that is not possible, the chosen six-month period should include summer as well as winter months. The average radon concentration during working hours can differ considerably from the whole-time average in cases of frequent opening of doors and windows or use of artificial ventilation. (authors)

  16. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  17. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  18. Futures for Higher Education: Analysing Trends. Higher Education: Meeting the Challenges of the 21st Century

    Science.gov (United States)

    Universities UK, 2012

    2012-01-01

    Higher education in the United Kingdom is undergoing a period of significant change. This is being driven by a number of factors: political, cultural, economic, and technological. The trends are global in their scope, and far reaching in their impact. They affect every aspect of university provision, the environment in which universities operate,…

  19. The effects of undergraduate nursing student-faculty interaction outside the classroom on college grade point average.

    Science.gov (United States)

    Al-Hussami, Mahmoud; Saleh, Mohammad Y N; Hayajneh, Ferial; Abdalkader, Raghed Hussein; Mahadeen, Alia I

    2011-09-01

    The effects of student-faculty interactions in higher education have received considerable empirical attention. However, no empirical study has examined the relation between student-faculty interaction outside the classroom and college grade point average. This study aimed to identify the effect of nursing student-faculty interaction outside the classroom on students' semester college grade point average at a public university in Jordan. The research was a cross-sectional study of the effect of student-faculty interaction outside the classroom on the semester college grade point averages of participating juniors and seniors. Total student interaction was highly significant (t = 16.2, df = 271, P ≤ 0.001) in relation to academic scores, distinguishing students who had academic scores ≥ 70 from those who had scores < 70. However, gender differences between students and other variables were not significant in affecting either students' academic scores or students' interaction. This study provides some evidence that student-faculty interactions outside the classroom are significantly associated with students' academic achievement. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Measurement of the average number of prompt neutrons emitted per fission of 235U relative to 252Cf for the energy region 500 eV to 10 MeV

    International Nuclear Information System (INIS)

    Gwin, R.; Spencer, R.R.; Ingle, R.W.; Todd, J.H.; Weaver, H.

    1980-01-01

    The average number of prompt neutrons emitted per fission, ν/sub p/-bar(E), was measured for 235U relative to ν/sub p/-bar for the spontaneous fission of 252Cf over the neutron energy range from 500 eV to 10 MeV. The samples of 235U and 252Cf were contained in fission chambers located in the center of a large liquid scintillator, which detected the fission neutrons. The present values of ν/sub p/-bar(E) for 235U are about 0.8% larger than those measured by Boldeman. In earlier work with the present system, it was noted that Boldeman's value of ν/sub p/-bar(E) for thermal-energy neutrons was about 0.8% lower than that obtained at ORELA. It is suggested that the thickness of the fission foil used in Boldeman's experiment may cause some of the discrepancy between his values and the present values of ν/sub p/-bar(E). For the energy region up to 700 keV, the present values of ν/sub p/-bar(E) for 235U agree, within the uncertainty, with those given in ENDF/B-V. Above 1 MeV the present results for ν/sub p/-bar(E) range about the ENDF/B-V values with differences up to 1.3%. 6 figures, 1 table