The average crossing number of equilateral random polygons
International Nuclear Information System (INIS)
Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A
2003-01-01
In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n-n_0)ln(n-n_0) + b(n-n_0) + c, where a, b and c are constants depending on K and n_0 is the minimal number of segments required to form K. The ⟨ACN⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the ⟨ACN⟩ profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length n_e(K) at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type K') does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨R_g⟩.
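The crossing-number statistics above can be explored numerically. Below is a minimal Monte Carlo sketch (not the authors' analytic method): it generates an equilateral random walk and estimates its average crossing number by counting segment crossings in randomly oriented planar projections. All function names are illustrative.

```python
import numpy as np

def equilateral_walk(n, rng):
    """Equilateral random walk: n unit-length steps with uniformly
    random directions in R^3; returns the n+1 vertices."""
    steps = rng.normal(size=(n, 3))
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)
    return np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

def crossings_2d(pts):
    """Count crossing pairs among the consecutive segments of a
    planar polyline (adjacent segments are skipped)."""
    def ccw(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    n = len(pts) - 1
    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            a, b, c, d = pts[i], pts[i+1], pts[j], pts[j+1]
            if ccw(a, b, c) * ccw(a, b, d) < 0 and ccw(c, d, a) * ccw(c, d, b) < 0:
                count += 1
    return count

def average_crossing_number(walk, n_dirs, rng):
    """Monte Carlo ACN: average the number of crossings seen in
    projections onto planes with random normal directions."""
    total = 0
    for _ in range(n_dirs):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        u = np.cross(d, [1.0, 0.0, 0.0])
        if np.linalg.norm(u) < 1e-8:          # d nearly parallel to x-axis
            u = np.cross(d, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(d, u)                    # (u, v) spans the projection plane
        total += crossings_2d(walk @ np.column_stack([u, v]))
    return total / n_dirs
```

Averaging the ⟨ACN⟩ of many such walks at several lengths n would reproduce the (3/16)n ln n growth in the statistical sense.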
Directory of Open Access Journals (Sweden)
Saveliev Peter
2005-01-01
Full Text Available Suppose X, Y are manifolds, f, g: X → Y are maps. The well-known coincidence problem studies the coincidence set C = {x : f(x) = g(x)}. The number m = dim X − dim Y is called the codimension of the problem. More general is the preimage problem. For a map f: X → Z and a submanifold Y of Z, it studies the preimage set C = {x : f(x) ∈ Y}, and the codimension is m = dim X + dim Y − dim Z. In case of codimension 0, the classical Nielsen number N(f, Y) is a lower estimate of the number of points in C changing under homotopies of f, and for an arbitrary codimension, of the number of components of C. We extend this theory to take into account other topological characteristics of C. The goal is to find a "lower estimate" of the bordism group Ω_p(C) of C. The answer is the Nielsen group S_p(f, Y) defined as follows. In the classical definition, the Nielsen equivalence of points of C based on paths is replaced with an equivalence of singular submanifolds of C based on bordisms. We let S_p'(f, Y) = Ω_p(C)/∼_N; then the Nielsen group of order p is the part of S_p'(f, Y) preserved under homotopies of f. The Nielsen number N_p(f, Y) of order p is the rank of this group (then N(f, Y) = N_0(f, Y)). These numbers are new obstructions to removability of coincidences and preimages. Some examples and computations are provided.
Self-similarity of higher-order moving averages
Arianos, Sergio; Carbone, Anna; Türk, Christian
2011-10-01
In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
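The idea behind detrending moving averages can be sketched in a few lines, under the simplifying assumption of the plain (order-zero) moving average: the standard deviation of the residuals of a series around its moving average of window n scales as n^H, so H can be read off a log-log fit. Function names are illustrative, not from the paper.

```python
import numpy as np

def dma_hurst(x, windows):
    """Detrending-moving-average estimate of the Hurst exponent H:
    the std of the residuals of x around its moving average of
    window n scales as n**H; H is the slope of the log-log fit."""
    sigmas = []
    for n in windows:
        ma = np.convolve(x, np.ones(n) / n, mode="valid")
        resid = x[n - 1:] - ma        # align average ending at t with x[t]
        sigmas.append(resid.std())
    slope = np.polyfit(np.log(windows), np.log(sigmas), 1)[0]
    return slope

# Ordinary Brownian motion has H = 1/2, so the estimate should be near 0.5
rng = np.random.default_rng(0)
brownian = np.cumsum(rng.normal(size=8192))
H = dma_hurst(brownian, [4, 8, 16, 32, 64, 128])
```

Replacing the flat moving average with a higher-order polynomial fit over the same window is what gives the trend estimates at shorter time scales described in the abstract.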
Directory of Open Access Journals (Sweden)
Peter Saveliev
2005-04-01
Full Text Available Suppose X, Y are manifolds, f, g: X → Y are maps. The well-known coincidence problem studies the coincidence set C = {x : f(x) = g(x)}. The number m = dim X − dim Y is called the codimension of the problem. More general is the preimage problem. For a map f: X → Z and a submanifold Y of Z, it studies the preimage set C = {x : f(x) ∈ Y}, and the codimension is m = dim X + dim Y − dim Z. In case of codimension 0, the classical Nielsen number N(f, Y) is a lower estimate of the number of points in C changing under homotopies of f, and for an arbitrary codimension, of the number of components of C. We extend this theory to take into account other topological characteristics of C. The goal is to find a "lower estimate" of the bordism group Ω_p(C) of C. The answer is the Nielsen group S_p(f, Y) defined as follows. In the classical definition, the Nielsen equivalence of points of C based on paths is replaced with an equivalence of singular submanifolds of C based on bordisms. We let S_p'(f, Y) = Ω_p(C)/∼_N; then the Nielsen group of order p is the part of S_p'(f, Y) preserved under homotopies of f. The Nielsen number N_p(f, Y) of order p is the rank of this group (then N(f, Y) = N_0(f, Y)). These numbers are new obstructions to removability of coincidences and preimages. Some examples and computations are provided.
String fields, higher spins and number theory
Polyakov, Dimitri
2018-01-01
The book aims to analyze and explore deep and profound relations between string field theory, higher spin gauge theories and holography the disciplines that have been on the cutting edge of theoretical high energy physics and other fields. These intriguing relations and connections involve some profound ideas in number theory, which appear to be part of a unifying language to describe these connections.
Main factors affecting average number of teats in pigs
Directory of Open Access Journals (Sweden)
Emil Krupa
2016-09-01
Full Text Available The influence of several factors (breed, year and season of farrowing, herd, parity order, sire of litter, total number of piglets born (TNB), number of piglets born alive (NBA), number of weaned piglets (NW), and linear and quadratic regression) on the number of teats was analysed. Teats were counted for all piglets in the litter within ten days after birth, and the trait was expressed as the arithmetic mean per litter (the sum of the teat counts of all piglets in the litter divided by the number of piglets in that litter), separately for the first litter (ANT1) and for the second and subsequent litters (ANT2+). The coefficient of determination was 0.46 for ANT1 and 0.33 for ANT2+. A statistically significant influence (P<0.001) on ANT1 and ANT2+ was found for the effects of year and season of farrowing, herd, parity order (ANT2+ only) and sire of litter. An effect of breed was found only on ANT2+ (P<0.001). The remaining factors had negligible or no impact on the traits. The results obtained from the available data will serve as a relevant starting point in developing the model for genetic evaluation of these traits.
Average gluon and quark jet multiplicities at higher orders
Energy Technology Data Exchange (ETDEWEB)
Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics
2013-05-15
We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates, by its goodness of fit, how our results solve a longstanding problem of QCD. We show that neither the statistical nor the theoretical uncertainties exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q² terms by the renormalization group, in excellent agreement with the present world average.
Decreases in average bacterial community rRNA operon copy number during succession.
Nemergut, Diana R; Knelman, Joseph E; Ferrenberg, Scott; Bilinski, Teresa; Melbourne, Brett; Jiang, Lin; Violle, Cyrille; Darcy, John L; Prest, Tiffany; Schmidt, Steven K; Townsend, Alan R
2016-05-01
Trait-based studies can help clarify the mechanisms driving patterns of microbial community assembly and coexistence. Here, we use a trait-based approach to explore the importance of rRNA operon copy number in microbial succession, building on prior evidence that organisms with higher copy numbers respond more rapidly to nutrient inputs. We set flasks of heterotrophic media into the environment and examined bacterial community assembly at seven time points. Communities were arrayed along a geographic gradient to introduce stochasticity via dispersal processes and were analyzed using 16S rRNA gene pyrosequencing, and rRNA operon copy number was modeled using ancestral trait reconstruction. We found that taxonomic composition was similar between communities at the beginning of the experiment and then diverged through time; likewise, phylogenetic clustering within communities decreased over time. The average rRNA operon copy number decreased over the experiment, and variance in rRNA operon copy number was lowest both early and late in succession. We then analyzed bacterial community data from other soil and sediment primary and secondary successional sequences from three markedly different ecosystem types. Our results demonstrate that decreases in average copy number are a consistent feature of communities across various drivers of ecological succession. Importantly, our work supports the scaling of the copy number trait over multiple levels of biological organization, ranging from cells to populations and communities, with implications for both microbial ecology and evolution.
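The community-level trait discussed above is an abundance-weighted mean. A toy sketch with hypothetical abundances and per-taxon rRNA operon copy numbers (values invented for illustration) shows how the community average copy number can decline through succession:

```python
def mean_copy_number(abundances, copy_numbers):
    """Abundance-weighted community average of the rRNA operon
    copy-number trait."""
    total = sum(abundances)
    return sum(a * c for a, c in zip(abundances, copy_numbers)) / total

# Hypothetical community: three taxa with 7, 5 and 2 rRNA operon copies.
early = mean_copy_number([50, 30, 20], [7, 5, 2])   # fast responders dominate
late = mean_copy_number([10, 30, 60], [7, 5, 2])    # low-copy taxa take over
# early > late: the community average copy number decreases with succession
```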
On averaging the Kubo-Hall conductivity of magnetic Bloch bands leading to Chern numbers
International Nuclear Information System (INIS)
Riess, J.
1997-01-01
The authors re-examine the topological approach to the integer quantum Hall effect in its original form, where an average of the Kubo-Hall conductivity of a magnetic Bloch band has been considered. For the precise definition of this average it is crucial to make a sharp distinction between the discrete Bloch wave numbers k_1, k_2 and the two continuous integration parameters α_1, α_2. The average over the parameter domain of α_1, α_2 is an average over different eigenstates belonging to different values of k_1, k_2. They show how this can be transformed into a single integral over the continuous magnetic Brillouin zone 0 ≤ α_j < 2π/n_j, j = 1, 2 (n_j = number of unit cells in the j-direction), keeping k_1, k_2 fixed. This average prescription for the Hall conductivity of a magnetic Bloch band is exactly the same as the one used for a many-body system in the presence of disorder
The average number of partons per clan in rapidity intervals in parton showers
Energy Technology Data Exchange (ETDEWEB)
Giovannini, A. [Turin Univ. (Italy). Ist. di Fisica Teorica; Lupia, S. [Max-Planck-Institut fuer Physik, Muenchen (Germany). Werner-Heisenberg-Institut; Ugoccioni, R. [Lund Univ. (Sweden). Dept. of Theoretical Physics
1996-04-01
The dependence of the average number of partons per clan on virtuality and rapidity variables is analytically predicted in the framework of the Generalized Simplified Parton Shower model, based on the idea that clans are genuine elementary subprocesses. The obtained results are found to be qualitatively consistent with experimental trends. This study extends previous results on the behavior of the average number of clans in virtuality and rapidity, and shows how important physical quantities can be calculated analytically in a model based on essentials of QCD that allows local violations of the energy-momentum conservation law while still requiring its global validity. (orig.)
The average number of partons per clan in rapidity intervals in parton showers
International Nuclear Information System (INIS)
Giovannini, A.; Lupia, S.; Ugoccioni, R.
1996-01-01
The dependence of the average number of partons per clan on virtuality and rapidity variables is analytically predicted in the framework of the Generalized Simplified Parton Shower model, based on the idea that clans are genuine elementary subprocesses. The obtained results are found to be qualitatively consistent with experimental trends. This study extends previous results on the behavior of the average number of clans in virtuality and rapidity, and shows how important physical quantities can be calculated analytically in a model based on essentials of QCD that allows local violations of the energy-momentum conservation law while still requiring its global validity. (orig.)
Relationships between average depth and number of misclassifications for decision trees
Chikalov, Igor
2014-02-14
This paper presents a new tool for the study of relationships between the total path length or the average depth and the number of misclassifications for decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from the UCI ML Repository [9] and datasets representing Boolean functions with 10 variables.
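The two quantities related in this line of work can be illustrated on a toy example. The sketch below (hypothetical hand-built trees and data, not the paper's tool) computes the average depth and the number of misclassifications of decision trees for the Boolean AND function:

```python
def classify(tree, row):
    """Follow a tree of {'attr': i, 'yes': ..., 'no': ...} nodes;
    leaves are plain labels. Returns (label, depth of the leaf)."""
    depth = 0
    node = tree
    while isinstance(node, dict):
        node = node["yes"] if row[node["attr"]] else node["no"]
        depth += 1
    return node, depth

def average_depth_and_errors(tree, rows, labels):
    """Average depth (total path length / #rows) and number of
    misclassifications of a tree on a dataset."""
    total_depth = errors = 0
    for row, label in zip(rows, labels):
        pred, d = classify(tree, row)
        total_depth += d
        errors += int(pred != label)
    return total_depth / len(rows), errors

# Boolean AND of two variables on all four inputs.
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
shallow = {"attr": 0, "yes": 1, "no": 0}                             # predicts x0
exact = {"attr": 0, "yes": {"attr": 1, "yes": 1, "no": 0}, "no": 0}  # exact AND
d1, e1 = average_depth_and_errors(shallow, rows, labels)  # (1.0, 1)
d2, e2 = average_depth_and_errors(exact, rows, labels)    # (1.5, 0)
```

The pair of results shows the trade-off the paper studies: the shallower tree has smaller average depth but more misclassifications.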
Relationships Between Average Depth and Number of Nodes for Decision Trees
Chikalov, Igor
2013-07-24
This paper presents a new tool for the study of relationships between the total path length or the average depth and the number of nodes of decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from the UCI ML Repository [1]. © Springer-Verlag Berlin Heidelberg 2014.
Relationships Between Average Depth and Number of Nodes for Decision Trees
Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail
2013-01-01
This paper presents a new tool for the study of relationships between the total path length or the average depth and the number of nodes of decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from UCI ML
Relationships between average depth and number of misclassifications for decision trees
Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail
2014-01-01
This paper presents a new tool for the study of relationships between the total path length or the average depth and the number of misclassifications for decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from the UCI ML Repository [9] and datasets representing Boolean functions with 10 variables.
International Nuclear Information System (INIS)
Zhao Xinyu; Wang Xiaoli; Lin Hai; Wang Zhiqiang
2008-01-01
On the basis of new electronegativity values, the electronic polarizability and optical basicity of lanthanide oxides are calculated from the concept of average electronegativity given by Asokamani and Manjula. The estimated values are in close agreement with our previous conclusions. In particular, we obtain new data on the electronic polarizability and optical basicity of lanthanide sesquioxides for different coordination numbers (6-12). The present investigation suggests that both electronic polarizability and optical basicity increase gradually with increasing coordination number. We also observe another double-peak effect: the electronic polarizability and optical basicity of trivalent lanthanide oxides show a gradual decrease and then an abrupt increase at europia and ytterbia. Furthermore, close correlations are found among average electronegativity, optical basicity, electronic polarizability and coordination number in this paper.
The average inter-crossing number of equilateral random walks and polygons
International Nuclear Information System (INIS)
Diao, Y; Dobay, A; Stasiak, A
2005-01-01
In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ⟨ICN⟩ between two equilateral random walks of the same length n is approximately linear in n, and we were able to determine the prefactor of the linear term, which is a = 3 ln 2/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ⟨ICN⟩ is also linear in n, but the prefactor of the linear term is different from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance ρ apart and ρ is small compared to n. We propose a fitting model that captures the theoretical asymptotic behaviour of the mean average ⟨ICN⟩ for large values of ρ. Our simulation results show that the model in fact works very well for the entire range of ρ. We also study the mean ⟨ICN⟩ between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ⟨ICN⟩ between the two random walks (polygons) still approaches infinity as the length of the other random walk (polygon) approaches infinity. The data provided by our simulations match our theoretical predictions very well.
DEFF Research Database (Denmark)
Gutin, Gregory; Van Iersel, Leo; Mnich, Matthias
2010-01-01
A ternary Permutation-CSP is specified by a subset Π of the symmetric group S3. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering α of V that maximizes the number of triples whose rearrangement (under α) follows a permutation in Π. We prove that all ternary Permutation-CSPs parameterized above average have kernels with quadratic numbers of variables.
Interpolation of property-values between electron numbers is inconsistent with ensemble averaging
Energy Technology Data Exchange (ETDEWEB)
Miranda-Quintana, Ramón Alain [Laboratory of Computational and Theoretical Chemistry, Faculty of Chemistry, University of Havana, Havana (Cuba); Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada); Ayers, Paul W. [Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada)
2016-06-28
In this work we explore the physical foundations of models that study the variation of the ground-state energy with respect to the number of electrons (E vs. N models) in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer numbers of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem’s surroundings in determining its properties.
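The non-differentiability argument can be seen numerically. The sketch below (with hypothetical energy values in arbitrary units) builds the zero-temperature grand-canonical ensemble energy, which interpolates linearly between integer electron numbers, and checks that its one-sided derivatives at an integer N disagree:

```python
import math

def ensemble_energy(N, E_ints):
    """Zero-temperature grand-canonical ensemble energy: the linear
    interpolation between the integer-electron energies E_ints[k]."""
    k = min(int(math.floor(N)), len(E_ints) - 2)
    w = N - k
    return (1 - w) * E_ints[k] + w * E_ints[k + 1]

# Hypothetical convex energies for N = 0, 1, 2, 3 electrons.
E = [0.0, -5.0, -8.0, -9.0]
h = 1e-6
left = (ensemble_energy(2.0, E) - ensemble_energy(2.0 - h, E)) / h
right = (ensemble_energy(2.0 + h, E) - ensemble_energy(2.0, E)) / h
# left -> E[2]-E[1] = -3 while right -> E[3]-E[2] = -1: the derivative
# jumps at integer N, so no smooth interpolation can equal this ensemble.
```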
The true bladder dose: on average thrice higher than the ICRU reference
International Nuclear Information System (INIS)
Barillot, I.; Horiot, J.C.; Maingon, P.; Bone-Lepinoy, M.C.; D'Hombres, A.; Comte, J.; Delignette, A.; Feutray, S.; Vaillant, D.
1996-01-01
The aim of this study is to compare the ICRU dose to doses at the bladder base located from ultrasonography measurements. Since 1990, the dose delivered to the bladder during utero-vaginal brachytherapy was systematically calculated at 3 or 4 points representative of the bladder base determined with ultrasonography. The ICRU Reference Dose (IRD) from films, the Maximum Dose (Dmax), the Mean Dose (Dmean) representative of the dose received by a large area of bladder mucosa, the Reference Dose Rate (RDR) and the Mean Dose Rate (MDR) were recorded. Material: from 1990 to 1994, 198 measurements were performed in 152 patients. 98 patients were treated for cervix carcinomas, 54 for endometrial carcinomas. Methods: bladder complications were classified using the French-Italian Syllabus. The influence of doses and dose rates on complications was tested using the non-parametric t test. Results: on average the IRD is 21 Gy ± 12 Gy, Dmax is 51 Gy ± 21 Gy, and Dmean is 40 Gy ± 16 Gy. On average Dmax is thrice higher than the IRD and Dmean twice higher than the IRD. The same results are obtained for cervix and endometrium. Comparisons of dose rates were also performed: the MDR is on average twice higher than the RDR (RDR 48 cGy/h vs MDR 88 cGy/h). The five observed complications consist of incontinence only (3 G1, 1 G2, 1 G3). They are statistically correlated only with the RDR, p = 0.01 (46 cGy/h in patients without complications vs 74 cGy/h in patients with complications). However, the full responsibility of RT remains doubtful and should be shared with surgery in all cases. In summary: the bladder mucosa seems to tolerate much higher doses than previously recorded without an increased risk of severe sequelae. However, this finding is probably explained by our efforts to spare most of the bladder mucosa by (1) customised external irradiation therapy (4 fields, full bladder) and (2) reproduction of physiologic bladder filling during brachytherapy by intermittent clamping of the Foley catheter
Higher arithmetic an algorithmic introduction to number theory
Edwards, Harold M
2008-01-01
Although number theorists have sometimes shunned and even disparaged computation in the past, today's applications of number theory to cryptography and computer security demand vast arithmetical computations. These demands have shifted the focus of studies in number theory and have changed attitudes toward computation itself. The important new applications have attracted a great many students to number theory, but the best reason for studying the subject remains what it was when Gauss published his classic Disquisitiones Arithmeticae in 1801: Number theory is the equal of Euclidean geometry--some would say it is superior to Euclidean geometry--as a model of pure, logical, deductive thinking. An arithmetical computation, after all, is the purest form of deductive argument. Higher Arithmetic explains number theory in a way that gives deductive reasoning, including algorithms and computations, the central role. Hands-on experience with the application of algorithms to computational examples enables students to m...
On the time-averaging of ultrafine particle number size spectra in vehicular plumes
Directory of Open Access Journals (Sweden)
X. H. Yao
2006-01-01
Full Text Available Ultrafine vehicular particle (<100 nm) number size distributions presented in the literature are mostly averages of long scan-time (~30 s or more) spectra, mainly due to the non-availability of commercial instruments that can measure particle distributions in the <10 nm to 100 nm range faster than 30 s, even though individual researchers have built faster (1–2.5 s) scanning instruments. With the introduction of the Engine Exhaust Particle Sizer (EEPS) in 2004, high time-resolution (1 full 32-channel spectrum per second) particle size distribution data became possible, allowing atmospheric researchers to study the characteristics of ultrafine vehicular particles in rapidly and perhaps randomly varying high-concentration environments such as roadside, on-road and tunnel settings. In this study, particle size distributions in these environments were frequently found to vary on a time scale as short as one second. This poses the question of the generality of using averages of long scan-time spectra for dynamic and/or mechanistic studies in such environments. One-second EEPS data taken at roadside, on roads and in tunnels by a mobile platform are time-averaged to yield 5, 10, 30 and 120 s distributions to answer this question.
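The time-averaging step described in the last sentence amounts to block-averaging a (seconds × channels) matrix. A minimal sketch (array shapes and counts are assumptions for illustration, not the study's actual pipeline):

```python
import numpy as np

def time_average(spectra, window):
    """Average consecutive 1-s spectra into `window`-second blocks.
    `spectra` has shape (seconds, channels); trailing seconds that
    do not fill a whole block are dropped."""
    t, ch = spectra.shape
    nblocks = t // window
    return spectra[:nblocks * window].reshape(nblocks, window, ch).mean(axis=1)

# Two minutes of 1-s, 32-channel spectra (hypothetical particle counts)
one_sec = np.random.default_rng(0).poisson(100.0, size=(120, 32)).astype(float)
five_sec = time_average(one_sec, 5)      # 24 five-second spectra
two_min = time_average(one_sec, 120)     # a single 120-s average
```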
Control of underactuated driftless systems using higher-order averaging theory
Vela, Patricio A.; Burdick, Joel W.
2003-01-01
This paper applies a recently developed "generalized averaging theory" to construct stabilizing feedback control laws for underactuated driftless systems. These controls exponentialy stabilize in the average; the actual system may orbit around the average. Conditions for which the orbit collapses to the averaged trajectory are given. An example validates the theory, demonstrating its utility.
The growth of the mean average crossing number of equilateral polygons in confinement
International Nuclear Information System (INIS)
Arsuaga, J; Borgo, B; Scharein, R; Diao, Y
2009-01-01
The physical and biological properties of collapsed long polymer chains, as well as of highly condensed biopolymers (such as DNA in all organisms), are known to be determined, at least in part, by their topological and geometrical properties. With the purpose of characterizing the topological properties of such condensed systems, equilateral random polygons restricted to confined volumes are often used. However, very few analytical results are known. In this paper, we investigate the effect of volume confinement on the mean average crossing number (ACN) of equilateral random polygons. The mean ACN of knots and links under confinement provides a simple alternative measurement for the topological complexity of knots and links in the statistical sense. For an equilateral random polygon of n segments without any volume confinement constraint, it is known that its mean ACN, ⟨ACN⟩, is of the order (3/16) n log n + O(n). Here we model the confining volume as a simple sphere of radius R. We provide an analytical argument which shows that the ⟨ACN⟩ of an equilateral random polygon of n segments under extreme confinement grows as O(n²). We propose to model the growth of ⟨ACN⟩ as a(R)n² + b(R)n ln(n) under a less-extreme confinement condition, where a(R) and b(R) are functions of R, with R being the radius of the confining sphere. Computer simulations performed show a fairly good fit using this model.
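Given simulated ⟨ACN⟩ data, the proposed growth model a(R)n² + b(R)n ln(n) is linear in its coefficients and can be fitted by ordinary least squares at a fixed R. A minimal sketch on synthetic data (the coefficients 0.03 and 0.2 are arbitrary illustrative values, not results from the paper):

```python
import numpy as np

def fit_acn_growth(n, acn):
    """Least-squares fit of <ACN> ~ a*n**2 + b*n*ln(n) at a fixed
    confinement radius R (a and b play the roles of a(R), b(R))."""
    A = np.column_stack([n**2, n * np.log(n)])
    (a, b), *_ = np.linalg.lstsq(A, acn, rcond=None)
    return a, b

# Synthetic data with known coefficients, to check the recovery
n = np.arange(10.0, 200.0, 10.0)
acn = 0.03 * n**2 + 0.2 * n * np.log(n)
a, b = fit_acn_growth(n, acn)
```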
The association between higher education and approximate number system acuity
Lindskog, Marcus; Winman, Anders; Juslin, Peter
2014-01-01
Humans are equipped with an approximate number system (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS-precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (First year) or late (Third year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity. PMID:24904478
The association between higher education and approximate number system acuity.
Lindskog, Marcus; Winman, Anders; Juslin, Peter
2014-01-01
Humans are equipped with an approximate number system (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS-precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (First year) or late (Third year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity.
The Association Between Higher Education and Approximate Number System Acuity
Directory of Open Access Journals (Sweden)
Marcus eLindskog
2014-05-01
Full Text Available Humans are equipped with an Approximate Number System (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS-precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (1st year) or late (3rd year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity.
The average number of critical rank-one approximations to a tensor
Draisma, J.; Horobet, E.
2014-01-01
Motivated by the many potential applications of low-rank multi-way tensor approximations, we set out to count the rank-one tensors that are critical points of the distance function to a general tensor v. As this count depends on v, we average over v drawn from a Gaussian distribution, and find
Average formation number n̄_OH of colloid-type indium hydroxide
International Nuclear Information System (INIS)
Stefanowicz, T.; Szent-Kirallyine Gajda, J.
1983-01-01
Indium perchlorate in perchloric acid solution was titrated with sodium hydroxide solution to various pH values. Indium hydroxide colloid was removed by ultracentrifugation and the supernatant solution was titrated with base to neutral pH. The two-stage titration data were used to calculate the formation number of the indium hydroxide colloid, which was found to equal n̄_OH = 2.8. (author)
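The formation number follows from a simple mass balance: the OH⁻ bound in the colloid is the base added minus the base remaining free in the supernatant, divided by the total indium. A sketch with hypothetical titration figures chosen to reproduce n̄_OH = 2.8 (the mole amounts are invented for illustration):

```python
def formation_number(base_added_mol, base_free_mol, indium_mol):
    """Average number of OH- bound per In(3+): base consumed by the
    colloid divided by the total indium."""
    return (base_added_mol - base_free_mol) / indium_mol

# Hypothetical two-stage titration figures (mol)
n_oh = formation_number(base_added_mol=3.0e-3,   # NaOH added in stage one
                        base_free_mol=0.2e-3,    # OH- found free in supernatant
                        indium_mol=1.0e-3)       # total In(3+)
```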
Higher P-Wave Dispersion in Migraine Patients with Higher Number of Attacks
Directory of Open Access Journals (Sweden)
A. Koçer
2012-01-01
Full Text Available Objective and Aim. An imbalance of the sympathetic system may explain many of the clinical manifestations of migraine. We aimed to evaluate P-waves as a measure of sympathetic system function in migraine patients and healthy controls. Materials and Methods. Thirty-five patients with episodic migraine (migraine complaints for 5 years or more, BMI < 30 kg/m²) and 30 controls were included in our study. We measured P-wave durations (minimum, maximum, and dispersion) from 12-lead ECG recordings during pain-free periods. ECGs were transferred to a personal computer via a scanner and then magnified ×400 with Adobe Photoshop software. Results. P-wave durations were found to be similar between migraine patients and controls. Although P-wave dispersion (PWD) was similar, its mean value was higher in migraine subjects. PWD was positively correlated with Pmax (P<0.01). The number of attacks per month and male gender were the factors related to PWD (P<0.01). Conclusions. Many previous studies suggested that increased sympathetic activity may cause an increase in PWD. We found that the PWD of migraine patients was higher than that of controls, and that PWD was related to the number of attacks per month and male gender. Further studies are needed to explain the chronic effects of migraine.
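P-wave dispersion itself is a simple statistic: the maximum minus the minimum P-wave duration across the ECG leads. A sketch with hypothetical per-lead durations (the numbers are invented for illustration):

```python
def p_wave_dispersion(durations_ms):
    """P-wave dispersion (PWD): maximum minus minimum P-wave
    duration measured across the ECG leads."""
    return max(durations_ms) - min(durations_ms)

# Hypothetical 12-lead P-wave durations (ms) for one subject
leads_ms = [96, 102, 88, 110, 94, 100, 98, 92, 105, 90, 99, 101]
pwd = p_wave_dispersion(leads_ms)   # 110 - 88 = 22 ms
```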
Gao, Peng
2018-04-01
This work concerns the averaging principle for a higher-order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a multiscale system of stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions we prove that there is a limit process in which the fast varying process is averaged out, and that the limit process, which takes the form of the higher-order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher-order nonlinear Schrödinger equation with a modified coefficient.
Total number albedo and average cosine of the polar angle of low-energy photons reflected from water
Directory of Open Access Journals (Sweden)
Marković Srpko
2007-01-01
The total number albedo and average cosine of the polar angle for water and initial photon energies from 20 keV to 100 keV are presented in this paper. A water shield in the form of a thick, homogeneous plate and perpendicular incidence of the monoenergetic photon beam are assumed. The results were obtained through Monte Carlo simulations of photon reflection by means of the MCNP computer code. Calculated values for the total number albedo were compared with data previously published and good agreement was confirmed. The dependence of the average cosine of the polar angle on energy is studied in detail. It has been found that the total average cosine of the polar angle has values in the narrow interval of 0.66-0.67, approximately corresponding to a reflection angle of 48°, and that it does not depend on the initial photon energy.
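A number albedo of this kind can be estimated with a toy Monte Carlo. The sketch below assumes idealized isotropic scattering and a constant collision survival probability (illustrative assumptions, not the MCNP photon physics of the paper); it tallies the two quantities the abstract reports, the fraction of photons reflected and the average cosine of their exit polar angle.

```python
import numpy as np

rng = np.random.default_rng(7)

def reflect_stats(n_photons, c=0.5, thickness=20.0):
    """Toy Monte Carlo for photon reflection from a thick slab.

    Photons enter perpendicular to the surface; path lengths are sampled
    in mean-free-path units, scattering is isotropic, and each collision
    is survived with probability c. Returns the number albedo and the
    average cosine of the polar angle of reflected photons.
    """
    reflected_mu = []
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                      # depth and direction cosine
        while True:
            z += mu * -np.log(rng.random())   # exponential free flight
            if z <= 0.0:                      # escaped back: reflected
                reflected_mu.append(-mu)      # cosine w.r.t. outward normal
                break
            if z >= thickness or rng.random() > c:
                break                         # transmitted or absorbed
            mu = rng.uniform(-1.0, 1.0)       # isotropic re-emission
    albedo = len(reflected_mu) / n_photons
    return albedo, float(np.mean(reflected_mu))

albedo, avg_cos = reflect_stats(20000)
print(albedo, avg_cos)   # number albedo and mean polar-angle cosine
```

With realistic low-energy photon physics (Compton kinematics, energy-dependent cross sections) the paper finds the mean cosine confined to 0.66-0.67; this idealized sketch only illustrates the tallying procedure.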
Diversity Leadership in Higher Education. ASHE Higher Education Report, Volume 32, Number 3
Aguirre, Adalberto, Jr., Ed.; Martinez, Ruben O., Ed.
2006-01-01
This monograph examines and discusses the context for diversity leadership roles and practices in higher education by using research and theoretical and applied literatures from a variety of fields, including the social sciences, business, and higher education. Framing the discussion on leadership in this monograph is the perspective that American…
Mars, Matthew M.; Metcalf, Amy Scott
2009-01-01
This volume draws on a diverse set of literatures to represent the various ways in which entrepreneurship is understood in and applied to higher education. It provides a platform for debate for those considering applications of entrepreneurial principles to academic research and practices. Using academic entrepreneurship in the United States as…
International Nuclear Information System (INIS)
Roeske, John C; Stinchcomb, Thomas G
2006-01-01
Alpha-particle emitters are currently being considered for the treatment of micrometastatic disease. Based on in vitro studies, it has been speculated that only a few alpha-particle hits to the cell nucleus are lethal. However, such estimates do not consider the stochastic variations in the number of alpha-particle hits, in the energy deposited, or in the cell survival process itself. Using a tumour control probability (TCP) model for alpha-particle emitters, we derive an estimate of the average number of hits to the cell nucleus required to provide a high probability of eradicating a tumour cell population. In simulation studies, our results demonstrate that the average number of hits required to achieve a 90% TCP for 10⁴ clonogenic cells ranges from 18 to 108. Those cells that have large cell nuclei, high radiosensitivities and alpha-particle emissions occurring primarily in the nuclei tended to require more hits. As the clinical implementation of alpha-particle emitters is considered, this type of analysis may be useful in interpreting clinical results and in designing treatment strategies to achieve a favourable therapeutic outcome. (note)
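The Poisson-hit reasoning behind such estimates can be sketched with a simplified model (not the authors' TCP formalism; the per-hit lethality probability `p_lethal` is an assumed illustrative parameter). The two illustrative answers below bracket the same order of magnitude as the paper's 18-108 hit range.

```python
import math

def mean_hits_for_tcp(n_cells, tcp_target, p_lethal):
    """Mean Poisson-distributed hits per cell nucleus needed for a target TCP.

    Assumes each hit is independently lethal with probability p_lethal,
    so cell survival S = exp(-m * p_lethal) and TCP = (1 - S) ** n_cells.
    """
    s = 1.0 - tcp_target ** (1.0 / n_cells)   # required per-cell survival
    return -math.log(s) / p_lethal            # invert S = exp(-m * p)

# 90% TCP for 10^4 clonogenic cells, for two illustrative hit lethalities
print(round(mean_hits_for_tcp(1e4, 0.9, 1.0), 1))  # ≈ 11.5
print(round(mean_hits_for_tcp(1e4, 0.9, 0.1), 1))  # ≈ 114.6
```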
Bullard, Robert L.; Stanier, Charles O.; Ogren, John A.; Sheridan, Patrick J.
2013-05-01
The impact of aerosols on Earth's radiation balance and the associated climate forcing effects of aerosols represent significant uncertainties in assessment reports. The main source of ultrafine aerosols in the atmosphere is the nucleation and subsequent growth of gas phase aerosol precursors into liquid or solid phase particles. Long term records of aerosol number, nucleation event frequency, and vertical profiles of number concentration are rare. The data record from multiagency monitoring assets at Bondville, IL can contribute important information on long term and vertically resolved patterns. Although particle number size distribution data are only occasionally available at Bondville, highly time-resolved particle number concentration data have been measured for nearly twenty years by the NOAA ESRL Global Monitoring Division. Furthermore, vertically-resolved aerosol counts and other aerosol physical parameters are available from more than 300 flights of the NOAA Airborne Aerosol Observatory (AAO). These data sources are used to better understand the seasonal, diurnal, and vertical variation and trends in atmospheric aerosols. The highest peaks in condensation nuclei greater than 14 nm occur during the spring months (May, April) with slightly lower peaks during the fall months (September, October). The diurnal pattern of aerosol number has a midday peak and the timing of the peak has seasonal patterns (earlier during warm months and later during colder months). The seasonal and diurnal patterns of high particle number peaks correspond to seasons and times of day associated with low aerosol mass and surface area. Average vertical profiles show a nearly monotonic decrease with altitude in all months, and with peak magnitudes occurring in the spring and fall. Individual flight tracks show evidence of plumes (i.e., enhanced aerosol number is limited to a small altitude range, is not homogeneous horizontally, or both) as well as periods with enhanced particle number
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
International Nuclear Information System (INIS)
Majidi, Pasha; Pickup, Peter G.
2015-01-01
The energy efficiency of a direct ethanol fuel cell (DEFC) is directly proportional to the average number of electrons released per ethanol molecule (n-value) at the anode. An approach to measuring n-values in DEFC hardware is presented, validated for the oxidation of methanol, and shown to provide n-values for ethanol oxidation that are consistent with trends and estimates from full product analysis. The method is based on quantitative oxidation of fuel that crosses through the membrane to avoid the errors that would otherwise result from crossover. It will be useful for rapid screening of catalysts, and allows performances (polarization curves) and n-values to be determined simultaneously under well controlled transport conditions.
International Nuclear Information System (INIS)
Vasil'ev, Yu.A.; Barashkov, Yu.A.; Golovanov, O.A.; Sidorov, L.V.
1977-01-01
A method for determining the average number of secondary neutrons ν̄ produced in nuclear fission by neutrons of the ²⁵²Cf fission spectrum by means of a 4π time-of-flight spectrometer is described. Layers of ²⁵²Cf and of the isotope studied are placed close to each other; if the isotope layer density is 1 mg/cm², the probability of its fission is about 10⁻⁵ per spontaneous fission of californium. Fission fragments of ²⁵²Cf and of the isotope investigated have been detected by two surface-barrier counters with an efficiency close to 100%. The layers and the counters are situated in a measuring chamber placed in the center of the 4π time-of-flight spectrometer. The latter is utilized as a neutron counter because of its fast response. The method has been verified by carrying out measurements for ²³⁵U and ²³⁹Pu. A comparison of the experimental and calculated results shows that the suggested method can be applied to determine the number of secondary neutrons in the fission of isotopes that have not yet been investigated
Directory of Open Access Journals (Sweden)
Romeo B. Lee
2016-02-01
The study seeks to estimate gender variations in the direct effects of (a) number of organizational memberships, (b) number of social networking sites (SNS), and (c) grade-point average (GPA) on global social responsibility (GSR); and in the indirect effects of (a) and of (b) through (c) on GSR. Cross-sectional survey data were drawn from questionnaire interviews involving 3,173 Filipino university students. Based on a path model, the three factors were tested to determine their inter-relationships and their relationships with GSR. The direct and total effects of the exogenous factors on the dependent variable are statistically significantly robust. The indirect effects of organizational memberships on GSR through GPA are also statistically significant, but the indirect effects of SNS on GSR through GPA are marginal. Men and women significantly differ only in terms of the total effects of their organizational memberships on GSR. The lack of broad gender variations in the effects of SNS, organizational memberships and GPA on GSR may be linked to the relatively homogeneous characteristics and experiences of the university students interviewed. There is a need for more path models to better understand the predictors of GSR in local students.
International Nuclear Information System (INIS)
Gagarinskiy, A.A.; Saprykin, V.V.
2009-01-01
RRC 'Kurchatov Institute' has performed an extensive cycle of calculations intended to validate the opportunities for improving different fuel cycles for WWER-440 reactors. Work was performed to upgrade and improve WWER-440 fuel cycles on the basis of second-generation fuel assemblies, allowing core thermal power to be uprated to 107-108% of its nominal value (1375 MW) while maintaining the same fuel operation lifetime. Currently, intensive work is underway to develop fuel cycles based on second-generation assemblies with higher fuel capacity and average fuel enrichment per assembly increased up to 4.87% U-235. The fuel capacity of second-generation assemblies was increased by eliminating the central apertures of the fuel pellets and extending the pellet diameter, made possible by reduced fuel cladding thickness. This paper summarizes the results of work performed in the field of WWER-440 fuel cycle modernization, and presents as yet unexploited opportunities and prospects for further improvement of WWER-440 neutronic and operating parameters by means of additional optimization of the fuel assembly designs and fuel element arrangements applied. (Authors)
Loposer, J. Dan; Rumsey, Charles B.
1954-01-01
Measurements of average skin-friction coefficients have been made on six rocket-powered free-flight models by using the boundary-layer rake technique. The model configuration was the NACA RM-10, a 12.2-fineness-ratio parabolic body of revolution with a flat base. Measurements were made over a Mach number range from 1 to 3.7, a Reynolds number range from 40×10⁶ to 170×10⁶ based on length to the measurement station, and with aerodynamic heating conditions varying from strong skin heating to strong skin cooling. The measurements show the same trends over the test ranges as Van Driest's theory for a turbulent boundary layer on a flat plate. The measured values are approximately 7 percent higher than the values of the flat-plate theory. A comparison which takes into account the differences in Reynolds number is made between the present results and skin-friction measurements obtained on NACA RM-10 scale models in the Langley 4- by 4-foot supersonic pressure tunnel, the Lewis 8- by 6-foot supersonic tunnel, and the Langley 9-inch supersonic tunnel. Good agreement is shown at all but the lowest tunnel Reynolds number conditions. A simple empirical equation is developed which represents the measurements over the range of the tests.
LENUS (Irish Health Repository)
Dowling, Adam H
2011-06-01
The aim was to investigate the influence of number average molecular weight and concentration of the poly(acrylic) acid (PAA) liquid constituent of a GI restorative on the compressive fracture strength (σ) and modulus (E).
Wang, S.; van der Waart, K.; Somers, B.; de Goey, P.
2017-01-01
The optimal fuel for partially premixed combustion (PPC) is considered to be a gasoline boiling range fuel with an octane number around 70. Higher octane number fuels are considered problematic with low load and idle conditions. In previous studies mostly the intake air temperature did not exceed 30
Directory of Open Access Journals (Sweden)
SAJJAD ALIMEMON
2017-10-01
The multicarrier transmission technique has become prominent in high-speed wireless communication systems due to its frequency diversity, small inter-symbol interference in the multipath fading channel, simple equalizer structure, and high bandwidth efficiency. Nevertheless, in the time domain, a multicarrier transmission signal has a high PAPR (Peak-to-Average Power Ratio), which translates to low power-amplifier efficiency. To decrease the PAPR, a CCSLM (Convolutional Code Selective Mapping) scheme for multicarrier transmission with a high number of subcarriers is proposed in this paper. The proposed scheme is based on the SLM method and employs an interleaver and convolutional coding. Related works on PAPR reduction have considered either 128 or 256 subcarriers. However, the PAPR of a multicarrier transmission signal increases as the number of subcarriers increases. The proposed method achieves significant PAPR reduction for a higher number of subcarriers as well as better power-amplifier efficiency. Simulation outcomes validate the usefulness of the proposed scheme.
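Plain SLM (without the convolutional coding and interleaver that distinguish the proposed CCSLM) can be sketched as follows; the candidate count, seed, and QPSK mapping are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm_papr(symbols, n_candidates):
    """Basic selective mapping: multiply the subcarrier symbols by random
    phase sequences and keep the candidate with the lowest time-domain PAPR."""
    best = np.inf
    for _ in range(n_candidates):
        phases = np.exp(2j * np.pi * rng.random(symbols.size))
        best = min(best, papr_db(np.fft.ifft(symbols * phases)))
    return best

# QPSK symbols on 1024 subcarriers: more candidates -> lower selected PAPR
sym = (rng.choice([-1, 1], 1024) + 1j * rng.choice([-1, 1], 1024)) / np.sqrt(2)
print(papr_db(np.fft.ifft(sym)))   # original OFDM symbol
print(slm_papr(sym, 16))           # best of 16 phase-rotated candidates
```

The receiver must be told which phase sequence was selected; coding that side information is one role of the convolutional code in the paper's CCSLM variant.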
Dietrich, Ariana B; Hu, Xiaoqing; Rosenfeld, J Peter
2014-03-01
In the first of two experiments, we compared the accuracy of the P300 concealed information test protocol as a function of numbers of trials experienced by subjects and ERP averages analyzed by investigators. Contrary to Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), we found no evidence that 100 trial based averages are more accurate than 66 or 33 trial based averages (all numbers led to accuracies of 84-94 %). There was actually a trend favoring the lowest trial numbers. The second study compared numbers of irrelevant stimuli recalled and recognized in the 3-stimulus protocol versus the complex trial protocol (Rosenfeld in Memory detection: theory and application of the concealed information test, Cambridge University Press, New York, pp 63-89, 2011). Again, in contrast to expectations from Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), there were no differences between protocols, although there were more irrelevant stimuli recognized than recalled, and irrelevant 4-digit number group stimuli were neither recalled nor recognized as well as irrelevant city name stimuli. We therefore conclude that stimulus processing in the P300-based complex trial protocol-with no more than 33 sweep averages-is adequate to allow accurate detection of concealed information.
International Nuclear Information System (INIS)
Cecen, Songul; Demirer, R. Murat; Bayrak, Coskun
2009-01-01
We propose a nonlinear congruential pseudorandom number generator consisting of a summation of higher-order compositions of random logistic maps under certain congruential mappings. We vary both the bifurcation parameters of the logistic maps in the interval U=[3.5599, 4) and the coefficients of the polynomials in each higher-order composition of terms up to degree d. This helped us to obtain a perfectly decorrelated random generator which is infinite and aperiodic. It is observed from the simulation results that our new PRNG has good uniformity and power spectrum properties with very flat white-noise characteristics. The results are interesting and new, and may have applications in cryptography and in Monte Carlo simulations.
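A toy version of such a generator, in the spirit of the abstract but not the authors' exact construction (the map parameters, composition depth, and seed are illustrative):

```python
import numpy as np

def logistic_prng(seed, n, rs=(3.99, 3.97, 3.9), depth=3):
    """Toy pseudorandom generator: iterate several logistic maps
    x -> r*x*(1-x) with bifurcation parameters in [3.5599, 4), compose
    each map `depth` times per output, and reduce the summed state
    modulo 1 (the congruential step). Illustrative sketch only."""
    out = np.empty(n)
    x = np.full(len(rs), seed, dtype=float)
    for i in range(n):
        for j, r in enumerate(rs):
            for _ in range(depth):          # higher-order composition
                x[j] = r * x[j] * (1.0 - x[j])
        out[i] = np.sum(x) % 1.0            # congruential combination mod 1
    return out

u = logistic_prng(0.123456789, 10000)
print(u.mean())  # should be near 0.5 for an approximately uniform output
```

Summing several chaotic orbits before the modulo-1 reduction smooths the non-uniform invariant density of any single logistic map, which is one intuition behind combining compositions rather than using one map alone.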
Potential host number in cuckoo bees (Psithyrus subgen.) increases toward higher elevations
Directory of Open Access Journals (Sweden)
Jean-Nicolas Pradervand
2013-07-01
In severe and variable conditions, specialized resource selection strategies should be less frequent because extinction risks increase for species that depend on a single and unstable resource. Psithyrus (Bombus subgenus Psithyrus) are bumblebee parasites that usurp Bombus nests and display inter-specific variation in the number of hosts they parasitize. Using a phylogenetic comparative framework, we show that Psithyrus species at higher elevations display a higher number of host species compared with species restricted to lower elevations. Species inhabiting high elevations also cover a larger temperature range, suggesting that species able to occur in colder conditions may benefit from recruitment from populations occurring in warmer conditions. Our results provide evidence for an 'altitudinal niche breadth hypothesis' in parasitic species, showing a decrease in the parasites' specialization along the elevational gradient, and also suggesting that Rapoport's rule might apply to Psithyrus.
Regulation of chloroplast number and DNA synthesis in higher plants. Final report
Energy Technology Data Exchange (ETDEWEB)
Mullet, J.E.
1995-11-10
The long term objective of this research is to understand the process of chloroplast development and its coordination with leaf development in higher plants. This is important because the photosynthetic capacity of plants is directly related to leaf and chloroplast development. This research focuses on obtaining a detailed description of leaf development and the early steps in chloroplast development including activation of plastid DNA synthesis, changes in plastid DNA copy number, activation of chloroplast transcription and increases in plastid number per cell. The grant will also begin analysis of specific biochemical mechanisms by isolation of the plastid DNA polymerase, and identification of genetic mutants which are altered in their accumulation of plastid DNA and plastid number per cell.
Applicability of higher-order TVD method to low Mach number compressible flows
International Nuclear Information System (INIS)
Akamatsu, Mikio
1995-01-01
Steep gradients of fluid density are an influential factor in spurious oscillation of numerical solutions of low Mach number (M<<1) compressible flows. The total variation diminishing (TVD) scheme is a promising remedy to overcome this problem and obtain accurate solutions. TVD schemes for high-speed flows are, however, not compatible with the methods commonly used for low Mach number flows with pressure-based formulations. In the present study a higher-order TVD scheme is constructed on a modified form of each individual scalar equation of primitive variables. It is thus clarified that the concept of TVD is applicable to low Mach number flows within the framework of the existing numerical method. Results for test problems of the moving interface of two-component gases with a density ratio ≥ 4 demonstrate the accurate and robust (wiggle-free) profile of the scheme. (author)
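The TVD property for a scalar equation can be illustrated with a minmod-limited advection step. This is a generic MUSCL sketch, not the paper's pressure-based scheme; it shows the defining behavior, that the total variation of the solution never grows, so sharp density-like fronts stay wiggle-free.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero at extrema, smallest slope elsewhere."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_advect(u, c):
    """One step of MUSCL-type upwind advection (CFL number 0 < c <= 1)
    with minmod-limited linear reconstruction; periodic boundaries."""
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    # Upwind face values (flow to the right): left cell's reconstruction
    uface = u + 0.5 * (1.0 - c) * du
    return u - c * (uface - np.roll(uface, 1))

# Advect a square pulse: total variation must not grow (the TVD property)
u = np.where((np.arange(100) > 40) & (np.arange(100) < 60), 1.0, 0.0)
tv0 = np.abs(np.diff(u)).sum()
for _ in range(50):
    u = tvd_advect(u, 0.5)
print(np.abs(np.diff(u)).sum() <= tv0 + 1e-12)  # True
```

An unlimited second-order scheme (e.g., Lax-Wendroff) would overshoot at the pulse edges; the limiter reverts to first-order upwind locally at extrema, which is what the TVD constraint enforces.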
Directory of Open Access Journals (Sweden)
Suzana Herculano-Houzel
2015-06-01
There is a strong trend toward increased brain size in mammalian evolution, with larger brains composed of more and larger neurons than smaller brains across species within each mammalian order. Does the evolution of increased numbers of brain neurons, and thus larger brain size, occur simply through the selection of individuals with more and larger neurons, and thus larger brains, within a population? That is, do individuals with larger brains also have more, and larger, neurons than individuals with smaller brains, such that allometric relationships across species are simply an extension of intraspecific scaling? Here we show that this is not the case across adult male mice of a similar age. Rather, increased numbers of neurons across individuals are accompanied by increased numbers of other cells and smaller average cell size of both types, in a trade-off that explains how increased brain mass does not necessarily ensue. Fundamental regulatory mechanisms thus must exist that tie numbers of neurons to numbers of other cells and to average cell size within individual brains. Finally, our results indicate that changes in brain size in evolution are not an extension of individual variation in numbers of neurons, but rather occur through step changes that must simultaneously increase numbers of neurons and cause cell size to increase, rather than decrease.
Sombun, S.; Steinheimer, J.; Herold, C.; Limphirat, A.; Yan, Y.; Bleicher, M.
2018-02-01
We study the dependence of the normalized moments of the net-proton multiplicity distributions on the definition of centrality in relativistic nuclear collisions at a beam energy of √s_NN = 7.7 GeV. Using the ultra-relativistic quantum molecular dynamics (UrQMD) model as event generator, we find that the centrality definition has a large effect on the extracted cumulant ratios. Furthermore, we find that the finite efficiency of the centrality determination introduces an additional systematic uncertainty. Finally, we quantitatively investigate the effects of event pile-up and other possible spurious effects which may change the measured proton number. We find that pile-up alone is not sufficient to describe the data, and show that a random double counting of events, adding significantly to the measured proton number, affects mainly the higher-order cumulants in most central collisions.
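The cumulant ratios in question can be computed from event-by-event samples. A sketch using a Skellam-distributed stand-in for net-proton numbers (not UrQMD output; the Poisson means are illustrative), for which the exact cumulants are known: C1 = C3 = μ_p − μ_p̄ and C2 = C4 = μ_p + μ_p̄.

```python
import numpy as np

def cumulants(x):
    """First four cumulants of a sample (C3, C4 via central moments)."""
    m = x.mean()
    d = x - m
    c2 = np.mean(d**2)
    c3 = np.mean(d**3)
    c4 = np.mean(d**4) - 3 * c2**2
    return m, c2, c3, c4

# Net-proton proxy: difference of two independent Poisson counts (Skellam)
rng = np.random.default_rng(1)
n_events = 2_000_000
net = rng.poisson(8.0, n_events) - rng.poisson(2.0, n_events)
c1, c2, c3, c4 = cumulants(net.astype(float))
print(c1, c2, c3 / c2, c4 / c2)  # near 6, 10, 0.6, 1.0 for Skellam(8, 2)
```

Deviations of the measured C3/C2 and C4/C2 from this Skellam baseline are what heavy-ion analyses interpret as possible critical-point signals, which is why spurious contributions such as pile-up matter most for the higher-order cumulants.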
Average number of neutrons in π⁻p, π⁻n, and π⁻¹²C interactions at 4 GeV/c
International Nuclear Information System (INIS)
Bekmirzaev, R.N.; Grishin, V.G.; Muminov, M.M.; Suvanov, I.; Trka, Z.; Trkova, J.
1984-01-01
The average numbers of secondary neutrons in π⁻p, π⁻n, and π⁻¹²C interactions at 4 GeV/c have been determined by investigating secondary neutral stars produced by neutrons in a propane bubble chamber. The following values were obtained for the charge-exchange coefficients: α(p→n) = 0.39 ± 0.04 and α(n→p) = 0.37 ± 0.08
Lovell, Cheryl D.; Sanchez, Maria Dolores Soler
This working paper analyzes higher education faculty characteristics in Mexico and the United States. The first section describes and compares Mexican and U.S. faculty characteristics and conditions, including total number of faculty, student-teacher ratios, full- versus part-time status, rank, tenure, average salaries, gender and ethnicity, and…
Saito, Takehisa; Ito, Tetsufumi; Ito, Yumi; Manabe, Yasuhiro; Sano, Kazuo
2017-02-01
The aim of this study was to elucidate the relationship between the gustatory function and the average number of taste buds per fungiform papilla (FP) in humans. Systemically healthy volunteers (n = 211), pre-operative patients with chronic otitis media (n = 79), and postoperative patients, with or without a chorda tympani nerve (CTN) severed during middle ear surgery (n = 63), were included. Confocal laser scanning microscopy was employed to observe fungiform taste buds because it allows many FP to be observed non-invasively in a short period of time. Taste buds in an average of 10 FP in the midlateral region of the tongue were counted. In total, 3,849 FP were observed in 353 subjects. The gustatory function was measured by electrogustometry (EGM). An inverse relationship was found between the gustatory function and the average number of fungiform taste buds per papilla. The healthy volunteers showed a lower EGM threshold (better gustatory function) and had more taste buds than the patients with otitis media, and the patients with otitis media showed a lower EGM threshold and had more taste buds than the postoperative patients, reflecting the severity of damage to the CTN. It was concluded that the confocal laser scanning microscope is a very useful tool for observing a large number of taste buds non-invasively. © 2017 Eur J Oral Sci.
Arko, Bryan M.
Design trends for the low-pressure turbine (LPT) section of modern gas turbine engines include increasing the loading per airfoil, which promises a decreased airfoil count resulting in reduced manufacturing and operating costs. Accurate Reynolds-averaged Navier-Stokes (RANS) predictions of separated boundary layers and transition to turbulence are needed, as the lack of an economical and reliable computational model has contributed to this high-lift concept not reaching its full potential. Presented here, applied for what is believed to be the first time to low-Re computations of high-lift linear cascades, is the Abe-Kondoh-Nagano (AKN) linear low-Re two-equation turbulence model, which utilizes the Kolmogorov velocity scale for improved predictions of separated boundary layers. A second turbulence model investigated is the Kato-Launder modified version of the AKN, denoted MPAKN, which damps turbulent production in highly strained regions of flow. Fully laminar solutions have also been calculated in an effort to elucidate the transitional quality of the turbulence-model solutions. Time-accurate simulations of three modern high-lift blades at a Reynolds number of 25,000 are compared to experimental data and higher-order computations in order to judge the accuracy of the results, where it is shown that RANS simulations with highly refined grids can produce both quantitatively and qualitatively similar separation behavior as found in experiments. In particular, the MPAKN model is shown to predict the correct boundary layer behavior for all three blades, and evidence of transition is found through inspection of the components of the Reynolds stress tensor, spectral analysis, and the turbulence production parameter. Unfortunately, definitively stating that transition is occurring becomes an uncertain task, as similar evidence of the transition process is found in the laminar predictions. This reveals that boundary layer reattachment may be a result of laminar
Earnest, Arul; Chen, Mark I; Ng, Donald; Sin, Leo Yee
2005-05-11
The main objective of this study is to apply autoregressive integrated moving average (ARIMA) models to make real-time predictions of the number of beds occupied in Tan Tock Seng Hospital during the recent SARS outbreak. This is a retrospective study. Hospital admission and occupancy data for isolation beds were collected from Tan Tock Seng Hospital for the period 14th March 2003 to 31st May 2003. The main outcome measure was the daily number of isolation beds occupied by SARS patients. Among the covariates considered were the daily number of people screened, the daily number of people admitted (including observation, suspect and probable cases) and days from the most recent significant event discovery. We utilized the following strategy for the analysis. Firstly, we split the outbreak data in two: data from 14th March to 21st April 2003 were used for model development. We used structural ARIMA models in an attempt to model the number of beds occupied. Estimation is via the maximum likelihood method using the Kalman filter. For the ARIMA model parameters, we considered the simplest parsimonious lowest-order model. We found that the ARIMA(1,0,3) model was able to describe and predict the number of beds occupied during the SARS outbreak well. The mean absolute percentage error (MAPE) for the training set and validation set were 5.7% and 8.6% respectively, which we found reasonable for use in the hospital setting. Furthermore, the model also provided three-day forecasts of the number of beds required. The total number of admissions and probable cases admitted on the previous day were also found to be independent prognostic factors of bed occupancy. ARIMA models provide useful tools for administrators and clinicians in planning for real-time bed capacity during an outbreak of an infectious disease such as SARS. The model could well be used in planning for bed capacity during outbreaks of other infectious diseases as well.
Garwood, Candice L; Clemente, Jennifer L; Ibe, George N; Kandula, Vijay A; Curtis, Kristy D; Whittaker, Peter
2010-06-15
Studies report that warfarin doses required to maintain therapeutic anticoagulation decrease with age; however, these studies almost exclusively enrolled patients of European ancestry. Consequently, universal application of dosing paradigms based on such evidence may be confounded because ethnicity also influences dose. Therefore, we determined if warfarin dose decreased with age in Americans of African ancestry, if older African and European ancestry patients required different doses, and if their daily dose frequency distributions differed. Our chart review examined 170 patients of African ancestry and 49 patients of European ancestry cared for in our anticoagulation clinic. We calculated the average weekly dose required for each stable, anticoagulated patient to maintain an international normalized ratio of 2.0 to 3.0, determined dose averages for age groups up to >80 years of age, and plotted dose as a function of age. The maintenance dose in patients of African ancestry decreased significantly with age. Patients of African ancestry required higher average weekly doses than patients of European ancestry: 33% higher in the 70- to 79-year-old group (38.2+/-1.9 vs. 28.8+/-1.7 mg; P=0.006) and 52% higher in the >80-year-old group (33.2+/-1.7 vs. 21.8+/-3.8 mg; P=0.011). Therefore, 43% of older patients of African ancestry required daily doses >5 mg and hence would have been under-dosed using current starting-dose guidelines. The dose frequency distribution was wider for older patients of African ancestry compared to those of European ancestry. These findings indicate that strategies for initiating warfarin therapy based on studies of patients of European ancestry could result in insufficient anticoagulation in older patients of African ancestry and thereby potentially increase their thromboembolism risk. Copyright 2010 Elsevier Inc. All rights reserved.
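The reported group differences can be checked arithmetically; a minimal sketch using only the mean weekly doses quoted in the abstract:

```python
# Reported mean weekly warfarin doses (mg) from the chart review.
doses = {
    "70-79": {"african": 38.2, "european": 28.8},
    ">80":   {"african": 33.2, "european": 21.8},
}

def percent_higher(a, b):
    """How much higher a is than b, in percent."""
    return 100 * (a - b) / b

for group, d in doses.items():
    print(group, round(percent_higher(d["african"], d["european"])))
```

The rounded values reproduce the 33% and 52% differences stated in the abstract.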
Directory of Open Access Journals (Sweden)
Suzana eHerculano-Houzel
2014-08-01
Full Text Available Enough species have now been subject to systematic quantitative analysis of the relationship between the morphology and cellular composition of their brain that patterns begin to emerge and shed light on the evolutionary path that led to mammalian brain diversity. Based on an analysis of the shared and clade-specific characteristics of 41 modern mammalian species in 6 clades, and in light of the phylogenetic relationships among them, here we propose that ancestral mammal brains were composed and scaled in their cellular composition like modern afrotherian and glire brains: with an addition of neurons that is accompanied by a decrease in neuronal density and very little modification in glial cell density, implying a significant increase in average neuronal cell size in larger brains, and the allocation of approximately 2 neurons in the cerebral cortex and 8 neurons in the cerebellum for every neuron allocated to the rest of brain. We also propose that in some clades the scaling of different brain structures has diverged away from the common ancestral layout through clade-specific (or clade-defining changes in how average neuronal cell mass relates to numbers of neurons in each structure, and how numbers of neurons are differentially allocated to each structure relative to the number of neurons in the rest of brain. Thus, the evolutionary expansion of mammalian brains has involved both concerted and mosaic patterns of scaling across structures. This is, to our knowledge, the first mechanistic model that explains the generation of brains large and small in mammalian evolution, and it opens up new horizons for seeking the cellular pathways and genes involved in brain evolution.
Directory of Open Access Journals (Sweden)
Earnest Arul
2005-05-01
Full Text Available Abstract Background The main objective of this study is to apply autoregressive integrated moving average (ARIMA) models to make real-time predictions on the number of beds occupied in Tan Tock Seng Hospital, during the recent SARS outbreak. Methods This is a retrospective study design. Hospital admission and occupancy data for isolation beds was collected from Tan Tock Seng hospital for the period 14th March 2003 to 31st May 2003. The main outcome measure was daily number of isolation beds occupied by SARS patients. Among the covariates considered were daily number of people screened, daily number of people admitted (including observation, suspect and probable cases) and days from the most recent significant event discovery. We utilized the following strategy for the analysis. Firstly, we split the outbreak data into two. Data from 14th March to 21st April 2003 was used for model development. We used structural ARIMA models in an attempt to model the number of beds occupied. Estimation is via the maximum likelihood method using the Kalman filter. For the ARIMA model parameters, we considered the simplest parsimonious lowest order model. Results We found that the ARIMA (1,0,3) model was able to describe and predict the number of beds occupied during the SARS outbreak well. The mean absolute percentage error (MAPE) for the training set and validation set were 5.7% and 8.6% respectively, which we found was reasonable for use in the hospital setting. Furthermore, the model also provided three-day forecasts of the number of beds required. Total number of admissions and probable cases admitted on the previous day were also found to be independent prognostic factors of bed occupancy. Conclusion ARIMA models provide useful tools for administrators and clinicians in planning for real-time bed capacity during an outbreak of an infectious disease such as SARS. The model could well be used in planning for bed-capacity during outbreaks of other infectious diseases as well.
Directory of Open Access Journals (Sweden)
Carlos S. Garcia
2016-08-01
Full Text Available Firm lifecycle theory predicts that the Weighted Average Cost of Capital (WACC) will tend to fall over the lifecycle of the firm (Mueller, 2003, pp. 80-81). However, given that previous research finds that corporate governance deteriorates as firms get older (Mueller and Yun, 1998; Saravia, 2014), there is good reason to suspect that the opposite could be the case, that is, that the WACC is higher for older firms. Since our literature review indicates that no direct tests to clarify this question have been carried out until now, this paper aims to fill the gap by testing this prediction empirically. Our findings support the proposition that the WACC of younger firms is higher than that of mature firms. Thus, we find that the mature firm overinvestment problem is not intensified by a higher cost of capital; on the contrary, our results suggest that mature firms manage to invest in negative net present value projects even though they have access to cheaper capital. This finding sheds new light on the magnitude of the corporate governance problems found in mature firms.
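For reference, the WACC itself is a standard weighted average of the costs of equity and debt; a minimal sketch of the textbook formula, with purely illustrative inputs (none of these figures come from the paper):

```python
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    """Weighted average cost of capital:
    WACC = E/V * Re + D/V * Rd * (1 - Tc), with V = E + D."""
    v = equity + debt
    return (equity / v) * cost_of_equity + (debt / v) * cost_of_debt * (1 - tax_rate)

# Illustrative capital structure: 600 equity, 400 debt (any currency unit),
# 12% cost of equity, 6% pre-tax cost of debt, 30% tax rate.
print(wacc(equity=600.0, debt=400.0,
           cost_of_equity=0.12, cost_of_debt=0.06, tax_rate=0.30))
```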
Treatments that generate higher number of adverse drug reactions and their symptoms
Directory of Open Access Journals (Sweden)
Lucía Fernández-López
2015-12-01
Full Text Available Objectives: Adverse drug reactions (ADRs) are an important cause of morbidity and mortality worldwide and generate high health costs. Therefore, the aims of this study were to determine the treatments which produce more ADRs in the general population and the main symptoms they generate. Methods: An observational, cross-sectional study based on a self-rated questionnaire was carried out. 510 patients were asked about the treatments, illnesses and ADRs they had suffered from. Results: 26.7% of patients had suffered from some ADR. Classifying patients according to the type of prescribed treatment and studying the number of ADRs that they had, we obtained significant differences (p ≤ 0.05) for treatments against arthrosis, anemia and nervous disorders (anxiety, depression, insomnia). Moreover, when determining the absolute frequencies of ADR appearance in each treatment, the highest frequencies were again for drugs against arthrosis (22.6% of patients treated for arthrosis suffered some ADR), anemia (14.28%), nervous disorders (13.44%) and also asthma (16%). Regarding the symptoms produced by ADRs, the most frequent were gastrointestinal (60% of patients who suffered an ADR had gastrointestinal symptoms) and nervous alterations (dizziness, headache, sleep disturbances, etc.) (24.6%). Conclusion: The therapeutic groups which most commonly produce ADRs are those for arthrosis, anemia, nervous disorders and asthma. In addition, the symptoms generated most frequently are gastrointestinal and nervous problems. This is in accordance with the usual side effects of the mentioned treatments. Health professionals should be informed about this, so that they are more alert to the possible emergence of an ADR during these treatments. They could also provide enough information to empower patients, who could then probably detect ADR events themselves. This would facilitate ADR detection and would avoid serious consequences for both patients' health and health economics.
Can Higher Education Foster Economic Growth? Chicago Fed Letter. Number 229
Mattoon, Richard H.
2006-01-01
Not all observers agree that higher education and economic growth are obvious or necessary complements to each other. The controversy may be exacerbated because of the difficulty of measuring the exact contribution of colleges and universities to economic growth. Recognizing that a model based on local conditions and higher education's response…
Fighting for the Profession: A History of AFT Higher Education. Item Number 36-0701
American Federation of Teachers, 2003
2003-01-01
This document provides a history of the relationship between higher education faculty and the American Federation of Teachers (AFT). Highlights include the first AFT higher education local formed in 1918, the role played by the union in the expansion of the G.I. Bill following World War II, increased activism in the 1950s and 1960s to win…
Middlemas, David A.; Manning, James M.; Gazzillo, Linda M.; Young, John
2001-06-01
OBJECTIVE: To determine whether grade point average, hours of clinical education, or both are significant predictors of performance on the National Athletic Trainers' Association Board of Certification examination and whether curriculum and internship candidates' scores on the certification examination can be differentially predicted. DESIGN AND SETTING: Data collection forms and consent forms were mailed to the subjects to collect data for predictor variables. Subject scores on the certification examination were obtained from Columbia Assessment Services. SUBJECTS: A total of 270 first-time candidates for the April and June 1998 certification examinations. MEASUREMENTS: Grade point average, number of clinical hours completed, sex, route to certification eligibility (curriculum or internship), scores on each section of the certification examination, and pass/fail criteria for each section. RESULTS: We found no significant difference between the scores of men and women on any section of the examination. Scores for curriculum and internship candidates differed significantly on the written and practical sections of the examination but not on the simulation section. Grade point average was a significant predictor of scores on each section of the examination and the examination as a whole. Clinical hours completed did not add a significant increment for any section but did add a significant increment for the examination overall. Although no significant difference was noted between curriculum and internship candidates in predicting scores on sections of the examination, a significant difference by route was found in predicting whether candidates would pass the examination as a whole (P =.047). Proportion of variance accounted for was less than R(2) = 0.0723 for any section of the examination and R(2) = 0.057 for the examination as a whole. CONCLUSIONS: Potential predictors of performance on the certification examination can be useful to athletic training educators in
Widjaja, E; Mahmoodabadi, S Z; Rea, D; Moineddin, R; Vidarsson, L; Nilsson, D
2009-01-01
Tensor estimation can be improved by increasing the number of gradient directions (NGD) or increasing the number of signal averages (NSA), but at a cost of increased scan time. To evaluate the effects of NGD and NSA on fractional anisotropy (FA) and fiber density index (FDI) in vivo. Ten healthy adults were scanned on a 1.5T system using nine different diffusion tensor sequences. Combinations of 7 NGD, 15 NGD, and 25 NGD with 1 NSA, 2 NSA, and 3 NSA were used, with scan times varying from 2 to 18 min. Regions of interest (ROIs) were placed in the internal capsules, middle cerebellar peduncles, and splenium of the corpus callosum, and FA and FDI were calculated. Analysis of variance was used to assess whether there was a difference in FA and FDI of different combinations of NGD and NSA. There was no significant difference in FA of different combinations of NGD and NSA of the ROIs (P>0.005). There was a significant difference in FDI between 7 NGD/1 NSA and 25 NGD/3 NSA in all three ROIs, but not between the higher-NGD combinations (25 NGD/1 NSA and 25 NGD/2 NSA vs. 25 NGD/3 NSA) in all ROIs (P>0.005). We have not found any significant difference in FA with varying NGD and NSA in vivo in areas with relatively high anisotropy. However, lower NGD resulted in reduced FDI in vivo. With larger NGD, NSA has less influence on FDI. The optimal sequence among the nine sequences tested with the shortest scan time was 25 NGD/1 NSA.
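FA is computed from the three eigenvalues of the fitted diffusion tensor; a minimal sketch of the standard formula (the example eigenvalues are illustrative, not from the study):

```python
import numpy as np

def fractional_anisotropy(eigvals):
    """FA from the diffusion-tensor eigenvalues:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, in [0, 1]."""
    lam = np.asarray(eigvals, dtype=float)
    return np.sqrt(1.5) * np.linalg.norm(lam - lam.mean()) / np.linalg.norm(lam)

# Strongly anisotropic diffusion (e.g. coherent white matter): high FA.
print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))
# Isotropic diffusion: FA = 0.
print(fractional_anisotropy([1.0e-3, 1.0e-3, 1.0e-3]))
```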
Numerical simulations of turbulent heat transfer in a channel at Prandtl numbers higher than 100
International Nuclear Information System (INIS)
Bergant, R.; Tiselj, I.
2005-01-01
During the last years, many attempts have been made to extend studies of turbulent heat transfer in a channel from low to high Prandtl numbers, based on a very accurate pseudo-spectral code for direct numerical simulation (DNS). DNS resolves all the length and time scales of the velocity and temperature fields, which differ when the Prandtl number is not equal to 1. DNS can be used at low Reynolds numbers (Re_τ = 150). A very similar approach as for Pr=5.4 was used for the numerical simulations at Pr=100 and Pr=200. Comparison was made with results for temperature fields computed on a 9-times finer numerical grid, however without damping of the highest Fourier coefficients. The results for mean temperature profiles show no differences larger than the statistical uncertainties (∼1%), while slightly larger differences are seen for the temperature fluctuations. (author)
Perez-Calatayud, Jose; Ballester, Facundo; Das, Rupak K; Dewerd, Larry A; Ibbott, Geoffrey S; Meigooni, Ali S; Ouhib, Zoubir; Rivard, Mark J; Sloboda, Ron S; Williamson, Jeffrey F
2012-05-01
Recommendations of the American Association of Physicists in Medicine (AAPM) and the European Society for Radiotherapy and Oncology (ESTRO) on dose calculations for high-energy (average energy higher than 50 keV) photon-emitting brachytherapy sources are presented, including the physical characteristics of specific (192)Ir, (137)Cs, and (60)Co source models. This report has been prepared by the High Energy Brachytherapy Source Dosimetry (HEBD) Working Group. This report includes considerations in the application of the TG-43U1 formalism to high-energy photon-emitting sources with particular attention to phantom size effects, interpolation accuracy dependence on dose calculation grid size, and dosimetry parameter dependence on source active length. Consensus datasets for commercially available high-energy photon sources are provided, along with recommended methods for evaluating these datasets. Recommendations on dosimetry characterization methods, mainly using experimental procedures and Monte Carlo, are established and discussed. Also included are methodological recommendations on detector choice, detector energy response characterization and phantom materials, and measurement specification methodology. Uncertainty analyses are discussed and recommendations for high-energy sources without consensus datasets are given. Recommended consensus datasets for high-energy sources have been derived for sources that were commercially available as of January 2010. Data are presented according to the AAPM TG-43U1 formalism, with modified interpolation and extrapolation techniques of the AAPM TG-43U1S1 report for the 2D anisotropy function and radial dose function.
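The TG-43 formalism referenced above can be illustrated in its point-source approximation; the sketch below uses a hypothetical radial dose function and illustrative source parameters for demonstration only, not the consensus datasets the report recommends for clinical use:

```python
import numpy as np

def dose_rate_point_source(r_cm, air_kerma_strength, dose_rate_constant,
                           radial_dose, anisotropy_factor=1.0):
    """TG-43 point-source approximation:
    D(r) = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r), with r0 = 1 cm."""
    r0 = 1.0
    return (air_kerma_strength * dose_rate_constant * (r0 / r_cm) ** 2
            * radial_dose(r_cm) * anisotropy_factor)

# Hypothetical radial dose function g(r), interpolated from invented points.
g = lambda r: np.interp(r, [0.5, 1.0, 2.0, 5.0], [1.00, 1.00, 0.99, 0.95])

# Illustrative source: S_K = 40 U, Lambda = 1.11 cGy/(h*U), dose rate at 2 cm.
print(dose_rate_point_source(2.0, air_kerma_strength=40.0,
                             dose_rate_constant=1.11, radial_dose=g))
```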
Child Poverty Higher and More Persistent in Rural America. National Issue Brief Number 97
Schaefer, Andrew; Mattingly, Marybeth; Johnson, Kenneth M.
2016-01-01
The negative consequences of growing up in a poor family are well known. Poor children are less likely to have timely immunizations, have lower academic achievement, are generally less engaged in school activities, and face higher delinquency rates in adolescent years. Each of these has adverse impacts on their health, earnings, and family status…
The Demise of Higher Education Performance Funding Systems in Three States. CCRC Brief. Number 41
Dougherty, Kevin J.; Natow, Rebecca S.
2009-01-01
Performance funding in higher education ties state funding directly to institutional performance on specific indicators, such as rates of retention, graduation, and job placement. One of the great puzzles about performance funding is that it has been both popular and unstable. Between 1979 and 2007, 26 states enacted it, but 14 of those states…
Higher first Chern numbers in one-dimensional Bose-Fermi mixtures
Knakkergaard Nielsen, Kristian; Wu, Zhigang; Bruun, G. M.
2018-02-01
We propose to use a one-dimensional system consisting of identical fermions in a periodically driven lattice immersed in a Bose gas, to realise topological superfluid phases with Chern numbers larger than 1. The bosons mediate an attractive induced interaction between the fermions, and we derive a simple formula to analyse the topological properties of the resulting pairing. When the coherence length of the bosons is large compared to the lattice spacing and there is a significant next-nearest neighbour hopping for the fermions, the system can realise a superfluid with Chern number ±2. We show that this phase is stable in a large region of the phase diagram as a function of the filling fraction of the fermions and the coherence length of the bosons. Cold atomic gases offer the possibility to realise the proposed system using well-known experimental techniques.
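Chern numbers of lattice models like the one proposed can be evaluated numerically from Berry fluxes on a discretized Brillouin zone (the Fukui-Hatsugai-Suzuki link-variable method). The sketch below applies it to the standard two-band Qi-Wu-Zhang model rather than the paper's Bose-Fermi system, so the Hamiltonian, mass parameter m, and grid size are assumptions for illustration:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(kx, ky, m):
    """Qi-Wu-Zhang two-band model; gapped except at m = 0, +/-2."""
    return np.sin(kx) * SX + np.sin(ky) * SY + (m + np.cos(kx) + np.cos(ky)) * SZ

def lower_band(kx, ky, m):
    _, vecs = np.linalg.eigh(hamiltonian(kx, ky, m))
    return vecs[:, 0]  # eigenvector of the lowest band (eigh sorts ascending)

def chern_number(m, n=24):
    """Sum the Berry flux of the lower band over an n-by-n k-grid."""
    ks = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.array([[lower_band(kx, ky, m) for ky in ks] for kx in ks])
    flux = 0.0
    for i in range(n):
        for j in range(n):
            i2, j2 = (i + 1) % n, (j + 1) % n
            # Product of normalized link variables around one plaquette;
            # its phase is the Berry flux through the plaquette.
            prod = (np.vdot(u[i, j], u[i2, j]) * np.vdot(u[i2, j], u[i2, j2])
                    * np.vdot(u[i2, j2], u[i, j2]) * np.vdot(u[i, j2], u[i, j]))
            flux += np.angle(prod)
    return flux / (2 * np.pi)

print(round(chern_number(1.0)))  # topological phase: |C| = 1
print(round(chern_number(3.0)))  # trivial phase: C = 0
```

The same machinery, applied to the induced-pairing Hamiltonian with next-nearest neighbour hopping, is what distinguishes the |C| = 2 phase discussed in the abstract from trivial superfluids.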
A supersymmetric matrix model: II. Exploring higher-fermion-number sectors
Veneziano, Gabriele
2006-01-01
Continuing our previous analysis of a supersymmetric quantum-mechanical matrix model, we study in detail the properties of its sectors with fermion number F=2 and 3. We confirm all previous expectations, modulo the appearance, at strong coupling, of two new bosonic ground states causing a further jump in Witten's index across a previously identified critical 't Hooft coupling λ_c. We are able to elucidate the origin of these new SUSY vacua by considering the λ → ∞ limit and a strong coupling expansion around it.
Batey, Peter; Brown, Peter; Corver, Mark
Higher education in England has expanded rapidly in the last ten years with the result that currently more than 30% of young people go on to university. Expansion is likely to continue following the recommendations of a national committee of inquiry (the Dearing Committee). The participation rate is known to vary substantially among social groups and between geographical areas. In this paper the participation rate is calculated using a new measure, the Young Entrants Index (YEI), and the extent of variation by region, gender and residential neighbourhood type established. The Super Profiles geodemographic system is used to facilitate the latter. This is shown to be a powerful discriminator and to offer great potential as an alternative analytical approach to the conventional social class categories, based on parental occupation, that have formed the basis of most participation studies to date.
Cryogenic wind tunnel technology. A way to measurement at higher Reynolds numbers
Beck, J. W.
1984-01-01
The goals, design, problems, and value of cryogenic transonic wind tunnels being developed in Europe are discussed. The disadvantages inherent in low-Reynolds-number (Re) wind tunnel simulations of aircraft flight at high Re are reviewed, and the cryogenic tunnel is shown to be the most practical method to achieve high Re. The design proposed for the European Transonic Wind tunnel (ETW) is presented: parameters include cross section = 4 sq m, operating pressure = 5 bar, temperature = 110-120 K, maximum Re = 40 x 10^6, liquid N2 consumption = 40,000 metric tons/year, and power = 39.5 MW. The smaller Cologne subsonic tunnel being adapted to cryogenic use for preliminary studies is described. Problems of configuration, materials, and liquid N2 evaporation and handling and the research underway to solve them are outlined. The benefits to be gained by the construction of these costly installations are seen more in applied aerodynamics than in basic research in fluid physics. The need for parallel development of both high-Re tunnels and computers capable of performing high-Re numerical analysis is stressed.
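The Reynolds-number gain from cryogenic operation follows from Re = ρvL/μ: cooling raises gas density, lowers viscosity, and (at matched Mach number) lowers flow speed only as √T. A rough sketch under stated assumptions (ideal-gas density, Sutherland-law viscosity constants for nitrogen, v ∝ √T; all constants are assumptions, not ETW design data):

```python
def sutherland_viscosity(t_kelvin, mu0=1.663e-5, t0=273.0, s=107.0):
    """Dynamic viscosity of nitrogen via Sutherland's law (assumed constants)."""
    return mu0 * (t_kelvin / t0) ** 1.5 * (t0 + s) / (t_kelvin + s)

def reynolds_gain(t_cold, p_cold_bar, t_warm=300.0, p_warm_bar=1.0):
    """Re ~ rho * v * L / mu; rho from the ideal-gas law, v ~ sqrt(T) at
    matched Mach number.  The model length L cancels in the ratio."""
    rho_ratio = (p_cold_bar / p_warm_bar) * (t_warm / t_cold)
    v_ratio = (t_cold / t_warm) ** 0.5
    mu_ratio = sutherland_viscosity(t_cold) / sutherland_viscosity(t_warm)
    return rho_ratio * v_ratio / mu_ratio

# ETW-like conditions (110 K, 5 bar) vs. an ambient tunnel (300 K, 1 bar):
print(reynolds_gain(t_cold=110.0, p_cold_bar=5.0))  # roughly a 20-fold gain
```

The order-of-magnitude gain this yields is what makes cryogenic pressurized tunnels the practical route to flight Reynolds numbers.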
Energy Technology Data Exchange (ETDEWEB)
Sayin, Cenk; Kilicaslan, Ibrahim; Canakci, Mustafa; Ozsezen, Necati [Kocaeli Univ., Dept. of Mechanical Education, Izmit (Turkey)
2005-06-01
In this study, the effect of using higher-octane gasoline than that of engine requirement on the performance and exhaust emissions was experimentally studied. The test engine chosen has a fuel system with carburettor because 60% of the vehicles in Turkey are equipped with the carburettor. The engine, which required 91-RON (Research Octane Number) gasoline, was tested using 95-RON and 91-RON. Results show that using octane ratings higher than the requirement of an engine not only decreases engine performance but also increases exhaust emissions. (Author)
Marginson, Simon
Since the 1960s there has been a major expansion in the number of people in Australia holding post school educational credentials and the proportion of the full time work force with those credentials. The penalties of not holding credentials, in terms of the incidence and duration of unemployment, are increasingly severe. At the same time, there…
Directory of Open Access Journals (Sweden)
Maria Célia Borges
2012-01-01
Full Text Available This paper presents a discussion on the policies to expand Higher Education, stating the influences of neoliberalism and explaining the contradictions in legislation and reforms at this level of education in Brazil after the 1990s. It questions the model of the New University with regard to the Brazilian reality and the poor investments available for such a reform. It calls attention to the danger of prioritizing the increase in the number of vacancies instead of the quality of teaching, something which would represent the scrapping of the public university. It highlights the contradictions of Reuni, with improvised actions and conditioning of funds through the achievement of goals. On one hand, it recognizes the increasing number of vacancies in Higher Education and, on the other, it reaffirms that democratization of access requires universities with financial autonomy, well-structured courses with innovative curricula, qualified professors, adequate infrastructure, and high quality teaching, with research aiming at the production of new knowledge, as well as university extension.
Cheng, Shawn; Kirton, Laurence G.; Panandam, Jothi M.; Siraj, Siti S.; Ng, Kevin Kit-Siong; Tan, Soon-Guan
2011-01-01
Termites of the genus Odontotermes are important decomposers in the Old World tropics and are sometimes important pests of crops, timber and trees. The species within the genus often have overlapping size ranges and are difficult to differentiate based on morphology. As a result, the taxonomy of Odontotermes in Peninsular Malaysia has not been adequately worked out. In this study, we examined the phylogeny of 40 samples of Odontotermes from Peninsular Malaysia using two mitochondrial DNA regions, that is, the 16S ribosomal RNA and cytochrome oxidase subunit I genes, to aid in elucidating the number of species in the peninsula. Phylogenies were reconstructed from the individual gene and combined gene data sets using parsimony and likelihood criteria. The phylogenies supported the presence of up to eleven species in Peninsular Malaysia, which were identified as O. escherichi, O. hainanensis, O. javanicus, O. longignathus, O. malaccensis, O. oblongatus, O. paraoblongatus, O. sarawakensis, and three possibly new species. Additionally, some of our taxa are thought to comprise a complex of two or more species. The number of species found in this study using DNA methods was more than the initial nine species thought to occur in Peninsular Malaysia. The support values for the clades and morphology of the soldiers provided further evidence for the existence of eleven or more species. Higher resolution genetic markers such as microsatellites would be required to confirm the presence of cryptic species in some taxa. PMID:21687629
Toma, J. Douglas, Ed.; Dubrow, Greg, Ed.; Hartley, Matthew, Ed.
2005-01-01
Institutional culture matters in higher education, and universities and colleges commonly express the need to strengthen their culture. A strong culture is perceived, correctly so, to engender a needed sense of connectedness between and among the varied constituents associated with a campus. Linking organizational culture and social cohesion is…
Watson, Jane; Chick, Helen
2012-01-01
This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…
Bashir, Sajitha
2007-01-01
This paper analyzes the trends, underlying factors and implications of the trade in higher education services. The term "trade in higher education" refers to the purchase of higher education services from a foreign country using domestic resources. The objectives of this paper are to provide policy makers in developing countries, World Bank staff,…
International Nuclear Information System (INIS)
Hegland, P.; Dahlquist, J.
1985-01-01
A process for determining the relative quantity of a low atomic number material mixed with a higher atomic number material is carried out by directing a first and second beam of x-rays into the mixture. The process includes transmitting x-rays directly to detectors to set one criterion, shielding the detectors from the x-ray sources to set another criterion, and then passing samples of known relative composition to provide data for storage and calibration, before carrying out the process on mixtures to be measured.
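The two-beam measurement principle can be sketched as a two-material Beer-Lambert decomposition: with attenuation measured at two x-ray energies, two linear equations determine the two material amounts. The attenuation coefficients below are invented for illustration and would in practice come from the calibration step with samples of known composition:

```python
import numpy as np

# Assumed attenuation coefficients (1/cm) of the two materials at the two
# beam energies (rows: energies, columns: [low-Z material, high-Z material]).
mu = np.array([[0.50, 2.00],
               [0.30, 0.80]])

def decompose(i_over_i0):
    """Solve  ln(I0/I) = mu @ t  for the two material thicknesses t (cm)."""
    return np.linalg.solve(mu, -np.log(i_over_i0))

# Simulate transmission through 2 cm of low-Z and 1 cm of high-Z material,
# then recover the composition from the two transmitted intensities.
t_true = np.array([2.0, 1.0])
i_over_i0 = np.exp(-mu @ t_true)
print(decompose(i_over_i0))
```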
DEFF Research Database (Denmark)
Oliveira, Rodrigo Gouveia; Pedersen, Anders Gorm
2009-01-01
…compared to the situation where the genders have identical sex partner distributions, men will reach a lower equilibrium value, while women will stay at the same level (meaning that female prevalence becomes higher than male). We carefully analyse model behaviour and derive its consequences for the equilibrium prevalence of sexually transmitted diseases and for the probability of disease transmission. We note that in addition to humans, the variance phenomenon described here is likely to play a role for sexually transmitted diseases in other species also. We also show, again by examining published, empirical data, that the female to male prevalence ratio increases with the overall prevalence of a sexually transmitted disease (i.e., the more widespread the disease, the more women are affected). We suggest that this pattern may be caused by the effect described above in highly prevalent sexually transmitted diseases, while its impact in low-prevalence epidemics is surpassed…
Energy Technology Data Exchange (ETDEWEB)
Willems, Nop M.B.K., E-mail: n.willems@acta.nl [Dept. of Orthodontics, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and VU University, Gustav Mahlerlaan 3004, 1081 LA Amsterdam (Netherlands); Dept. of Oral Cell Biology and Functional Anatomy, MOVE Research Institute, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and VU University, Gustav Mahlerlaan 3004, 1081 LA Amsterdam (Netherlands); Langenbach, Geerling E.J. [Dept. of Oral Cell Biology and Functional Anatomy, MOVE Research Institute, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and VU University, Gustav Mahlerlaan 3004, 1081 LA Amsterdam (Netherlands); Stoop, Reinout [Dept. of Metabolic Health Research, TNO, P.O. Box 2215, 2301 CE Leiden (Netherlands); Toonder, Jaap M.J. den [Dept. of Mechanical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Mulder, Lars [Dept. of Biomedical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Zentner, Andrej [Dept. of Orthodontics, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and VU University, Gustav Mahlerlaan 3004, 1081 LA Amsterdam (Netherlands); Everts, Vincent [Dept. of Oral Cell Biology and Functional Anatomy, MOVE Research Institute, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and VU University, Gustav Mahlerlaan 3004, 1081 LA Amsterdam (Netherlands)
2014-09-01
The role of mature collagen cross-links, pentosidine (Pen) cross-links in particular, in the micromechanical properties of cancellous bone is unknown. The aim of this study was to examine nonenzymatic glycation effects on tissue stiffness of demineralized and non-demineralized cancellous bone. A total of 60 bone samples were derived from mandibular condyles of six pigs, and assigned to either control or experimental groups. Experimental handling included incubation in phosphate buffered saline alone or with 0.2 M ribose at 37 °C for 15 days and, in some of the samples, subsequent complete demineralization of the sample surface using 8% EDTA. Before and after experimental handling, bone microarchitecture and tissue mineral density were examined by means of microcomputed tomography. After experimental handling, the collagen content and the number of Pen, hydroxylysylpyridinoline (HP), and lysylpyridinoline (LP) cross-links were estimated using HPLC, and tissue stiffness was assessed by means of nanoindentation. Ribose treatment caused an up to 300-fold increase in the number of Pen cross-links compared to nonribose-incubated controls, but did not affect the number of HP and LP cross-links. This increase in the number of Pen cross-links had no influence on tissue stiffness of both demineralized and nondemineralized bone samples. These findings suggest that Pen cross-links do not play a significant role in bone tissue stiffness. - Highlights: • The assessment of effects of glycation in bone using HPLC, microCT, and nanoindentation • Ribose incubation: 300‐fold increase in the number of pentosidine cross-links • 300‐fold increase in the number of pentosidine cross-links: no changes in bone tissue stiffness.
Marsh, D. T.
Provision for postsecondary higher education in Wales, the nature of the Welsh system, and future concerns are discussed. The roles of the Welsh Office and the Welsh Joint Education Committee contrast greatly with central organizations in England. There is one university in Wales, comprising seven constituent colleges. Additional institutions in…
Michaels, Matthew S; Balthrop, Tia; Pulido, Alejandro; Rudd, M David; Joiner, Thomas E
2018-01-01
The present study represents an early stage investigation into the phenomenon whereby those with bipolar disorder attempt suicide more frequently than those with unipolar depression, but do not tend to attempt suicide during mania. Data for this study were obtained from baseline measurements collected in a randomized treatment study at a major southwestern United States military medical center. We demonstrated the rarity of suicide attempts during mania, the higher frequency of suicide attempts in those with bipolar disorder compared to those with depression, and the persistence of effects after accounting for severity of illness. These results provide the impetus for the development and testing of theoretical explanations.
Energy Technology Data Exchange (ETDEWEB)
Kiely, Patrick D.; Call, Douglas F.; Yates, Matthew D.; Regan, John M.; Logan, Bruce E. [Pennsylvania State Univ., University Park, PA (United States). Dept. of Civil and Environmental Engineering
2010-09-15
Microbial fuel cell (MFC) anode communities often reveal just a few genera, but it is not known to what extent less abundant bacteria could be important for improving performance. We examined the microbial community in an MFC fed with formic acid for more than 1 year and determined, using 16S rRNA gene cloning and fluorescent in situ hybridization, that members of the Paracoccus genus comprised most (~30%) of the anode community. A Paracoccus isolate obtained from this biofilm (Paracoccus denitrificans strain PS-1) produced only 5.6 mW/m², whereas the original mixed culture produced up to 10 mW/m². Despite the absence of any Shewanella species in the clone library, we isolated a strain of Shewanella putrefaciens (strain PS-2) from the same biofilm capable of producing a higher power density (17.4 mW/m²) than the mixed culture, although voltage generation was variable. Our results suggest that the numerical abundance of microorganisms in biofilms cannot be assumed a priori to correlate to the capacities of these predominant species for high-power production. Detailed screening of bacterial biofilms may therefore be needed to identify important strains capable of high-power generation for specific substrates. (orig.)
Rotational averaging of multiphoton absorption cross sections
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend the existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor, provided linearly polarized light is used.
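For the simplest nontrivial case, a rank-2 tensor, the rotational average has the well-known closed form ⟨R T Rᵀ⟩ = (Tr T / 3) I. A Monte Carlo check of that fact (my own illustration, not code from the paper; the sample size, seed, and test tensor are arbitrary):

```python
import numpy as np

def random_rotation(rng):
    """Draw a rotation matrix from SO(3) via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))        # fix column signs for a uniform distribution
    if np.linalg.det(q) < 0:        # flip a column if we landed in O(3) \ SO(3)
        q[:, 0] = -q[:, 0]
    return q

def rotational_average(T, n=20000, seed=0):
    """Monte Carlo estimate of <R T R^T> over random rotations R."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((3, 3))
    for _ in range(n):
        R = random_rotation(rng)
        acc += R @ T @ R.T
    return acc / n

T = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.3],
              [0.0, 0.3, 3.0]])
avg = rotational_average(T)
iso = np.trace(T) / 3.0 * np.eye(3)   # exact isotropic average
print(np.round(avg, 2))               # close to 2 * identity for this T
```

The estimate converges to the isotropic part of T at the usual Monte Carlo rate; the higher-rank averages treated in the paper generalize this contraction with rank-dependent isotropic tensors.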
Nitsch, J.; Wolters, L.P.; Fonseca Guerra, C.; Bickelhaupt, F.M.; Steffen, A.
2016-01-01
We aim to understand the electronic factors determining the stability and coordination number of d10 transition-metal complexes bearing N-heterocyclic carbene (NHC) ligands, with a particular emphasis on higher coordinated species. In this DFT study on the formation and bonding of Group 9–12 d10
International Nuclear Information System (INIS)
Gwin, R.; Spencer, R.R.; Ingle, R.W.; Todd, J.H.; Weaver, H.
1980-01-01
The average number of prompt neutrons emitted per fission, ν̄_p(E), was measured for ²³⁵U relative to ν̄_p for the spontaneous fission of ²⁵²Cf over the neutron energy range from 500 eV to 10 MeV. The samples of ²³⁵U and ²⁵²Cf were contained in fission chambers located in the center of a large liquid scintillator. Fission neutrons were detected by the large liquid scintillator. The present values of ν̄_p(E) for ²³⁵U are about 0.8% larger than those measured by Boldeman. In earlier work with the present system, it was noted that Boldeman's value of ν̄_p for thermal-energy neutrons was about 0.8% lower than that obtained at ORELA. It is suggested that the thickness of the fission foil used in Boldeman's experiment may cause some of the discrepancy between his values and the present values of ν̄_p(E). For the energy region up to 700 keV, the present values of ν̄_p(E) for ²³⁵U agree, within the uncertainty, with those given in ENDF/B-V. Above 1 MeV the present results for ν̄_p(E) range about the ENDF/B-V values with differences up to 1.3%. 6 figures, 1 table
International Nuclear Information System (INIS)
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs
International Nuclear Information System (INIS)
Tzanos, C.P.
1992-01-01
A higher-order differencing method was recently proposed for the convection-diffusion equation, which even with a coarse mesh gives oscillation-free solutions that are far more accurate than those of the upwind scheme. In this paper, the performance of this method is investigated in conjunction with the performance of different iterative solvers for the solution of the Navier-Stokes equations in the vorticity-streamfunction formulation for incompressible flow at high Reynolds numbers. Flow in a square cavity with a moving lid was chosen as a model problem. Solvers that performed well at low Reynolds numbers either failed to converge or had a computationally prohibitive convergence rate at high Reynolds numbers. The additive correction method of Settari and Aziz and an iterative incomplete lower-upper (ILU) solver were used in a multigrid approach that performed well over the whole range of Reynolds numbers considered (from 1000 to 10,000) and for uniform as well as nonuniform grids. At high Reynolds numbers, point or line Gauss-Seidel solvers converged with uniform grids, but failed to converge with nonuniform grids.
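The oscillation behavior the abstract refers to is easy to reproduce in one dimension. The sketch below is my own construction, not the paper's scheme or solver: it discretizes steady 1D convection-diffusion with a central and a first-order upwind scheme at a cell Peclet number above 2, where central differencing is known to produce wiggles.

```python
import numpy as np

def solve_conv_diff(n, pe, scheme):
    """Steady 1D convection-diffusion u*phi' = D*phi'' on [0,1], phi(0)=0, phi(1)=1,
    discretized on n interior nodes; pe is the cell Peclet number u*h/D."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if scheme == "central":
            w, c, e = -1.0 - pe / 2, 2.0, -1.0 + pe / 2   # central convection
        else:                                              # first-order upwind, u > 0
            w, c, e = -1.0 - pe, 2.0 + pe, -1.0
        if i > 0:
            A[i, i - 1] = w          # west neighbor
        if i < n - 1:
            A[i, i + 1] = e          # east neighbor
        else:
            b[i] -= e * 1.0          # right boundary: phi(1) = 1
        A[i, i] = c                  # left boundary phi(0) = 0 contributes nothing
    return np.linalg.solve(A, b)

phi_c = solve_conv_diff(10, 3.0, "central")   # cell Peclet > 2: wiggles appear
phi_u = solve_conv_diff(10, 3.0, "upwind")    # monotone, but more diffusive
print("central monotone?", bool(np.all(np.diff(phi_c) >= 0)))   # False
print("upwind  monotone?", bool(np.all(np.diff(phi_u) >= 0)))   # True
```

The difference schemes follow from scaling each nodal equation by h²/D; the exact solution of this problem is a monotone boundary layer, so sign changes in the central-scheme solution are purely numerical.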
Safety Impact of Average Speed Control in the UK
DEFF Research Database (Denmark)
Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert
2016-01-01
…of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers' average travel speed over selected sections of the road and is normally called average speed control … in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in the number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.
International Nuclear Information System (INIS)
Rocha, Gifone A; Rocha, Andreia MC; Gomes, Adriana D; Faria, César LL Jr; Melo, Fabrício F; Batista, Sérgio A; Fernandes, Viviane C; Almeida, Nathálie BF; Teixeira, Kádima N; Brito, Kátia S; Queiroz, Dulciene Maria Magalhães
2015-01-01
Because to date there is no available study on STAT3 polymorphism and gastric cancer in Western populations, and taking into account that the Helicobacter pylori CagA EPIYA-C segment deregulates the SHP-2/ERK-JAK/STAT3 pathways, we evaluated whether the two variables are independently associated with gastric cancer. We included 1048 subjects: H. pylori-positive patients with gastric carcinoma (n = 232) and with gastritis (n = 275), and 541 blood donors. Data were analyzed using a logistic regression model. The rs744166 polymorphic G allele (p = 0.01; OR = 1.76; 95% CI = 1.44-2.70) and CagA-positive status (OR = 12.80; 95% CI = 5.58-19.86) were independently associated with gastric cancer in comparison with blood donors. The rs744166 polymorphism (p = 0.001; OR = 1.64; 95% CI = 1.16-2.31) and infection with H. pylori CagA-positive strains possessing a higher number of EPIYA-C segments (p = 0.001; OR = 2.28; 95% CI = 1.41-3.68) were independently associated with gastric cancer in comparison with gastritis. The association was stronger when host and bacterium genotypes were combined (p < 0.001; OR = 3.01; 95% CI = 2.29-3.98). When stimulated with LPS (lipopolysaccharide) or Pam3Cys, peripheral mononuclear cells of healthy carriers of the rs744166 GG and AG genotypes expressed higher levels of STAT3 mRNA than those carrying the AA genotype (p = 0.04 for both). The nuclear expression of phosphorylated p-STAT3 protein was significantly higher in the antral gastric tissue of carriers of the rs744166 GG genotype than in carriers of the AG and AA genotypes. Our study provides evidence that the STAT3 rs744166 G allele and infection with CagA-positive H. pylori with a higher number of EPIYA-C segments are independent risk factors for gastric cancer. The odds ratio of having gastric cancer was greater when bacterium and host high-risk genotypes were combined.
Kulmala, Jenni; Hinrichs, Timo; Törmäkangas, Timo; von Bonsdorff, Mikaela B; von Bonsdorff, Monika E; Nygård, Clas-Håkan; Klockars, Matti; Seitsamo, Jorma; Ilmarinen, Juhani; Rantanen, Taina
2014-01-01
The aim of this study is to investigate whether work-related stress symptoms in midlife are associated with the number of mobility limitations during three decades from midlife to late life. Data for the study come from the Finnish Longitudinal Study of Municipal Employees (FLAME). The study includes a total of 5429 public sector employees aged 44-58 years at baseline who had information available on work-related stress symptoms in 1981 and 1985 and on mobility limitation score during the subsequent 28-year follow-up. Four midlife work-related stress profiles were identified: negative reactions to work and depressiveness, perceived decrease in cognition, sleep disturbances, and somatic symptoms. People with a high number of stress symptoms in 1981 and 1985 were categorized as having constant stress. The number of self-reported mobility limitations was computed based on an eight-item list of mobility tasks presented to the participants in 1992, 1997, and 2009. Data were analyzed using joint Poisson regression models. The study showed that, depending on the stress profile, persons suffering from constant stress in midlife had a 30-70 % higher risk of having one more mobility limitation during the following 28 years compared to persons without stress, after adjusting for mortality, several lifestyle factors, and chronic conditions. A less pronounced risk increase (20-40 %) was observed for persons with occasional symptoms. The study suggests that effective interventions aiming to reduce work-related stress should focus on both primary and secondary prevention.
DEFF Research Database (Denmark)
Clasen, Julie; Mellerup, Anders; Olsen, John Elmerdahl
2016-01-01
The primary objective of this study was to determine the minimum number of individual fecal samples to pool together in order to obtain a representative sample for herd-level quantification of antimicrobial resistance (AMR) genes in a Danish pig herd, using a novel high-throughput qPCR assay…
Directory of Open Access Journals (Sweden)
Anna Twardosz
2011-04-01
Diffuse large B-cell lymphoma is the commonest histological type of malignant lymphoma, and remains incurable in many cases. Developing more efficient immunotherapy strategies will require better understanding of the disorders of immune responses in cancer patients. NKT (natural killer-like T) cells were originally described as a unique population of T cells with co-expression of NK cell markers. Apart from their role in protecting against microbial pathogens and controlling autoimmune diseases, NKT cells have recently been revealed as one of the key players in immune responses against tumors. The objective of this study was to evaluate the frequency of CD3+/CD16+CD56+ cells in the peripheral blood of 28 diffuse large B-cell lymphoma (DLBCL) patients in correlation with clinical and laboratory parameters. Median percentages of CD3+/CD16+CD56+ cells were significantly lower in patients with DLBCL compared to healthy donors (7.37% vs. 9.01%, p = 0.01; 4.60% vs. 5.81%, p = 0.03), although there were no differences in absolute counts. The frequency and the absolute numbers of CD3+/CD16+CD56+ cells were lower in advanced clinical stages than in earlier ones. The median percentage of CD3+/CD16+CD56+ cells in patients in Ann Arbor stages 1–2 was 5.55% vs. 3.15% in stages 3–4 (p = 0.02), with median absolute counts respectively 0.26 G/L vs. 0.41 G/L (p = 0.02). The percentage and absolute numbers of CD3+/CD16+CD56+ cells were significantly higher in DLBCL patients without B-symptoms compared to the patients with B-symptoms (5.51% vs. 2.46%, p = 0.04; 0.21 G/L vs. 0.44 G/L, p = 0.04). The percentage of CD3+/CD16+CD56+ cells correlated adversely with serum lactate dehydrogenase (R = –0.445; p < 0.05), which might influence NKT count. These figures suggest a relationship between higher tumor burden, more aggressive disease, and decreased NKT numbers. But it remains to be explained whether low NKT cell counts in the peripheral blood of patients with DLBCL are the result…
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong … approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
International Nuclear Information System (INIS)
Ichiguchi, Katsuji
1998-01-01
A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Averaging operations on matrices
Indian Academy of Sciences (India)
2014-07-03
Role of positive definite matrices: in diffusion tensor imaging, 3 × 3 pd matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 pd matrices model stress tensors; in machine learning, n × n pd matrices occur as kernel matrices. (Tanvi Jain, Averaging operations on matrices)
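As a concrete instance of such an averaging operation on positive definite matrices, the geometric mean A # B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2} can be sketched as follows (my own illustration, not code from the talk; the example matrices are arbitrary):

```python
import numpy as np

def pd_sqrt(A):
    """Principal square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Riemannian (geometric) mean A # B = A^1/2 (A^-1/2 B A^-1/2)^1/2 A^1/2."""
    As = pd_sqrt(A)
    Asi = np.linalg.inv(As)
    return As @ pd_sqrt(Asi @ B @ Asi) @ As

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
G = geometric_mean(A, B)

print(np.allclose(geometric_mean(A, A), A))   # idempotent: A # A = A
print(np.allclose(G, geometric_mean(B, A)))   # symmetric in A and B
# determinant identity: det(A # B)^2 = det(A) det(B)
print(np.isclose(np.linalg.det(G) ** 2, np.linalg.det(A) * np.linalg.det(B)))
```

The three printed checks correspond to standard properties of the matrix geometric mean: idempotence, symmetry, and the determinant identity.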
Directory of Open Access Journals (Sweden)
Patricia Bouyer
2015-09-01
Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
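The "long-run average of the accumulated energy" can be illustrated on a single ultimately periodic play. The toy sketch below is my own construction (the payoff values are arbitrary): it compares a long simulation against the closed form, which is meaningful when the cycle's total weight is zero so the energy level stays bounded.

```python
def average_energy(prefix, cycle, repeats=2000):
    """Long-run average of the accumulated energy along prefix + cycle^repeats.
    Only meaningful when sum(cycle) == 0; otherwise the energy level diverges."""
    assert sum(cycle) == 0
    levels = []
    e = 0
    for w in prefix + cycle * repeats:
        e += w
        levels.append(e)
    return sum(levels) / len(levels)

prefix = [3]             # initial charge of the energy store
cycle = [2, -1, -1]      # energy goes up 2, falls back twice: zero net weight
sim = average_energy(prefix, cycle)

# closed form: base level after the prefix plus the mean level within one cycle
base = sum(prefix)
partial, e = [], 0
for w in cycle:
    e += w
    partial.append(base + e)
mean_cycle = sum(partial) / len(partial)
print(sim, mean_cycle)   # the simulated value converges to the closed form
```

Here the cycle visits energy levels 5, 4, 3 above the empty store, so the average-energy of this play is 4; the simulated value approaches it as the number of cycle repetitions grows.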
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong … approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
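The gap between the barycenter approach and a proper mean on the rotation group can be made concrete: the entrywise mean of rotation matrices generally leaves SO(3), and projecting it back with an SVD yields the chordal mean. A minimal sketch (my own illustration, not the author's algorithm; the test rotations are arbitrary z-axis rotations):

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def project_to_so3(M):
    """Closest rotation matrix to M in the Frobenius norm, via SVD."""
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    return U @ D @ Vt

Rs = [rot_z(t) for t in (0.1, 0.5, 0.9)]
M = sum(Rs) / len(Rs)                 # plain barycenter: NOT a rotation matrix
R = project_to_so3(M)                 # chordal mean: a genuine rotation

print(np.allclose(M @ M.T, np.eye(3)))   # False: the barycenter left the group
print(np.allclose(R @ R.T, np.eye(3)))   # True
print(np.arctan2(R[1, 0], R[0, 0]))      # recovered mean angle, here 0.5
```

For rotations about a common axis with angles placed symmetrically around 0.5 rad, the projected mean recovers exactly 0.5 rad, while the raw barycenter has singular values below 1.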
The average size of ordered binary subgraphs
van Leeuwen, J.; Hartel, Pieter H.
To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
Average nuclear surface properties
International Nuclear Information System (INIS)
Groote, H. von.
1979-01-01
The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case in which there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which then was fitted to experimental masses. (orig.)
Americans' Average Radiation Exposure
International Nuclear Information System (INIS)
2000-01-01
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Sullivan, Sharon G.; Grabois, Andrew; Greco, Albert N.
2003-01-01
Includes six reports related to book trade statistics, including prices of U.S. and foreign materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and numbers of books and other media reviewed by major reviewing publications. (LRW)
Taghdiri, Foad; Chung, Jonathan; Irwin, Samantha; Multani, Namita; Tarazi, Apameh; Ebraheem, Ahmed; Khodadadi, Mozghan; Goswami, Ruma; Wennberg, Richard; Mikulis, David; Green, Robin; Davis, Karen; Tator, Charles; Eizenman, Moshe; Tartaglia, Maria Carmela
2018-03-01
The aim of this study was to examine the potential utility of a self-paced saccadic eye movement as a marker of post-concussion syndrome (PCS) and monitoring the recovery from PCS. Fifty-nine persistently symptomatic participants with at least two concussions performed the self-paced saccade (SPS) task. We evaluated the relationships between the number of SPSs and 1) number of self-reported concussion symptoms, and 2) integrity of major white matter (WM) tracts (as measured by fractional anisotropy [FA] and mean diffusivity) that are directly or indirectly involved in saccadic eye movements and often affected by concussion. These tracts included the uncinate fasciculus (UF), cingulum (Cg) and its three subcomponents (subgenual, retrosplenial, and parahippocampal), superior longitudinal fasciculus, and corpus callosum. Mediation analyses were carried out to examine whether specific WM tracts (left UF and left subgenual Cg) mediated the relationship between the number of SPSs and 1) interval from last concussion or 2) total number of self-reported symptoms. The number of SPSs was negatively correlated with the total number of self-reported symptoms (r = -0.419, p = 0.026). The number of SPSs were positively correlated with FA of left UF and left Cg (r = 0.421, p = 0.013 and r = 0.452, p = 0.008; respectively). FA of the subgenual subcomponent of the left Cg partially mediated the relationship between the total number of symptoms and the number of SPSs, while FA of the left UF mediated the relationship between interval from last concussion and the number of SPSs. In conclusion, SPS testing as a fast and objective assessment may reflect symptom burden in patients with PCS. In addition, since the number of SPSs is associated with the integrity of some WM tracts, it may be useful as a diagnostic biomarker in patients with PCS.
Chokshi, Falgun H; Kang, Jian; Kundu, Suprateek; Castillo, Mauricio
Our purpose was to determine if associations exist between title characteristics and citation numbers in Radiology, the American Journal of Roentgenology (AJR), and the American Journal of Neuroradiology (AJNR). This retrospective study is Institutional Review Board exempt. We searched Web of Science for all original research and review articles in Radiology, AJR, and AJNR between 2006 and 2012 and tabulated the number of words in the title, presence of a colon symbol, and presence of an acronym. We used a Poisson regression model to evaluate the association between number of citations and title characteristics. We then used the Wald test to detect pairwise differences in the effect of title characteristics on number of citations among the 3 journals. Between 2006 and 2012, Radiology published 2662, AJR 3998, and AJNR 2581 original research and review articles. There was a citation number increase per additional title word of 1.6% for AJNR and 2.6% for AJR, and a decrease of 0.8% for Radiology (all P < .05). Presence of a colon was associated with citation increases for AJNR (16%), Radiology (14%), and AJR (7.4%). A title acronym was associated with citation increases for AJNR (10%), Radiology (14%), and AJR (13.3%), all P < .05. Title characteristics are thus associated with citation numbers in Radiology, AJR, and AJNR. Copyright © 2016 Elsevier Inc. All rights reserved.
Connery, Robert H., Ed.
The purpose of the conference was to bring together educational leaders, corporation executives, and spokesmen for minority groups to examine problems in higher education. The papers include: "The Urban Crisis," by Robert C. Wood, and Harriet A. Zuckerman; "Minority Groups," by Charles V. Hamilton; "The Community and the Campus," by Franklin H.…
Atelsek, Frank J.; Gomberg, Irene L.
A survey was initiated at the request of the U.S. Office of Education and the Energy Task Force to: (1) measure the increase in energy expenditures since the OPEC oil embargo of 1973-74; (2) assess changes in energy consumption over a two-year period; and (3) examine some of the specific conservation practices of higher education institutions.…
Dougherty, Kevin J.; Natow, Rebecca S.
2010-01-01
This study analyzes changes over time in long-lasting state performance funding systems for higher education. It addresses two research questions: First, in what ways have long-lasting systems changed over time in funding levels, indicators used to allocate funds, and measures used for those indicators? Second, what political actors, actions, and…
Improving consensus structure by eliminating averaging artifacts
Directory of Open Access Journals (Sweden)
KC Dukka B
2009-03-01
Abstract. Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structures becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structures (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which…
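The averaging artifact that motivates the paper is easy to demonstrate: naively averaging the coordinates of two rigidly rotated copies of the same chain shortens every bond. A toy illustration (my own; the three-atom geometry and the 1.5 Å bond length are invented for the demo):

```python
import numpy as np

def bond_lengths(coords):
    """Distances between consecutive atoms along the chain."""
    return np.linalg.norm(np.diff(coords, axis=0), axis=1)

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# a straight three-atom "chain" with 1.5 A bonds
chain = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])

# two members of an "ensemble": the same chain, swung +/- 30 degrees about atom 0
a = chain @ rot_z(+np.pi / 6).T
b = chain @ rot_z(-np.pi / 6).T
avg = (a + b) / 2                     # naive coordinate averaging

print(bond_lengths(a))                # [1.5, 1.5]: each member is physical
print(bond_lengths(avg))              # ~1.30 each: unphysically short bonds
```

The averaged bond length is exactly 1.5 cos(30°) ≈ 1.30 Å here; a refinement step like the one the paper describes would restore physical bond lengths while staying close to the averaged coordinates.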
The difference between alternative averages
Directory of Open Access Journals (Sweden)
James Vaupel
2012-09-01
BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
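The stated identity can be verified directly: with weighting functions w1 and w2 and ratio r = w2/w1, the difference of the two weighted averages equals Cov(x, r)/E[r], where covariance and expectation are taken under the normalized w1 weights. A short numerical check (my own sketch; the data are arbitrary random draws):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)                # the variable (e.g., age-specific rates)
w1 = rng.uniform(0.5, 2.0, size=1000)    # first weighting function
w2 = rng.uniform(0.5, 2.0, size=1000)    # alternative weighting function

A1 = np.sum(w1 * x) / np.sum(w1)         # average under weights w1
A2 = np.sum(w2 * x) / np.sum(w2)         # average under weights w2

# identity: A2 - A1 = Cov_1(x, r) / E_1[r], with r = w2/w1 and
# Cov_1, E_1 taken under the normalized w1 weights
p1 = w1 / np.sum(w1)
r = w2 / w1
Er = np.sum(p1 * r)
cov = np.sum(p1 * x * r) - np.sum(p1 * x) * Er
print(np.isclose(A2 - A1, cov / Er))     # True: the identity is exact
```

The identity is exact, not asymptotic: substituting r = w2/w1 into Cov_1(x, r)/E_1[r] collapses algebraically to A2 − A1.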
Function reconstruction from noisy local averages
International Nuclear Information System (INIS)
Chen Yu; Huang Jianguo; Han Weimin
2008-01-01
A regularization method is proposed for function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.
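A minimal instance of this kind of problem (my own construction; the paper's exact operator, norms, and parameter strategies may differ) is Tikhonov-regularized recovery of a function on a 1D grid from noisy moving-window averages:

```python
import numpy as np

def local_average_matrix(n, k):
    """Operator mapping f on n grid points to its k-point local averages."""
    m = n - k + 1
    A = np.zeros((m, n))
    for i in range(m):
        A[i, i:i + k] = 1.0 / k
    return A

n, k = 100, 7
xs = np.linspace(0, 1, n)
f_true = np.sin(2 * np.pi * xs)

rng = np.random.default_rng(0)
A = local_average_matrix(n, k)
b = A @ f_true + 0.01 * rng.standard_normal(A.shape[0])   # noisy local averages

# second-difference smoothness penalty L; Tikhonov normal equations:
# (A^T A + lam * L^T L) f = A^T b
L = np.diff(np.eye(n), n=2, axis=0)
lam = 1e-3
f_rec = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)

err = np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)
print(f"relative error: {err:.3f}")
```

Without the penalty term the normal equations are singular (A alone has a nontrivial nullspace), which is exactly why some regularization is needed; lam trades data fit against smoothness.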
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type…
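A typical rule of the kind these studies examine is the moving average crossover: hold the asset while a short-window average sits above a long-window average. A sketch on synthetic prices (my own; the window lengths and the random-walk price model are arbitrary choices, not the paper's setup):

```python
import numpy as np

def moving_average(x, window):
    """Trailing simple moving average; the first window-1 entries are NaN."""
    out = np.full_like(x, np.nan, dtype=float)
    c = np.cumsum(np.insert(x, 0, 0.0))
    out[window - 1:] = (c[window:] - c[:-window]) / window
    return out

rng = np.random.default_rng(42)
prices = 100 + np.cumsum(rng.normal(0.1, 1.0, 500))   # drifting random walk

short = moving_average(prices, 5)
long_ = moving_average(prices, 50)

# rule: hold the asset whenever the short MA exceeds the long MA
position = (short > long_).astype(float)   # NaN comparisons give False -> flat
returns = np.diff(prices)
strategy_pnl = np.sum(position[:-1] * returns)
print(f"buy-and-hold P&L: {prices[-1] - prices[0]:.1f}, "
      f"MA-rule P&L: {strategy_pnl:.1f}")
```

The position at each step uses only past prices (trailing averages), so the rule is implementable; whether it outperforms buy-and-hold depends on the price process.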
Neocortical glial cell numbers in human brains
DEFF Research Database (Denmark)
Pelvig, D.P.; Pakkenberg, H.; Stark, A.K.
2008-01-01
Stereological cell counting was applied to post-mortem neocortices of human brains from 31 normal individuals, age 18-93 years, 18 females (average age 65 years, range 18-93) and 13 males (average age 57 years, range 19-87). The cells were differentiated into astrocytes, oligodendrocytes, microglia and neurons, and counting was done in each of the four lobes. The study showed that the different subpopulations of glial cells behave differently as a function of age: the number of oligodendrocytes showed a significant 27% decrease over adult life and a strong correlation to the total number of neurons, while the total astrocyte number is constant through life; finally, males have a 28% higher number of neocortical glial cells and a 19% higher neocortical neuron number than females. The overall total number of neocortical neurons and glial cells was 49.3 billion in females and 65.2 billion in males.
How to average logarithmic retrievals?
Directory of Open Access Journals (Sweden)
B. Funke
2012-04-01
Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance rather than retrievals of the abundance itself is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
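The core of the linear-versus-logarithmic bias is Jensen's inequality, which a short simulation makes concrete (our toy numbers, not the paper's simulator): exponentiating the mean of the log-abundances systematically underestimates the linear mean of a skewed distribution.

```python
# Linear vs logarithmic averaging of lognormally distributed abundances.
import math, random

random.seed(1)
samples = [random.lognormvariate(0.0, 0.8) for _ in range(20000)]

linear_mean = sum(samples) / len(samples)
log_mean = math.exp(sum(math.log(s) for s in samples) / len(samples))

# For a lognormal with sigma = 0.8 the true linear mean is exp(sigma^2 / 2),
# about 1.38, while the geometric (log) mean is exp(0) = 1: a bias well
# above the ten percent level mentioned in the abstract.
print(linear_mean, log_mean)
```

The larger the natural variability (here `sigma`), the larger the gap between the two averages, matching the rule of thumb quoted above.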
Lagrangian averaging with geodesic mean.
Oliver, Marcel
2017-11-01
This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
Averaging in spherically symmetric cosmology
International Nuclear Information System (INIS)
Coley, A. A.; Pelavas, N.
2007-01-01
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.
Averaging models: parameters estimation with the R-Average procedure
Directory of Open Access Journals (Sweden)
S. Noventa
2010-01-01
Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
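The averaging model at the heart of this procedure predicts responses as weight-normalized sums, R = (w1*s1 + w2*s2)/(w1 + w2). A bare-bones sketch of parameter estimation (illustrative only; R-Average itself uses a more elaborate model-selection procedure, and all numbers below are ours) recovers the relative weight w2/w1 from noiseless synthetic responses by grid search:

```python
import itertools

s1_levels = [2.0, 5.0, 8.0]
s2_levels = [1.0, 4.0, 9.0]
true_w = (1.0, 3.0)  # attribute 2 weighted three times as much as attribute 1

def avg_model(s1, s2, w1, w2):
    """Averaging model of Information Integration Theory."""
    return (w1 * s1 + w2 * s2) / (w1 + w2)

# Synthetic responses for a full 3x3 factorial design.
data = {(a, b): avg_model(a, b, *true_w)
        for a, b in itertools.product(s1_levels, s2_levels)}

# Grid search over the weight ratio w2/w1 (w1 fixed at 1, an identifiability
# convention: only relative weights matter in the averaging model).
best = min((sum((avg_model(a, b, 1.0, ratio) - r) ** 2
                for (a, b), r in data.items()), ratio)
           for ratio in [i / 10 for i in range(1, 101)])
print(best[1])  # recovered weight ratio
```

Note that only the ratio of weights is identifiable, which is one reason model selection over parameter subsets matters in the full method.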
Improved averaging for non-null interferometry
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time-varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large- or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large-area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large- and small-area phase defects. It identifies and rejects phase maps containing large-area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to tune the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
Evaluations of average level spacings
International Nuclear Information System (INIS)
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables
Ergodic averages via dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.
Average subentropy, coherence and entanglement of random mixed quantum states
Energy Technology Data Exchange (ETDEWEB)
Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)
2017-02-15
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
Exploiting scale dependence in cosmological averaging
International Nuclear Information System (INIS)
Mattsson, Teppo; Ronkainen, Maria
2008-01-01
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity-induced illusion.
High average power supercontinuum sources
Indian Academy of Sciences (India)
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.
Model averaging, optimal inference and habit formation
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2014-06-01
Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
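The weighting scheme described above can be sketched with a toy two-model example (all numbers and the Gaussian likelihoods are our assumptions): each model's prediction is weighted by its posterior probability, which is proportional to prior times evidence.

```python
# Minimal Bayesian model averaging: two Gaussian models for one datum.
import math

def gauss(x, mu, sigma):
    """Gaussian density, here playing the role of model evidence p(x | m)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

x_obs = 1.0
# Model A: sharp prediction near 0; model B: broad prediction near 2.
# The broad model is implicitly penalised for complexity, since its
# probability mass is spread over many possible observations.
models = {"A": (0.0, 0.5), "B": (2.0, 3.0)}
prior = {"A": 0.5, "B": 0.5}

evidence = {m: prior[m] * gauss(x_obs, mu, s) for m, (mu, s) in models.items()}
total = sum(evidence.values())
posterior = {m: e / total for m, e in evidence.items()}

# BMA prediction: posterior-weighted average of the models' predictions.
prediction = sum(posterior[m] * models[m][0] for m in models)
print(posterior, prediction)
```

The prediction falls between the two models' means, shaded toward whichever model the datum favours, which is the accuracy/complexity trade-off the abstract describes.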
Nakasone, Shoko; Mimaki, Sachiyo; Ichikawa, Tomohiro; Aokage, Keiju; Miyoshi, Tomohiro; Sugano, Masato; Kojima, Motohiro; Fujii, Satoshi; Kuwata, Takeshi; Ochiai, Atsushi; Tsuboi, Masahiro; Goto, Koichi; Tsuchihara, Katsuya; Ishii, Genichiro
2018-05-01
Podoplanin-positive cancer-associated fibroblasts (CAFs) play an essential role in tumor progression. However, it is still unclear whether specific genomic alterations of cancer cells are required to recruit podoplanin-positive CAFs. The aim of this study was to investigate the relationship between the mutation status of lung adenocarcinoma cells and the presence of podoplanin-positive CAFs. Ninety-seven lung adenocarcinomas for which whole exome sequencing data were available were enrolled. First, we analyzed the clinicopathological features of the cases, and then, evaluated the relationship between genetic features of cancer cells (major driver mutations and the number of single nucleotide variants, SNVs) and the presence of podoplanin-positive CAFs. The presence of podoplanin-positive CAFs was associated with smoking history, solid predominant subtype, and lymph node metastasis. We could not find any significant correlations between major genetic mutations (EGFR, KRAS, TP53, MET, ERBB2, BRAF, and PIK3CA) in cancer cells and the presence of podoplanin-positive CAFs. However, cases with podoplanin-positive CAFs had a significantly higher number of SNVs in cancer cells than the podoplanin-negative CAF cases (median 84 vs 37, respectively; p = 0.001). This was also detected in a non-smoker subgroup (p = 0.037). Multivariate analyses revealed that the number of SNVs in cancer cells was the only statistically significant independent predictor for the presence of podoplanin-positive CAFs (p = 0.044). In lung adenocarcinoma, the presence of podoplanin-positive CAFs was associated with higher numbers of SNVs in cancer cells, suggesting a relationship between accumulations of SNVs in cancer cells and the generation of a tumor-promoting microenvironment.
Independence, Odd Girth, and Average Degree
DEFF Research Database (Denmark)
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.
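The bound can be sanity-checked by brute force on a small example (our construction): the Petersen graph is connected and triangle-free with n = 10 and m = 15, so the result guarantees an independent set of size at least (4n − m − 1)/7 = 24/7 > 3.

```python
from itertools import combinations

# Petersen graph: outer 5-cycle, spokes, inner pentagram.
edges = ([(i, (i + 1) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])

def independence_number(n, edges):
    """Largest independent set size, by exhaustive search (fine for n = 10)."""
    for size in range(n, 0, -1):
        for s in combinations(range(n), size):
            sset = set(s)
            if all(not (u in sset and v in sset) for u, v in edges):
                return size
    return 0

alpha = independence_number(10, edges)
print(alpha)
```

The computed independence number is 4, which indeed meets the guaranteed lower bound of ⌈24/7⌉ = 4.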
When good = better than average
Directory of Open Access Journals (Sweden)
Don A. Moore
2007-10-01
Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.
Autoregressive Moving Average Graph Filtering
Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert
2016-01-01
One of the cornerstones of the field of signal processing on graphs is the graph filter, a direct analogue of classical filters but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
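A first-order ARMA graph filter can be sketched as a fixed-point iteration (our notation and example graph, not the authors' exact recursion): iterating y ← c·M·y + b·x converges, when |c| times the spectral radius of the graph shift M is below one, to the rational graph frequency response b/(1 − c·λ) applied to the input signal x.

```python
def matvec(M, v):
    """Plain dense matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

# A small symmetric graph shift operator (triangle graph, scaled so the
# iteration converges); chosen purely for illustration.
M = [[0.0, 0.25, 0.25],
     [0.25, 0.0, 0.25],
     [0.25, 0.25, 0.0]]
x = [1.0, 0.0, -1.0]   # input graph signal
b, c = 1.0, 0.5

y = [0.0, 0.0, 0.0]
for _ in range(100):
    My = matvec(M, y)
    y = [c * My[i] + b * x[i] for i in range(3)]

# At the fixed point, y satisfies (I - c*M) y = b*x.
resid = max(abs(y[i] - c * matvec(M, y)[i] - b * x[i]) for i in range(3))
print(y, resid)
```

Because each iteration only needs a graph shift (multiplication by M), the recursion is naturally distributable: every node updates using only its neighbours' values.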
Averaging Robertson-Walker cosmologies
International Nuclear Information System (INIS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-01-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
Directory of Open Access Journals (Sweden)
Martha Luevano
Full Text Available Adoptive natural killer (NK) cell therapy relies on the acquisition of large numbers of NK cells that are cytotoxic but not exhausted. NK cell differentiation from hematopoietic stem cells (HSC) has become an alluring option for NK cell therapy, with umbilical cord blood (UCB) and mobilized peripheral blood (PBCD34(+)) being the most accessible HSC sources, as collection procedures are less invasive. In this study we compared the capacity of frozen or freshly isolated UCB hematopoietic stem cells (CBCD34(+)) and frozen PBCD34(+) to generate NK cells in vitro. By modifying a previously published protocol, we showed that frozen CBCD34(+) cultures generated higher NK cell numbers without loss of function compared to fresh CBCD34(+) cultures. NK cells generated from CBCD34(+) and PBCD34(+) expressed low levels of killer-cell immunoglobulin-like receptors but high levels of activating receptors and of the myeloid marker CD33. However, blocking studies showed that CD33 expression did not impact the functions of the generated cells. CBCD34(+)-NK cells exhibited increased capacity to secrete IFN-γ and kill K562 in vitro and in vivo as compared to PBCD34(+)-NK cells. Moreover, K562 killing by the generated NK cells could be further enhanced by IL-12 stimulation. Our data indicate that the use of frozen CBCD34(+) for the production of NK cells in vitro results in higher cell numbers than PBCD34(+), without jeopardizing their functionality, rendering them suitable for NK cell immunotherapy. The results presented here provide an optimal strategy to generate NK cells in vitro for immunotherapy that exhibit enhanced effector function when compared to alternate sources of HSC.
Sedimentological regimes for turbidity currents: Depth-averaged theory
Halsey, Thomas C.; Kumar, Amit; Perillo, Mauricio M.
2017-07-01
Turbidity currents are one of the most significant means by which sediment is moved from the continents into the deep ocean; their properties are interesting both as elements of the global sediment cycle and due to their role in contributing to the formation of deep water oil and gas reservoirs. One of the simplest models of the dynamics of turbidity current flow was introduced three decades ago, and is based on depth-averaging of the fluid mechanical equations governing the turbulent gravity-driven flow of relatively dilute turbidity currents. We examine the sedimentological regimes of a simplified version of this model, focusing on the role of the Richardson number Ri [dimensionless inertia] and Rouse number Ro [dimensionless sedimentation velocity] in determining whether a current is net depositional or net erosional. We find that for large Rouse numbers, the currents are strongly net depositional due to the disappearance of local equilibria between erosion and deposition. At lower Rouse numbers, the Richardson number also plays a role in determining the degree of erosion versus deposition. The currents become more erosive at lower values of the product Ro × Ri, due to the effect of clear water entrainment. At higher values of this product, the turbulence becomes insufficient to maintain the sediment in suspension, as first pointed out by Knapp and Bagnold. We speculate on the potential for two-layer solutions in this insufficiently turbulent regime, which would comprise substantial bedload flow with an overlying turbidity current.
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS
International Nuclear Information System (INIS)
2005-01-01
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average powers of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and the interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.
Topological quantization of ensemble averages
International Nuclear Information System (INIS)
Prodan, Emil
2009-01-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states
Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics, which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to different extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of the FTDA, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction of rotating machinery.
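For readers unfamiliar with the baseline, classic TDA itself is easy to sketch (toy signal and parameters are ours): cut the record into N full periods and average them sample by sample, which attenuates non-synchronous noise by roughly the square root of N while preserving the periodic part.

```python
import math, random

random.seed(2)
P, N = 50, 64                      # samples per period, number of periods
clean = [math.sin(2 * math.pi * k / P) for k in range(P)]
signal = [clean[k % P] + random.gauss(0, 0.5) for k in range(P * N)]

# Classic time domain averaging over N synchronous periods.
tda = [sum(signal[r * P + k] for r in range(N)) / N for k in range(P)]

raw_err = max(abs(signal[k] - clean[k]) for k in range(P))
tda_err = max(abs(tda[k] - clean[k]) for k in range(P))
print(raw_err, tda_err)  # the averaged error is much smaller than the raw one
```

The PCE discussed in the abstract arises when P does not land exactly on an integer number of samples; the FTDA's contribution is to avoid that error by reconstructing in the continuous time domain.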
Beta-energy averaging and beta spectra
International Nuclear Information System (INIS)
Stamatelatos, M.G.; England, T.R.
1976-07-01
A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2018-02-01
This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
Determining average path length and average trapping time on generalized dual dendrimer
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases: the trap placed on a central node, and the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on trapping efficiency.
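The average path length used here is the mean shortest-path distance over all node pairs. As a generic illustration (not the paper's dendrimer networks), it can be computed by breadth-first search from every node:

```python
from collections import deque

def average_path_length(adj):
    """Mean shortest-path length over all ordered node pairs, via BFS from
    each node; for an undirected graph this equals the unordered-pair mean."""
    nodes = list(adj)
    total, pairs = 0, 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs

# A 4-node cycle: each node is at distance 1, 1, 2 from the others -> APL = 4/3.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(average_path_length(cycle4))  # → 1.333...
```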
Multistage parallel-serial time averaging filters
International Nuclear Information System (INIS)
Theodosiou, G.E.
1980-01-01
Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter. The main advantages of such a filter over a serial one are much lower electronic gate jitter and time delay for the same amount of total time-uncertainty reduction. (orig.)
Average Nuclear properties based on statistical model
International Nuclear Information System (INIS)
El-Jaick, L.J.
1974-01-01
The gross properties of nuclei were investigated with a statistical model, for systems with equal and with different numbers of protons and neutrons treated separately, the Coulomb energy being considered in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula generalized to compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit to the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
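The stated value 620160/8! is easy to evaluate, and comparing it with the information-theoretic lower bound log2(8!) shows how close the optimal sorting tree comes to that bound:

```python
from math import factorial, log2

# From the abstract: the minimum total of leaf depths over all 8! orderings.
min_total_depth = 620160
avg_depth = min_total_depth / factorial(8)  # 620160 / 40320
info_bound = log2(factorial(8))             # log2 of the number of orderings

print(avg_depth)   # → 15.380952...
print(info_bound)  # → 15.299208...
```

So the minimum average depth exceeds the entropy bound by only about 0.08 comparisons.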
The average Indian female nose.
Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh
2011-12-01
This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdős Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdős numbers in mathematical coauthorship. We also show the utility of our approach in devising a ratings scheme that we apply to the data from the Netflix prize, and find a significant improvement using our method over a baseline.
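The building block of such a measure is the weighted harmonic mean, which, unlike the arithmetic mean, is dominated by the smallest contributions — which is why harmonic averaging emphasizes "close" connections. A minimal sketch of that building block only (the full recursive GEN construction of the paper is not reproduced here):

```python
def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / x).
    Small values dominate the result, so one short/strong link keeps the
    'closeness' high even when other links are weak."""
    assert len(values) == len(weights) and all(v > 0 for v in values)
    return sum(weights) / sum(w / v for v, w in zip(values, weights))

# Two neighbors at "distance" 1 and 100: the harmonic mean stays near the
# closer one, unlike the arithmetic mean (50.5).
print(weighted_harmonic_mean([1.0, 100.0], [1.0, 1.0]))  # → 1.9801...
```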
Increasing average period lengths by switching of robust chaos maps in finite precision
Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.
2008-12-01
Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits than simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
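The finite-precision period length is straightforward to measure empirically: iterate a map while rounding the state, and count iterations until a state repeats. The sketch below uses the standard logistic and tent maps with a simple state-dependent switch as a crude stand-in for the paper's Robust Chaos maps and its switching schemes (all choices here are illustrative assumptions):

```python
def logistic(x):  # classic logistic map at r = 4 (chaotic on [0, 1])
    return 4.0 * x * (1.0 - x)

def tent(x):      # tent map (also chaotic on [0, 1])
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def switched(x):
    """Switch between the two maps based on the current state -- a toy
    stand-in for the chaotic switching studied in the paper."""
    return logistic(x) if x < 0.5 else tent(x)

def period_length(step, x0, precision=4, max_iter=100000):
    """Iterate `step`, rounding states to `precision` decimal digits to mimic
    finite precision; return the cycle length once a state repeats.
    With 10**precision + 1 possible states, a repeat is guaranteed."""
    seen = {}
    x = round(x0, precision)
    for i in range(max_iter):
        if x in seen:
            return i - seen[x]
        seen[x] = i
        x = round(step(x), precision)
    return max_iter

for f in (logistic, tent, switched):
    print(f.__name__, period_length(f, 0.123456))
```

Running this for several seeds and precisions is a simple way to compare average period lengths with and without switching.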
A time averaged background compensator for Geiger-Mueller counters
International Nuclear Information System (INIS)
Bhattacharya, R.C.; Ghosh, P.K.
1983-01-01
The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel providing time averaged compensation. The method suits portable instruments. (orig.)
DEFF Research Database (Denmark)
Jørgensen, Claus Bjørn; Suetens, Sigrid; Tyran, Jean-Robert
We investigate the “law of small numbers” using a unique panel data set on lotto gambling. Because we can track individual players over time, we can measure how they react to outcomes of recent lotto drawings, and can therefore test whether they behave as if they believe they can predict lotto numbers based on recent drawings. While most players pick the same set of numbers week after week without regard to the numbers drawn or anything else, we find that those who do change act on average in the way predicted by the law of small numbers as formalized in recent behavioral theory. In particular, on average they move away from numbers that have recently been drawn, as suggested by the “gambler’s fallacy”, and move toward numbers that are on streak, i.e. have been drawn several weeks in a row, consistent with the “hot hand fallacy”.
Sullivan, Sharon G.; Barr, Catherine; Grabois, Andrew
2002-01-01
Includes six articles that report on prices of U.S. and foreign published materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and review media statistics. (LRW)
2010-07-01
... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
Bennett, Ruth, Ed.; And Others
An introduction to the Hupa number system is provided in this workbook, one in a series of numerous materials developed to promote the use of the Hupa language. The book is written in English with Hupa terms used only for the names of numbers. The opening pages present the numbers from 1-10, giving the numeral, the Hupa word, the English word, and…
Indian Academy of Sciences (India)
Admin
Keywords: triangular number, figurate number, rangoli, Brahmagupta–Pell equation, Jacobi triple product identity. Figure 1: the first four triangular numbers. Anuradha S Garge completed her PhD from Pune University in 2008 under the supervision of Prof. S A Katre. Her research interests include K-theory and number theory.
Directory of Open Access Journals (Sweden)
Schwarzweller Christoph
2015-02-01
In this article we introduce Proth numbers and prove two theorems on such numbers being prime [3]. We also give revised versions of Pocklington’s theorem and of the Legendre symbol. Finally, we prove Pepin’s theorem and that the fifth Fermat number is not prime.
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
Chaotic Universe, Friedmannian on the average 2
Energy Technology Data Exchange (ETDEWEB)
Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij
1980-11-01
The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of a Universe in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained on the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.
FEL system with homogeneous average output
Energy Technology Data Exchange (ETDEWEB)
Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph
2018-01-16
A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy to produce distinct and independently controlled (by the choice of phase offset) phase-energy correlations, or chirps, on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy-recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56 selected to compress all three bunch trains at the FEL, with higher-order terms managed.
Mendonça, J. Ricardo G.
2012-01-01
We define a new class of numbers based on the first occurrence of certain patterns of zeros and ones in the expansion of irrational numbers in a given base, and call them Sagan numbers, since they were first mentioned, in a special case, by the North American astronomer Carl E. Sagan in his science-fiction novel "Contact." Sagan numbers hold connections with a wealth of mathematical ideas. We describe some properties of the newly defined numbers and indicate directions for further amusement.
Averaging of nonlinearity-managed pulses
International Nuclear Information System (INIS)
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-01-01
We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons
Neocortical glial cell numbers in human brains.
Pelvig, D P; Pakkenberg, H; Stark, A K; Pakkenberg, B
2008-11-01
Stereological cell counting was applied to post-mortem neocortices of human brains from 31 normal individuals, age 18-93 years: 18 females (average age 65 years, range 18-93) and 13 males (average age 57 years, range 19-87). The cells were differentiated into astrocytes, oligodendrocytes, microglia and neurons, and counting was done in each of the four lobes. The study showed that the different subpopulations of glial cells behave differently as a function of age: the number of oligodendrocytes showed a significant 27% decrease over adult life and a strong correlation to the total number of neurons, while the total astrocyte number is constant through life; finally, males have a 28% higher number of neocortical glial cells and a 19% higher neocortical neuron number than females. The overall total number of neocortical neurons and glial cells was 49.3 billion in females and 65.2 billion in males, a difference of 24% with a high biological variance. These numbers can serve as reference values in quantitative studies of the human neocortex.
Petersen, T Kyle
2015-01-01
This text presents the Eulerian numbers in the context of modern enumerative, algebraic, and geometric combinatorics. The book first studies Eulerian numbers from a purely combinatorial point of view, then embarks on a tour of how these numbers arise in the study of hyperplane arrangements, polytopes, and simplicial complexes. Some topics include a thorough discussion of gamma-nonnegativity and real-rootedness for Eulerian polynomials, as well as the weak order and the shard intersection order of the symmetric group. The book also includes a parallel story of Catalan combinatorics, wherein the Eulerian numbers are replaced with Narayana numbers. Again there is a progression from combinatorics to geometry, including discussion of the associahedron and the lattice of noncrossing partitions. The final chapters discuss how both the Eulerian and Narayana numbers have analogues in any finite Coxeter group, with many of the same enumerative and geometric properties. There are four supplemental chapters throughout, ...
DEFF Research Database (Denmark)
Suetens, Sigrid; Galbo-Jørgensen, Claus B.; Tyran, Jean-Robert Karl
2016-01-01
We investigate the ‘law of small numbers’ using a data set on lotto gambling that allows us to measure players’ reactions to draws. While most players pick the same set of numbers week after week, we find that those who do change react on average as predicted by the law of small numbers as formalized in recent behavioral theory. In particular, players tend to bet less on numbers that have been drawn in the preceding week, as suggested by the ‘gambler’s fallacy’, and bet more on a number if it was frequently drawn in the recent past, consistent with the ‘hot-hand fallacy’.
Indian Academy of Sciences (India)
Transfinite Numbers. What is Infinity? S M Srivastava. In a series of revolutionary articles written during the last quarter of the nineteenth century, the great German mathematician Georg Cantor removed the age-old mistrust of infinity and created an exceptionally beautiful and useful theory of transfinite numbers. This is.
African Journals Online (AJOL)
Kunle Amuwo: Higher Education Transformation: A Paradigm Shift in South Africa?
Ji, Caleb; Khovanova, Tanya; Park, Robin; Song, Angela
2015-01-01
In this paper, we consider a game played on a rectangular $m \times n$ gridded chocolate bar. Each move, a player breaks the bar along a grid line. Each move after that consists of taking any piece of chocolate and breaking it again along existing grid lines, until just $mn$ individual squares remain. This paper enumerates the number of ways to break an $m \times n$ bar, which we call chocolate numbers, and introduces four new sequences related to these numbers. Using various techniques, we p...
Andrews, George E
1994-01-01
Although mathematics majors are usually conversant with number theory by the time they have completed a course in abstract algebra, other undergraduates, especially those in education and the liberal arts, often need a more basic introduction to the topic. In this book the author solves the problem of maintaining the interest of students at both levels by offering a combinatorial approach to elementary number theory. In studying number theory from such a perspective, mathematics majors are spared repetition and provided with new insights, while other students benefit from the consequent simpl
Barnes, John
2016-01-01
In this intriguing book, John Barnes takes us on a journey through aspects of numbers much as he took us on a geometrical journey in Gems of Geometry. Similarly originating from a series of lectures for adult students at Reading and Oxford University, this book touches a variety of amusing and fascinating topics regarding numbers and their uses both ancient and modern. The author intrigues and challenges his audience with both fundamental number topics such as prime numbers and cryptography, and themes of daily needs and pleasures such as counting one's assets, keeping track of time, and enjoying music. Puzzles and exercises at the end of each lecture offer additional inspiration, and numerous illustrations accompany the reader. Furthermore, a number of appendices provide in-depth insights into diverse topics such as Pascal’s triangle, the Rubik cube, Mersenne’s curious keyboards, and many others. A theme running through is the thought of what is our favourite number. Written in an engaging and witty sty...
Average and local structure of α-CuI by configurational averaging
International Nuclear Information System (INIS)
Mohn, Chris E; Stoelen, Svein
2007-01-01
Configurational Boltzmann averaging together with density functional theory is used to study in detail the average and local structure of superionic α-CuI. We find that the coppers are spread out, with peaks in the atom density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance for understanding the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion pathways is expected due to the large variation in the local motifs
Number names and number understanding
DEFF Research Database (Denmark)
Ejersbo, Lisser Rye; Misfeldt, Morten
2014-01-01
This paper concerns the results from the first year of a three-year research project on the relationship between Danish number names and their corresponding digits in the canonical base-10 system. The project aims to develop a system to help students’ understanding of the base-10 system, since the Danish number names are more complicated than those in other languages. Keywords: research project in grades 0 and 1 in a Danish school, base-10 system, two-digit number names, semiotic and cognitive perspectives.
The concept of average LET values determination
International Nuclear Information System (INIS)
Makarewicz, M.
1981-01-01
A concept for determining average LET (linear energy transfer) values, i.e. ordinary moments of LET in the absorbed-dose distribution vs. LET, for ionizing radiation of any kind and any spectrum (even unknown ones), is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one obtains coefficients of the expression which can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on the LET of the radiation it is not necessary to know the full dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)
Directory of Open Access Journals (Sweden)
Theodore M. Porter
2012-12-01
The struggle over cure-rate measures in nineteenth-century asylums provides an exemplary instance of how, when used for official assessments of institutions, these numbers become sites of contestation. The evasion of goals and corruption of measures tends to make these numbers “funny” in the sense of becoming dishonest, while the mismatch between boring, technical appearances and cunning backstage manipulations supplies dark humor. The dangers are evident in recent efforts to decentralize the functions of governments and corporations using incentives based on quantified targets.
Murty, M Ram
2014-01-01
This book provides an introduction to the topic of transcendental numbers for upper-level undergraduate and graduate students. The text is constructed to support a full course on the subject, including descriptions of both relevant theorems and their applications. While the first part of the book focuses on introducing key concepts, the second part presents more complex material, including applications of Baker’s theorem, Schanuel’s conjecture, and Schneider’s theorem. These later chapters may be of interest to researchers interested in examining the relationship between transcendence and L-functions. Readers of this text should possess basic knowledge of complex analysis and elementary algebraic number theory.
Salecker-Wigner-Peres clock and average tunneling times
International Nuclear Information System (INIS)
Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.
2011-01-01
The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).
Hendrickson, Robert M.
This chapter reports 1982 cases involving aspects of higher education. Interesting cases noted dealt with the federal government's authority to regulate state employees' retirement and raised the questions of whether Title IX covers employment, whether financial aid makes a college a program under Title IX, and whether sex segregated mortality…
Indian Academy of Sciences (India)
this is a characteristic difference between finite and infinite sets and created an immensely useful branch of mathematics based on this idea which had a great impact on the whole of mathematics. For example, the question of what is a number (finite or infinite) is almost a philosophical one. However Cantor's work turned it ...
Limits on hypothesizing new quantum numbers
International Nuclear Information System (INIS)
Goldstein, G.R.; Moravcsik, M.J.
1986-01-01
According to a recent theorem, for a general quantum-mechanical system undergoing a process, one can tell from measurements on this system whether or not it is characterized by a quantum number, the existence of which is unknown to the observer, even though the detecting equipment used by the observer is unable to distinguish among the various possible values of the ''secret'' quantum number and hence always averages over them. The present paper deals with situations in which this averaging is avoided and hence the ''secret'' quantum number remains ''secret.'' This occurs when a new quantum number is hypothesized in such a way that all the past measurements pertain to the system with one and the same value of the ''secret'' quantum number, or when the new quantum number is related to the old ones by a specific dynamical model providing a one-to-one correspondence. In the first of these cases, however, the one and the same state of the ''secret'' quantum number needs to be a nondegenerate one. If it is degenerate, the theorem can again be applied. This last feature provides a tool for experimentally testing symmetry breaking and the reestablishment of symmetries in asymptotic regions. The situation is illustrated on historical examples like isospin and strangeness, as well as on some contemporary schemes involving spaces of higher dimensionality
Directory of Open Access Journals (Sweden)
Jonathan H. Soslow
2017-12-01
Duchenne muscular dystrophy (DMD) is an X-linked disorder that leads to cardiac and skeletal myopathy. The complex immune activation in boys with DMD is incompletely understood. To better understand the contribution of the immune system to the progression of DMD, we performed a systematic characterization of immune cell subpopulations obtained from peripheral blood of DMD subjects and control donors. We found that the number of CD8 cells expressing CD26 (also known as adenosine deaminase complexing protein 2) was increased in DMD subjects compared to controls. No differences, however, were found in the levels of circulating factors associated with pro-inflammatory activation of CD8/CD26 cells, such as tumor necrosis factor-α (TNFα), granzyme B, and interferon-γ (IFNγ). The number of CD8/CD26 cells correlated directly with quantitative muscle testing (QMT) in DMD subjects. Since CD26 mediates binding of adenosine deaminase (ADA) to the T cell surface, we tested the ADA-binding capacity of CD8/CD26 cells and the activity of bound ADA. We found that mononuclear cells (MNC) obtained from DMD subjects with an increased number of CD8/CD26 T cells had a greater capacity to bind ADA. In addition, these MNC demonstrated increased hydrolytic deamination of adenosine to inosine. Altogether, our data demonstrated that (1) an increased number of circulating CD8/CD26 T cells is associated with preservation of muscle strength in DMD subjects, and (2) CD8/CD26 T cells from DMD subjects mediated degradation of adenosine by adenosine deaminase. These results support a role for T cells in slowing the decline in skeletal muscle function, and a need for further investigation into the contribution of CD8/CD26 T cells in the regulation of chronic inflammation associated with DMD.
Delineation of facial archetypes by 3d averaging.
Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G
2004-10-01
The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A 3D surface scanner Fiore and its software were used to acquire the 3D scans of the faces while 3D Rugle3 and locally-developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups; European and Japanese and from children with three previous genetic disorders; Williams syndrome, achondroplasia and Sotos syndrome as well as the normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face averaging techniques there was not any warping or filling in the spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have a great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
Averaging for solitons with nonlinearity management
International Nuclear Information System (INIS)
Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.
2003-01-01
We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate very good agreement between solutions of the averaged and full equations.
DSCOVR Magnetometer Level 2 One Minute Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data
DSCOVR Magnetometer Level 2 One Second Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data
Spacetime averaging of exotic singularity universes
International Nuclear Information System (INIS)
Dabrowski, Mariusz P.
2011-01-01
Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
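The spacetime average invoked above can be made concrete. A standard convention (assumed here as an illustration, not quoted from the paper) averages a scalar f, such as a curvature invariant, over a four-volume V with the covariant measure:

```latex
% Spacetime average of a scalar f over a four-volume V,
% g = det(g_{\mu\nu}) the metric determinant:
\langle f \rangle_{V}
  = \frac{\int_{V} f \,\sqrt{-g}\,\mathrm{d}^{4}x}
         {\int_{V} \sqrt{-g}\,\mathrm{d}^{4}x}
```

On this convention, a singularity is "strong" in the sense above when the average diverges, and "weak" when the average vanishes or stays finite.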
NOAA Average Annual Salinity (3-Zone)
California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
40 CFR 76.11 - Emissions averaging.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...
Determinants of College Grade Point Averages
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
Average cross sections for the 252Cf neutron spectrum
International Nuclear Information System (INIS)
Dezso, Z.; Csikai, J.
1977-01-01
A number of average cross sections have been measured for 252Cf neutrons in (n,γ), (n,p), (n,2n) and (n,α) reactions by the activation method, and for fission by a fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb. The data as a function of target neutron number increase up to about N=60, with minima near closed shells. The values lie between 0.3 mb and 113 mb. These cross sections decrease significantly with increasing threshold energy. The values are below 20 mb. The data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Zsup(4/3)/A are shown. The results obtained are summarized in tables
A high speed digital signal averager for pulsed NMR
International Nuclear Information System (INIS)
Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.
1978-01-01
A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
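The advantage of "stable averaging" is that the stored value is a calibrated mean after every sweep, rather than a raw accumulating sum that must be rescaled at the end. A minimal per-channel sketch (illustrative only, not the instrument's actual algorithm):

```python
def stable_average(samples):
    """Running ('stable') average: after the n-th sweep the stored value
    is already the calibrated mean of all n sweeps seen so far."""
    avg = 0.0
    for n, x in enumerate(samples, start=1):
        avg += (x - avg) / n  # A_n = A_{n-1} + (x_n - A_{n-1}) / n
    return avg

# Three successive sweeps of one channel; the display would read the
# running mean after each sweep (10.0, then 12.0, then 14.0).
print(stable_average([10.0, 14.0, 18.0]))  # 14.0
```

The same update rule is applied independently to each of the 256 channels, so the display never needs a separate normalization pass.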
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-06-04
We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
Crystallographic extraction and averaging of data from small image areas
Perkins, GA; Downing, KH; Glaeser, RM
The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that
Understanding coastal morphodynamic patterns from depth-averaged sediment concentration
Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.
This review highlights the important role of the depth-averaged sediment concentration (DASC) to understand the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of decision tree is bounded from below by the entropy of probability distribution (with a multiplier 1/log₂ k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
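The entropy lower bound described above is straightforward to compute. A sketch (illustrative, not taken from the book), where `probs` is the probability distribution over the cases to be distinguished and `k` is the number of values each attribute can take:

```python
import math

def entropy_lower_bound(probs, k=2):
    """Lower bound on the minimum average depth of a decision tree:
    H(p) / log2(k) for a problem over a k-valued information system."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(k)

# Uniform distribution over 8 equally likely cases, binary attributes:
# at least 3 questions are needed on average.
print(entropy_lower_bound([1 / 8] * 8))  # 3.0
```

For problems with a complete set of attributes, the chapter's result says the true minimum average depth exceeds this bound by at most one.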
Small Bandwidth Asymptotics for Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...
Trend of Average Wages as Indicator of Hypothetical Money Illusion
Directory of Open Access Journals (Sweden)
Julian Daszkowski
2010-06-01
Full Text Available Before 1998, the definition of wages in Poland did not include any social security contribution. The changed definition produced a higher level of reported wages but was expected not to influence take-home pay. Nevertheless, after a short period, the trend of average wages returned to its previous line. This effect is explained in terms of money illusion.
40 CFR 63.652 - Emissions averaging provisions.
2010-07-01
... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...
Data Point Averaging for Computational Fluid Dynamics Data
Norman, Jr., David (Inventor)
2016-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
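The core averaging step of the method above can be sketched as follows. The data layout and the membership tests are hypothetical, standing in for whatever geometry defines the sub-areas in practice:

```python
def subarea_averages(points, subareas):
    """Average a fluid-flow parameter over each sub-area of a surface.

    points:   dict mapping point id -> (x, y, value), e.g. CFD heat-flux
              samples on the surface (hypothetical layout).
    subareas: dict mapping sub-area name -> membership test (x, y) -> bool.
    Returns a dict of per-sub-area averages (None if a sub-area is empty)."""
    result = {}
    for name, contains in subareas.items():
        values = [v for (x, y, v) in points.values() if contains(x, y)]
        result[name] = sum(values) / len(values) if values else None
    return result

# Three hypothetical CFD sample points and two sub-areas of the surface.
points = {1: (0.1, 0.1, 300.0), 2: (0.2, 0.3, 320.0), 3: (0.9, 0.8, 500.0)}
subareas = {"leading_edge": lambda x, y: x < 0.5,
            "trailing_edge": lambda x, y: x >= 0.5}
print(subarea_averages(points, subareas))
# {'leading_edge': 310.0, 'trailing_edge': 500.0}
```

Each sub-area's value is then a single representative parameter for the subsequent aerodynamic heating analysis, as the abstract describes.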
Computation of the bounce-average code
International Nuclear Information System (INIS)
Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.
1977-01-01
The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
Behera, Nirbhay Kumar
Lattice QCD predicts that at extreme temperature and energy density, QCD matter will undergo a phase transition from hadronic matter to partonic matter, called the QGP. One of the fundamental goals of heavy-ion collision experiments is to map the QCD phase diagram as a function of temperature (T) and baryo-chemical potential ($\mu_{B}$). There are many proposed experimental signatures of the QGP, and fluctuation studies are regarded as a sensitive tool among them. It has been proposed that fluctuations of conserved quantities like net-charge and net-proton number can be used to map the QCD phase diagram. The mean ($\mu$), sigma ($\sigma$), skewness (S) and kurtosis ($\kappa$) of the net-charge and net-proton distributions are believed to be sensitive probes in fluctuation analysis. It has been argued that critical phenomena are signaled by an increase and divergence of the correlation length. The $n^{th}$ order higher moments (cumulants, $c_{n}$) depend on the correlation length $\xi$ as $c_{n}\sim\xi^{2.5n-3}$. At LHC energy, the...
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic
Should the average tax rate be marginalized?
Czech Academy of Sciences Publication Activity Database
Feldman, N. E.; Katuščák, Peter
-, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Fokin, M V
2013-01-01
State Budgetary Educational Institution of Higher Professional Education "I.M. Sechenov First Moscow State Medical University" of the Ministry of Health Care and Social Development, Moscow, Russian Federation. Assessing health risks from air pollution by emissions from industrial facilities without accounting for the average annual background level of air pollution does not meet sanitary legislation. However, the Russian Federal Service for Hydrometeorology and Environmental Monitoring issues official certificates only for the limited number of areas covered by full-program observations at stationary points. Questions of accounting for the average background air pollution in the evaluation of health risks from exposure to emissions from industrial facilities are considered.
Average Bandwidth Allocation Model of WFQ
Directory of Open Access Journals (Sweden)
Tomáš Balogh
2012-01-01
Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We prove the model outcome with examples and simulation results using NS2 simulator.
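The iterative character of such a calculation can be illustrated with a weighted max-min allocation: a flow never receives more than its input rate, and capacity left over by lightly loaded flows is re-shared among the others in proportion to their weights. This is a simplified sketch of WFQ's long-run bandwidth shares, not the authors' exact model:

```python
def wfq_average_bandwidth(link_speed, weights, input_rates):
    """Weighted max-min allocation approximating the average bandwidth a
    WFQ scheduler gives each flow (simplified sketch, hypothetical model)."""
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))
    capacity = float(link_speed)
    while active:
        total_w = sum(weights[i] for i in active)
        satisfied = set()
        for i in active:
            share = capacity * weights[i] / total_w
            if input_rates[i] <= share:       # flow needs less than its share
                alloc[i] = float(input_rates[i])
                satisfied.add(i)
        if not satisfied:                     # remaining flows are all bottlenecked
            for i in active:
                alloc[i] = capacity * weights[i] / total_w
            break
        capacity -= sum(alloc[i] for i in satisfied)
        active -= satisfied
    return alloc

# 100 Mbit/s link, three flows with weights 1:2:2; the first flow only
# offers 10 Mbit/s, so its unused share is redistributed to the others.
print(wfq_average_bandwidth(100, [1, 2, 2], [10, 80, 80]))
# [10.0, 45.0, 45.0]
```

Each pass of the loop removes the flows already satisfied at the current fair share and re-divides the remaining capacity, which is why the calculation is iterative.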
Nonequilibrium statistical averages and thermo field dynamics
International Nuclear Information System (INIS)
Marinaro, A.; Scarpetta, Q.
1984-01-01
An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
Can the bivariate Hurst exponent be higher than an average of the separate Hurst exponents?
Czech Academy of Sciences Publication Activity Database
Krištoufek, Ladislav
2015-01-01
Roč. 431, č. 1 (2015), s. 124-127 ISSN 0378-4371 R&D Projects: GA ČR(CZ) GP14-11402P Institutional support: RVO:67985556 Keywords : Correlations * Power- law cross-correlations * Bivariate Hurst exponent * Spectrum coherence Subject RIV: AH - Economics Impact factor: 1.785, year: 2015 http://library.utia.cas.cz/separaty/2015/E/kristoufek-0452314.pdf
Determination of the average number of neutrons per fission event for californium-252
International Nuclear Information System (INIS)
Aleksandrov, B.M.; Belov, L.M.; Drapchinskij, L.V.
1982-01-01
By means of a separate determination of neutron yields and fission event rates, the value of ν̄(252Cf) has been measured for a series of new high-purity sources. The improved quality of the source active layers has reduced the error in determining the fission rate to 0.35%. The value obtained for ν̄(252Cf) is 3.747 ± 0.036. A description is given of the design and the parameters of a spherical manganese bath in which the work on refining the value of ν̄(252Cf) will be continued. (author)
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used in 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.
Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D
2018-04-19
The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) was collected for all adult trauma patients from July 2013 to June 2016. The ANN was constructed using temporal (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018 ± 0.002; validation r = 0.8899 ± 0.005; testing r = 0.8940 ± 0.006). We were able to successfully predict trauma and emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10), and a mean ISS score of 15.99 (actual: 13.12); see figure 3. This may prove useful for predicting trauma needs across the system and hospital administration when allocating limited resources. Level III STUDY TYPE: Prognostic/Epidemiological.
Asynchronous Gossip for Averaging and Spectral Ranking
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
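The classical pairwise gossip scheme that both variants build on is simple to state: at each step two randomly chosen nodes replace their values with the pair's mean, which preserves the network sum and drives every node toward the global average. A minimal sketch (node selection and round count are illustrative):

```python
import random

def gossip_average(values, rounds=2000, seed=1):
    """Classical pairwise gossip: each step averages two random nodes.
    The sum (hence the network average) is preserved at every step."""
    x = list(values)
    rng = random.Random(seed)
    n = len(x)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)
        m = (x[i] + x[j]) / 2
        x[i] = x[j] = m
    return x

# Four nodes starting far apart all converge to the global average 6.0.
state = gossip_average([0.0, 4.0, 8.0, 12.0])
print(state)
```

The paper's point is that the naive asynchronous version of this scheme can fail to converge to the desired average, which motivates the reinforcement-learning-based alternative.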
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
An approach to averaging digitized plantagram curves.
Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B
1994-07-01
The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
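The per-ray averaging step of the method can be sketched as follows, assuming each outline has already been aligned on the common axis and sampled at the same equiangular rays (the radial values below are hypothetical):

```python
def average_radial_curve(curves):
    """Average several aligned closed outlines sampled at the same
    equiangular rays: each curve is a list of radial distances from the
    ray centre; the average outline is the per-ray mean."""
    n = len(curves[0])
    assert all(len(c) == n for c in curves), "curves must share the ray count"
    return [sum(c[k] for c in curves) / len(curves) for k in range(n)]

# Three hypothetical outlines from one foot-length group, sampled at
# 4 rays (90 degrees apart) for brevity.
outlines = [[10.0, 12.0, 10.0, 8.0],
            [11.0, 13.0, 11.0, 9.0],
            [12.0, 14.0, 12.0, 10.0]]
print(average_radial_curve(outlines))  # [11.0, 13.0, 11.0, 9.0]
```

In the study, this averaging is performed within groups of foot lengths varying by ±2.25 mm, so that each group yields one representative plantar curve.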
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20(th) century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
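The trailing ten-year moving average used to smooth the economic misery index can be sketched as follows; the index values below are invented purely for illustration:

```python
def trailing_decade_average(series, year):
    """Average of the previous 10 years (year-10 .. year-1) of an annual
    series given as a dict year -> value; None if the window is incomplete."""
    window = [series.get(y) for y in range(year - 10, year)]
    if any(v is None for v in window):
        return None
    return sum(window) / 10.0

# Hypothetical misery index (inflation + unemployment), illustrative only.
misery = {y: 10.0 + (y % 3) for y in range(1930, 1941)}
print(trailing_decade_average(misery, 1940))  # 11.0
```

The paper correlates a series of such trailing averages with the yearly literary misery index, finding the best fit at an 11-year window.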
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Aperture averaging in strong oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
Receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence, thus to improve the system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses the small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.
Regional averaging and scaling in relativistic cosmology
International Nuclear Information System (INIS)
Buchert, Thomas; Carfora, Mauro
2002-01-01
Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m^B̄ + Ω̄_R^B̄ + Ω̄_Λ^B̄ + Ω̄_Q^B̄ = 1, where Ω̄_m^B̄, Ω̄_R^B̄ and Ω̄_Λ^B̄ correspond to the standard Friedmannian parameters, while Ω̄_Q^B̄ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Average-case analysis of numerical problems
2000-01-01
The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.
Grassmann Averages for Scalable Robust PCA
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
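The element-wise trimmed average at the heart of the TGA can be sketched in a few lines. This is a generic per-coordinate trimmed mean, not the authors' Grassmann-manifold formulation; the `trim` parameter name is illustrative:

```python
import numpy as np

def trimmed_average(X, trim=0.1):
    """Per-element trimmed mean over the sample axis: in each column,
    drop the lowest and highest `trim` fraction of values before
    averaging, so isolated pixel outliers cannot drag the estimate."""
    X = np.sort(np.asarray(X, dtype=float), axis=0)
    k = int(X.shape[0] * trim)  # number of values trimmed per tail
    return X[k:X.shape[0] - k].mean(axis=0)
```

With `trim=0` this reduces to the ordinary mean; as `trim` approaches 0.5 it approaches the per-element median, trading efficiency for robustness.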
Intuitive numbers guide decisions
Directory of Open Access Journals (Sweden)
Ellen Peters
2008-12-01
Full Text Available Measuring reaction times to number comparisons is thought to reveal a processing stage in elementary numerical cognition linked to internal, imprecise representations of number magnitudes. These intuitive representations of the mental number line have been demonstrated across species and human development but have been little explored in decision making. This paper develops and tests hypotheses about the influence of such evolutionarily ancient, intuitive numbers on human decisions. We demonstrate that individuals with more precise mental-number-line representations are higher in numeracy (number skills), consistent with previous research with children. Individuals with more precise representations (compared to those with less precise representations) also were more likely to choose larger, later amounts over smaller, immediate amounts, particularly with a larger proportional difference between the two monetary outcomes. In addition, they were more likely to choose an option with a larger proportional but smaller absolute difference compared to those with less precise representations. These results are consistent with intuitive number representations underlying: (a) perceived differences between numbers, (b) the extent to which proportional differences are weighed in decisions, and, ultimately, (c) the valuation of decision options. Human decision processes involving numbers important to health and financial matters may be rooted in elementary, biological processes shared with other species.
A Martian PFS average spectrum: Comparison with ISO SWS
Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.
2005-08-01
The evaluation of the planetary Fourier spectrometer (PFS) performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393.]. First, the average conditions of the Mars atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO₂ are compared, from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm⁻¹ for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshooting on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm⁻¹ or better. A large number of narrow features that remain to be identified are discovered.
A virtual pebble game to ensemble average graph rigidity.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2015-01-01
The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is an MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
Directory of Open Access Journals (Sweden)
Behar E.
2006-12-01
Full Text Available This article is divided into two parts. In the first part, the authors present a comparison of the major techniques for the measurement of the molecular weight of macromolecules. The bibliographic results are gathered in several tables. In the second part, a comparative ebulliometer for the measurement of the number average molecular weight (Mn) of heavy crude oil fractions is described. The high efficiency of the apparatus is demonstrated with a preliminary study of atmospheric distillation residues and resins. The measurement of molecular weights up to 2000 g/mol is possible in less than 4 hours with an uncertainty of about 2%. Cet article comprend deux parties. Dans la première, les auteurs présentent une comparaison entre les principales techniques de détermination de la masse molaire de macromolécules. Les résultats de l'étude bibliographique sont rassemblés dans plusieurs tableaux. La seconde partie décrit un ébulliomètre comparatif conçu pour la mesure de la masse molaire moyenne en nombre (Mn) des fractions lourdes des bruts. Une illustration de l'efficacité de cet appareil est indiquée avec l'étude préliminaire de résidus de distillation atmosphérique et de résines. En particulier, la mesure de masses molaires pouvant atteindre 2000 g/mol est possible en moins de 4 heures avec une incertitude expérimentale de l'ordre de 2 %.
Generalized Jackknife Estimators of Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...
Average beta measurement in EXTRAP T1
International Nuclear Information System (INIS)
Hedin, E.R.
1988-12-01
Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)
Bayesian Averaging is Well-Temperated
DEFF Research Database (Denmark)
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation is l...
Gibbs equilibrium averages and Bogolyubov measure
International Nuclear Information System (INIS)
Sankovich, D.P.
2011-01-01
Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
A singularity theorem based on spatial averages
Indian Academy of Sciences (India)
Journal of Physics (Indian Academy of Sciences), July 2007, pp. 31-47. Only a fragmentary snippet of this abstract survives: the paper presents a result which confirms, at least partially, a singularity theorem based on spatial averages; financial support under grants FIS2004-01626 and others is acknowledged.
Multiphase averaging of periodic soliton equations
International Nuclear Information System (INIS)
Forest, M.G.
1979-01-01
The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations
Essays on model averaging and political economics
Wang, W.
2013-01-01
This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple
2010-01-01
7 CFR § 1209.12 (2010-01-01): "On average." Agriculture Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements), Mushroom Promotion, Research, and Consumer Information Order, Definitions.
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ~50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Average Costs versus Net Present Value
E.A. van der Laan (Erwin); R.H. Teunter (Ruud)
2000-01-01
While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives
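The EOQ comparison above can be made concrete. Under the textbook assumptions (constant demand rate D, fixed ordering cost K, holding cost h per unit per unit time), the average-cost-optimal lot size is the classic square-root formula; a minimal sketch, not tied to the paper's specific NPV conditions:

```python
import math

def eoq_average_cost(q, K, D, h):
    """Average cost per unit time when ordering lots of size q:
    ordering cost K*D/q plus average holding cost h*q/2."""
    return K * D / q + h * q / 2

def eoq_optimal_lot(K, D, h):
    """Lot size minimizing the average cost: sqrt(2*K*D/h)."""
    return math.sqrt(2 * K * D / h)
```

An NPV analysis would instead discount each ordering and holding cash flow; per the abstract, the two frameworks agree only under certain conditions (e.g. in the limit of a small discount rate).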
Average beta-beating from random errors
Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department
2018-01-01
The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.
Reliability Estimates for Undergraduate Grade Point Average
Westrick, Paul A.
2017-01-01
Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…
Tendon surveillance requirements - average tendon force
International Nuclear Information System (INIS)
Fulton, J.F.
1982-01-01
Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
Nuclear fuel management via fuel quality factor averaging
International Nuclear Information System (INIS)
Mingle, J.O.
1978-01-01
The numerical procedure of prime number averaging is applied to the fuel quality factor distribution of once- and twice-burned fuel in order to evolve a fuel management scheme. The resulting fuel shuffling arrangement produces a near optimal flat power profile under both beginning-of-life and end-of-life conditions. The procedure is easily applied, requiring only the solution of linear algebraic equations. (author)
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
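The central idea, a smoothing window that widens for slower-migrating (lower-frequency) analytes, can be sketched as follows. The linear scaling of window size with migration time is an illustrative assumption, not the published algorithm, and all parameter names are hypothetical:

```python
import numpy as np

def adaptive_moving_average(signal, times, base_window_s, sampling_hz):
    """Moving average whose window grows with migration time, so early,
    sharp peaks are smoothed lightly and late, broad peaks more heavily."""
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    for i in range(len(signal)):
        # Window size (in samples) scales linearly with migration time.
        scale = times[i] / times[0]
        half = max(1, int(base_window_s * scale * sampling_hz / 2))
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out
```

Averaging over a window of n samples suppresses white noise by roughly sqrt(n), which is consistent with the larger SNR gains reported at the higher sampling frequency.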
Control by Numbers: New Managerialism and Ranking in Higher Education
Lynch, Kathleen
2015-01-01
This paper analyses the role of rankings as an instrument of new managerialism. It shows how rankings are reconstituting the purpose of universities, the role of academics and the definition of what it is to be a student. The paper opens by examining the forces that have facilitated the emergence of the ranking industry and the ideologies…
Yearly, seasonal and monthly daily average diffuse sky radiation models
International Nuclear Information System (INIS)
Kassem, A.S.; Mujahid, A.M.; Turner, D.W.
1993-01-01
A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and 0.092 standard error of estimate. The data were also analyzed for a seasonal dependence and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed which has a 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs
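A regression of this kind, daily diffuse radiation H_d against daily global radiation H, can be fit generically as below; the paper's actual regressors and functional form may differ, so this is only a least-squares sketch:

```python
import numpy as np

def fit_diffuse_model(H, H_d):
    """Least-squares fit of H_d ≈ a + b*H, returning the intercept a,
    slope b, and coefficient of determination R²."""
    b, a = np.polyfit(H, H_d, 1)  # polyfit returns highest power first
    resid = H_d - (a + b * H)
    r2 = 1.0 - np.sum(resid**2) / np.sum((H_d - H_d.mean())**2)
    return a, b, r2
```

The reported determination coefficients (0.81 to 0.94) are exactly the R² values such a fit returns, one per seasonal or monthly data subset.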
Accurate phenotyping: Reconciling approaches through Bayesian model averaging.
Directory of Open Access Journals (Sweden)
Carla Chia-Ming Chen
Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
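The core model-averaging step, weighting each model's clustering by its posterior model probability, can be sketched generically. This is a textbook BMA combination under equal model priors, not the paper's specific procedure; in the study the two membership matrices would come from latent class analysis and grade of membership:

```python
import numpy as np

def posterior_model_probs(log_evidences):
    """Posterior model probabilities under equal priors:
    w_m ∝ exp(log p(data | M_m)), computed with the max subtracted
    for numerical stability."""
    l = np.asarray(log_evidences, dtype=float)
    w = np.exp(l - l.max())
    return w / w.sum()

def bma_memberships(memberships, log_evidences):
    """Average per-subject class-membership matrices (one per model),
    weighted by the posterior model probabilities."""
    w = posterior_model_probs(log_evidences)
    return sum(wi * np.asarray(m, dtype=float)
               for wi, m in zip(w, memberships))
```

The averaged membership matrix then feeds the downstream linkage analysis in place of either single-model clustering.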
Statistics on exponential averaging of periodograms
Energy Technology Data Exchange (ETDEWEB)
Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
Statistics on exponential averaging of periodograms
International Nuclear Information System (INIS)
Peeters, T.T.J.M.; Ciftcioglu, Oe.
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
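The estimator described in the two records above, an exponential (first-order recursive) average of successive periodograms, can be sketched as follows. Rectangular windowing and non-overlapping segments are simplifying assumptions:

```python
import numpy as np

def exponential_avg_psd(x, seg_len, alpha):
    """PSD estimate updated as S_k = (1 - alpha)*S_{k-1} + alpha*P_k,
    where P_k is the periodogram of the k-th non-overlapping segment.
    Small alpha corresponds to a long averaging time constant."""
    psd = None
    for k in range(len(x) // seg_len):
        seg = x[k * seg_len:(k + 1) * seg_len]
        p = np.abs(np.fft.rfft(seg))**2 / seg_len  # one raw periodogram
        psd = p if psd is None else (1.0 - alpha) * psd + alpha * p
    return psd
```

For white noise each raw periodogram bin is approximately χ² distributed with 2 degrees of freedom, as the abstract assumes; a long time constant (small alpha) averages many such bins, driving the estimate's PDF toward a Gaussian.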
Averages, Areas and Volumes; Cambridge Conference on School Mathematics Feasibility Study No. 45.
Cambridge Conference on School Mathematics, Newton, MA.
Presented is an elementary approach to areas, volumes and other mathematical concepts usually treated in calculus. The approach is based on the idea of average and this concept is utilized throughout the report. In the beginning the average (arithmetic mean) of a set of numbers is considered and two properties of the average which often simplify…
Weighted estimates for the averaging integral operator
Czech Academy of Sciences Publication Activity Database
Opic, Bohumír; Rákosník, Jiří
2010-01-01
Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231
Average Transverse Momentum Quantities Approaching the Lightfront
Boer, Daniel
2015-01-01
In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
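The TAMSD itself is straightforward to compute from a single trajectory; a minimal one-dimensional sketch with a uniform time step:

```python
import numpy as np

def tamsd(traj, lag):
    """Time-averaged mean-square displacement at integer lag Δ:
    TAMSD(Δ) = (1/(N-Δ)) * Σ_k (x[k+Δ] - x[k])²."""
    x = np.asarray(traj, dtype=float)
    disp = x[lag:] - x[:-lag]
    return np.mean(disp**2)
```

For Brownian motion with diffusion coefficient D and time step dt the expectation is 2*D*lag*dt in one dimension; the value from any individual finite trajectory scatters around that mean, and it is exactly this scatter whose distribution the paper characterizes.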
Average configuration of the geomagnetic tail
International Nuclear Information System (INIS)
Fairfield, D.H.
1979-01-01
Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed
Unscrambling The "Average User" Of Habbo Hotel
Directory of Open Access Journals (Sweden)
Mikael Johnson
2007-01-01
Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
Changing mortality and average cohort life expectancy
Directory of Open Access Journals (Sweden)
Robert Schoen
2005-10-01
Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
New Nordic diet versus average Danish diet
DEFF Research Database (Denmark)
Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco
2016-01-01
and 3-hydroxybutanoic acid were related to a higher weight loss, while higher concentrations of salicylic, lactic and N-aspartic acids, and 1,5-anhydro-D-sorbitol were related to a lower weight loss. Specific gender- and seasonal differences were also observed. The study strongly indicates that healthy...... metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis, formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects higher levels of vaccenic acid...
Averaging processes in granular flows driven by gravity
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the more promising theoretical frames to analyse the two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However there are important statistical differences in dealing with the two applications. In the two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for the single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale, by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
Average time complexity of decision trees
Chikalov, Igor; Jain, Lakhmi C
2011-01-01
This monograph generalizes several known results on the topic and considers a number of fresh problems. Combinatorics, probability theory and complexity theory are used in the proofs, as well as concepts from discrete mathematics and computer science.
Exactly averaged equations for flow and transport in random media
International Nuclear Information System (INIS)
Shvidler, Mark; Karasaki, Kenzi
2001-01-01
It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood; for example, the convergence behavior and the accuracy of truncated perturbation series, while the calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact, general and sufficiently universal forms of averaged equations exist? If the answer is positive, there arises the problem of constructing these equations and analyzing them. There exist many publications related to these problems, oriented towards different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method of finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of conductivity, porosity and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases: 1. steady-state flow with sources in porous media with random conductivity; 2. transient flow with sources in compressible media with random conductivity and porosity; 3. non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversal isotropic, orthotropic) and we analyze the hypothesis on the structure of the non-local equations in the general case of stochastically homogeneous fields. (author)
Operator product expansion and its thermal average
Energy Technology Data Exchange (ETDEWEB)
Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)
1998-05-01
QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case of finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules when the temperature is not too low. (orig.) 7 refs.
Fluctuations of wavefunctions about their classical average
International Nuclear Information System (INIS)
Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H
2003-01-01
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Baseline-dependent averaging in radio interferometry
Wijnholds, S. J.; Willis, A. G.; Salvini, S.
2018-05-01
This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
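The decorrelation loss mentioned above can be illustrated with the standard sinc smearing factor for time-averaging a fringe rotating at a constant rate; this is a textbook interferometry relation used as a stand-in for the paper's closed-form expressions, and the fringe rates and averaging time below are made up:

```python
import math

# Averaging a fringe rotating at rate f (Hz) over time T (s) reduces the
# visibility amplitude by |sinc(f*T)| = |sin(pi*f*T) / (pi*f*T)|.
def decorrelation_loss(fringe_rate_hz, t_avg_s):
    x = math.pi * fringe_rate_hz * t_avg_s
    return 1.0 if x == 0 else abs(math.sin(x) / x)

# Short baselines have low fringe rates, so they tolerate longer averaging times:
loss_short = 1 - decorrelation_loss(0.01, 10.0)  # ~0.016 amplitude loss
loss_long = 1 - decorrelation_loss(0.05, 10.0)   # ~0.36 amplitude loss
```

This is why averaging time can be chosen per baseline: the same 10 s that is harmless on a short baseline is ruinous on a long one.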
Time-averaged MSD of Brownian motion
International Nuclear Information System (INIS)
Andreanov, Alexei; Grebenkov, Denis S
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
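A minimal numerical sketch of the TAMSD for a simulated Brownian trajectory (the diffusion coefficient, time step and lag are arbitrary choices; the paper's exact Laplace-transform results are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01   # time step (s), illustrative
D = 1.0     # diffusion coefficient, illustrative
n = 1000
# One Brownian trajectory: independent increments ~ N(0, 2*D*dt).
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), n))

def tamsd(traj, lag):
    """Time-averaged mean-square displacement at a given integer lag."""
    d = traj[lag:] - traj[:-lag]
    return np.mean(d ** 2)

# For Brownian motion, E[TAMSD(lag)] = 2*D*lag*dt; a single trajectory
# fluctuates around that value, which is the point of studying its distribution.
est_D = tamsd(x, 10) / (2 * dt * 10)
print(est_D)  # scatters around 1.0 from trajectory to trajectory
```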
Time-dependent angularly averaged inverse transport
International Nuclear Information System (INIS)
Bal, Guillaume; Jollivet, Alexandre
2009-01-01
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain
Bootstrapping Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis S.
2012-07-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
De Luca, G.; Magnus, J.R.
2011-01-01
In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
Calculating ensemble averaged descriptions of protein rigidity without sampling.
Directory of Open Access Journals (Sweden)
Luis C González
Full Text Available Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by its average. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.
Calculating ensemble averaged descriptions of protein rigidity without sampling.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2012-01-01
Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.
Bounding quantum gate error rate based on reported average fidelity
International Nuclear Information System (INIS)
Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C
2016-01-01
Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
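The trajectory averaging idea can be sketched on a toy Robbins-Monro recursion: the raw iterate is noisy, while the running average of the iterates settles on the root. The target function, step-size schedule and noise level below are invented for illustration and are not the SAMCMC setting of the paper:

```python
import random

random.seed(1)

# Robbins-Monro root finding for h(theta) = theta - 2, observed with noise,
# with trajectory (Polyak-Ruppert) averaging of the iterates.
theta = 0.0
running_sum = 0.0
n_iter = 20000
for t in range(1, n_iter + 1):
    noisy_h = (theta - 2.0) + random.gauss(0.0, 1.0)  # noisy observation of h
    theta -= (1.0 / t ** 0.7) * noisy_h               # slowly decaying step size
    running_sum += theta
theta_bar = running_sum / n_iter  # trajectory-averaged estimator, close to 2
```

The last iterate alone fluctuates with the current step size; the average smooths those fluctuations, which is the efficiency gain the paper establishes rigorously for SAMCMC.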
Asymptotic Time Averages and Frequency Distributions
Directory of Open Access Journals (Sweden)
Muhammad El-Taha
2016-01-01
Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
Averaging in the presence of sliding errors
International Nuclear Information System (INIS)
Yost, G.P.
1991-08-01
In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
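A hedged sketch of the bias and its iterative cure, assuming the simplest error model in which each experiment's reported uncertainty is proportional to its own measured value (the measurements are hypothetical):

```python
# Measurements x_i whose reported error is sigma_i = rel * x_i: the error scales
# with the measured value, not the true one, so low measurements claim small
# errors and drag a 1/sigma^2-weighted average downward.
rel = 0.1
xs = [9.0, 10.0, 11.0, 10.5, 9.5]  # hypothetical measurements of one quantity

def weighted_mean(values, sigmas):
    w = [1.0 / s ** 2 for s in sigmas]
    return sum(wi + 0.0 if False else wi * vi for wi, vi in zip(w, values)) / sum(w)

# Naive: each sigma from that experiment's own value (biased low).
naive = weighted_mean(xs, [rel * x for x in xs])

# Iterative fix: derive a common sigma from the average itself and repeat.
avg = naive
for _ in range(20):
    avg = weighted_mean(xs, [rel * avg] * len(xs))
```

Because every experiment gets the same error estimate once sigma is computed from the average, the reweighted result recovers the unbiased plain mean, while the naive weighting sits below it.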
Average of delta: a new quality control tool for clinical laboratories.
Jones, Graham R D
2016-01-01
Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
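A minimal sketch of the average-of-delta idea (the window size, result values and the size of the injected bias are invented; a real implementation would also need rules for which repeat-test pairs qualify, which this sketch omits):

```python
from collections import deque

def average_of_delta(pairs, k=10):
    """For each patient with a repeat test, take delta = current - previous
    result and yield the running mean of the last k deltas. A stable assay
    gives deltas centred on zero; a persistent shift flags an added bias."""
    window = deque(maxlen=k)
    for prev, curr in pairs:
        window.append(curr - prev)
        yield sum(window) / len(window)

# Simulated stream: stable assay, then a +2 unit bias appears mid-stream.
pairs = [(10.0, 10.1), (12.0, 11.9), (9.0, 9.05)] + [(10.0, 12.0)] * 10
signal = list(average_of_delta(pairs, k=5))
```

Once the window fills with post-shift deltas, the signal sits at the added bias, which is the detection behaviour the spreadsheet model quantifies.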
Mental health care and average happiness: strong effect in developed nations.
Touburg, Giorgio; Veenhoven, Ruut
2015-07-01
Mental disorder is a main cause of unhappiness in modern society and investment in mental health care is therefore likely to add to average happiness. This prediction was checked in a comparison of 143 nations around 2005. Absolute investment in mental health care was measured using the per capita number of psychiatrists and psychologists working in mental health care. Relative investment was measured using the share of mental health care in the total health budget. Average happiness in nations was measured with responses to survey questions about life-satisfaction. Average happiness appeared to be higher in countries that invest more in mental health care, both absolutely and relative to investment in somatic medicine. A data split by level of development shows that this difference exists only among developed nations. Among these nations the link between mental health care and happiness is quite strong, both in an absolute sense and compared to other known societal determinants of happiness. The correlation between happiness and share of mental health care in the total health budget is twice as strong as the correlation between happiness and size of the health budget. A causal effect is likely, but cannot be proved in this cross-sectional analysis.
Measuring average angular velocity with a smartphone magnetic field sensor
Pili, Unofre; Violanda, Renante
2018-02-01
The angular velocity of a spinning object is, by standard, measured using a device called a tachometer. However, directly using it in a classroom setting is likely to make the activity less instructive and less engaging. Indeed, some alternative classroom-suitable methods for measuring angular velocity have been presented. In this paper, we present a further alternative that is smartphone-based, making use of the real-time magnetic field (simply called B-field in what follows) data-gathering capability of the B-field sensor of the smartphone device as the timer for measuring the average rotational period and average angular velocity. The in-built B-field sensor in smartphones has already found a number of uses in undergraduate experimental physics. For instance, in elementary electrodynamics, it has been used to explore the well-known Biot-Savart law and in a measurement of the permeability of air.
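The timing analysis reduces to extracting peak times from the logged B-field and averaging the intervals; a sketch with made-up peak times, assuming one field maximum per revolution (e.g. a small magnet fixed to the rotating platform):

```python
import math

# Hypothetical times (s) at which the smartphone sensor logged a B-field maximum,
# one peak per revolution of the magnet past the sensor.
peak_times = [0.00, 0.52, 1.03, 1.55, 2.06]

periods = [b - a for a, b in zip(peak_times, peak_times[1:])]
avg_period = sum(periods) / len(periods)  # mean rotational period (s)
avg_omega = 2 * math.pi / avg_period      # average angular velocity (rad/s)
```

Averaging over several revolutions suppresses the timing jitter of any single peak, which is why the mean period rather than a single interval is used.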
High average power linear induction accelerator development
International Nuclear Information System (INIS)
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
[Quetelet, the average man and medical knowledge].
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
Angle-averaged Compton cross sections
International Nuclear Information System (INIS)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
Average Gait Differential Image Based Human Recognition
Directory of Open Access Journals (Sweden)
Jinyan Chen
2014-01-01
Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
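A compact sketch of the AGDI construction as described, accumulating absolute differences of adjacent binary silhouettes (the toy 2x2 frames are ours; real input would be segmented silhouette images from a gait sequence):

```python
import numpy as np

def agdi(silhouettes):
    """Average gait differential image: mean absolute difference between
    adjacent binary silhouette frames (each frame an HxW array in {0, 1})."""
    diffs = [np.abs(b.astype(float) - a.astype(float))
             for a, b in zip(silhouettes, silhouettes[1:])]
    return sum(diffs) / len(diffs)

# Toy example: a single "limb" pixel that toggles between two positions,
# so the moving pixels light up in the AGDI and the static ones stay dark.
f0 = np.array([[1, 0], [0, 0]])
f1 = np.array([[0, 1], [0, 0]])
img = agdi([f0, f1, f0])
```

Pixels that move between frames accumulate large values while static body regions stay near zero, which is how the AGDI keeps both kinetic and static cues in one feature image.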
Reynolds averaged simulation of unsteady separated flow
International Nuclear Information System (INIS)
Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.
2003-01-01
The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation
Angle-averaged Compton cross sections
Energy Technology Data Exchange (ETDEWEB)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-05-07
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
Stochastic Optimal Prediction with Application to Averaged Euler Equations
Energy Technology Data Exchange (ETDEWEB)
Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2017-04-24
Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.
Glycogen with short average chain length enhances bacterial durability
Wang, Liang; Wise, Michael J.
2011-09-01
Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability, that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call that a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology will be assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.
Competitiveness - higher education
Directory of Open Access Journals (Sweden)
Labas Istvan
2016-03-01
Full Text Available The European Union plays an important role in the areas of education and training alike. The member states are themselves responsible for organizing and operating their education and training systems, while EU policy aims to support member states' efforts and to find solutions to the common challenges that arise. Education is the key to making our future as sustainable as possible: a highly qualified workforce drives development, advancement, and innovation. Nowadays, the competitiveness of higher education institutions is increasingly valued in the national economy. In recent years, the operational frameworks of higher education systems have undergone a complete transformation. The number of applicants is continuously decreasing in some European countries, so only those institutions that can minimize the loss of students will "survive" this shortfall. In this process, the factors shaping the competitiveness of these budgetary institutions play an important role in survival. The more competitive a higher education institution is, the more likely students are to continue their studies there, and thus the greater its chance of future survival compared with institutions lagging behind in the competition. The aim of our treatise is to present the current situation and main data of EU higher education and to examine the performance of higher education: to what extent it fulfils the strategy for smart, sustainable and inclusive growth formulated in the framework of the Europe 2020 programme. The treatise is based on analysis of statistical data.
Industrial Applications of High Average Power FELS
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz) trains of ultrafast (pulsewidth ~ 1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
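In its simplest reading, the unconstrained variant amounts to integrating the negative average force along the selected coordinate; a one-dimensional synthetic sketch (harmonic toy case, not the paper's molecular systems):

```python
import numpy as np

# For a harmonic toy coordinate the mean force is <f> = -k*x, so the
# recovered free energy should be F(x) = 0.5*k*(x**2 - x0**2) with the
# zero of energy fixed at the left end of the grid.
k = 2.0
x = np.linspace(-1.0, 1.0, 201)           # discretized coordinate
mean_force = -k * x                        # average force at each grid point
increments = 0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(x)  # trapezoid rule
free_energy = -np.concatenate(([0.0], np.cumsum(increments)))       # F = -∫<f> dξ
```

Since the integrand is linear, the trapezoid rule is exact here up to floating-point rounding.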
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
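The baseline the paper improves on can be sketched in a few lines: standard pairwise gossip on a ring (not the geographic variant), with invented parameters. The sum is preserved at every step, so all nodes converge to the true average:

```python
import random

def gossip_average(values, n_rounds=50000, seed=0):
    """Standard pairwise gossip on a ring: at each step a random node
    averages its value with its right-hand neighbour; all values
    converge to the global average."""
    rng = random.Random(seed)
    x = [float(v) for v in values]
    n = len(x)
    for _ in range(n_rounds):
        i = rng.randrange(n)
        j = (i + 1) % n                     # ring topology: gossip with right neighbour
        x[i] = x[j] = 0.5 * (x[i] + x[j])   # pairwise averaging step
    return x

vals = gossip_average(range(10))
# every node is now close to the true average 4.5
```

The slow mixing of the ring is exactly what motivates exploiting geographic information in the paper.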
High-average-power solid state lasers
International Nuclear Information System (INIS)
Summers, M.A.
1989-01-01
In 1987, a broad-based, aggressive R&D program was initiated, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs; understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs; and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs
On spectral averages in nuclear spectroscopy
International Nuclear Information System (INIS)
Verbaarschot, J.J.M.
1982-01-01
In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)
Mean link versus average plaquette tadpoles in lattice NRQCD
Shakespeare, Norman H.; Trottier, Howard D.
1999-03-01
We compare mean-link and average-plaquette tadpole renormalization schemes in the context of the quarkonium hyperfine splittings in lattice NRQCD. Simulations are done for the three quarkonium systems cc̄, bc̄, and bb̄. The hyperfine splittings are computed both at leading and at next-to-leading order in the relativistic expansion. Results are obtained at a large number of lattice spacings. A number of features emerge, all of which favor tadpole renormalization using mean links. This includes much better scaling of the hyperfine splittings in the three quarkonium systems. We also find that relativistic corrections to the spin splittings are smaller with mean-link tadpoles, particularly for the cc̄ and bc̄ systems. We also see signs of a breakdown in the NRQCD expansion when the bare quark mass falls below about one in lattice units (with the bare quark masses turning out to be much larger with mean-link tadpoles).
Quantum gravity unification via transfinite arithmetic and geometrical averaging
International Nuclear Information System (INIS)
El Naschie, M.S.
2008-01-01
In E-Infinity theory, we have not only infinitely many dimensions but also infinitely many fundamental forces. However, due to the hierarchical structure of ε (∞) spacetime we have a finite expectation number for its dimensionality and likewise a finite expectation number for the corresponding interactions. Starting from the preceding fundamental principles and using the experimental findings as well as the theoretical value of the coupling constants of the electroweak and the strong forces, we present an extremely simple averaging procedure for determining the quantum gravity unification coupling constant with and without supersymmetry. The work draws heavily on previous results, in particular a paper by the Slovenian Prof. Marek-Crnjac [Marek-Crnjac L. On the unification of all fundamental forces in a fundamentally fuzzy Cantorian ε (∞) manifold and high energy physics. Chaos, Solitons and Fractals 2004;4:657-68].
Applications of ordered weighted averaging (OWA) operators in environmental problems
Directory of Open Access Journals (Sweden)
Carlos Llopis-Albert
2017-04-01
Full Text Available This paper presents an application of a prioritized weighted aggregation operator based on ordered weighted averaging (OWA) to deal with stakeholders' constructive participation in water resources projects. Stakeholders have different degrees of acceptance or preference regarding the measures and policies to be carried out, which lead to different environmental and socio-economic outcomes, and hence to different levels of stakeholder satisfaction. The methodology establishes a prioritization relationship among the stakeholders, whose preferences are aggregated by means of weights that depend on the satisfaction of the higher-priority policy maker. The methodology has been successfully applied to a Public Participation Project (PPP) in watershed management, thus obtaining efficient environmental measures in conflict resolution problems under actors' preference uncertainties.
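The core OWA aggregation step is small enough to sketch; the weight vector below is illustrative, not from the paper:

```python
def owa(values, weights):
    """Ordered weighted averaging: sort the inputs in descending order,
    then take the weighted sum against a fixed weight vector (weights
    are assumed non-negative and sum to 1)."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# With weights (0.5, 0.3, 0.2) the largest input always receives weight 0.5,
# regardless of which argument position it arrived in:
owa([2, 9, 5], [0.5, 0.3, 0.2])  # 0.5*9 + 0.3*5 + 0.2*2 = 6.4
```

Prioritized variants, as in the paper, additionally make the weights depend on higher-priority actors' satisfaction.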
Generalized Bernoulli-Hurwitz numbers and the universal Bernoulli numbers
International Nuclear Information System (INIS)
Ônishi, Yoshihiro
2011-01-01
The three fundamental properties of the Bernoulli numbers, namely, the von Staudt-Clausen theorem, von Staudt's second theorem, and Kummer's original congruence, are generalized to new numbers that we call generalized Bernoulli-Hurwitz numbers. These are coefficients in the power series expansion of a higher-genus algebraic function with respect to a suitable variable. Our generalization differs strongly from previous works. Indeed, the order of the power of the modulus prime in our Kummer-type congruences is exactly the same as in the trigonometric function case (namely, Kummer's own congruence for the original Bernoulli numbers), and as in the elliptic function case (namely, H. Lang's extension for the Hurwitz numbers). However, in other past results on higher-genus algebraic functions, the modulus was at most half of its value in these classical cases. This contrast is clarified by investigating the analogue of the three properties above for the universal Bernoulli numbers. Bibliography: 34 titles.
Higher Education in Scandinavia
DEFF Research Database (Denmark)
Nielsen, Jørgen Lerche; Andreasen, Lars Birch
2015-01-01
Higher education systems around the world have been undergoing fundamental changes through the last 50 years, from narrow, self-sustaining universities for the elite into mass universities, where new groups of students have been recruited and the number of students enrolled has increased dramatically. In adjusting to the role of being a mass educational institution, universities have been challenged on how to cope with external pressures, such as forces of globalization and international markets, increased national and international competition for students and research grants, increased… …an impact on the educational systems in Scandinavia, and what possible futures can be envisioned?
LeVeque, William J
2002-01-01
Classic two-part work now available in a single volume assumes no prior theoretical knowledge on reader's part and develops the subject fully. Volume I is a suitable first course text for advanced undergraduate and beginning graduate students. Volume II requires a much higher level of mathematical maturity, including a working knowledge of the theory of analytic functions. Contents range from chapters on binary quadratic forms to the Thue-Siegel-Roth Theorem and the Prime Number Theorem. Includes numerous problems and hints for their solutions. 1956 edition. Supplementary Reading. List of Symb
Czech Academy of Sciences Publication Activity Database
Brož, J.; Holubová, A.; Mužík, J.; Oulická, M.; Mužný, M.; Poláček, M.; Fiala, D.; Arsand, E.; Brabec, Marek; Kvapil, M.
2016-01-01
Roč. 18, Suppl. 1 (2016), A70-A70 ISSN 1520-9156. [ATTD 2016. International Conference on Advanced Technologies & Treatments for Diabetes /9./. 03.02.2016-06.02.2016, Milan] Institutional support: RVO:67985807 Subject RIV: BB - Applied Statistics, Operational Research
Number Sense on the Number Line
Woods, Dawn Marie; Ketterlin Geller, Leanne; Basaraba, Deni
2018-01-01
A strong foundation in early number concepts is critical for students' future success in mathematics. Research suggests that visual representations, like a number line, support students' development of number sense by helping them create a mental representation of the order and magnitude of numbers. In addition, explicitly sequencing instruction…
Average properties of bidisperse bubbly flows
Serrano-García, J. C.; Mendez-Díaz, S.; Zenit, R.
2018-03-01
Experiments were performed in a vertical channel to study the properties of a bubbly flow composed of two distinct bubble size species. Bubbles were produced using a capillary bank with tubes of two distinct inner diameters; the flow through each capillary size was controlled such that the proportion of large and small bubbles could be varied. Using water and water-glycerin mixtures, a wide range of Reynolds and Weber numbers was investigated. The gas volume fraction ranged between 0.5% and 6%. The mean bubble velocity of each species and the liquid velocity variance were measured and contrasted with monodisperse flows at equivalent gas volume fractions. We found that bidispersity can induce a reduction of the mean bubble velocity of the large species; for the small species, the bubble velocity can be increased, decreased, or remain unaffected depending on the flow conditions. The liquid velocity variance of the bidisperse flows is, in general, bounded by the values of the small and large monodisperse cases; interestingly, in some cases, the liquid velocity fluctuations can be larger than in either monodisperse case. A simple model for the liquid agitation in bidisperse flows is proposed, with good agreement with the experimental measurements.
Aarthi, G.; Ramachandra Reddy, G.
2018-03-01
In our paper, the impact of adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we incorporate aperture-averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving ASE performance compared with CIFR under moderate and strong turbulence regimes. The Coherent OWC system with ORA outperforms the other modulation schemes and achieves an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture-averaging effect we achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.
The Marketing of Higher Education.
Brooker, George; Noble, Michael
1985-01-01
Formal college and university marketing programs are challenging to develop and implement because of the complexity of the marketing mix, the perceived inappropriateness of a traditional marketing officer, the number of diverse groups with input, the uniqueness of higher education institutions, and the difficulty in identifying higher education…
International Nuclear Information System (INIS)
Ullo, J.J.
1977-08-01
The Harwell Boron Pile measurement of the average number of prompt neutrons emitted per fission, ν̄_p, of ²⁵²Cf was analyzed in detail by a Monte Carlo method. From the calculated energy dependence of the neutron detection efficiency a value of ν̄_p = 3.733 ± 0.022 was obtained. This value is 0.76 percent higher than the originally reported value of 3.705 ± 0.015. Possible causes for this increase are discussed. 3 figures, 6 tables
Richardson, Thomas M.
2014-01-01
We introduce the super Patalan numbers, a generalization of the super Catalan numbers in the sense of Gessel, and prove a number of properties analogous to those of the super Catalan numbers. The super Patalan numbers generalize the super Catalan numbers similarly to how the Patalan numbers generalize the Catalan numbers.
40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?
2010-07-01
... concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported... percent benzene). i = Individual batch of gasoline produced at the refinery or imported during the applicable averaging period. n = Total number of batches of gasoline produced at the refinery or imported...
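The averaging formula fragments above describe a volume-weighted mean over batches; a minimal sketch (illustrative variable names, not the regulatory text of 40 CFR 80.1238):

```python
def average_benzene(volumes, benzene_pcts):
    """Volume-weighted annual average benzene concentration:
    sum(V_i * B_i) / sum(V_i) over all batches i, where V_i is the
    batch volume and B_i its benzene content (volume percent)."""
    numerator = sum(v * b for v, b in zip(volumes, benzene_pcts))
    return numerator / sum(volumes)

average_benzene([1000.0, 3000.0], [0.5, 0.9])  # (500 + 2700) / 4000 = 0.8
```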
40 CFR 80.825 - How is the refinery or importer annual average toxics value determined?
2010-07-01
... volume of applicable gasoline produced or imported in batch i. Ti = The toxics value of batch i. n = The number of batches of gasoline produced or imported during the averaging period. i = Individual batch of gasoline produced or imported during the averaging period. (b) The calculation specified in paragraph (a...
A spatially-averaged mathematical model of kidney branching morphogenesis
Zubkov, V.S.; Combes, A.N.; Short, K.M.; Lefevre, J.; Hamilton, N.A.; Smyth, I.M.; Little, M.H.; Byrne, H.M.
2015-01-01
© 2015 Published by Elsevier Ltd. Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially-averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependant ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rate of tip cells, mesenchyme cells, or component cell population exit rate). Our results predict that the developing kidney responds differently to loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.
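The branching rules described above can be illustrated with a toy time-stepping sketch; all parameter values below are invented for illustration and are not taken from the paper:

```python
def simulate_branching(r_tip=0.08, r_mes=0.05, theta=200.0,
                       m_crit=50.0, dt=0.01, t_max=200.0):
    """Toy version of the model's rules: epithelium grows, mesenchyme
    shrinks, a symmetric branching event doubles the tip count when
    cells-per-tip crosses a threshold, and branching ceases once the
    mesenchyme falls below a critical size."""
    tips, epithelium, mesenchyme = 1, 100.0, 1000.0
    t = 0.0
    while t < t_max:
        epithelium += r_tip * epithelium * dt   # epithelial (tip) growth
        mesenchyme -= r_mes * mesenchyme * dt   # mesenchyme differentiating away
        if mesenchyme < m_crit:                 # cessation of branching
            break
        if epithelium / tips >= theta:          # threshold cells-per-tip
            tips *= 2                           # symmetric branching
        t += dt
    return tips

simulate_branching()
```

With symmetric doubling, the final branch count is always a power of two; sensitivity to r_tip versus r_mes can be probed by sweeping those arguments.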
To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space
International Nuclear Information System (INIS)
Khrennikov, Andrei
2007-01-01
We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams
International Nuclear Information System (INIS)
Cooling, M P; Humphrey, V F; Wilkens, V
2011-01-01
The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.
Nonlinearity management in higher dimensions
International Nuclear Information System (INIS)
Kevrekidis, P G; Pelinovsky, D E; Stefanov, A
2006-01-01
In the present paper, we revisit nonlinearity management of the time-periodic nonlinear Schroedinger equation and the related averaging procedure. By means of rigorous estimates, we show that the averaged nonlinear Schroedinger equation does not blow up in the higher-dimensional case so long as the corresponding solution remains smooth. In particular, we show that the H¹ norm remains bounded, in contrast with the usual blow-up mechanism for the focusing Schroedinger equation. This conclusion agrees with earlier works in the case of strong nonlinearity management but contradicts those in the case of weak nonlinearity management. The apparent discrepancy is explained by the divergence of the averaging procedure in the limit of weak nonlinearity management.
Quantitative metagenomic analyses based on average genome size normalization
DEFF Research Database (Denmark)
Frank, Jeremy Alexander; Sørensen, Søren Johannes
2011-01-01
provide not just a census of the community members but direct information on metabolic capabilities and potential interactions among community members. Here we introduce a method for the quantitative characterization and comparison of microbial communities based on the normalization of metagenomic data...... marine sources using both conventional small-subunit (SSU) rRNA gene analyses and our quantitative method to calculate the proportion of genomes in each sample that are capable of a particular metabolic trait. With both environments, to determine what proportion of each community they make up and how......). These analyses demonstrate how genome proportionality compares to SSU rRNA gene relative abundance and how factors such as average genome size and SSU rRNA gene copy number affect sampling probability and therefore both types of community analysis....
An Exponentially Weighted Moving Average Control Chart for Bernoulli Data
DEFF Research Database (Denmark)
Spliid, Henrik
2010-01-01
We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1-p). A novel exponentially weighted moving average (EWMA) control chart intended for surveillance of the probability of failure, p, is described. The chart is based on counting the number of non-failures produced between failures, in combination with a variance-stabilizing transformation. The distribution function of the transformation is given and its limit for small values of p is derived. Control of high-yield processes is discussed, and the chart is shown to perform very well in comparison with both the most common alternative EWMA chart and the CUSUM chart. The construction and the use of the proposed EWMA chart…
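A hypothetical sketch of the chart's idea: track an EWMA of a variance-stabilized transform of the run lengths between failures. The square-root stabilizer below is a common choice for count data, not necessarily the paper's transformation:

```python
import math

def ewma_geometric_chart(failure_seq, lam=0.1):
    """For each failure, take the count of non-failures since the previous
    failure, apply a sqrt variance-stabilizing transform, and fold it into
    an EWMA statistic z <- (1-lam)*z + lam*y."""
    stats, z, count = [], None, 0
    for failed in failure_seq:
        if failed:
            y = math.sqrt(count)                              # stabilized run length
            z = y if z is None else (1 - lam) * z + lam * y   # EWMA update
            stats.append(z)
            count = 0
        else:
            count += 1
    return stats

# A shrinking EWMA statistic signals that runs between failures are getting
# shorter, i.e. the failure probability p is increasing.
```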
Zonally averaged chemical-dynamical model of the lower thermosphere
International Nuclear Information System (INIS)
Kasting, J.F.; Roble, R.G.
1981-01-01
A zonally averaged numerical model of the thermosphere is used to examine the coupling between neutral composition (including N2, O2, and O), temperature, and winds at solstice for solar minimum conditions. The meridional circulation forced by solar heating results in a summer-to-winter flow, with a winter enhancement in atomic oxygen density that is a factor of about 1.8 greater than in the summer hemisphere at 160 km. The O2 and N2 variations are associated with a latitudinal gradient in total number density, which is required to achieve pressure balance in the presence of large zonal jets. Latitudinal profiles of OI (5577 Å) green-line emission intensity are calculated by using both the Chapman and Barth mechanisms. The composition of the lower thermosphere is shown to be strongly influenced by circulation patterns initiated in the stratosphere and lower mesosphere, below the lower boundary used in the model
Average size of random polygons with fixed knot topology.
Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo
2003-07-01
We have evaluated by numerical simulation the average size R(K) of random polygons of fixed knot topology K = 0, 3(1), 3(1)#4(1), and we have confirmed the scaling law R^2(K) ~ N^(2nu(K)) for the number N of polygonal nodes in a wide range, N = 100-2200. The best fit gives 2nu(K) approximately 1.11-1.16 with good fitting curves in the whole range of N. The estimate of 2nu(K) is consistent with the exponent of self-avoiding polygons. In a limited range of N (N ≳ 600), however, we have another fit with 2nu(K) approximately 1.01-1.07, which is close to the exponent of random polygons.
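A scaling law of the form R^2(K) ~ A N^(2nu(K)) is typically estimated by least squares in log-log coordinates. A minimal sketch, using synthetic data with a known exponent rather than simulation results:

```python
import math

def fit_power_law(Ns, R2s):
    """Least-squares fit of R^2 = A * N**(two_nu) in log-log coordinates.

    Returns (two_nu, log_A): the slope and intercept of the fitted line.
    """
    xs = [math.log(n) for n in Ns]
    ys = [math.log(r) for r in R2s]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return slope, ybar - slope * xbar

# Synthetic, noiseless data with a known exponent 2nu = 1.14
# (invented for illustration; not the paper's simulation data).
Ns = [100, 200, 400, 800, 1600]
R2s = [0.5 * n ** 1.14 for n in Ns]
two_nu, log_A = fit_power_law(Ns, R2s)
print(round(two_nu, 3))  # → 1.14
```

With real simulation data the fitted exponent depends on the N range included, which is exactly the crossover the abstract reports (1.11-1.16 over the full range versus 1.01-1.07 for N ≳ 600).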
Rheineck-Leyssius, A T; Kalkman, C J
1999-05-01
To determine the effect of a new signal processing technique (Oxismart, Nellcor, Inc., Pleasanton, CA) on the incidence of false pulse oximeter alarms in the operating room (OR). Prospective observational study. Nonuniversity hospital. 53 ASA physical status I, II, and III consecutive patients undergoing general anesthesia with tracheal intubation. In the OR we compared the number of alarms produced by a recently developed third-generation pulse oximeter (Nellcor Symphony N-3000) with the Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504). Three pulse oximeters were used simultaneously in each patient: a Nellcor pulse oximeter, a Criticare with the signal averaging time set at 3 seconds (Criticareaverage3s) and a similar unit with the signal averaging time set at 21 seconds (Criticareaverage21s). For each pulse oximeter, the number of false (artifact) alarms was counted. One false alarm was produced by the Nellcor (duration 55 sec) and one false alarm by the Criticareaverage21s monitor (5 sec). The incidence of false alarms was higher with Criticareaverage3s: in eight patients, Criticareaverage3s produced 20 false alarms. The pulse oximeter with Oxismart signal processing thus performed comparably to the Criticare monitor with the longer averaging time of 21 seconds.
The effects of average revenue regulation on electricity transmission investment and pricing
International Nuclear Information System (INIS)
Matsukawa, Isamu
2008-01-01
This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)
“Simpson’s paradox” as a manifestation of the properties of weighted average (part 2)
Zhekov, Encho
2012-01-01
The article proves that the so-called “Simpson's paradox” is a special case of the manifestation of the properties of the weighted average. It always comes down to comparing two weighted averages, where the average of the larger variables is less than that of the smaller. The article demonstrates one method for analyzing the relative change of magnitudes of the type S = Σ_{i=1}^{k} x_i y_i, which answers the question: what is the reason for the weighted average of a few variables with higher values to ...
“Simpson’s paradox” as a manifestation of the properties of weighted average (part 1)
Zhekov, Encho
2012-01-01
The article proves that the so-called “Simpson's paradox” is a special case of the manifestation of the properties of the weighted average. It always comes down to comparing two weighted averages, where the average of the larger variables is less than that of the smaller. The article demonstrates one method for analyzing the relative change of magnitudes of the type S = Σ_{i=1}^{k} x_i y_i, which answers the question: what is the reason for the weighted average of a few variables with higher values to be...
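The mechanism described in both parts, where each component average of one group exceeds that of another while its overall weighted average is lower, can be reproduced with a few lines of arithmetic. The numbers below are invented for illustration, and the helper normalizes the sum S = Σ x_i y_i by the total weight:

```python
def weighted_avg(values, weights):
    """Weighted average: sum(x_i * y_i) / sum(y_i)."""
    return sum(x * w for x, w in zip(values, weights)) / sum(weights)

# Group A beats group B within each subgroup (higher component averages)...
rate_A = [0.93, 0.73]   # subgroup averages for A
n_A    = [87, 263]      # ...but A's weight sits mostly on the low subgroup
rate_B = [0.87, 0.69]
n_B    = [270, 80]      # B's weight sits mostly on the high subgroup

overall_A = weighted_avg(rate_A, n_A)
overall_B = weighted_avg(rate_B, n_B)
assert rate_A[0] > rate_B[0] and rate_A[1] > rate_B[1]
print(overall_A < overall_B)  # → True: A's overall weighted average is lower
```

The reversal comes entirely from the weights: the group with the higher component averages concentrates its weight on the subgroup with the smaller values.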
Advanced number theory with applications
Mollin, Richard A
2009-01-01
Algebraic Number Theory and Quadratic Fields Algebraic Number Fields The Gaussian Field Euclidean Quadratic Fields Applications of Unique Factorization Ideals The Arithmetic of Ideals in Quadratic Fields Dedekind Domains Application to Factoring Binary Quadratic Forms Basics Composition and the Form Class Group Applications via Ambiguity Genus Representation Equivalence Modulo p Diophantine Approximation Algebraic and Transcendental Numbers Transcendence Minkowski's Convex Body Theorem Arithmetic Functions The Euler-Maclaurin Summation Formula Average Orders The Riemann zeta-functionIntroduction to p-Adic AnalysisSolving Modulo pn Introduction to Valuations Non-Archimedean vs. Archimedean Valuations Representation of p-Adic NumbersDirichlet: Characters, Density, and Primes in Progression Dirichlet Characters Dirichlet's L-Function and Theorem Dirichlet DensityApplications to Diophantine Equations Lucas-Lehmer Theory Generalized Ramanujan-Nagell Equations Bachet's Equation The Fermat Equation Catalan and the A...
20 CFR 404.221 - Computing your average monthly wage.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...
Mean link versus average plaquette tadpoles in lattice NRQCD
International Nuclear Information System (INIS)
Shakespeare, Norman H.; Trottier, Howard D.
1999-01-01
We compare mean-link and average plaquette tadpole renormalization schemes in the context of the quarkonium hyperfine splittings in lattice NRQCD. Simulations are done for the three quarkonium systems cc-bar, bc-bar, and bb-bar. The hyperfine splittings are computed both at leading and at next-to-leading order in the relativistic expansion. Results are obtained at a large number of lattice spacings. A number of features emerge, all of which favor tadpole renormalization using mean links. This includes much better scaling of the hyperfine splittings in the three quarkonium systems. We also find that relativistic corrections to the spin splittings are smaller with mean-link tadpoles, particularly for the cc-bar and bc-bar systems. We also see signs of a breakdown in the NRQCD expansion when the bare quark mass falls below about one in lattice units (with the bare quark masses turning out to be much larger with mean-link tadpoles)
Mean link versus average plaquette tadpoles in lattice NRQCD
Energy Technology Data Exchange (ETDEWEB)
Shakespeare, Norman H.; Trottier, Howard D
1999-03-01
We compare mean-link and average plaquette tadpole renormalization schemes in the context of the quarkonium hyperfine splittings in lattice NRQCD. Simulations are done for the three quarkonium systems cc-bar, bc-bar, and bb-bar. The hyperfine splittings are computed both at leading and at next-to-leading order in the relativistic expansion. Results are obtained at a large number of lattice spacings. A number of features emerge, all of which favor tadpole renormalization using mean links. This includes much better scaling of the hyperfine splittings in the three quarkonium systems. We also find that relativistic corrections to the spin splittings are smaller with mean-link tadpoles, particularly for the cc-bar and bc-bar systems. We also see signs of a breakdown in the NRQCD expansion when the bare quark mass falls below about one in lattice units (with the bare quark masses turning out to be much larger with mean-link tadpoles)
Potential of high-average-power solid state lasers
International Nuclear Information System (INIS)
Emmett, J.L.; Krupke, W.F.; Sooy, W.R.
1984-01-01
We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels
International Nuclear Information System (INIS)
Lykoudis, P.S.
1995-01-01
The method of Average Magnitude Analysis is a mixture of the Integral Method and the Order of Magnitude Analysis. The paper shows how the differential equations of conservation for steady-state, laminar, boundary-layer flows are converted to a system of algebraic equations, where the result is a sum of the order of magnitude of each term multiplied by a weight coefficient. These coefficients are determined from integrals containing the assumed velocity and temperature profiles. The method is illustrated by applying it to the case of drag and heat transfer over an infinite flat plate. It is then applied to the case of natural convection over an infinite flat plate, with and without the presence of a horizontal magnetic field, and subsequently to enclosures of aspect ratios of one or higher. The final correlation in this instance yields the Nusselt number as a function of the aspect ratio and the Rayleigh and Prandtl numbers. This correlation is tested against a wide range of small and large values of these parameters. 19 refs., 4 figs
Reynolds-Averaged Navier-Stokes Solutions to Flat Plate Film Cooling Scenarios
Johnson, Perry L.; Shyam, Vikram; Hah, Chunill
2011-01-01
The predictions of several Reynolds-Averaged Navier-Stokes solutions for a baseline film cooling geometry are analyzed and compared with experimental data. The Fluent finite volume code was used to perform the computations with the realizable k-epsilon turbulence model. The film hole was angled at 35° to the crossflow with a Reynolds number of 17,400. Multiple length-to-diameter ratios (1.75 and 3.5) as well as momentum flux ratios (0.125 and 0.5) were simulated with various domains, boundary conditions, and grid refinements. The coolant to mainstream density ratio was maintained at 2.0 for all scenarios. Computational domain and boundary condition variations show the ability to reduce the computational cost as compared to previous studies. A number of grid refinement and coarsening variations are compared for further insights into the reduction of computational cost. Liberal refinement in the near-hole region is valuable, especially for higher momentum jets that tend to lift off and create a recirculating flow. A lack of proper refinement in the near-hole region can severely diminish the accuracy of the solution, even in the far region. The effects of momentum ratio and hole length-to-diameter ratio are also discussed.
Griffin, Tyler J.; Hilton, John, III.; Plummer, Kenneth; Barret, Devynne
2014-01-01
One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations…
Number words and number symbols a cultural history of numbers
Menninger, Karl
1992-01-01
Classic study discusses number sequence and language and explores written numerals and computations in many cultures. "The historian of mathematics will find much to interest him here both in the contents and viewpoint, while the casual reader is likely to be intrigued by the author's superior narrative ability."
Averaged emission factors for the Hungarian car fleet
Energy Technology Data Exchange (ETDEWEB)
Haszpra, L. [Inst. for Atmospheric Physics, Budapest (Hungary); Szilagyi, I. [Central Research Inst. for Chemistry, Budapest (Hungary)
1995-12-31
The vehicular emission of non-methane hydrocarbon (NMHC) is one of the largest anthropogenic sources of NMHC in Hungary and in most of the industrialized countries. Non-methane hydrocarbon plays a key role in the formation of photochemical air pollution, usually characterized by the ozone concentration, which seriously endangers the environment and human health. The ozone-forming potential of the different NMHCs differs significantly from compound to compound, while the NMHC composition of car exhaust is influenced by the fuel and engine type, the technical condition of the vehicle, vehicle speed, and several other factors. In Hungary the majority of the cars are still of Eastern European origin. They represent the technological standard of the '70s, although there have been changes recently. Due to the long-term economic decline in Hungary, the average age of the cars was about 9 years in 1990 and reached 10 years by 1993. The condition of the majority of the cars is poor. In addition, almost one third (31.2%) of the cars are equipped with two-stroke engines, which emit less NOx but much more hydrocarbon. The number of cars equipped with catalytic converters was negligible in 1990 and has been increasing only slowly. As a consequence, the traffic emission in Hungary may differ from that measured in or estimated for Western European countries, and the differences should be taken into account in air pollution models. For the estimation of the average emission of the Hungarian car fleet, a one-day roadway tunnel experiment was performed in downtown Budapest in summer 1991. (orig.)
Using Bayes Model Averaging for Wind Power Forecasts
Preede Revheim, Pål; Beyer, Hans Georg
2014-05-01
does not contain information, but it has the disadvantage of nearly doubling the number of model parameters to be estimated. Second, the BMA procedure is run with group mean wind power as the response variable instead of group mean wind speed. This also solves the problem with longer consecutive periods without information in the input data, but it leaves the power curve to also be estimated from the data. [1] Raftery, A. E., et al. (2005). Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review, 133, 1155-1174. [2] Revheim, P. P. and H. G. Beyer (2013). Using Bayesian Model Averaging for wind farm group forecasts. EWEA Wind Power Forecasting Technology Workshop, Rotterdam, 4-5 December 2013. [3] Sloughter, J. M., T. Gneiting and A. E. Raftery (2010). Probabilistic Wind Speed Forecasting Using Ensembles and Bayesian Model Averaging. Journal of the American Statistical Association, Vol. 105, No. 489, 25-35.
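As a rough sketch of the combination step in Bayesian model averaging, ensemble members can be weighted according to their fit on past data and the forecast taken as the weighted mean. This collapses the EM-based weight estimation of Raftery et al. (2005) into a single Gaussian-likelihood weighting, and the member errors below are hypothetical:

```python
import math

def bma_weights(errors, sigma=1.0):
    """Toy posterior weights for ensemble members, proportional to a
    Gaussian likelihood of each member's past forecast error.
    (A simplification of the EM procedure in Raftery et al. 2005.)"""
    liks = [math.exp(-e * e / (2 * sigma * sigma)) for e in errors]
    total = sum(liks)
    return [l / total for l in liks]

def bma_mean(forecasts, weights):
    """Mean of the BMA mixture: weighted sum of member forecasts."""
    return sum(w * f for w, f in zip(weights, forecasts))

past_errors = [0.2, 0.8, 1.5]     # hypothetical member errors on training data
w = bma_weights(past_errors)
forecast = bma_mean([5.1, 4.7, 6.0], w)  # hypothetical member forecasts
print(round(sum(w), 6), w[0] > w[2])
```

The full procedure also carries per-member spread parameters, so the BMA output is a predictive distribution rather than a single number.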
Combining Service and Learning in Higher Education
National Research Council Canada - National Science Library
Gray, Maryann
1999-01-01
Hundreds of college and university presidents, most of the major higher education associations, and a number of highly influential scholars actively support the development of service-learning...
Directory of Open Access Journals (Sweden)
T. Pathinathan
2015-01-01
In this paper we define the diamond fuzzy number with the help of the triangular fuzzy number. We include basic arithmetic operations such as addition and subtraction of diamond fuzzy numbers, with examples. We define the diamond fuzzy matrix with some matrix properties. We have defined the Nested diamond fuzzy number and the Linked diamond fuzzy number, and have further classified the Right Linked Diamond Fuzzy number and the Left Linked Diamond Fuzzy number. Finally we have verified the arithmetic operations for the above-mentioned types of Diamond Fuzzy Numbers.
Koninck, Jean-Marie De
2009-01-01
Who would have thought that listing the positive integers along with their most remarkable properties could end up being such an engaging and stimulating adventure? The author uses this approach to explore elementary and advanced topics in classical number theory. A large variety of numbers are contemplated: Fermat numbers, Mersenne primes, powerful numbers, sublime numbers, Wieferich primes, insolite numbers, Sastry numbers, voracious numbers, to name only a few. The author also presents short proofs of miscellaneous results and constantly challenges the reader with a variety of old and new n
Analytical expressions for conditional averages: A numerical test
DEFF Research Database (Denmark)
Pécseli, H.L.; Trulsen, J.
1991-01-01
Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...
Experimental demonstration of squeezed-state quantum averaging
DEFF Research Database (Denmark)
Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...
The flattening of the average potential in models with fermions
International Nuclear Information System (INIS)
Bornholdt, S.
1993-01-01
The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)
The Pulsair 3000 tonometer--how many readings need to be taken to ensure accuracy of the average?
McCaghrey, G E; Matthews, F E
2001-07-01
Manufacturers of non-contact tonometers recommend that a number of readings are taken on each eye, and an average obtained. With the Keeler Pulsair 3000 it is advised to take four readings, and average these. This report analyses readings in 100 subjects, and compares the first reading, and the averages of the first two and first three readings with the "machine standard" of the average of four readings. It is found that, in the subject group investigated, the average of three readings is not different from the average of four in 95% of individuals, with equivalence defined as +/- 1.0 mmHg.
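The comparison performed in the report, the running average of the first k readings against the four-reading standard with equivalence defined as +/- 1.0 mmHg, can be written out directly; the readings below are hypothetical:

```python
def equivalent(avg_k, avg_4, tol=1.0):
    """Equivalence criterion from the report: within +/- tol mmHg."""
    return abs(avg_k - avg_4) <= tol

readings = [14.5, 15.0, 14.0, 15.5]  # hypothetical IOP readings, mmHg
# Running averages of the first 1, 2, 3, and 4 readings.
avgs = [sum(readings[:k]) / k for k in (1, 2, 3, 4)]
print([equivalent(a, avgs[3]) for a in avgs])  # → [True, True, True, True]
```

For this invented series every running average falls within the band; the report's finding is that across real subjects the three-reading average meets the criterion in 95% of individuals.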
20 CFR 404.220 - Average-monthly-wage method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...
A time-averaged cosmic ray propagation theory
International Nuclear Information System (INIS)
Klimas, A.J.
1975-01-01
An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de
7 CFR 51.2561 - Average moisture content.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...
Averaging in SU(2) open quantum random walk
International Nuclear Information System (INIS)
Ampadu Clement
2014-01-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT
Averaging in SU(2) open quantum random walk
Clement, Ampadu
2014-03-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.
Generalized Bernoulli-Hurwitz numbers and the universal Bernoulli numbers
Energy Technology Data Exchange (ETDEWEB)
Onishi, Yoshihiro [Faculty of Education Human Sciences, University of Yamanashi, Takeda, Kofu (Japan)
2011-10-31
The three fundamental properties of the Bernoulli numbers, namely, the von Staudt-Clausen theorem, von Staudt's second theorem, and Kummer's original congruence, are generalized to new numbers that we call generalized Bernoulli-Hurwitz numbers. These are coefficients in the power series expansion of a higher-genus algebraic function with respect to a suitable variable. Our generalization differs strongly from previous works. Indeed, the order of the power of the modulus prime in our Kummer-type congruences is exactly the same as in the trigonometric function case (namely, Kummer's own congruence for the original Bernoulli numbers), and as in the elliptic function case (namely, H. Lang's extension for the Hurwitz numbers). However, in other past results on higher-genus algebraic functions, the modulus was at most half of its value in these classical cases. This contrast is clarified by investigating the analogue of the three properties above for the universal Bernoulli numbers. Bibliography: 34 titles.
Declining average daily census. Part 1: Implications and options.
Weil, T P
1985-12-01
A national trend toward declining average daily (inpatient) census (ADC) started in late 1982, even before the Medicare prospective payment system began. The decrease in total days will continue despite an increasing number of aged persons in the U.S. population. This decline could have been predicted from trends during 1978 to 1983, such as increasing available beds but decreasing occupancy, 100 percent increases in hospital expenses, and declining lengths of stay. Assuming that health care costs will remain a relatively fixed part of the gross national product and no major medical advances will occur in the next five years, certain implications and options exist for facilities experiencing a declining ADC. This article discusses several considerations: Attempts to improve market share; Reduction of full-time equivalent employees; Impact of greater acuity of illness among remaining inpatients; Implications of increasing the number of physicians on medical staffs; Option of a closed medical staff by clinical specialty; Unbundling with not-for-profit and profit-making corporations; Review of mergers, consolidations, and multihospital systems to decide when this option is most appropriate; Sale of a not-for-profit hospital to an investor-owned chain, with implications facing Catholic hospitals choosing this option; Impact and difficulty of developing meaningful alternative health care systems with the hospital's medical staff; Special problems of teaching hospitals; The social issue of the hospital shifting from the community's health center to a cost center; Increased turnover of hospital CEOs. With these in mind, institutions can then focus on solutions that can sometimes be used in tandem to resolve this problem's impact. The second part of this article will discuss some of them.
Burkhart, Jerry
2009-01-01
Prime numbers are often described as the "building blocks" of natural numbers. This article shows how the author and his students took this idea literally by using prime factorizations to build numbers with blocks. In this activity, students explore many concepts of number theory, including the relationship between greatest common factors and…
Vazzana, Anthony; Garth, David
2007-01-01
One of the oldest branches of mathematics, number theory is a vast field devoted to studying the properties of whole numbers. Offering a flexible format for a one- or two-semester course, Introduction to Number Theory uses worked examples, numerous exercises, and two popular software packages to describe a diverse array of number theory topics.
Proton transport properties of poly(aspartic acid) with different average molecular weights
Energy Technology Data Exchange (ETDEWEB)
Nagao, Yuki, E-mail: ynagao@kuchem.kyoto-u.ac.j [Department of Mechanical Systems and Design, Graduate School of Engineering, Tohoku University, 6-6-01 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Imai, Yuzuru [Institute of Development, Aging and Cancer (IDAC), Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575 (Japan); Matsui, Jun [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan); Ogawa, Tomoyuki [Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, 6-6-05 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Miyashita, Tokuji [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan)
2011-04-15
Research highlights: Seven polymers with different average molecular weights were synthesized. The proton conductivity depended on the number-average degree of polymerization. The difference of the proton conductivities was more than one order of magnitude. The number-average molecular weight contributed to the stability of the polymer. - Abstract: We synthesized seven partially protonated poly(aspartic acids)/sodium polyaspartates (P-Asp) with different average molecular weights to study their proton transport properties. The number-average degree of polymerization (DP) for each P-Asp was 30 (P-Asp30), 115 (P-Asp115), 140 (P-Asp140), 160 (P-Asp160), 185 (P-Asp185), 205 (P-Asp205), and 250 (P-Asp250). The proton conductivity depended on the number-average DP. The maximum and minimum proton conductivities under a relative humidity of 70% and 298 K were 1.7 × 10^-3 S cm^-1 (P-Asp140) and 4.6 × 10^-4 S cm^-1 (P-Asp250), respectively. Differential thermogravimetric analysis (TG-DTA) was carried out for each P-Asp. The results were classified into two categories: one exhibited two endothermic peaks between t = 270 and 300 °C, the other exhibited only one peak. The P-Asp group with two endothermic peaks exhibited high proton conductivity. The high proton conductivity is related to the stability of the polymer. The number-average molecular weight also contributed to the stability of the polymer.
Average use of Alcohol and Binge Drinking in Pregnancy: Neuropsychological Effects at Age 5
DEFF Research Database (Denmark)
Kilburn, Tina R.
Objectives: The objective of this PhD was to examine the relation between low weekly average maternal alcohol consumption and ‘binge drinking' (defined as intake of 5 or more drinks per occasion) during pregnancy and information processing time (IPT) in children aged five years. Methods: The data came from a study that provided detailed information on maternal alcohol drinking patterns before and during pregnancy and other lifestyle factors. The women were categorized in groups of prenatal average alcohol intake and of binge drinking (timing and number of episodes). At the age of five years the children of these women were examined. Results: No associations were found between IPT and alcohol intake or binge drinking (timing and number of episodes), or between simple reaction time (SRT) and alcohol intake or binge drinking (timing and number of episodes) during pregnancy. Conclusion: This was one of the first studies investigating IPT and prenatal average alcohol intake and binge drinking in early pregnancy.
MCBS Highlights: Ownership and Average Premiums for Medicare Supplementary Insurance Policies
Chulis, George S.; Eppig, Franklin J.; Poisal, John A.
1995-01-01
This article describes private supplementary health insurance holdings and average premiums paid by Medicare enrollees. Data were collected as part of the 1992 Medicare Current Beneficiary Survey (MCBS). Data show the number of persons with insurance and average premiums paid by type of insurance held—individually purchased policies, employer-sponsored policies, or both. Distributions are shown for a variety of demographic, socioeconomic, and health status variables. Primary findings include: Seventy-eight percent of Medicare beneficiaries have private supplementary insurance; 25 percent of those with private insurance hold more than one policy. The average premium paid for private insurance in 1992 was $914. PMID:10153473
On the number of special numbers
Indian Academy of Sciences (India)
A 35-year comparison of children labelled as gifted, unlabelled as gifted and average-ability
Directory of Open Access Journals (Sweden)
Joan Freeman
2014-09-01
http://dx.doi.org/10.5902/1984686X14273 Why are some children seen as gifted while others of identical ability are not? To find out why and what the consequences might be, in 1974 I began in England with 70 children labelled as gifted. Each one was matched for age, sex and socio-economic level with two comparison children in the same school class. The first comparison child had an identical gift, and the second was taken at random. Investigation was by a battery of tests and deep questioning of pupils, teachers and parents in their schools and homes, which went on for 35 years. A major significant difference was that those labelled gifted had significantly more emotional problems than either the unlabelled but identically gifted or the random controls. The vital aspects of success for the entire sample, whether gifted or not, have been hard work, emotional support and a positive personal outlook. But in general, the higher the individual’s intelligence the better their chances in life.
Grešak, Rozalija
2015-01-01
The field of real numbers is usually constructed using Dedekind cuts. In this thesis we focus on constructing the field of real numbers as the metric completion of the rational numbers via Cauchy sequences. In a similar manner we construct the field of p-adic numbers and describe some of their basic and topological properties. We then construct the complex p-adic numbers and compare them with the ordinary complex numbers. We conclude the thesis by giving a motivation for the int...
Averaging and sampling for magnetic-observatory hourly data
Directory of Open Access Journals (Sweden)
J. J. Love
2010-11-01
A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
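The spot-versus-boxcar distinction described above can be sketched on synthetic 1-min data. This is a hedged illustration: the daily sinusoid and the 50-min oscillation are invented stand-ins for real geomagnetic variation, chosen so that hourly spot samples alias the fast component while the 1-h boxcar average suppresses it.

```python
import math

# Synthetic 1-min "geomagnetic" signal: a slow daily harmonic plus a fast
# 50-min oscillation that hourly spot sampling cannot resolve.
minutes = list(range(24 * 60))
signal = [10.0 * math.sin(2 * math.pi * t / (24 * 60))
          + 2.0 * math.sin(2 * math.pi * t / 50) for t in minutes]

# Hourly "spot" values: the instantaneous 1-min sample at the top of each hour.
spot = [signal[h * 60] for h in range(24)]

# Hourly "boxcar" values: the simple mean of the 60 samples within each hour.
boxcar = [sum(signal[h * 60:(h + 1) * 60]) / 60 for h in range(24)]
```

Comparing each series against the slow component shows the boxcar values tracking it closely, while the spot values carry an aliased imprint of the 50-min oscillation.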
Relationships between feeding behavior and average daily gain in cattle
Directory of Open Access Journals (Sweden)
Bruno Fagundes Cunha Lage
2013-12-01
Several studies have reported a relationship between eating behavior and performance in feedlot cattle. The evaluation of behavior traits demands a high degree of work and trained manpower; therefore, in recent years an automated feed intake measurement system (GrowSafe System®) that identifies and records individual feeding patterns has been used. The aim of this study was to evaluate the relationship between feeding behavior traits and average daily gain in Nellore calves undergoing a feed efficiency test. Data from 85 Nellore males were recorded during the feed efficiency test performed in 2012 at Centro APTA Bovinos de Corte, Instituto de Zootecnia, São Paulo State. The behavioral traits analyzed were: time at feeder (TF), head down duration (HD), representing the time when the animal is actually eating, frequency of visits (FV), and feed rate (FR), calculated as the amount of dry matter (DM) consumed per unit of time at the feeder (g.min-1). The ADG was calculated by linear regression of individual weights on days in test. ADG classes were obtained considering the average ADG and standard deviation (SD), being: high ADG (> mean + 1.0 SD), medium ADG (± 1.0 SD from the mean) and low ADG (
Fast integration using quasi-random numbers
International Nuclear Information System (INIS)
Bossert, J.; Feindt, M.; Kerzel, U.
2006-01-01
Quasi-random numbers are specially constructed series of numbers optimised to sample a given s-dimensional volume evenly. Numerical integration using quasi-random numbers converges faster and with higher accuracy than integration using pseudo-random numbers. The basic properties of quasi-random numbers are introduced, various generators are discussed, and the achieved gain is illustrated by examples.
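As a minimal sketch of the idea (not one of the specific generators discussed in the record), a 2-D Halton sequence, a classical quasi-random construction built from radical inverses, can be compared against pseudo-random sampling on a simple test integral:

```python
import random

def radical_inverse(n, base):
    """Van der Corput radical inverse: reflect the base-`base` digits of n about the point."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(n_points, bases=(2, 3)):
    """First n_points of the s-dimensional Halton quasi-random sequence."""
    return [[radical_inverse(i, b) for b in bases] for i in range(1, n_points + 1)]

def integrate(points):
    """Monte Carlo estimate of the integral of x*y over the unit square (exact value 1/4)."""
    return sum(x * y for x, y in points) / len(points)

n = 4096
quasi = integrate(halton(n))
random.seed(0)
pseudo = integrate([(random.random(), random.random()) for _ in range(n)])
```

For this smooth integrand the quasi-random estimate typically lands much closer to the exact value 0.25 than the pseudo-random one at the same sample count, reflecting the faster convergence claimed in the abstract.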
On the number of special numbers
Indian Academy of Sciences (India)
We now apply the theory of the Thue equation to obtain an effective bound on m. Indeed, by Lemma 3.2, we can write m^2 = ba^3 and m^2 − 4 = cd^3 with b, c cubefree. By the above, both b and c are bounded, since they are cubefree and all their prime factors are less than e^63727. Now we have a finite number of Thue equations:
Average glandular dose in digital mammography and breast tomosynthesis
Energy Technology Data Exchange (ETDEWEB)
Olgar, T. [Ankara Univ. (Turkey). Dept. of Engineering Physics; Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Kahn, T.; Gosch, D. [Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie
2012-10-15
Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode and a good correlation coefficient of 0.98 in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than for 2 D imaging mode for patients examined with the same CBT.
International Nuclear Information System (INIS)
Kaneko, K.
1987-01-01
A relationship between the number projection and the shell model methods is investigated in the case of a single-j shell. We can find a one-to-one correspondence between the number projected and the shell model states
Gallistel, C R
2017-12-01
The representation of discrete and continuous quantities appears to be ancient and pervasive in animal brains. Because numbers are the natural carriers of these representations, we may discover that in brains, it's numbers all the way down.
Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment
Baurle, Robert A.; Edwards, Jack R.
2010-01-01
Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomena under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state of the art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to the choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure
Are average and symmetric faces attractive to infants? Discrimination and looking preferences.
Rhodes, Gillian; Geddes, Keren; Jeffery, Linda; Dziurawiec, Suzanne; Clark, Alison
2002-01-01
Young infants prefer to look at faces that adults find attractive, suggesting a biological basis for some face preferences. However, the basis for infant preferences is not known. Adults find average and symmetric faces attractive. We examined whether 5-8-month-old infants discriminate between different levels of averageness and symmetry in faces, and whether they prefer to look at faces with higher levels of these traits. Each infant saw 24 pairs of female faces. Each pair consisted of two versions of the same face differing either in averageness (12 pairs) or symmetry (12 pairs). Data from the mothers confirmed that adults preferred the more average and more symmetric versions in each pair. The infants were sensitive to differences in both averageness and symmetry, but showed no looking preference for the more average or more symmetric versions. On the contrary, longest looks were significantly longer for the less average versions, and both longest looks and first looks were marginally longer for the less symmetric versions. Mean looking times were also longer for the less average and less symmetric versions, but those differences were not significant. We suggest that the infant looking behaviour may reflect a novelty preference rather than an aesthetic preference.
DEFF Research Database (Denmark)
Andersen, Torben
2014-01-01
had a marked singular and an unmarked plural. Synchronically, however, the singular is arguably the basic member of the number category as revealed by the use of the two numbers. In addition, some nouns have a collective form, which is grammatically singular. Number also plays a role...
DEFF Research Database (Denmark)
Elvik, Rune; Bjørnskau, Torkel
2017-01-01
Highlights: • 26 studies of the safety-in-numbers effect are reviewed. • The existence of a safety-in-numbers effect is confirmed. • Results are consistent. • Causes of the safety-in-numbers effect are incompletely known.
de Mestre, Neville
2008-01-01
Prime numbers are important as the building blocks for the set of all natural numbers, because prime factorisation is an important and useful property of all natural numbers. Students can discover them by using the method known as the Sieve of Eratosthenes, named after the Greek geographer and astronomer who lived from c. 276-194 BC. Eratosthenes…
On the performance of Autoregressive Moving Average Polynomial
African Journals Online (AJOL)
Timothy Ademakinwa
Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.
Comparison of Interpolation Methods as Applied to Time Synchronous Averaging
National Research Council Canada - National Science Library
Decker, Harry
1999-01-01
Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
Light-cone averaging in cosmology: formalism and applications
International Nuclear Information System (INIS)
Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.
2011-01-01
We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted "geodesic light-cone" coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called "redshift drift" in a generic inhomogeneous Universe
Globalisation and Higher Education
Marginson, Simon; van der Wende, Marijk
2007-01-01
Economic and cultural globalisation has ushered in a new era in higher education. Higher education was always more internationally open than most sectors because of its immersion in knowledge, which never showed much respect for juridical boundaries. In global knowledge economies, higher education
Life Science's Average Publishable Unit (APU) Has Increased over the Past Two Decades.
Directory of Open Access Journals (Sweden)
Radames J B Cordero
Quantitative analysis of the scientific literature is important for evaluating the evolution and state of science. To study how the density of biological literature has changed over the past two decades we visually inspected 1464 research articles related only to the biological sciences from ten scholarly journals (with average Impact Factors, IF, ranging from 3.8 to 32.1). By scoring the number of data items (tables and figures), density of composite figures (labeled panels per figure or PPF), as well as the number of authors, pages and references per research publication we calculated an Average Publishable Unit or APU for 1993, 2003, and 2013. The data show an overall increase in the average ± SD number of data items from 1993 to 2013 of approximately 7±3 to 14±11 and PPF ratio of 2±1 to 4±2 per article, suggesting that the APU has doubled in size over the past two decades. As expected, the increase in data items per article is mainly in the form of supplemental material, constituting 0 to 80% of the data items per publication in 2013, depending on the journal. The changes in the average number of pages (approx. 8±3 to 10±3), references (approx. 44±18 to 56±24) and authors (approx. 5±3 to 8±9) per article are also presented and discussed. The average number of data items, figure density and authors per publication are correlated with the journal's average IF. The increasing APU size over time is important when considering the value of research articles for life scientists and publishers, as well as the implications of these increasing trends in the mechanisms and economics of scientific communication.
International Nuclear Information System (INIS)
Todorov, T.D.
1980-01-01
The set of asymptotic numbers A as a system of generalized numbers including the system of real numbers R, as well as infinitely small (infinitesimal) and infinitely large numbers, is introduced. The detailed algebraic properties of A, which are unusual as compared with the known algebraic structures, are studied. It is proved that the set of asymptotic numbers A cannot be isomorphically embedded as a subspace in any group, ring or field, but some particular subsets of asymptotic numbers are shown to be groups, rings, and fields. The algebraic operations, their additive and multiplicative forms, and the algebraic properties are constructed in an appropriate way. It is shown that the asymptotic numbers give rise to a new type of generalized functions quite analogous to the distributions of Schwartz, allowing, however, the operation of multiplication. A possible application of these functions to quantum theory is discussed
Niederreiter, Harald
2015-01-01
This textbook effectively builds a bridge from basic number theory to recent advances in applied number theory. It presents the first unified account of the four major areas of application where number theory plays a fundamental role, namely cryptography, coding theory, quasi-Monte Carlo methods, and pseudorandom number generation, allowing the authors to delineate the manifold links and interrelations between these areas. Number theory, which Carl-Friedrich Gauss famously dubbed the queen of mathematics, has always been considered a very beautiful field of mathematics, producing lovely results and elegant proofs. While only very few real-life applications were known in the past, today number theory can be found in everyday life: in supermarket bar code scanners, in our cars’ GPS systems, in online banking, etc. Starting with a brief introductory course on number theory in Chapter 1, which makes the book more accessible for undergraduates, the authors describe the four main application areas in Chapters...
Digital mammography screening: average glandular dose and first performance parameters
International Nuclear Information System (INIS)
Weigel, S.; Girnus, R.; Czwoydzinski, J.; Heindel, W.; Decker, T.; Spital, S.
2007-01-01
Purpose: The Radiation Protection Commission demanded structured implementation of digital mammography screening in Germany. The main requirements were the installation of digital reference centers and separate evaluation of the fully digitized screening units. Digital mammography screening must meet the quality standards of the European guidelines and must be compared to analog screening results. We analyzed early surrogate indicators of effective screening and dosage levels for the first German digital screening unit in a routine setting after the first half of the initial screening round. Materials and Methods: We used three digital mammography screening units (one full-field digital scanner [DR] and two computed radiography systems [CR]). Each system has been proven to fulfill the requirements of the National and European guidelines. The radiation exposure levels, the medical workflow and the histological results were documented in a central electronic screening record. Results: In the first year 11,413 women were screened (participation rate 57.5 %). The parenchymal dosages for the three mammographic X-ray systems, averaged for the different breast sizes, were 0.7 (DR), 1.3 (CR), 1.5 (CR) mGy. 7 % of the screened women needed to undergo further examinations. The total number of screen-detected cancers was 129 (detection rate 1.1 %). 21 % of the carcinomas were classified as ductal carcinomas in situ, 40 % of the invasive carcinomas had a histological size ≤ 10 mm and 61 % < 15 mm. The frequency distribution of pT-categories of screen-detected cancer was as follows: pTis 20.9 %, pT1 61.2 %, pT2 14.7 %, pT3 2.3 %, pT4 0.8 %. 73 % of the invasive carcinomas were node-negative. (orig.)
Interpreting Bivariate Regression Coefficients: Going beyond the Average
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
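The note's central observation can be sketched as follows; the data values and weights are invented for illustration. Each average arises as the intercept of an intercept-only least-squares regression on a suitably transformed variable, back-transformed afterwards, and a weighted average is the intercept of the corresponding weighted regression.

```python
import math

def ols_intercept(y):
    """OLS fit of y_i = b + e_i; minimizing sum((y_i - b)^2) gives b = arithmetic mean."""
    return sum(y) / len(y)

data = [2.0, 4.0, 8.0]

# Arithmetic mean: intercept-only regression on the raw values.
arithmetic = ols_intercept(data)
# Geometric mean: regress log(x) on a constant, then exponentiate the intercept.
geometric = math.exp(ols_intercept([math.log(x) for x in data]))
# Harmonic mean: regress 1/x on a constant, then invert the intercept.
harmonic = 1.0 / ols_intercept([1.0 / x for x in data])

# Weighted average: weighted least squares with only an intercept.
weights = [1.0, 1.0, 2.0]
weighted = sum(w * x for w, x in zip(weights, data)) / sum(weights)
```

For these values the four estimates are 14/3, 4, 24/7, and 5.5, matching the textbook definitions of each mean.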
Average stress in a Stokes suspension of disks
Prosperetti, Andrea
2004-01-01
The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is
47 CFR 1.959 - Computation of average terrain elevation.
2010-10-01
47 CFR 1.959 — Telecommunication, Federal Communications Commission, General Practice and Procedure, Wireless Radio Services Applications and Proceedings, Application Requirements and Procedures: Computation of average terrain elevation. Except a...
47 CFR 80.759 - Average terrain elevation.
2010-10-01
47 CFR 80.759 — Telecommunication, Federal Communications Commission (Continued), Safety and Special Radio Services, Stations in the Maritime Services, Standards for Computing Public Coast Station VHF Coverage: Average terrain elevation. (a)(1) Draw radials...
The average covering tree value for directed graph games
Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf
We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering
18 CFR 301.7 - Average System Cost methodology functionalization.
2010-04-01
18 CFR 301.7 — Conservation of Power and Water Resources, Federal Energy Regulatory Commission, Department of Energy, Regulations for Federal Power Marketing Administrations: Average System Cost Methodology for Sales from Utilities to Bonneville Power Administration under Northwest Power...
Analytic computation of average energy of neutrons inducing fission
International Nuclear Information System (INIS)
Clark, Alexander Rich
2016-01-01
The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
An alternative scheme of the Bogolyubov's average method
International Nuclear Information System (INIS)
Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.
1990-01-01
In this paper the average energy and the magnetic moment conservation laws in the drift theory of charged-particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and afterwards the average is performed. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
Anomalous behavior of q-averages in nonextensive statistical mechanics
International Nuclear Information System (INIS)
Abe, Sumiyoshi
2009-01-01
A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases
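For reference, the q-average of an observable $A$ discussed above is conventionally defined through the escort distribution (the standard definition in nonextensive statistical mechanics; it reduces to the ordinary average as $q \to 1$):

```latex
\langle A \rangle_q = \sum_i P_i^{(q)} A_i,
\qquad
P_i^{(q)} = \frac{p_i^{\,q}}{\sum_j p_j^{\,q}},
\qquad
\lim_{q \to 1} \langle A \rangle_q = \sum_i p_i A_i .
```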
Bootstrapping pre-averaged realized volatility under market microstructure noise
DEFF Research Database (Denmark)
Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour
The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are k_n-dependent with k_n growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...
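As background, the basic blockwise idea can be sketched with a plain moving-block bootstrap in the style of Künsch. This is a generic illustration only, not the paper's "blocks of blocks" scheme for pre-averaged returns, and the return values below are invented.

```python
import random

def moving_block_bootstrap(series, block_len, rng=None):
    """Resample a time series by concatenating randomly chosen overlapping blocks,
    preserving the serial dependence within each block."""
    rng = rng or random.Random(0)
    n = len(series)
    starts = list(range(n - block_len + 1))  # all overlapping block start positions
    resample = []
    while len(resample) < n:
        s = rng.choice(starts)
        resample.extend(series[s:s + block_len])
    return resample[:n]  # trim to the original sample size

returns = [0.1, -0.2, 0.05, 0.3, -0.1, 0.07, -0.03, 0.12]
boot = moving_block_bootstrap(returns, block_len=3)
```

Repeating the resampling many times and recomputing the statistic of interest on each replicate yields the bootstrap distribution used for inference.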
Ore, Oystein
2017-01-01
Number theory is the branch of mathematics concerned with the counting numbers, 1, 2, 3, … and their multiples and factors. Of particular importance are odd and even numbers, squares and cubes, and prime numbers. But in spite of their simplicity, you will meet a multitude of topics in this book: magic squares, cryptarithms, finding the day of the week for a given date, constructing regular polygons, pythagorean triples, and many more. In this revised edition, John Watkins and Robin Wilson have updated the text to bring it in line with contemporary developments. They have added new material on Fermat's Last Theorem, the role of computers in number theory, and the use of number theory in cryptography, and have made numerous minor changes in the presentation and layout of the text and the exercises.
DEFF Research Database (Denmark)
Pedersen, Ken Steen; Skrubel, Rikke; Stege, Helle
2012-01-01
Background: The objective of this study was to investigate the association between average daily gain and the number of Lawsonia intracellularis bacteria in faeces of growing pigs with different levels of diarrhoea. Methods: A longitudinal field study (n = 150 pigs) was performed in a Danish herd f...
Classical properties and semiclassical calculations in a spherical nuclear average potential
International Nuclear Information System (INIS)
Carbonell, J.; Brut, F.; Arvieu, R.; Touchard, J.
1984-03-01
We study the relation between the classical properties of an average nuclear potential and its spectral properties. We have drawn the energy-action surface of this potential and related its properties to the spectral ones in the framework of the EBK semiclassical method. We also describe a method allowing us to get the evolution of the spectrum with the mass number.
Directory of Open Access Journals (Sweden)
Shelley Mo
Full Text Available To assess the effect of image registration and averaging on the visualization and quantification of the radial peripapillary capillary (RPC) network on optical coherence tomography angiography (OCTA). Twenty-two healthy controls were imaged with a commercial OCTA system (AngioVue, Optovue, Inc.). Ten 10x10° scans of the optic disc were obtained, and the most superficial layer (a 50-μm slab extending from the inner limiting membrane) was extracted for analysis. Rigid registration was achieved using ImageJ, and averaging of each 2 to 10 frames was performed in five ~2x2° regions of interest (ROI) located 1° from the optic disc margin. The ROI were automatically skeletonized. Signal-to-noise ratio (SNR), number of endpoints and mean capillary length from the skeleton, capillary density, and mean intercapillary distance (ICD) were measured for the reference and each averaged ROI. Repeated measures analysis of variance was used to assess statistical significance. Three patients with primary open angle glaucoma were also imaged to compare RPC density to controls. Qualitatively, vessels appeared smoother and closer to histologic descriptions with increasing number of averaged frames. Quantitatively, the number of endpoints decreased by 51%, and SNR, mean capillary length, capillary density, and ICD increased by 44%, 91%, 11%, and 4.5%, respectively, from single-frame to 10-frame averaged images. The 10-frame averaged images from the glaucomatous eyes revealed decreased density correlating to visual field defects and retinal nerve fiber layer thinning. OCTA image registration and averaging is a viable and accessible method to enhance the visualization of RPCs, with significant improvements in image quality and RPC quantitative parameters. With this technique, we will be able to non-invasively and reliably study RPC involvement in diseases such as glaucoma.
Godefroy, Gilles
2004-01-01
Numbers are fascinating. The fascination begins in childhood, when we first learn to count. It continues as we learn arithmetic, algebra, geometry, and so on. Eventually, we learn that numbers not only help us to measure the world, but also to understand it and, to some extent, to control it. In The Adventure of Numbers, Gilles Godefroy follows the thread of our expanding understanding of numbers to lead us through the history of mathematics. His goal is to share the joy of discovering and understanding this great adventure of the mind. The development of mathematics has been punctuated by a n
Diamond, Harold G; Cheung, Man Ping
2016-01-01
"Generalized numbers" is a multiplicative structure introduced by A. Beurling to study how independent prime number theory is from the additivity of the natural numbers. The results and techniques of this theory apply to other systems having the character of prime numbers and integers; for example, it is used in the study of the prime number theorem (PNT) for ideals of algebraic number fields. Using both analytic and elementary methods, this book presents many old and new theorems, including several of the authors' results, and many examples of extremal behavior of g-number systems. Also, the authors give detailed accounts of the L^2 PNT theorem of J. P. Kahane and of the example created with H. L. Montgomery, showing that additive structure is needed for proving the Riemann hypothesis. Other interesting topics discussed are propositions "equivalent" to the PNT, the role of multiplicative convolution and Chebyshev's prime number formula for g-numbers, and how Beurling theory provides an interpretation of the ...
Hirst, Keith
1994-01-01
Number and geometry are the foundations upon which mathematics has been built over some 3000 years. This book is concerned with the logical foundations of number systems from integers to complex numbers. The author has chosen to develop the ideas by illustrating the techniques used throughout mathematics rather than using a self-contained logical treatise. The idea of proof has been emphasised, as has the illustration of concepts from a graphical, numerical and algebraic point of view. Having laid the foundations of the number system, the author has then turned to the analysis of infinite proc
Directory of Open Access Journals (Sweden)
Aneta Rita Borkowska
2014-05-01
Full Text Available BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors’ experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average, in terms of concentration, work pace, efficiency and perception.
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC), and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighting methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighting method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighting methods perform better than the individual members. Model averaging by these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
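As an illustrative sketch only (not code from the study), the weighted combination of member hydrographs and the Nash-Sutcliffe Efficiency score can be written as follows; the function names, toy observations and equal weights are hypothetical:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of the observations."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def combine(members, weights):
    """Weighted combination of member hydrographs (rows = members)."""
    members, weights = np.asarray(members, float), np.asarray(weights, float)
    return weights @ members / weights.sum()

# Hypothetical toy data: two members bracketing the observed flows.
obs = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
members = np.array([0.8 * obs, 1.2 * obs])
sam = combine(members, np.ones(2))  # simple arithmetic mean (SAM)
```

Here the equal-weight average happens to cancel the members' opposite biases; methods such as Granger-Ramanathan instead estimate the weights from a calibration period.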
Lateral dispersion coefficients as functions of averaging time
International Nuclear Information System (INIS)
Sheih, C.M.
1980-01-01
Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate among the various processes in studies of plume dispersion.
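The Turner-style power-law adjustment between averaging times can be sketched as below; the exponent value p = 0.17 is an assumed typical choice, not a value taken from this record, and the function name is hypothetical:

```python
def scale_concentration(c_ref, t_ref_s, t_s, p=0.17):
    """Power-law relation between peak concentrations at two averaging
    times: c(t) = c_ref * (t_ref / t)**p. Longer averaging smooths the
    peak, so concentration decreases as t grows. p = 0.17 is an assumed
    typical exponent, not one reported in this abstract."""
    return c_ref * (t_ref_s / t_s) ** p

# Rescale a 15 min (900 s) average to a 1 h (3600 s) averaging time.
c_1h = scale_concentration(100.0, 900.0, 3600.0)
```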
Numbers and other math ideas come alive
Pappas, Theoni
2012-01-01
Most people don't think about numbers, or take them for granted. For the average person numbers are looked upon as cold, clinical, inanimate objects. Math ideas are viewed as something to get a job done or a problem solved. Get ready for a big surprise with Numbers and Other Math Ideas Come Alive. Pappas explores mathematical ideas by looking behind the scenes of what numbers, points, lines, and other concepts are saying and thinking. In each story, properties and characteristics of math ideas are entertainingly uncovered and explained through the dialogues and actions of its math
2010-07-01
... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
Average inactivity time model, associated orderings and reliability properties
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are resolved.
Average L-shell fluorescence, Auger, and electron yields
International Nuclear Information System (INIS)
Krause, M.O.
1980-01-01
The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically; the average yields can be used in place of the subshell yields in most cases of inner-shell ionization.
Simultaneous inference for model averaging of derived parameters
DEFF Research Database (Denmark)
Jensen, Signe Marie; Ritz, Christian
2015-01-01
Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family......
Time average vibration fringe analysis using Hilbert transformation
International Nuclear Information System (INIS)
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-01-01
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time-average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time-average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small-scale specimen using a time-average microscopic TV holography system.
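A minimal 1-D sketch of the Hilbert-transform idea, assuming a background-subtracted cosine fringe profile; the FFT-based analytic-signal construction and all names here are illustrative, not the authors' implementation:

```python
import numpy as np

def analytic_signal(s):
    """FFT-based analytic signal (a numpy-only discrete Hilbert transform):
    zero the negative frequencies, double the positive ones."""
    s = np.asarray(s, float)
    n = s.size
    spec = np.fft.fft(s)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def fringe_phase(intensity):
    """Wrapped phase of a background-subtracted 1-D fringe profile."""
    return np.angle(analytic_signal(intensity - np.mean(intensity)))

# Synthetic test: cosine fringes with a known linear phase ramp.
n = 1000
phi = 2 * np.pi * 5 * np.arange(n) / n  # 5 fringes across the profile
recovered = fringe_phase(np.cos(phi))
```

For real Bessel fringes the recovered phase would then be mapped to vibration amplitude through the Bessel-function argument, a step not shown here.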
Average multiplications in deep inelastic processes and their interpretation
International Nuclear Information System (INIS)
Kiselev, A.V.; Petrov, V.A.
1983-01-01
Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With the increase of the final hadron state energy, the leading contribution to the average multiplicity comes from a parton subprocess due to the production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e- annihilation tends to unity at high energies.
Fitting a function to time-dependent ensemble averaged data
DEFF Research Database (Denmark)
Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders
2018-01-01
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software....
Average wind statistics for SRP area meteorological towers
International Nuclear Information System (INIS)
Laurinat, J.E.
1987-01-01
A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated by the averaged statistics.
Templates, Numbers & Watercolors.
Clemesha, David J.
1990-01-01
Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)
Higher Education and Inequality
Brown, Roger
2018-01-01
After climate change, rising economic inequality is the greatest challenge facing the advanced Western societies. Higher education has traditionally been seen as a means to greater equality through its role in promoting social mobility. But with increased marketisation higher education now not only reflects the forces making for greater inequality…
Higher Education in California
Public Policy Institute of California, 2016
2016-01-01
Higher education enhances Californians' lives and contributes to the state's economic growth. But population and education trends suggest that California is facing a large shortfall of college graduates. Addressing this shortfall will require strong gains for groups that have been historically underrepresented in higher education. Substantial…
Reimagining Christian Higher Education
Hulme, E. Eileen; Groom, David E., Jr.; Heltzel, Joseph M.
2016-01-01
The challenges facing higher education continue to mount. The shifting of the U.S. ethnic and racial demographics, the proliferation of advanced digital technologies and data, and the move from traditional degrees to continuous learning platforms have created an unstable environment to which Christian higher education must adapt in order to remain…
Elwick, Alex; Cannizzaro, Sara
2017-01-01
This paper investigates the higher education literature surrounding happiness and related notions: satisfaction, despair, flourishing and well-being. It finds that there is a real dearth of literature relating to profound happiness in higher education: much of the literature using the terms happiness and satisfaction interchangeably as if one were…
Bank, Barbara J., Ed.
2011-01-01
This comprehensive, encyclopedic review explores gender and its impact on American higher education across historical and cultural contexts. Challenging recent claims that gender inequities in U.S. higher education no longer exist, the contributors--leading experts in the field--reveal the many ways in which gender is embedded in the educational…
DEFF Research Database (Denmark)
Zou, Yihuan
is about constructing a more inclusive understanding of quality in higher education through combining the macro, meso and micro levels, i.e. from the perspectives of national policy, higher education institutions as organizations in society, individual teaching staff and students. It covers both......Quality in higher education was not invented in recent decades – universities have always possessed mechanisms for assuring the quality of their work. The rising concern over quality is closely related to the changes in higher education and its social context. Among others, the most conspicuous...... changes are the massive expansion, diversification and increased cost in higher education, and new mechanisms of accountability initiated by the state. With these changes the traditional internally enacted academic quality-keeping has been given an important external dimension – quality assurance, which...
Weather conditions influence the number of psychiatric emergency room patients
Brandl, Eva Janina; Lett, Tristram A.; Bakanidze, George; Heinz, Andreas; Bermpohl, Felix; Schouler-Ocak, Meryam
2017-12-01
The specific impact of weather factors on psychiatric disorders has been investigated in only a few studies, with inconsistent results. We hypothesized that meteorological conditions influence the number of cases presenting in a psychiatric emergency room as a measure of mental health conditions. We analyzed the number of patients consulting the emergency room (ER) of a psychiatric hospital in Berlin, Germany, between January 1, 2008, and December 31, 2014. A total of N = 22,672 cases were treated in the ER over the study period. Meteorological data were obtained from a publicly available database. Due to collinearity among the meteorological variables, we performed a principal component (PC) analysis. Association of the PCs with the daily number of patients was analyzed with an autoregressive integrated moving average model. Delayed effects were investigated using Granger causal modeling. The daily number of patients in the ER was significantly higher in spring and summer compared to fall and winter (p < 0.001). Overall, our results suggest that weather conditions influence the number of psychiatric patients consulting the emergency room. In particular, our data indicate lower patient numbers during very cold temperatures.
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
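The Hargreaves-Samani form of the evaporative-demand estimate, and the resulting water balance, can be sketched as below; the coefficient 0.0023 and offset 17.8 are the commonly quoted values for this model, and the function names and inputs are illustrative assumptions, not the authors' code:

```python
import math

def hargreaves_et0(t_max_c, t_min_c, ra_mm_day):
    """Hargreaves-Samani reference evaporative demand (mm/day), with the
    extraterrestrial radiation ra_mm_day given in evaporation-equivalent
    units; 0.0023 and 17.8 are the commonly quoted coefficients."""
    t_mean = 0.5 * (t_max_c + t_min_c)
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max_c - t_min_c)

def water_balance(precip_mm, t_max_c, t_min_c, ra_mm_day, days=30):
    """Monthly water balance for one grid cell: precipitation minus
    accumulated atmospheric evaporative demand."""
    return precip_mm - days * hargreaves_et0(t_max_c, t_min_c, ra_mm_day)
```

The per-cell balance would then be evaluated for each month of the 1 km grid, with Ra computed from latitude and day of year.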
Medicare Part B Drug Average Sales Pricing Files
U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...
High Average Power Fiber Laser for Satellite Communications, Phase I
National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...
Time averaging, ageing and delay analysis of financial time series
Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf
2017-06-01
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
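A minimal sketch of the time-averaged MSD over overlapping windows of a single trajectory; the function name and toy data are hypothetical, and the ballistic example is only meant to show the lag-squared scaling of a drifting series:

```python
import numpy as np

def time_averaged_msd(x, lag):
    """Time-averaged MSD at one lag: the mean squared increment over all
    overlapping windows of a single trajectory x(t)."""
    x = np.asarray(x, float)
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 2)

# Ballistic toy trajectory: the TA-MSD grows as lag**2.
traj = np.arange(100.0)
msd_1, msd_2 = time_averaged_msd(traj, 1), time_averaged_msd(traj, 2)
```

The ageing and delay-time variants described in the abstract restrict or shift the averaging window rather than using the whole series.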
Historical Data for Average Processing Time Until Hearing Held
Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...
GIS Tools to Estimate Average Annual Daily Traffic
2012-06-01
This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...
The average-shadowing property and topological ergodicity for flows
International Nuclear Information System (INIS)
Gu Rongbao; Guo Wenjing
2005-01-01
In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic
Microbes make average 2 nanometer diameter crystalline UO2 particles.
Suzuki, Y.; Kelly, S. D.; Kemner, K. M.; Banfield, J. F.
2001-12-01
It is well known that phylogenetically diverse groups of microorganisms are capable of catalyzing the reduction of highly soluble U(VI) to highly insoluble U(IV), which rapidly precipitates as uraninite (UO2). Because biological uraninite is highly insoluble, microbial uranyl reduction is being intensively studied as the basis for a cost-effective in-situ bioremediation strategy. Previous studies have described UO2 biomineralization products as amorphous or poorly crystalline. The objective of this study is to characterize the nanocrystalline uraninite in detail in order to determine the particle size, crystallinity, and size-related structural characteristics, and to examine the implications of these for reoxidation and transport. In this study, we obtained U-contaminated sediment and water from an inactive U mine and incubated them anaerobically with nutrients to stimulate reductive precipitation of UO2 by indigenous anaerobic bacteria, mainly Gram-positive spore-forming Desulfosporosinus and Clostridium spp. as revealed by RNA-based phylogenetic analysis. Desulfosporosinus sp. was isolated from the sediment, and UO2 was precipitated by this isolate from a simple solution containing only U and electron donors. We characterized the UO2 formed in both experiments by high-resolution TEM (HRTEM) and X-ray absorption fine structure analysis (XAFS). The results from HRTEM showed that both the pure and the mixed cultures of microorganisms precipitated crystalline UO2 particles of around 1.5-3 nm. Some particles as small as around 1 nm could be imaged. Rare particles around 10 nm in diameter were also present. Particles adhere to cells and form colloidal aggregates with low fractal dimension. In some cases, coarsening by oriented attachment on {111} is evident. Our preliminary results from XAFS for the incubated U-contaminated sample also indicated an average diameter of UO2 of 2 nm. In nanoparticles, the U-U distance obtained by XAFS was 0.373 nm, 0.012 nm
Application of Bayesian approach to estimate average level spacing
International Nuclear Information System (INIS)
Huang Zhongfu; Zhao Zhixiang
1991-01-01
A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information given in the distributions of both level spacings and neutron widths, levels missing in a measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out.
Annual average equivalent dose of workers from the health area
International Nuclear Information System (INIS)
Daltro, T.F.L.; Campos, L.L.
1992-01-01
Personnel monitoring data from 1985 to 1991 for staff working in the health area were studied, giving a general overview of changes in the annual average equivalent dose. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses between the same sectors in different hospitals. (C.G.C.)
A precise measurement of the average b hadron lifetime
Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; 
Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; 
Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G
1996-01-01
An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.
Bivariate copulas on the exponentially weighted moving average control chart
Directory of Open Access Journals (Sweden)
Sasigarn Kuvattana
2016-10-01
This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is assessed by the Average Run Length (ARL), which is compared for each copula. Copula functions are used to specify the dependence between random variables, with the dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
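The ARL criterion used in the abstract above can be illustrated with a short Monte Carlo sketch. This is a minimal, assumed setup: independent exponential observations, a standard EWMA update, and illustrative values for the smoothing constant and control-limit width; the paper's copula-based dependence structure is not reproduced here.

```python
# Monte Carlo estimate of the in-control Average Run Length (ARL) of an
# EWMA chart monitoring exponential observations. lam and L are assumed
# illustrative values, not taken from the paper.
import random

def ewma_run_length(lam=0.1, mean=1.0, L=2.7, n_max=100_000, rng=random):
    z = mean                                   # start the statistic at target
    sigma_z = mean * (lam / (2 - lam)) ** 0.5  # asymptotic EWMA std. dev.
    ucl, lcl = mean + L * sigma_z, mean - L * sigma_z
    for t in range(1, n_max + 1):
        x = rng.expovariate(1.0 / mean)        # exponential observation
        z = lam * x + (1 - lam) * z            # EWMA update
        if not (lcl < z < ucl):
            return t                           # run length until first signal
    return n_max

random.seed(42)
runs = [ewma_run_length() for _ in range(500)]
arl = sum(runs) / len(runs)
print(f"estimated in-control ARL: {arl:.0f}")
```

Averaging many simulated run lengths in this way is exactly how ARL comparisons between chart designs (or, in the paper, between copulas) are made.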
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
Directory of Open Access Journals (Sweden)
Tellier Yoann
2018-01-01
The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien
2018-04-01
The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
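The averaging bias both MERLIN records refer to comes from pushing noisy measurements through a non-linear (logarithmic) retrieval. A generic numerical sketch of the effect, with arbitrary numbers rather than MERLIN instrument values:

```python
# Jensen's-inequality illustration of averaging bias: because the IPDA
# retrieval is logarithmic in the measured signals, averaging per-shot
# log-retrievals differs from retrieving from the averaged signal.
# Signal level and noise are made-up values, not MERLIN parameters.
import math
import random

random.seed(1)
true_signal = 1.0
# Floor at 0.05 keeps the logarithm defined for rare negative samples.
shots = [max(0.05, random.gauss(true_signal, 0.3)) for _ in range(50_000)]

log_of_mean = math.log(sum(shots) / len(shots))              # average, then log
mean_of_logs = sum(math.log(s) for s in shots) / len(shots)  # log, then average

print(f"log of averaged signal:   {log_of_mean:+.4f}")
print(f"average of per-shot logs: {mean_of_logs:+.4f}")
```

The second estimate is systematically lower (Jensen's inequality for a concave function), which is the kind of bias the correction algorithms in the paper are designed to remove.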
The average action for scalar fields near phase transitions
International Nuclear Information System (INIS)
Wetterich, C.
1991-08-01
We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)
Wave function collapse implies divergence of average displacement
Marchewka, A.; Schuss, Z.
2005-01-01
We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, implies that the average displacement of the particle on the line does not exist. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.
Average geodesic distance of skeleton networks of Sierpinski tetrahedron
Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao
2018-04-01
The average distance is a central quantity in the study of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique of finite patterns for the integral of geodesic distance with respect to the self-similar measure on the Sierpinski tetrahedron.
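The quantity whose asymptotics that paper derives, the average geodesic distance of a network, is easy to state concretely. A minimal sketch on a hand-made four-vertex graph (not a Sierpinski skeleton network, which would need the paper's recursive construction):

```python
# Average geodesic (shortest-path) distance of an unweighted graph,
# computed by breadth-first search from every vertex.
from collections import deque

def average_distance(adj):
    n = len(adj)
    total = 0
    for src in adj:                    # BFS from every vertex
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))       # mean over ordered vertex pairs

# Example: a square 0-1-2-3 with the extra chord 1-3.
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
print(average_distance(adj))  # -> 1.1666... (14 ordered-pair hops / 12 pairs)
```

For self-similar networks the interest is in how this value grows with the level of the construction, which is what the asymptotic formula captures.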
Directory of Open Access Journals (Sweden)
Cliver Edward W.
2017-01-01
We analyze the normalization factors (k′-factors) used to scale secondary observers to the Royal Greenwich Observatory (RGO) reference series of the Hoyt & Schatten (1998a, 1998b) group sunspot number (GSN). A time series of these k′-factors exhibits an anomaly from 1841 to 1920, viz., the average k′-factor for all observers who began reporting groups from 1841 to 1883 is 1.075 vs. 1.431 for those who began from 1884 to 1920, with a progressive rise, on average, during the latter period. The 1883-1884 break between the two subintervals occurs precisely at the point where Hoyt and Schatten began to use a complex daisy-chaining method to scale observers to RGO. The 1841-1920 anomaly implies, implausibly, that the average sunspot observer who began from 1841 to 1883 was nearly as proficient at counting groups as mid-20th century RGO (for which k′ = 1.0 by definition), while observers beginning during the 1884-1920 period regressed in group counting capability relative to those from the earlier interval. Instead, as shown elsewhere and substantiated here, RGO group counts increased relative to those of other long-term observers from 1874 to ~1915. This apparent inhomogeneity in the RGO group count series is primarily responsible for the increase in k′-factors from 1884 to 1920 and the suppression, by 44% on average, of the Hoyt and Schatten GSN relative to the original Wolf sunspot number (WSN) before ~1885. Correcting for the early “learning curve” in the RGO reference series and minimizing the use of daisy-chaining rectifies the anomalous behavior of the k′-factor series. The resultant GSN time series (designated GSN*) is in reasonable agreement with the revised WSN (SN*; Clette & Lefèvre 2016) and the backbone-based group sunspot number (RGS; Svalgaard & Schatten 2016) but significantly higher than other recent reconstructions (Friedli, personal communication, 2016; Lockwood et al. 2014a, 2014b; Usoskin et al. 2016a). This result
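The normalization idea behind a k′-factor can be sketched in a few lines. This is a deliberately minimal ratio-of-means illustration on made-up daily group counts; Hoyt & Schatten's actual procedure (and the daisy-chaining criticized above) is considerably more involved:

```python
# Scale a secondary observer's daily sunspot-group counts to a reference
# series using the ratio of means over days both observed. All counts
# below are invented for illustration.
rgo      = [4, 5, 3, 6, 4, 5]   # reference (RGO-like) daily group counts
observer = [3, 4, 2, 5, 3, 4]   # secondary observer, same days

k_prime = sum(rgo) / sum(observer)      # ratio-of-means scale factor
scaled = [k_prime * g for g in observer]
print(f"k' = {k_prime:.3f}")
```

A k′ above 1 means the observer systematically reported fewer groups than the reference; the paper's point is that an inhomogeneous reference series distorts every k′ computed against it.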
International Nuclear Information System (INIS)
Coveyou, R.R.
1974-01-01
The subject of random number generation is currently controversial. Differing opinions on this subject seem to stem from implicit or explicit differences in philosophy; in particular, from differing ideas concerning the role of probability in the real world of physical processes, electronic computers, and Monte Carlo calculations. An attempt is made here to reconcile these views. The role of stochastic ideas in mathematical models is discussed. In illustration of these ideas, a mathematical model of the use of random number generators in Monte Carlo calculations is constructed. This model is used to set up criteria for the comparison and evaluation of random number generators. (U.S.)
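The report above concerns criteria for comparing and evaluating random number generators for Monte Carlo work. One concrete such criterion is a chi-square test of uniformity, sketched here on a small linear congruential generator (the generator parameters are the common Numerical Recipes choice; the test setup is illustrative, not taken from the report):

```python
# Chi-square uniformity test on binned output of a linear congruential
# generator, as one simple evaluation criterion for an RNG.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m          # uniform-looking value in [0, 1)

gen = lcg(seed=12345)
n, bins = 100_000, 10
counts = [0] * bins
for _ in range(n):
    counts[int(next(gen) * bins)] += 1

expected = n / bins
chi2 = sum((o - expected) ** 2 / expected for o in counts)
# For 9 degrees of freedom, the 95th percentile of chi-square is about 16.9;
# much larger values would indicate non-uniform output.
print(f"chi-square = {chi2:.2f}")
```

Statistical tests of this kind are necessary but not sufficient: as the report argues, what counts as a "good" generator also depends on one's model of how randomness enters the Monte Carlo calculation.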
Weiss, Edwin
1998-01-01
Careful organization and clear, detailed proofs characterize this methodical, self-contained exposition of basic results of classical algebraic number theory from a relatively modern point of view. This volume presents most of the number-theoretic prerequisites for a study of either class field theory (as formulated by Artin and Tate) or the contemporary treatment of analytical questions (as found, for example, in Tate's thesis). Although concerned exclusively with algebraic number fields, this treatment features axiomatic formulations with a considerable range of applications. Modern abstract te
Cohn, Harvey
1980-01-01
"A very stimulating book ... in a class by itself." - American Mathematical Monthly. Advanced students, mathematicians and number theorists will welcome this stimulating treatment of advanced number theory, which approaches the complex topic of algebraic number theory from a historical standpoint, taking pains to show the reader how concepts, definitions and theories have evolved during the last two centuries. Moreover, the book abounds with numerical examples and more concrete, specific theorems than are found in most contemporary treatments of the subject. The book is divided into three parts
Crossley, John N
1987-01-01
This book presents detailed studies of the development of three kinds of number. In the first part the development of the natural numbers from Stone-Age times right up to the present day is examined not only from the point of view of pure history but also taking into account archaeological, anthropological and linguistic evidence. The dramatic change caused by the introduction of logical theories of number in the 19th century is also treated and this part ends with a non-technical account of the very latest developments in the area of Gödel's theorem. The second part is concerned with the deve
Professor Stewart's incredible numbers
Stewart, Ian
2015-01-01
Ian Stewart explores the astonishing properties of numbers from 1 to 10 to zero and infinity, including one figure that, if you wrote it out, would span the universe. He looks at every kind of number you can think of - real, imaginary, rational, irrational, positive and negative - along with several you might have thought you couldn't think of. He explains the insights of the ancient mathematicians, shows how numbers have evolved through the ages, and reveals the way numerical theory enables everyday life. Under Professor Stewart's guidance you will discover the mathematics of codes,
LeVeque, William J
1996-01-01
This excellent textbook introduces the basics of number theory, incorporating the language of abstract algebra. A knowledge of such algebraic concepts as group, ring, field, and domain is not assumed, however; all terms are defined and examples are given - making the book self-contained in this respect. The author begins with an introductory chapter on number theory and its early history. Subsequent chapters deal with unique factorization and the GCD, quadratic residues, number-theoretic functions and the distribution of primes, sums of squares, quadratic equations and quadratic fields, diopha
Kneusel, Ronald T
2015-01-01
This is a book about numbers and how those numbers are represented in and operated on by computers. It is crucial that developers understand this area because the numerical operations allowed by computers, and the limitations of those operations, especially in the area of floating point math, affect virtually everything people try to do with computers. This book aims to fill this gap by exploring, in sufficient but not overwhelming detail, just what it is that computers do with numbers. Divided into two parts, the first deals with standard representations of integers and floating point numb
Sierpinski, Waclaw
1988-01-01
Since the publication of the first edition of this work, considerable progress has been made in many of the questions examined. This edition has been updated and enlarged, and the bibliography has been revised.The variety of topics covered here includes divisibility, diophantine equations, prime numbers (especially Mersenne and Fermat primes), the basic arithmetic functions, congruences, the quadratic reciprocity law, expansion of real numbers into decimal fractions, decomposition of integers into sums of powers, some other problems of the additive theory of numbers and the theory of Gaussian
Directory of Open Access Journals (Sweden)
R. A. Mollin
1986-01-01
A powerful number is a positive integer n satisfying the property that p² divides n whenever the prime p divides n; i.e., in the canonical prime decomposition of n, no prime appears with exponent 1. In [1], S.W. Golomb introduced and studied such numbers. In particular, he asked whether (25, 27) is the only pair of consecutive odd powerful numbers. This question was settled in [2] by W.A. Sentance, who gave necessary and sufficient conditions for the existence of such pairs. The first result of this paper is to provide a generalization of Sentance's result by giving necessary and sufficient conditions for the existence of pairs of powerful numbers spaced evenly apart. This result leads us naturally to consider integers which are representable as a proper difference of two powerful numbers, i.e. n = p1 − p2 where p1 and p2 are powerful numbers with g.c.d.(p1, p2) = 1. Golomb (op. cit.) conjectured that 6 is not a proper difference of two powerful numbers, and that there are infinitely many numbers which cannot be represented as a proper difference of two powerful numbers. The antithesis of this conjecture was proved by W.L. McDaniel [3], who verified that every non-zero integer is in fact a proper difference of two powerful numbers in infinitely many ways. McDaniel's proof is essentially an existence proof. The second result of this paper is a simpler proof of McDaniel's result as well as an effective algorithm (in the proof) for explicitly determining infinitely many such representations. However, in both our proof and McDaniel's proof one of the powerful numbers is almost always a perfect square (namely, one is always a perfect square when n ≢ 2 (mod 4)). We provide in §2 a proof that all even integers are representable in infinitely many ways as a proper nonsquare difference, i.e., a proper difference of two powerful numbers neither of which is a perfect square. This, in conjunction with the odd case in [4], shows that every integer is representable in
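The defining property of powerful numbers is simple to check by trial division, and the consecutive pairs the abstract alludes to show up immediately. A short sketch (the search bound of 1000 is arbitrary):

```python
# A powerful number n has p**2 | n for every prime p | n.
def is_powerful(n):
    p = 2
    while p * p <= n:
        if n % p == 0:
            if n % (p * p) != 0:
                return False           # prime p appears with exponent 1
            while n % p == 0:
                n //= p
        p += 1
    return n == 1                      # a leftover factor would have exponent 1

powerful = [n for n in range(1, 1000) if is_powerful(n)]
pset = set(powerful)
pairs = [(a, a + 1) for a in powerful if a + 1 in pset]
print(pairs)  # -> [(8, 9), (288, 289), (675, 676)]
```

Note that (25, 27) from the abstract is a pair of consecutive *odd* powerful numbers (spaced two apart), a different notion from the consecutive-integer pairs listed here.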
Corry, Leo
2015-01-01
The world around us is saturated with numbers. They are a fundamental pillar of our modern society, and accepted and used with hardly a second thought. But how did this state of affairs come to be? In this book, Leo Corry tells the story behind the idea of number from the early days of the Pythagoreans, up until the turn of the twentieth century. He presents an overview of how numbers were handled and conceived in classical Greek mathematics, in the mathematics of Islam, in European mathematics of the middle ages and the Renaissance, during the scientific revolution, all the way through to the
Dudley, Underwood
2008-01-01
Ideal for a first course in number theory, this lively, engaging text requires only a familiarity with elementary algebra and the properties of real numbers. Author Underwood Dudley, who has written a series of popular mathematics books, maintains that the best way to learn mathematics is by solving problems. In keeping with this philosophy, the text includes nearly 1,000 exercises and problems: some computational, some classical, many original, and some with complete solutions. The opening chapters offer sound explanations of the basics of elementary number theory and develop the fundamenta
Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages
Directory of Open Access Journals (Sweden)
Maureen Fontaine
2017-07-01
The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices, whether trained-to-familiar (Experiment 1) or famous (Experiment 2), are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In Experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several “speaker averages,” created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In Experiment 2, the same participants performed the same task on voice samples produced by famous speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of averaging in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.
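The "speaker average as prototype" idea in the abstract has a simple computational analogue: average feature vectors per speaker and classify a new sample by its nearest average. A toy sketch with invented three-dimensional vectors standing in for real acoustic features:

```python
# Nearest-prototype identification: average each speaker's utterance
# vectors, then assign a query to the closest average. The vectors and
# speaker names are made up for illustration.
def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest(query, prototypes):
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda name: d2(query, prototypes[name]))

utterances = {
    "anna":  [[1.0, 0.2, 0.1], [1.2, 0.1, 0.0], [0.9, 0.3, 0.2]],
    "boris": [[0.1, 1.1, 0.4], [0.2, 0.9, 0.5], [0.0, 1.0, 0.6]],
}
prototypes = {name: mean(vs) for name, vs in utterances.items()}
print(nearest([1.1, 0.2, 0.1], prototypes))  # -> anna
```

Averaging more utterances makes the prototype less sensitive to any single sample's idiosyncrasies, mirroring the paper's finding that identification of famous voices improved with the number of utterances in the average.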
Bridges, Ann; Mitchell, John
2015-01-01
A brand new edition of the former Higher English: Close Reading, completely revised and updated for the new Higher element (Reading for Understanding, Analysis and Evaluation), worth 30% of marks in the final exam. We are working with SQA to secure endorsement for this title. Written by two highly experienced authors, this book shows you how to practice for the Reading for Understanding, Analysis and Evaluation section of the new Higher English exam. This book introduces the terms and concepts that lie behind success and offers guidance on the interpretation of questions and targeting answer
African Journals Online (AJOL)
OLUWOLE
Agro-Science Journal of Tropical Agriculture, Food, Environment and Extension. Volume 9 Number 1 ... of persistent dumping of cheap subsidized food imports from developed ... independence of the inefficiency effects in the two estimation ...
High Reynolds Number Turbulence
National Research Council Canada - National Science Library
Smits, Alexander J
2007-01-01
The objectives of the grant were to provide a systematic study to fill the gap between existing research on low Reynolds number turbulent flows and the kinds of turbulent flows encountered on full-scale vehicles...
International Development Research Centre (IDRC) Digital Library (Canada)
Operating a Demographic Surveillance System (DSS) like this one requires a blend of high-tech number-crunching ability and .... views follow a standardized format that takes several ... general levels of health and to the use of health services.
Quantum random number generator
Pooser, Raphael C.
2016-05-10
A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.
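Raw bits derived from a random physical characteristic, such as the photon statistics described above, are typically biased and need post-processing. A classic, generic extractor for this is von Neumann debiasing; this sketch is an illustration of that general technique, not the amplifier-based scheme of the patent:

```python
# Von Neumann debiasing: from a stream of independent but biased bits,
# examine non-overlapping pairs; emit 0 for (0,1), 1 for (1,0), and
# discard (0,0) and (1,1). The output bits are unbiased.
import random

def von_neumann(bits):
    out = []
    for b1, b2 in zip(bits[::2], bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

random.seed(7)
# Simulated biased source: ones with probability 0.7 (assumed value).
biased = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
fair = von_neumann(biased)
print(f"input ones fraction:  {sum(biased) / len(biased):.3f}")
print(f"output ones fraction: {sum(fair) / len(fair):.3f}")
```

The cost is throughput: a source with bias p keeps only a fraction 2p(1 - p) of its bit pairs, which is one motivation for hardware designs that minimize bias at the source instead.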
Solar Indices - Sunspot Numbers
National Oceanic and Atmospheric Administration, Department of Commerce — Collection includes a variety of indices related to solar activity contributed by a number of national and private solar observatories located worldwide. This...
Schwartz, Richard Evan
2014-01-01
In the American Mathematical Society's first-ever book for kids (and kids at heart), mathematician and author Richard Evan Schwartz leads math lovers of all ages on an innovative and strikingly illustrated journey through the infinite number system. By means of engaging, imaginative visuals and endearing narration, Schwartz manages the monumental task of presenting the complex concept of Big Numbers in fresh and relatable ways. The book begins with small, easily observable numbers before building up to truly gigantic ones, like a nonillion, a tredecillion, a googol, and even ones too huge for names! Any person, regardless of age, can benefit from reading this book. Readers will find themselves returning to its pages for a very long time, perpetually learning from and growing with the narrative as their knowledge deepens. Really Big Numbers is a wonderful enrichment for any math education program and is enthusiastically recommended to every teacher, parent and grandparent, student, child, or other individual i...
Planning for Higher Education.
Lindstrom, Caj-Gunnar
1984-01-01
Decision processes for strategic planning for higher education institutions are outlined using these parameters: institutional goals and power structure, organizational climate, leadership attitudes, specific problem type, and problem-solving conditions and alternatives. (MSE)
N.V. Provozin; А.S. Teletov
2011-01-01
The article discusses the features of advertising for higher education institutions. It analyzes the results of marketing research on students' choice of institution and further study, and outlines principles for an advertising campaign at three levels: the university, the faculty, and the individual department.
Gibbison, Godfrey A.; Henry, Tracyann L.; Perkins-Brown, Jayne
2011-01-01
Freshman grade point average, in particular first semester grade point average, is an important predictor of survival and eventual student success in college. As many institutions of higher learning are searching for ways to improve student success, one would hope that policies geared towards the success of freshmen have long term benefits…
Indian Academy of Sciences (India)
One could endlessly churn out congruent numbers following the method in Box 1 without being certain when a given number n (or n × m², for some integer m) will appear on the list. Continuing in this way would exhaust one's computing resources, not to mention one's patience! Also, this procedure is of no avail if n is not ...
International Nuclear Information System (INIS)
Accioly, A.J.
1987-01-01
A possible classical route leading towards a general relativity theory with higher derivatives, starting, in a sense, from first principles, is analysed. A completely causal vacuum solution with the symmetries of the Goedel universe is obtained in the framework of this higher-derivative gravity. This very peculiar and rare result is the first known vacuum solution of the fourth-order gravity theory that is not a solution of the corresponding Einstein equations. (Author)
CERN. Geneva
2014-01-01
The conjectured relation between higher spin theories on anti de-Sitter (AdS) spaces and weakly coupled conformal field theories is reviewed. I shall then outline the evidence in favour of a concrete duality of this kind, relating a specific higher spin theory on AdS3 to a family of 2d minimal model CFTs. Finally, I shall explain how this relation fits into the framework of the familiar stringy AdS/CFT correspondence.
DEFF Research Database (Denmark)
Korsby, Trine Mygind
2017-01-01
Taking as its point of departure negotiations for access to a phone number for a brothel abroad, the article demonstrates how a group of pimps in Eastern Romania attempt to extend their local business into the rest of the EU. The article shows how the phone number works as a micro-infrastructure, and how the pimps in turn cultivate and maximize uncertainty about themselves in others. When making the move to go abroad into unknown terrains, accessing the infrastructure generated by the phone number can provide certainty and consolidate one's position within criminal networks abroad. However, at the same time, mishandling the phone number can be dangerous and in that sense produce new doubts and uncertainties.
International Nuclear Information System (INIS)
Metcalfe, N.; Shanks, T.; Fong, R.; Jones, L.R.
1991-01-01
Using the Prime Focus CCD Camera at the Isaac Newton Telescope we have determined the form of the B and R galaxy number-magnitude count relations in 12 independent fields, to faint CCD magnitude limits in both B and R. The average galaxy count relations lie in the middle of the wide range previously encompassed by photographic data. The field-to-field variation of the counts is small enough to define the faint galaxy count to ±10 per cent, and this variation is consistent with that expected from galaxy clustering considerations. Our new data confirm that the B, and also the R, galaxy counts show evidence for strong galaxy luminosity evolution, and that the majority of the evolving galaxies are of moderately blue colour. (author)
Limit cycles from a cubic reversible system via the third-order averaging method
Directory of Open Access Journals (Sweden)
Linping Peng
2015-04-01
This article concerns the bifurcation of limit cycles from a cubic integrable and non-Hamiltonian system. By using the averaging theory of the first and second orders, we show that under any small cubic homogeneous perturbation, at most two limit cycles bifurcate from the period annulus of the unperturbed system, and this upper bound is sharp. By using the averaging theory of the third order, we show that two is also the maximal number of limit cycles emerging from the period annulus of the unperturbed system.
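For context, the averaging method invoked above can be stated compactly in its standard first-order textbook form (the generic statement, not this paper's specific third-order computation). For a T-periodic perturbed system

```latex
\[
\dot{x} \;=\; \varepsilon F_{1}(t,x) \;+\; \varepsilon^{2} R(t,x,\varepsilon),
\qquad
f_{1}(y) \;=\; \frac{1}{T}\int_{0}^{T} F_{1}(t,y)\,dt ,
\]
```

each simple zero of the averaged function \(f_{1}\) corresponds, for \(\varepsilon\) small enough, to a limit cycle of the original system. The second- and third-order theories introduce analogous averaged functions \(f_{2}\) and \(f_{3}\) that take over when the lower-order averages vanish identically, which is how bounds like the "at most two limit cycles" above are obtained.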
Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function
Directory of Open Access Journals (Sweden)
Christofer Toumazou
2013-07-01
A novel noise filtering algorithm based on the averaging Intrinsic Mode Function (aIMF), a derivation of Empirical Mode Decomposition (EMD), is proposed to remove white Gaussian noise from foreign currency exchange rates, which are nonlinear nonstationary time series signals. Noise patterns with different amplitudes and frequencies were randomly mixed into the five exchange rates. A number of filters, namely the Extended Kalman Filter (EKF), Wavelet Transform (WT), Particle Filter (PF) and the averaging Intrinsic Mode Function (aIMF) algorithm, were used to compare filtering and smoothing performance. The aIMF algorithm demonstrated the highest noise reduction among these filters.
Average Soil Water Retention Curves Measured by Neutron Radiography
Energy Technology Data Exchange (ETDEWEB)
Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD
2011-01-01
Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
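The pixel-wise Beer-Lambert step described above has a simple core. In this sketch the attenuation coefficient and path length are assumed example values, and the beam-hardening and geometric corrections applied in the paper are omitted:

```python
# Water content along the beam path from neutron transmission via
# Beer-Lambert's law: I = I0 * exp(-mu_w * theta * z), so
# theta = -ln(I / I0) / (mu_w * z).
import math

MU_W = 3.5   # effective attenuation coefficient of water, 1/cm (assumed)
Z = 1.0      # sample thickness along the beam, cm (assumed)

def water_content(I, I0, mu_w=MU_W, z=Z):
    return -math.log(I / I0) / (mu_w * z)

I0 = 1000.0                       # reference (dry/open-beam) counts
for I in (900.0, 700.0, 500.0):   # transmitted counts at three pixels
    print(f"I = {I:.0f}  theta = {water_content(I, I0):.3f}")
```

Averaging such per-pixel values over the image at each imposed matric potential is what yields the average retention curve reported in the paper.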
Estimating average glandular dose by measuring glandular rate in mammograms
International Nuclear Information System (INIS)
Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru
2003-01-01
The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
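The pixel-value-to-glandular-rate conversion described above can be sketched simply. The paper fits the conversion curve with a neural network trained on breast-equivalent phantoms; here a plain linear interpolation over made-up calibration points stands in for that curve:

```python
# Convert mammogram pixel values to glandular rate via an assumed
# calibration curve, then average over the breast region. All numbers
# are hypothetical, for illustration only.
from bisect import bisect_left

# (pixel value, glandular rate in %) calibration pairs -- hypothetical.
calib = [(50, 0.0), (90, 25.0), (130, 50.0), (170, 75.0), (210, 100.0)]
xs = [p for p, _ in calib]
ys = [g for _, g in calib]

def glandular_rate(pv):
    if pv <= xs[0]:
        return ys[0]
    if pv >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, pv)                      # first calib point >= pv
    frac = (pv - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])

pixels = [88, 132, 120, 95]                      # pixels in the breast region
rates = [glandular_rate(p) for p in pixels]
avg_rate = sum(rates) / len(rates)
print(f"average glandular rate: {avg_rate:.1f}%")
```

The resulting average glandular rate would then feed into the standard dosimetry conversion to obtain the individual average glandular dose, as the paper does.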
How to pass higher English colour
Bridges, Ann
2009-01-01
How to Pass is the Number 1 revision series for Scottish qualifications across the three examination levels of Standard Grade, Intermediate and Higher! Second editions of the books present all of the material in full colour for the first time.
Zou, Yaotian; Tarko, Andrew P
2018-02-01
The objective of this study was to develop crash modification factors (CMFs) and estimate the average crash costs applicable to a wide range of road-barrier scenarios that involved three types of road barriers (concrete barriers, W-beam guardrails, and high-tension cable barriers) to produce a suitable basis for comparing barrier-oriented design alternatives and road improvements. The intention was to perform the most comprehensive and in-depth analysis allowed by the cross-sectional method and the crash data available in Indiana. To accomplish this objective and to use the available data efficiently, the effects of barrier were estimated on the frequency of barrier-relevant (BR) crashes, the types of harmful events and their occurrence during a BR crash, and the severity of BR crash outcomes. The harmful events component added depth to the analysis by connecting the crash onset with its outcome. Further improvement of the analysis was accomplished by considering the crash outcome severity of all the individuals involved in a crash and not just drivers, utilizing hospital data, and pairing the observations with and without road barriers along same or similar road segments to better control the unobserved heterogeneity. This study confirmed that the total number of BR crashes tended to be higher where medians had installed barriers, mainly due to collisions with barriers and, in some cases, with other vehicles after redirecting vehicles back to traffic. These undesirable effects of barriers were surpassed by the positive results of reducing cross-median crashes, rollover events, and collisions with roadside hazards. The average cost of a crash (unit cost) was reduced by 50% with cable barriers installed in medians wider than 50 ft. A similar effect was concluded for concrete barriers and guardrails installed in medians narrower than 50 ft. The studied roadside guardrails also reduced the unit cost by 20%-30%. Median cable barriers were found to be the most effective
Energy Technology Data Exchange (ETDEWEB)
Nelson, R.N. (ed.)
1985-05-01
This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on ANSI Z39.23-1983, the American National Standards Institute standard Standard Technical Report Number (STRN) - Format and Creation. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: the report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report-issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes, followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor
2016-10-01
Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
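The Granger-Ramanathan averaging mentioned above amounts to an unconstrained least-squares regression of the observations on the ensemble members' simulations. A minimal sketch with synthetic data (the flows and the three "models" below are hypothetical, not the study's catchments or hydrological models):

```python
import numpy as np

def gra_weights(preds, obs):
    """Granger-Ramanathan averaging: least-squares weights that minimize
    the mean squared error of the weighted combination (no sum-to-one
    constraint, no intercept -- the simplest GRA variant)."""
    # preds: (n_times, n_models), obs: (n_times,)
    w, *_ = np.linalg.lstsq(preds, obs, rcond=None)
    return w

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, size=200)                 # synthetic "observed" flows
preds = np.column_stack([obs * 0.8 + rng.normal(0, 1, 200),   # biased model
                         obs * 1.1 + rng.normal(0, 2, 200),   # noisier model
                         obs + rng.normal(0, 3, 200)])        # noisiest model
w = gra_weights(preds, obs)
combined = preds @ w
sam = preds.mean(axis=1)                            # simple arithmetic mean
mse = lambda x: float(np.mean((x - obs) ** 2))
```

Because the simple arithmetic mean is itself one of the linear combinations GRA searches over, the GRA combination can never have a higher in-sample MSE than SAM, which is consistent with the ranking reported above.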
Sarathy, Mani
2016-08-17
This chapter focuses on the production and combustion of alcohol fuels with four or more carbon atoms, which we classify as higher alcohols. It assesses the feasibility of utilizing various C4-C8 alcohols as fuels for internal combustion engines. Utilizing higher-molecular-weight alcohols as fuels requires careful analysis of their fuel properties. ASTM standards provide fuel property requirements for spark-ignition (SI) and compression-ignition (CI) engines such as the stability, lubricity, viscosity, and cold filter plugging point (CFPP) properties of blends of higher alcohols. Important combustion properties that are studied include laminar and turbulent flame speeds, flame blowout/extinction limits, ignition delay under various mixing conditions, and gas-phase and particulate emissions. The chapter focuses on the combustion of higher alcohols in reciprocating SI and CI engines and discusses higher alcohol performance in SI and CI engines. Finally, the chapter identifies the sources, production pathways, and technologies currently being pursued for production of some fuels, including n-butanol, iso-butanol, and n-octanol.
NSSEFF Designing New Higher Temperature Superconductors
2017-04-13
AFRL-AFOSR-VA-TR-2017-0083. NSSEFF - Designing New Higher Temperature Superconductors. Meigan Aronson, The Research Foundation of State University of... Grant number: FA9550-10-1-0191. Abstract fragment: "...materials, identifying the most promising candidates." Subject terms: temperature, superconductor.
Testing averaged cosmology with type Ia supernovae and BAO data
Energy Technology Data Exchange (ETDEWEB)
Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)
2017-02-01
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Average contraction and synchronization of complex switched networks
International Nuclear Information System (INIS)
Wang Lei; Wang Qingguo
2012-01-01
This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)
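The synchronization condition above depends on the time average of the smallest nonzero Laplacian eigenvalue over the switching sequence. The sketch below computes that dwell-time-weighted average for a hypothetical pair of 4-node topologies; the topologies, dwell times, and the final inequality are illustrative assumptions, not the paper's model:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A for an undirected adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def averaged_connectivity(adjs, dwell):
    """Dwell-time-weighted average of the second-smallest Laplacian
    eigenvalue (the smallest nonzero one for connected graphs)."""
    lam2 = [np.sort(np.linalg.eigvalsh(laplacian(A)))[1] for A in adjs]
    return float(np.average(lam2, weights=dwell))

# two hypothetical 4-node topologies, switched with equal dwell times
ring = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
lam_avg = averaged_connectivity([ring, path], dwell=[1.0, 1.0])
# A condition of the type discussed then reads: the coupling strength c
# must satisfy c * lam_avg > mu, where mu bounds the expansion rate of an
# isolated node (assumed known for the node dynamics at hand).
```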
The Health Effects of Income Inequality: Averages and Disparities.
Truesdale, Beth C; Jencks, Christopher
2016-01-01
Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.
Perceived Average Orientation Reflects Effective Gist of the Surface.
Cha, Oakyoon; Chong, Sang Chul
2018-03-01
The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.
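As an aside on the dependent measure itself: averaging orientations is nontrivial because orientation is axial (θ and θ + 180° are the same line). A common approach, sketched here purely for illustration (this is not the authors' analysis code), doubles the angles before taking a circular mean:

```python
import numpy as np

def mean_orientation(deg):
    """Circular mean of axial data in degrees, range [0, 180):
    double the angles, average the unit vectors, halve the resultant."""
    doubled = np.deg2rad(2.0 * np.asarray(deg, dtype=float))
    mean = np.arctan2(np.sin(doubled).mean(), np.cos(doubled).mean())
    return (np.rad2deg(mean) / 2.0) % 180.0

print(mean_orientation([10, 20, 30]))   # close to 20.0
# for [170, 10] the result is near 0 (mod 180), whereas a naive
# arithmetic mean would wrongly give 90
```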
Object detection by correlation coefficients using azimuthally averaged reference projections.
Nicholson, William V
2004-11-01
A method of computing correlation coefficients for object detection that takes advantage of azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function, or a local correlation coefficient, versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
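A minimal sketch of the azimuthal-averaging step and of a local (normalized) correlation coefficient follows; the function names and the Gaussian test image are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def azimuthal_average(img):
    """Average a square image over azimuth (integer radial bins) and
    re-expand the 1-D radial profile into a rotationally symmetric
    2-D reference image."""
    n = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    counts = np.bincount(r.ravel())
    profile = np.bincount(r.ravel(), weights=img.ravel()) / counts
    return profile[r]

def corr_coeff(a, b):
    """Normalized correlation coefficient between two patches."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# toy check: a radially symmetric "particle" correlates almost perfectly
# with its own azimuthal average
n = 64
y, x = np.indices((n, n))
blob = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / 50.0)
ref = azimuthal_average(blob)
```

In an actual detection setting, `corr_coeff` would be evaluated between the azimuthally averaged reference and each candidate window of a noisy micrograph, which is where the robustness differences discussed above arise.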
Measurement of average radon gas concentration at workplaces
International Nuclear Information System (INIS)
Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.
2003-01-01
In this paper, results of measurements of the average radon gas concentration at workplaces (schools, kindergartens, and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is evident in the cases of the hospital cave and the uranium tailing pond). Consequently, at workplaces where considerable seasonal changes of radon concentration are expected, measurements should last 12 months. If that is not possible, the chosen six-month period should contain both summer and winter months. The average radon concentration during working hours can differ considerably from the average over the whole time when doors and windows are opened frequently or artificial ventilation is used. (authors)
Size and emotion averaging: costs of dividing attention after all.
Brand, John; Oriet, Chris; Tottenham, Laurie Sykes
2012-03-01
Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.
Henneaux, Marc; Vasiliev, Mikhail A
2017-01-01
Symmetries play a fundamental role in physics. Non-Abelian gauge symmetries are the symmetries behind theories of massless spin-1 particles, while reparametrization symmetry is behind Einstein's theory of gravity for massless spin-2 particles. In supersymmetric theories these particles can also be connected to massless fermionic particles. Does Nature stop at spin-2, or can there also be massless higher-spin theories? In the past, strong indications were given that such theories do not exist. In recent times, however, ways to evade those constraints have been found and higher-spin gauge theories have been constructed. With the advent of the AdS/CFT duality correspondence, even stronger indications have emerged that higher-spin gauge theories play an important role in fundamental physics. All these issues were discussed at an international workshop in Singapore in November 2015 in which the leading scientists in the field participated. This volume presents an up-to-date, detailed overview of the theories i...
INTERNATIONALIZATION IN HIGHER EDUCATION
Directory of Open Access Journals (Sweden)
Catalina Crisan-Mitra
2016-03-01
Internationalization of higher education is one of the key trends of its development. There are several approaches to achieving competitiveness and performance in higher education; international academic mobility, student exchange programs, and partnerships are some of the aspects that can play a significant role in this process. This paper points out students' perceptions regarding two main directions: first, master students' expectations of how an internationalized master program should be organized and should function; and second, the degree of satisfaction of the beneficiaries of internationalized master programs at Babeș-Bolyai University. The article is based on empirical qualitative research carried out with students of an internationalized master program at the Faculty of Economics and Business Administration. This research can be considered a useful example for those seeking to increase the quality of higher education, and the conclusions drawn are relevant both theoretically and especially practically.
DEFF Research Database (Denmark)
Zou, Yihuan; Zhao, Yingsheng; Du, Xiangyun
This paper starts with a critical approach to reflect on the current practice of quality assessment and assurance in higher education. This is followed by a proposal that, in response to the global challenges for improving the quality of higher education, universities should take active actions of change by improving the quality of teaching and learning. From a constructivist perspective of understanding education and learning, this paper also discusses why and how universities should give more weight to learning and change the traditional role of teaching to an innovative approach of facilitation. This transformation involves a broad scale of change at the individual, organizational, and societal levels. In this change process in higher education, staff development remains one of the key elements for university innovation and at the same time demands a systematic and holistic approach.
DEFF Research Database (Denmark)
Levin, Bruce R; McCall, Ingrid C.; Perrot, Veronique
2017-01-01
We postulate that the inhibition of growth and low rates of mortality of bacteria exposed to ribosome-binding antibiotics deemed bacteriostatic can be attributed almost uniquely to these drugs reducing the number of ribosomes contributing to protein synthesis, i.e., the number of effective ribosomes. [...] When bacteria are exposed to ribosome-targeting bacteriostatic antibiotics, the time before these bacteria start to grow again when the drugs are removed, referred to as the post-antibiotic effect (PAE), is markedly greater for constructs with fewer rrn operons than for those with more rrn operons. We interpret the results of these other experiments reported here as support for the hypothesis that the reduction in the effective number of ribosomes due to binding to these structures provides a sufficient explanation for the action of bacteriostatic antibiotics that target these structures.
Quantum random number generator
Soubusta, Jan; Haderka, Ondrej; Hendrych, Martin
2001-03-01
Since the reflection or transmission of a quantum particle at a beamsplitter is an inherently random quantum process, a device built on this principle does not suffer from the drawbacks of either pseudo-random computer generators or classical noise sources. Nevertheless, a number of physical conditions necessary for high-quality random number generation must be satisfied. Fortunately, in a quantum-optics realization they can be well controlled. We present a simple random number generator based on the division of weak light pulses at a beamsplitter. The randomness of the generated bit stream is supported by passing the data through a series of 15 statistical tests. The device generates at a rate of 109.7 kbit/s.
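The classical post-processing such a device typically needs can be illustrated with a short software sketch. This is purely illustrative: `secrets` stands in for the physical beamsplitter randomness, and the splitting ratio and debiasing choice are assumptions, not the authors' design. Von Neumann debiasing removes the bias left by an imperfect splitting ratio:

```python
import secrets

def raw_bits(n, p=0.5):
    """Simulate detector clicks behind a beamsplitter: each weak pulse is
    transmitted (1) with probability p or reflected (0) otherwise; p != 0.5
    models an imperfect splitting ratio."""
    return [1 if secrets.randbelow(1000) < int(p * 1000) else 0
            for _ in range(n)]

def von_neumann(bits):
    """Debias a biased-but-independent bit stream: pair consecutive bits,
    map (0,1) -> 0 and (1,0) -> 1, and discard (0,0) and (1,1)."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# roughly 48% of the raw bits survive at p = 0.6, and the survivors are
# unbiased as long as the raw bits are independent
bits = von_neumann(raw_bits(20000, p=0.6))
```

The price of the debiasing is throughput: at least half of the raw bits are always discarded, which is one reason hardware generators quote their rate after post-processing.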
Alizée Dauvergne
2010-01-01
What makes the LHC the biggest particle accelerator in the world? Here are some of the numbers that characterise the LHC, and their equivalents in terms that are easier for us to imagine.
Circumference: ~27 km.
Distance covered by the beam in 10 hours: ~10 billion km (a round trip to Neptune).
Number of times a single proton travels around the ring each second: 11 245.
Speed of protons first entering the LHC: 299 732 500 m/s (99.9998% of the speed of light).
Speed of protons when they collide: 299 789 760 m/s (99.9999991% of the speed of light).
Collision temperature: ~10^16 °C ove...
Issues in Moroccan Higher Education
Directory of Open Access Journals (Sweden)
Mohammed Lazrak
2017-06-01
Historically, education has always been the springboard for the socio-economic development of nations. Undoubtedly, education has proved to be the catalyst of change and the front wagon that pulls along all the other wagons of the dynamic sectors. In effect, the role of education can be seen as providing pupils with curriculum and hidden-curriculum skills alike: teaching skills that will prepare them physically, mentally, and socially for the world of work in later life. Morocco spends over 26% of its Gross Domestic Product (GDP) on education. Unfortunately, although this number is substantial, Moroccan education (primary, secondary, and higher education alike) still suffers from a mismatch between state expenditures on education and the results obtained in practice. In this article, an attempt is made to touch on some relevant issues pertaining to higher education with special reference to Morocco. First, it provides some tentative definitions and the mission and functions of the university and higher education. Second, it gives a historical sketch of the major reforms that took place in Morocco and the major changes pertaining to these reforms. Third, it provides a general overview of the history of higher education in Morocco and tackles an issue related to governance in higher education, namely cost sharing. Fourth, it delves into the history of English Language Teaching (ELT) and lists some characteristics of the English Departments in Morocco. Fifth, it discusses the issue of private vs. public higher education. Last but not least, it tackles the issue of brain drain.
Increase in average foveal thickness after internal limiting membrane peeling
Directory of Open Access Journals (Sweden)
Kumagai K
2017-04-01
Kazuyuki Kumagai,1 Mariko Furukawa,1 Tetsuyuki Suetsugu,1 Nobuchika Ogino2 - 1Department of Ophthalmology, Kami-iida Daiichi General Hospital; 2Department of Ophthalmology, Nishigaki Eye Clinic, Aichi, Japan
Purpose: To report the findings in three cases in which the average foveal thickness increased after a thin epiretinal membrane (ERM) was removed by vitrectomy with internal limiting membrane (ILM) peeling.
Methods: The foveal contour was normal preoperatively in all eyes. All cases underwent successful phacovitrectomy with ILM peeling for a thin ERM. The optical coherence tomography (OCT) images were examined before and after the surgery. The changes in the average foveal (1 mm) thickness and the foveal areas within 500 µm of the foveal center were measured. The postoperative changes in the inner and outer retinal areas determined from the cross-sectional OCT images were analyzed.
Results: The average foveal thickness and the inner and outer foveal areas increased significantly after surgery in each of the three cases. The percentage increase in the average foveal thickness relative to baseline was 26% in Case 1, 29% in Case 2, and 31% in Case 3. The percentage increase in the foveal inner retinal area was 71% in Case 1, 113% in Case 2, and 110% in Case 3, and the percentage increase in the foveal outer retinal area was 8% in Case 1, 13% in Case 2, and 18% in Case 3.
Conclusion: The increase in the average foveal thickness and the inner and outer foveal areas suggests that a centripetal movement of the inner and outer retinal layers toward the foveal center probably occurred due to the ILM peeling.
Keywords: internal limiting membrane, optical coherence tomography, average foveal thickness, epiretinal membrane, vitrectomy