Effect of random edge failure on the average path length
Guo Dongchao; Liang Mangui; Li Dandan; Jiang Zhongyuan, E-mail: mgliang58@gmail.com, E-mail: 08112070@bjtu.edu.cn [Institute of Information Science, Beijing Jiaotong University, 100044, Beijing (China)
2011-10-14
We study the effect of random removal of edges on the average path length (APL) in a large class of uncorrelated random networks in which vertices are characterized by hidden variables controlling the attachment of edges between pairs of vertices. A formula for approximating the APL of networks subject to random edge removal is derived first. The formula is then confirmed by simulations for classical ER (Erdős and Rényi) random graphs, BA (Barabási and Albert) networks, networks with exponential degree distributions, and random networks with asymptotic power-law degree distributions with exponent α > 2.
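The approximation formula itself is not reproduced in the abstract. As a rough illustration of the quantity being approximated, the sketch below Monte-Carlo-estimates the APL of an ER random graph before and after random edge removal; the graph size, edge probability, and removal fraction are arbitrary choices for demonstration, not values from the paper.

```python
import random
from collections import deque

def er_graph(n, p, seed=0):
    # Erdős–Rényi G(n, p): each possible edge appears independently with prob p.
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

def apl(n, edges):
    # Average shortest-path length over connected vertex pairs (BFS from each node).
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    total, pairs = 0, 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in range(s + 1, n):
            if dist[t] > 0:
                total += dist[t]
                pairs += 1
    return total / pairs

edges = er_graph(200, 0.05, seed=1)
rng = random.Random(2)
kept = [e for e in edges if rng.random() > 0.2]   # remove ~20% of edges at random
print(apl(200, edges), apl(200, kept))            # APL typically rises after removal
```

Since edge removal can only lengthen (or disconnect) shortest paths, the post-removal APL over still-connected pairs generally increases, which is the effect the paper's formula quantifies analytically.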
Glycogen with short average chain length enhances bacterial durability
Wang, Liang; Wise, Michael J.
2011-09-01
Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability, that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call that a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology will be assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.
Determining average path length and average trapping time on generalized dual dendrimer
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some cases, during transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
Model averaging, optimal inference and habit formation
Thomas H B FitzGerald
2014-06-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and the complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
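The core computation described above, weighting each model's prediction by its evidence, can be sketched as follows. This is a minimal numerical illustration of Bayesian model averaging with hypothetical log-evidences, not an implementation of the paper's neuronal proposal.

```python
import math

def model_average(log_evidences, predictions, log_priors=None):
    # Posterior weight of model i is proportional to
    # exp(log_evidence_i + log_prior_i); the averaged prediction is the
    # weighted sum of the per-model predictions.
    if log_priors is None:
        log_priors = [0.0] * len(log_evidences)   # uniform prior over models
    scores = [le + lp for le, lp in zip(log_evidences, log_priors)]
    m = max(scores)                               # log-sum-exp shift for stability
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    prediction = sum(w * p for w, p in zip(weights, predictions))
    return weights, prediction

# Two hypothetical models of the environment; the second has higher evidence,
# so it dominates the averaged prediction without fully suppressing the first.
w, pred = model_average(log_evidences=[-12.0, -10.0], predictions=[0.2, 0.8])
print(w, pred)
```

Because log-evidence already penalizes complexity, the weights implement the accuracy-complexity trade-off the abstract refers to.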
Increasing average period lengths by switching of robust chaos maps in finite precision
Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.
2008-12-01
Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits than simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
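The finite-precision effect underlying the scaling T ~ ε^(-d/2) is easy to reproduce: rounding every iterate forces the orbit onto a finite state space, so it must eventually cycle. The sketch below measures average cycle lengths of the plain logistic map at two precisions; it illustrates the Grebogi-Ott-Yorke setting only, not the authors' robust-chaos maps or their switching scheme.

```python
import random

def cycle_length(x0, digits):
    # Iterate the logistic map x -> 4x(1-x), rounding each state to `digits`
    # decimal places; finite precision forces every orbit onto a cycle.
    seen = {}
    x, t = round(x0, digits), 0
    while x not in seen:
        seen[x] = t
        x = round(4.0 * x * (1.0 - x), digits)
        t += 1
    return t - seen[x]          # period of the cycle eventually entered

def average_period(digits, trials=200, seed=0):
    rng = random.Random(seed)
    return sum(cycle_length(rng.random(), digits) for _ in range(trials)) / trials

# Smaller eps = 10**-digits (higher precision) should yield longer average
# periods, qualitatively matching T ~ eps**(-d/2).
print(average_period(3), average_period(5))
```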
Concept of formation length in radiation theory
Baier, V.N.; Katkov, V.M.
2005-01-01
The features of electromagnetic processes connected with the finite size of the space region in which the final particles (photon, electron-positron pair) are formed are considered. The longitudinal dimension of this region is known as the formation length. If some external agent acts on an electron while it travels this distance, the emission process can be disrupted. There are different such agents: multiple scattering of the projectile, polarization of the medium, action of external fields, etc. The theory of radiation under the influence of multiple scattering, the Landau-Pomeranchuk-Migdal (LPM) effect, is presented. The probability of radiation is calculated to 'next-to-leading logarithm' accuracy and with the Coulomb corrections taken into account. The integral characteristics of bremsstrahlung are given, and it is shown that the effective radiation length increases due to the LPM effect at high energy. The LPM effect for pair creation is also presented. Multiple scattering also influences radiative corrections in a medium (and in an external field), including the anomalous magnetic moment of the electron and the polarization tensor, as well as coherent scattering of a photon in a Coulomb field. The polarization of the medium alters the radiation probability in the soft part of the spectrum. Specific features of radiation from a target of finite thickness include boundary photon emission, interference effects for a thin target, and multi-photon radiation. The theory's predictions are compared with experimental data obtained at SLAC and at the CERN SPS. For electron-positron colliding beams, the following items are discussed: the separation of coherent and incoherent mechanisms of radiation, the beam-size effect in bremsstrahlung, coherent radiation, and mechanisms of electron-positron pair creation.
Tit, N.; Kumar, N.; Pradhan, P.
1993-07-01
Exact numerical calculation of the ensemble-averaged, length-scale-dependent conductance for the 1D Anderson model is shown to support an earlier conjecture of a conductance minimum. The numerical results can be understood in terms of the Thouless expression for the conductance and the Wigner level-spacing statistics.
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
Tan, K. L.; Chong, Z. L.; Khoo, M. B. C.; Teoh, W. L.; Teh, S. Y.
2017-09-01
Quality control is crucial in a wide variety of fields, as it can help to satisfy customers' needs and requirements by enhancing and improving products and services to a superior quality level. The EWMA median chart was proposed as a useful alternative to the EWMA X̄ chart because the median-type chart is robust against contamination, outliers or small deviations from the normality assumption, compared to the traditional X̄-type chart. To provide a complete understanding of the run-length distribution, the percentiles of the run-length distribution should be investigated rather than depending solely on the average run length (ARL) performance measure. Interpretation based on the ARL alone can be misleading, because the skewness and shape of the run-length distribution change with the magnitude of the process mean shift, varying from almost symmetric when the mean shift is large to highly right-skewed when the process is in-control (IC) or only slightly out-of-control (OOC). Before computing the percentiles of the run-length distribution, optimal parameters of the EWMA median chart are obtained by minimizing the OOC ARL while retaining the IC ARL at a desired value.
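The point that the ARL alone hides the shape of the run-length distribution can be seen by simulation. The sketch below uses a plain EWMA of individual normal observations (a simplification; the paper treats median charts) with arbitrary chart parameters, and reports the ARL together with run-length percentiles.

```python
import random
import statistics

def ewma_run_length(lmbda, limit, shift, rng):
    # Steps until the EWMA statistic z_t = (1 - lmbda) * z_{t-1} + lmbda * x_t
    # first exits the interval [-limit, +limit].
    z, t = 0.0, 0
    while abs(z) <= limit:
        x = rng.gauss(shift, 1.0)        # individual observation, sd = 1
        z = (1.0 - lmbda) * z + lmbda * x
        t += 1
        if t >= 10**6:                   # safety cap
            break
    return t

def run_length_summary(lmbda, limit, shift, n=1000, seed=0):
    rng = random.Random(seed)
    rls = sorted(ewma_run_length(lmbda, limit, shift, rng) for _ in range(n))
    pct = lambda p: rls[int(p * (n - 1))]
    return {"ARL": statistics.mean(rls),
            "p10": pct(0.10), "p50": pct(0.50), "p90": pct(0.90)}

ic = run_length_summary(0.2, 0.7, shift=0.0)   # in-control process
oc = run_length_summary(0.2, 0.7, shift=1.0)   # mean shifted by 1 sd
print(ic, oc)
```

In the in-control case the distribution is strongly right-skewed, so the mean (ARL) sits well above the median, which is exactly why the abstract argues for reporting percentiles.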
Optimization of the Critical Diameter and Average Path Length of Social Networks
Haifeng Du
2017-01-01
Optimizing the average path length (APL) by adding shortcut edges has been widely discussed in connection with social networks, but the relationship between network diameter and APL is generally ignored in the dynamic optimization of APL. In this paper, we analyze this relationship and transform the problem of optimizing APL into the problem of decreasing the diameter to 2. We propose a mathematical model based on a memetic algorithm. Experimental results show that our algorithm can efficiently solve this problem as well as optimize the APL.
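The idea of driving APL down by reducing the diameter to 2 can be demonstrated with a much simpler stand-in than the paper's memetic algorithm: a greedy heuristic (our own assumption, purely illustrative) that repeatedly adds a shortcut between the currently farthest pair of vertices.

```python
from collections import deque
from itertools import combinations

def distances(n, edges):
    # All-pairs shortest-path distances via BFS from every vertex.
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    dist = [[-1] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s][v] < 0:
                    dist[s][v] = dist[s][u] + 1
                    q.append(v)
    return dist

def apl_and_diameter(n, edges):
    d = distances(n, edges)
    pair_d = [d[i][j] for i, j in combinations(range(n), 2)]
    return sum(pair_d) / len(pair_d), max(pair_d)

def shortcuts_to_diameter_2(n, edges):
    # Greedy heuristic: connect the currently farthest pair until diameter <= 2.
    edges = list(edges)
    while True:
        d = distances(n, edges)
        i, j = max(combinations(range(n), 2), key=lambda p: d[p[0]][p[1]])
        if d[i][j] <= 2:
            return edges
        edges.append((i, j))

path = [(i, i + 1) for i in range(9)]      # path graph on 10 vertices
before = apl_and_diameter(10, path)
after = apl_and_diameter(10, shortcuts_to_diameter_2(10, path))
print(before, after)
```

Once the diameter is 2, every pairwise distance is 1 or 2, so the APL is tightly bounded, which is the relationship the paper exploits.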
Average accelerator simulation Truebeam using phase space in IAEA format
Santana, Emico Ferreira; Milian, Felix Mas; Paixao, Paulo Oliveira; Costa, Raranna Alves da; Velasco, Fermin Garcia
2015-01-01
This paper uses a computational radiation-transport code based on the Monte Carlo technique to model a linear accelerator used for radiotherapy treatment. This work is the initial step of future proposals which aim to study several radiotherapy patient treatments, employing computational modeling in cooperation with the institutions UESC, IPEN, UFRJ and COI. The chosen simulation code is GATE/Geant4. The modeled accelerator is the TrueBeam from Varian. The geometric modeling was based on technical manuals, and the radiation sources on the phase space for photons provided by the manufacturer in the IAEA (International Atomic Energy Agency) format. The simulations were carried out under conditions equal to those of the experimental measurements. Photon beams of 6 MV with a 10 × 10 cm field, focused on a water phantom, were studied. For validation, depth-dose curves and lateral profiles at different depths from the simulated results were compared with experimental data. The final model of this accelerator will be used in future work involving treatments and real patients.
Espinosa-Paredes, Gilberto
2010-01-01
The aim of this paper is to propose a framework to obtain a new formulation for multiphase flow conservation equations without length-scale restrictions, based on the non-local form of the averaged volume conservation equations. The simplification of the local averaging volume of the conservation equations to obtain practical equations is subject to the following length-scale restrictions: d << l << L, where d is the characteristic length of the dispersed phases, l is the characteristic length of the averaging volume, and L is the characteristic length of the physical system. If the foregoing inequality does not hold, or if the scale of the problem of interest is of the order of l, the averaging technique, and therefore the macroscopic theories of multiphase flow, should be modified to include appropriate considerations and terms in the corresponding equations. In these cases, the local form of the averaged volume conservation equations is not appropriate to describe the multiphase system. As an example of conservation equations without length-scale restrictions, a natural-circulation boiling water reactor was considered in order to study the non-local effects on the thermal-hydraulic core performance during steady-state and transient behavior, and the results were compared with those of the classic local averaging volume conservation equations.
Calculation of the KIc using the average increase in crack length
Uskov, E.I.; Babak, A.V.
1985-01-01
The test results from high-temperature tensile testing of tungsten specimens were analytically examined. The specimens were tested for crack resistance over the temperature range from 2200 °C down to the temperature of embrittlement. The critical stress intensity factor (KIc) was selected as the controlling parameter for determining the resistance to cracking. Although KIc values are normally obtained by repeatedly removing the specimens from the test chamber and measuring the crack lengths, a correction coefficient was defined in order to allow the use of a single specimen for fracture toughness trials at different temperatures without repeatedly removing it from the test chamber.
Mexico's Epidemic of Violence and Its Public Health Significance on Average Length of Life
Canudas-Romo, Vladimir; Aburto, José Manuel; García-Guerrero, Víctor Manuel
2017-01-01
…levels, respectively. In 2014, female life expectancy at age 20 was 59.5 years (95% CI 59.0 to 60.1); 71% of these years (42.3 years, 41.6 to 43.0) were spent with perceived vulnerability of violence taking place in the state and 26% at the home (15.3 years, 15 to 15.8). For males, life expectancy at age 20 was 54.5 years (53.7 to 55.1); 64% of these years (34.6 years, 34.0 to 35.4) were lived with perceived vulnerability of violence at the state and 20% at the home (11.1 years, 10.8 to 11.5). Conclusions: The number of years lived with perceived vulnerability among Mexicans has increased by 30.5 million person-years over the last 10 years. If perceived vulnerability remains at its 2014 level, the average Mexican adult would be expected to live a large fraction of his/her life with perceived vulnerability of violence. Acts of violence continue to rise in the country and they should be addressed…
Wilasinee Peerajit
2017-12-01
This paper proposes explicit formulas for deriving the exact average run length (ARL), via an integral equation, of a CUSUM control chart when the observations are long-memory processes with exponential white noise. To verify the accuracy of the ARLs, the authors compared the values obtained from the explicit formulas with those from a numerical integral equation (NIE) method, in terms of the percentage of absolute difference. The explicit formulas were based on the Banach fixed point theorem, which was used to guarantee the existence and uniqueness of the solution for ARFIMA(p,d,q). Results showed that the two methods are in good agreement, with a percentage of absolute difference below 0.23%. The explicit formulas are therefore an efficient alternative for real applications, since the computational CPU time for ARLs from the explicit formulas is about 1 second, making them preferable to the NIE method.
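For readers unfamiliar with the quantity being derived: the ARL is the expected number of observations until the chart signals. The sketch below Monte-Carlo-estimates the ARL of a one-sided CUSUM with i.i.d. exponential observations; this is a simplification for illustration (the paper handles long-memory ARFIMA processes and derives the ARL analytically), and the chart parameters are arbitrary.

```python
import random

def cusum_run_length(mu, h, k, rng):
    # One-sided upper CUSUM: c_t = max(0, c_{t-1} + x_t - k); signal when c_t > h.
    c, t = 0.0, 0
    while c <= h:
        x = rng.expovariate(1.0 / mu)    # exponential observation with mean mu
        c = max(0.0, c + x - k)
        t += 1
        if t >= 10**6:                   # safety cap
            break
    return t

def arl(mu, h, k, n=1000, seed=0):
    # Monte Carlo ARL estimate: average run length over n simulated charts.
    rng = random.Random(seed)
    return sum(cusum_run_length(mu, h, k, rng) for _ in range(n)) / n

arl_ic = arl(1.0, h=4.0, k=1.3)   # in-control: exponential mean 1.0
arl_oc = arl(1.5, h=4.0, k=1.3)   # shifted process: exponential mean 1.5
print(arl_ic, arl_oc)
```

A good chart design makes the in-control ARL large and the out-of-control ARL small, which is the trade-off the paper's explicit formulas let one evaluate instantly.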
Oligonucleotide Length-Dependent Formation of Virus-Like Particles.
Maassen, Stan J; de Ruiter, Mark V; Lindhoud, Saskia; Cornelissen, Jeroen J L M
2018-05-23
Understanding the assembly pathway of viruses can contribute to creating monodisperse virus-based materials. In this study, the cowpea chlorotic mottle virus (CCMV) is used to determine the interactions between the capsid proteins of viruses and their cargo. The assembly of the capsid proteins in the presence of different lengths of short, single-stranded (ss) DNA is studied at neutral pH, at which the protein-protein interactions are weak. Chromatography, electrophoresis, microscopy, and light scattering data show that the assembly efficiency and speed of the particles increase with increasing length of oligonucleotides. The minimal length required for assembly under the conditions used herein is 14 nucleotides. Assembly of particles containing such short strands of ssDNA can take almost a month. This slow assembly process enabled the study of intermediate states, which confirmed a low cooperative assembly for CCMV and allowed for further expansion of current assembly theories. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Agustín Rubio Alcover
2013-07-01
This study compares the complete filmographies of three American directors: Clint Eastwood, Brian De Palma and Woody Allen. We frame the approach as a trampoline for leaping over the wall of a difficult conceptual and methodological blind alley—an understanding of film editors and their task, but above all their contribution. Their work is often dismissed as something merely technical and obvious but, even in the best of cases, this attitude is never anything other than lazy. It is the analytical route upheld and cultivated by David Bordwell and Barry Salt that we are prepared to travel along. If we want to abjure an unsustainably radical anti-empiricism without precipitating ourselves into neo-empiricist infantilism or regressing to a chaotic teratology—that is, to try to remain focused on both the wood and the trees—a statistical study, aided by the latest generation of digital and computing tools and, more specifically, an Average Shot Length study (which we will refer to from now on by the acronym ASL), appears to us an objective and, consequently, literally unobjectionable criterion. It is probably as reductionist as it is stimulating when it comes to reaching conclusions that are non-definitive but undoubtedly worthy of interest because, faced with the subjectivity of analysis at the general, macroscopic level of the movie, the normative and the standard blend with the deviant or exceptional.
THE AVERAGE STAR FORMATION HISTORIES OF GALAXIES IN DARK MATTER HALOS FROM z = 0-8
Behroozi, Peter S.; Wechsler, Risa H.; Conroy, Charlie
2013-01-01
We present a robust method to constrain average galaxy star formation rates (SFRs), star formation histories (SFHs), and the intracluster light (ICL) as a function of halo mass. Our results are consistent with observed galaxy stellar mass functions, specific star formation rates (SSFRs), and cosmic star formation rates (CSFRs) from z = 0 to z = 8. We consider the effects of a wide range of uncertainties on our results, including those affecting stellar masses, SFRs, and the halo mass function at the heart of our analysis. As they are relevant to our method, we also present new calibrations of the dark matter halo mass function, halo mass accretion histories, and halo-subhalo merger rates out to z = 8. We also provide new compilations of CSFRs and SSFRs; more recent measurements are now consistent with the buildup of the cosmic stellar mass density at all redshifts. Implications of our work include: halos near 10^12 M_☉ are the most efficient at forming stars at all redshifts, the baryon conversion efficiency of massive halos drops markedly after z ∼ 2.5 (consistent with theories of cold-mode accretion), the ICL for massive galaxies is expected to be significant out to at least z ∼ 1-1.5, and dwarf galaxies at low redshifts have higher stellar mass to halo mass ratios than previous expectations and form later than in most theoretical models. Finally, we provide new fitting formulae for SFHs that are more accurate than the standard declining tau model. Our approach places a wide variety of observations relating to the SFH of galaxies into a self-consistent framework based on the modern understanding of structure formation in ΛCDM. Constraints on the stellar mass-halo mass relationship and SFRs are available for download online.
Robinson, J C; Luft, H S
1985-12-01
A variety of recent proposals rely heavily on market forces as a means of controlling hospital cost inflation. Sceptics argue, however, that increased competition might lead to cost-increasing acquisitions of specialized clinical services and other forms of non-price competition as means of attracting physicians and patients. Using data from hospitals in 1972 we analyzed the impact of market structure on average hospital costs, measured in terms of both cost per patient and cost per patient day. Under the retrospective reimbursement system in place at the time, hospitals in more competitive environments exhibited significantly higher costs of production than did those in less competitive environments.
Hernandez, Leonor; Julia, J.E.; Paranjape, Sidharth; Hibiki, Takashi; Ishii, Mamoru
2010-01-01
In this work, the use of the area-averaged void fraction and bubble chord length entropies is introduced as flow-regime indicators in two-phase flow systems. The entropy provides quantitative information about the disorder in the area-averaged void fraction or bubble chord length distributions. The CPDFs (cumulative probability distribution functions) of void fractions and bubble chord lengths, obtained by means of impedance meters and conductivity probes, are used to calculate both entropies. Entropy values for 242 flow conditions in upward two-phase flows in 25.4 and 50.8 mm pipes have been calculated. The measured conditions cover the range 0.13-5 m/s in the superficial liquid velocity j_f and the range 0.01-25 m/s in the superficial gas velocity j_g. The physical meaning of both entropies has been interpreted using visual flow-regime map information. The capability of the area-averaged void fraction and bubble chord length entropies as flow-regime indicators has been checked against other statistical parameters and with input signals of different durations. The area-averaged void fraction and bubble chord length entropies provide better, or at least similar, results than those obtained with other indicators that include more than one parameter. The entropy is capable of reducing the relevant information of the flow regimes to a single significant and useful parameter. In addition, the entropy computation time is shorter than that of the majority of the other indicators, and the use of a single input parameter also makes for faster predictions.
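The entropy in question is the standard Shannon entropy of the measured distribution. As a minimal sketch (with synthetic signals, not the paper's impedance-meter data), a narrow, ordered void-fraction distribution yields low entropy while a broad, disordered one yields high entropy:

```python
import math
import random

def shannon_entropy(samples, bins=20, lo=0.0, hi=1.0):
    # Histogram the samples, then compute H = -sum(p * ln p) over non-empty bins.
    counts = [0] * bins
    for x in samples:
        idx = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

rng = random.Random(0)
# Synthetic area-averaged void-fraction signals, for illustration only:
narrow = [min(max(rng.gauss(0.10, 0.02), 0.0), 1.0) for _ in range(5000)]  # ordered, bubbly-like
broad = [rng.random() for _ in range(5000)]                                # disordered, churn-like
print(shannon_entropy(narrow), shannon_entropy(broad))
```

This single scalar per signal is what allows the entropy to compress a whole distribution into one flow-regime indicator.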
Estimating dew formation in rice, using seasonally averaged diel patterns of weather variables
Luo, W.; Goudriaan, J.
2004-01-01
If dew formation cannot be measured it has to be estimated. Available simulation models for estimating dew formation require hourly weather data as input. However, such data are not available for places without an automatic weather station. In such cases the diel pattern of weather variables might
Cooper, D.L.; Ponec, Robert
2013-01-01
Vol. 113, No. 2 (2013), pp. 102-111. ISSN 0020-7608. R&D Projects: GA ČR GA203/09/0118. Institutional support: RVO:67985858. Keywords: transition metal hydrides; bond formation; analysis of domain averaged Fermi holes. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 1.166, year: 2013.
Average formation number n̄_OH of colloid-type indium hydroxide
Stefanowicz, T.; Szent-Kirallyine Gajda, J.
1983-01-01
Indium perchlorate in perchloric acid solution was titrated with sodium hydroxide solution to various pH values. The indium hydroxide colloid was removed by ultracentrifugation and the supernatant solution was titrated with base to neutral pH. The two-stage titration data were used to calculate the formation number of the indium hydroxide colloid, which was found to equal n̄_OH = 2.8.
The disk averaged star formation relation for Local Volume dwarf galaxies
López-Sánchez, Á. R.; Lagos, C. D. P.; Young, T.; Jerjen, H.
2018-05-01
Spatially resolved H I studies of dwarf galaxies have provided a wealth of precision data. However, these high-quality, resolved observations are only possible for a handful of dwarf galaxies in the Local Volume, and future H I surveys are unlikely to improve the current situation. We therefore explore a method for estimating the surface density of the atomic gas from global H I parameters, which are, conversely, widely available. We perform empirical tests using galaxies with resolved H I maps, and find that our approximation produces values for the surface density of atomic hydrogen typically within 0.5 dex of the true value. We apply this method to a sample of 147 galaxies drawn from modern near-infrared stellar photometric surveys. With this sample we confirm a strict correlation between the atomic gas surface density and the star formation rate surface density that is vertically offset from the Kennicutt-Schmidt relation by a factor of 10-30, and significantly steeper than the classical N = 1.4 of Kennicutt (1998). We further infer the molecular fraction in this sample of low-surface-brightness, predominantly dwarf galaxies by assuming that the star formation relationship with molecular gas observed for spiral galaxies also holds in these galaxies, finding a molecular-to-atomic gas mass fraction within the range of 5-15%. Comparison of the data with available models shows that a model in which the thermal pressure balances the vertical gravitational field better captures the shape of the Σ_SFR-Σ_gas relationship. However, such models fail to reproduce the data completely, suggesting that thermal pressure plays an important role in the disks of dwarf galaxies.
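The star formation relation referred to here is a power law, Σ_SFR = A · Σ_gas^N, and the index N is fit in log-log space. The sketch below recovers the index from synthetic data generated with the classical N = 1.4; the normalization 2e-3 and the gas values are arbitrary, not the paper's measurements.

```python
import math

def fit_power_law(sigma_gas, sigma_sfr):
    # Least-squares fit of log10(S_sfr) = log10(A) + N * log10(S_gas).
    xs = [math.log10(g) for g in sigma_gas]
    ys = [math.log10(s) for s in sigma_sfr]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx      # (N, log10 A)

# Synthetic check: data generated exactly on the relation is recovered exactly.
gas = [1.0, 3.0, 10.0, 30.0, 100.0]        # gas surface densities (arbitrary units)
sfr = [2e-3 * g ** 1.4 for g in gas]       # SFR surface densities on the relation
N, logA = fit_power_law(gas, sfr)
print(N, logA)
```

A vertical offset of 10-30, as the abstract reports for dwarfs, corresponds to shifting log10(A) down by 1 to 1.5 dex while the slope N steepens.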
McDougall, K E; Cooper, P L; Stewart, A J; Huggins, C E
2015-12-01
The prevalence of malnutrition in subacute inpatient settings has been reported to be 30-50%. While a number of nutrition evaluation tools have been validated to diagnose malnutrition, the use of a validated nutrition evaluation tool to measure changes in nutritional status during an average length of stay for a subacute inpatient has not yet been tested. This study aims to determine the potential of the full MNA (full Mini Nutritional Assessment) and MNA-SF (Mini Nutritional Assessment Short Form) scores to measure change in nutritional status over an average subacute inpatient stay (21 days). A prospective observational study was performed in three Rehabilitation and Geriatric Evaluation and Management (GEM) wards of the Kingston Centre, Monash Health, Melbourne, Australia. All patients ≥65 years admitted to these wards with an expected length of stay of at least 14 days were considered for inclusion. Nutritional status was assessed on admission using the full MNA as part of usual dietetic care, and patients were provided with nutrition intervention/diet therapy based on their full MNA classification. The full MNA score (0-30), MNA-SF score (0-14), anthropometry (weight and height) and nutritional biochemistry (serum albumin, transthyretin and C-reactive protein) were compared between admission and day 20.5 ± 2.4. Mean age (±SD) was 83 ± 7 years (n = 114). For those patients diagnosed as at risk of malnutrition or malnourished (n = 103), there were significant increases in full MNA score (1.8 ± 2.4) and significant change between nutrition states (p = 0.033). Both the MNA-SF and full MNA can be used to evaluate nutrition progress within the subacute inpatient setting over a three-week period, thereby providing clinicians with feedback on a patient's nutrition progress and assisting with ongoing care planning. Due to its ease of use and the shorter time required to complete it, the MNA-SF may be the preferred nutrition evaluation tool in this setting.
Effects of pulse-length and emitter area on virtual cathode formation in electron guns
Valfells, Agust; Feldman, D.W.; Virgo, M.; O'Shea, P.G.; Lau, Y.Y.
2002-01-01
Recent experiments at the University of Maryland using photoemission from a dispenser cathode have yielded some interesting results regarding the effects of the area of emission, and of the ratio between the pulse length and the gap transit time, on the amount of current that may be drawn from an electron gun before a virtual cathode forms. The experiments show that a much higher current density may be drawn from a short pulse or limited emitter area than is anticipated by the Child-Langmuir limiting current. There is also evidence that the current may be increased even after virtual cathode formation, which leads to a distinction between a limiting current density and a current density critical for virtual cathode formation. The experiments have also yielded some interesting results on the longitudinal structure of the current pulse passed through the anode. Some empirical and theoretical scaling laws regarding the formation of virtual cathodes in an electron gun are presented. This work was motivated by the needs of the University of Maryland Electron Ring (UMER) [P. G. O'Shea, M. Reiser, R. A. Kishek et al., Nucl. Instrum. Methods Phys. Res. A 464, 646 (2001)], where the goal is to generate pulses that are well-localized in time and space.
The full-length form of the Drosophila amyloid precursor protein is involved in memory formation.
Bourdet, Isabelle; Preat, Thomas; Goguel, Valérie
2015-01-21
The amyloid precursor protein (APP) plays a central role in Alzheimer's disease (AD), a pathology that first manifests as a memory decline. Understanding the role of APP in normal cognition is fundamental in understanding the progression of AD, and mammalian studies have pointed to a role of secreted APPα in memory. In Drosophila, we recently showed that APPL, the fly APP ortholog, is required for associative memory. In the present study, we aimed to characterize which form of APPL is involved in this process. We show that expression of a secreted-APPL form in the mushroom bodies, the center for olfactory memory, is able to rescue the memory deficit caused by APPL partial loss of function. We next assessed the impact on memory of the Drosophila α-secretase kuzbanian (KUZ), the enzyme initiating the nonamyloidogenic pathway that produces secreted APPLα. Strikingly, KUZ overexpression not only failed to rescue the memory deficit caused by APPL loss of function, it exacerbated this deficit. We further show that in addition to an increase in secreted-APPL forms, KUZ overexpression caused a decrease of membrane-bound full-length species that could explain the memory deficit. Indeed, we observed that transient expression of a constitutive membrane-bound mutant APPL form is sufficient to rescue the memory deficit caused by APPL reduction, revealing for the first time a role of full-length APPL in memory formation. Our data demonstrate that, in addition to secreted APPL, the noncleaved form is involved in memory, raising the possibility that secreted and full-length APPL act together in memory processes. Copyright © 2015 the authors 0270-6474/15/351043-09$15.00/0.
Rossi, Sergio; Deslauriers, Annie; Anfodillo, Tommaso; Morin, Hubert; Saracino, Antonio; Motta, Renzo; Borghetti, Marco
2006-01-01
Intra-annual radial growth rates and durations in trees are reported to differ greatly in relation to species, site and environmental conditions. However, very similar dynamics of cambial activity and wood formation are observed in temperate and boreal zones. Here, we compared weekly xylem cell production and variation in stem circumference in the main northern hemisphere conifer species (genera Picea, Pinus, Abies and Larix) from 1996 to 2003. Dynamics of radial growth were modeled with a Gompertz function, defining the upper asymptote (A), x-axis placement (beta) and rate of change (kappa). A strong linear relationship was found between the constants beta and kappa for both types of analysis. The slope of the linear regression, which corresponds to the time at which maximum growth rate occurred, appeared to converge towards the summer solstice. The maximum growth rate occurred around the time of maximum day length, and not during the warmest period of the year as previously suggested. The attainment of maximum photoperiod could act as a growth constraint or a limit after which the rate of tree-ring formation tends to decrease, thus allowing plants to safely complete secondary cell wall lignification before winter.
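The Gompertz parameterization above has its inflection, i.e. the time of maximum growth rate, at t = β/κ, which is why the slope of the β-κ regression can be read as a date. A minimal sketch under purely synthetic parameter values (not data from the study):

```python
import numpy as np

def gompertz(t, A, beta, kappa):
    # Cumulative growth: upper asymptote A, x-axis placement beta, rate of change kappa
    return A * np.exp(-np.exp(beta - kappa * t))

# Synthetic weekly observations over days 100-296 of the year (values illustrative)
A, beta, kappa = 60.0, 4.0, 0.023
t = np.arange(100.0, 300.0, 7.0)
y = gompertz(t, A, beta, kappa)

# With A known the model linearizes: log(-log(y/A)) = beta - kappa * t
slope, intercept = np.polyfit(t, np.log(-np.log(y / A)), 1)
t_inflection = intercept / -slope    # time of maximum growth rate, beta/kappa (~day 174 here)
```

With β = 4.0 and κ = 0.023 the recovered inflection falls near day 174, close to the summer solstice, mirroring the convergence the study reports.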
Cunningham, Daniel J.; Shearer, David A.; Carter, Neil; Drawer, Scott; Pollard, Ben; Bennett, Mark; Eager, Robin; Cook, Christian J.; Farrell, John; Russell, Mark
2018-01-01
The assessment of competitive movement demands in team sports has traditionally relied upon global positioning system (GPS) analyses presented as fixed-time epochs (e.g., 5–40 min). More recently, presenting game data as a rolling average has become prevalent due to concerns over a loss of sampling resolution associated with the windowing of data over fixed periods. Accordingly, this study compared rolling average (ROLL) and fixed-time (FIXED) epochs for quantifying the peak movement demands of international rugby union match-play as a function of playing position. Elite players from three different squads (n = 119) were monitored using 10 Hz GPS during 36 matches played in the 2014–2017 seasons. Players categorised broadly as forwards and backs, and then by positional sub-group (FR: front row, SR: second row, BR: back row, HB: half back, MF: midfield, B3: back three) were monitored during match-play for peak values of high-speed running (>5 m·s⁻¹; HSR) and relative distance covered (m·min⁻¹) over 60–300 s using two types of sample-epoch (ROLL, FIXED). Irrespective of the method used, as the epoch length increased, values for the intensity of running actions decreased (e.g., for the backs using the ROLL method, distance covered decreased from 177.4 ± 20.6 m·min⁻¹ in the 60 s epoch to 107.5 ± 13.3 m·min⁻¹ for the 300 s epoch). For the team as a whole, and irrespective of position, estimates of fixed effects indicated significant between-method differences across all time-points for both relative distance covered and HSR. Movement demands were underestimated consistently by FIXED versus ROLL with differences being most pronounced using 60 s epochs (95% CI HSR: -6.05 to -4.70 m·min⁻¹, 95% CI distance: -18.45 to -16.43 m·min⁻¹). For all HSR time epochs except one, all backs groups increased more (p < 0.01) from FIXED to ROLL than the forward groups. Linear mixed modelling of ROLL data highlighted that for HSR (except 60 s epoch), SR was the only group not
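The FIXED-versus-ROLL discrepancy arises because a fixed grid can split a worst-case burst across two adjacent windows, whereas a rolling window can align with it. A minimal sketch with a hypothetical 1 Hz distance trace (all numbers invented):

```python
def peak_window_mean(x, win, rolling):
    """Peak mean over windows of `win` samples: every start if rolling, else a fixed grid."""
    starts = range(len(x) - win + 1) if rolling else range(0, len(x) - win + 1, win)
    return max(sum(x[i:i + win]) / win for i in starts)

# Hypothetical 1 Hz distance trace: 1.5 m/s baseline with a 60 s burst at 5.0 m/s
# that straddles the boundary between the first two fixed 60 s windows
trace = [1.5] * 600
trace[55:115] = [5.0] * 60

fixed = peak_window_mean(trace, 60, rolling=False)   # FIXED epochs: the burst is split
roll = peak_window_mean(trace, 60, rolling=True)     # ROLL: one window aligns with the burst
```

Here the rolling method recovers the true 60 s peak of 5.0 m per second, while the fixed grid averages the burst with baseline samples ((55·5.0 + 5·1.5)/60 ≈ 4.71), reproducing the systematic underestimation reported above.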
Pan, Xiaohua; Zhang, Yan; Sun, Xiaobo; Pan, Wei; Yu, Guifeng; Si, Shuxin; Wang, Jinping
2018-04-01
Carbon dots (CDs) have attracted increasing attention due to their high performance and potential applications in a wide range of areas. However, their emission mechanism remains unclear. In order to reveal more factors contributing to the emission of CDs, the effect of the carbon chain length of the starting materials on the formation of CDs and their optical properties was experimentally investigated in this work. To isolate the effect of carbon chain length, starting materials with C, O and N in fully identical forms and only carbon chain lengths differing were selected for synthesizing CDs, including citric acid (CA) and adipic acid (AA) as carbon sources, and diamines with different carbon chain lengths (H₂N(CH₂)ₙNH₂, n = 2, 4, 6) as nitrogen sources, as well as ethylenediamine (EDA) as nitrogen source and diacids with different carbon chain lengths (HOOC(CH₂)ₙCOOH, n = 0, 2, 4, 6) as carbon sources. The effect of carbon chain length of the starting materials on the formation and optical properties of CDs could therefore be systematically investigated by characterizing and comparing the structures and optical properties of the nine types of as-prepared CDs. Moreover, the density of –NH₂ on the surface of the CDs was quantitatively detected by spectrophotometry so as to elucidate the relationship between the –NH₂ related surface state and the optical properties.
Nakajima, Kimihiko; Shimasaku, Kazuhiro; Ono, Yoshiaki; Okamura, Sadanori [Department of Astronomy, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Ouchi, Masami [Institute for the Physics and Mathematics of the Universe (IPMU), TODIAS, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583 (Japan); Lee, Janice C.; Ly, Chun [Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Foucaud, Sebastien [Department of Earth Sciences, National Taiwan Normal University, No. 88, Tingzhou Road, Sec. 4, Taipei 11677, Taiwan (China); Dale, Daniel A. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY (United States); Salim, Samir [Department of Astronomy, Indiana University, Bloomington, IN (United States); Finn, Rose [Department of Physics, Siena College, Loudonville, NY (United States); Almaini, Omar, E-mail: nakajima@astron.s.u-tokyo.ac.jp [School of Physics and Astronomy, University of Nottingham, Nottingham (United Kingdom)
2012-01-20
We present the average metallicity and star formation rate (SFR) of Lyα emitters (LAEs) measured from our large-area survey with three narrowband (NB) filters covering the Lyα, [O II]λ3727, and Hα+[N II] lines of LAEs at z = 2.2. We select 919 z = 2.2 LAEs from Subaru/Suprime-Cam NB data in conjunction with Magellan/IMACS spectroscopy. Of these LAEs, 561 and 105 are observed with KPNO/NEWFIRM near-infrared NB filters whose central wavelengths are matched to redshifted [O II] and Hα nebular lines, respectively. By stacking the near-infrared images of the LAEs, we successfully obtain average nebular-line fluxes of LAEs, the majority of which are too faint to be identified individually by NB imaging or deep spectroscopy. The stacked object has an Hα luminosity of 1.7 × 10⁴² erg s⁻¹, corresponding to an SFR of 14 M☉ yr⁻¹. We place, for the first time, a firm lower limit to the average metallicity of LAEs of Z ≳ 0.09 Z☉ (2σ) based on the [O II]/(Hα+[N II]) index together with photoionization models and empirical relations. This lower limit of metallicity rules out the hypothesis that LAEs, so far observed at z ∼ 2, are extremely metal-poor (Z < 2 × 10⁻² Z☉) galaxies at the 4σ level. This limit is higher than a simple extrapolation of the observed mass-metallicity relation of z ∼ 2 UV-selected galaxies toward lower masses (5 × 10⁸ M☉), but roughly consistent with a recently proposed fundamental mass-metallicity relation when the LAEs' relatively low SFR is taken into account. The Hα and Lyα luminosities of our NB-selected LAEs indicate that the escape fraction of Lyα photons is ∼12%-30%, much higher than the values derived for other galaxy populations at z ∼ 2.
AVERAGE METALLICITY AND STAR FORMATION RATE OF Lyα EMITTERS PROBED BY A TRIPLE NARROWBAND SURVEY
Nakajima, Kimihiko; Shimasaku, Kazuhiro; Ono, Yoshiaki; Okamura, Sadanori; Ouchi, Masami; Lee, Janice C.; Ly, Chun; Foucaud, Sebastien; Dale, Daniel A.; Salim, Samir; Finn, Rose; Almaini, Omar
2012-01-01
We present the average metallicity and star formation rate (SFR) of Lyα emitters (LAEs) measured from our large-area survey with three narrowband (NB) filters covering the Lyα, [O II]λ3727, and Hα+[N II] lines of LAEs at z = 2.2. We select 919 z = 2.2 LAEs from Subaru/Suprime-Cam NB data in conjunction with Magellan/IMACS spectroscopy. Of these LAEs, 561 and 105 are observed with KPNO/NEWFIRM near-infrared NB filters whose central wavelengths are matched to redshifted [O II] and Hα nebular lines, respectively. By stacking the near-infrared images of the LAEs, we successfully obtain average nebular-line fluxes of LAEs, the majority of which are too faint to be identified individually by NB imaging or deep spectroscopy. The stacked object has an Hα luminosity of 1.7 × 10⁴² erg s⁻¹, corresponding to an SFR of 14 M☉ yr⁻¹. We place, for the first time, a firm lower limit to the average metallicity of LAEs of Z ≳ 0.09 Z☉ (2σ) based on the [O II]/(Hα+[N II]) index together with photoionization models and empirical relations. This lower limit of metallicity rules out the hypothesis that LAEs, so far observed at z ∼ 2, are extremely metal-poor (Z < 2 × 10⁻² Z☉) galaxies at the 4σ level. This limit is higher than a simple extrapolation of the observed mass-metallicity relation of z ∼ 2 UV-selected galaxies toward lower masses (5 × 10⁸ M☉), but roughly consistent with a recently proposed fundamental mass-metallicity relation when the LAEs' relatively low SFR is taken into account. The Hα and Lyα luminosities of our NB-selected LAEs indicate that the escape fraction of Lyα photons is ∼12%-30%, much higher than the values derived for other galaxy populations at z ∼ 2.
Sampoorna, M.; Nagendra, K. N. [Indian Institute of Astrophysics, Koramangala, Bengaluru 560 034 (India); Stenflo, J. O., E-mail: sampoorna@iiap.res.in, E-mail: knn@iiap.res.in, E-mail: stenflo@astro.phys.ethz.ch [Institute of Astronomy, ETH Zurich, CH-8093 Zurich (Switzerland)
2017-08-01
Magnetic fields in the solar atmosphere leave their fingerprints in the polarized spectrum of the Sun via the Hanle and Zeeman effects. While the Hanle and Zeeman effects dominate, respectively, in the weak and strong field regimes, both these effects jointly operate in the intermediate field strength regime. Therefore, it is necessary to solve the polarized line transfer equation, including the combined influence of Hanle and Zeeman effects. Furthermore, it is required to take into account the effects of partial frequency redistribution (PRD) in scattering when dealing with strong chromospheric lines with broad damping wings. In this paper, we present a numerical method to solve the problem of polarized PRD line formation in magnetic fields of arbitrary strength and orientation. This numerical method is based on the concept of operator perturbation. For our studies, we consider a two-level atom model without hyperfine structure and lower-level polarization. We compare the PRD idealization of angle-averaged Hanle–Zeeman redistribution matrices with the full treatment of angle-dependent PRD, to indicate when the idealized treatment is inadequate and what kind of polarization effects are specific to angle-dependent PRD. Because the angle-dependent treatment is presently computationally prohibitive when applied to realistic model atmospheres, we present the computed emergent Stokes profiles for a range of magnetic fields, with the assumption of an isothermal one-dimensional medium.
Javvaji, Brahmanandam [Indian Institute of Science, Department of Aerospace Engineering (India); Raha, S. [Indian Institute of Science, Department of Computational and Data Sciences (India); Mahapatra, D. Roy, E-mail: droymahapatra@aero.iisc.ernet.in [Indian Institute of Science, Department of Aerospace Engineering (India)
2017-02-15
Electromagnetic and thermo-mechanical forces play a major role in nanotube-based materials and devices. Under high-energy electron transport or high current densities, carbon nanotubes fail via sequential fracture. The failure sequence is governed by a characteristic length scale and the flow of current. We report a unified phenomenological model derived from molecular dynamics simulation data, which successfully captures the important physics of the complex failure process. Length-scale and strain rate-dependent defect nucleation, growth, and fracture in single-walled carbon nanotubes with diameters in the range of 0.47 to 2.03 nm and lengths of about 6.17 to 26.45 nm are simulated. Nanotubes with long length and small diameter show brittle fracture, while those with short length and large diameter show transition from ductile to brittle fracture. In short nanotubes with small diameters, we observe several structural transitions like Stone-Wales defect initiation, its propagation to larger void nucleation, formation of multiple chains of atoms, conversion to a monatomic chain of atoms, and finally complete fracture of the carbon nanotube. The hybridization state of carbon-carbon bonds near the end cap evolves, leading to the formation of a monatomic chain in short nanotubes with small diameter. Transition from ductile to brittle fracture is also observed when the strain rate exceeds a critical value. A generalized analytical model of failure is established, which correlates the defect energy during the formation of the atomic chain with the aspect ratio of the nanotube and the strain rate. Variation in the mechanical properties such as elastic modulus, tensile strength, and fracture strain with size and strain rate shows important implications in mitigating force fields and ways to enhance the life of electronic devices and nanomaterial conversion via fracture in manufacturing.
Effects of Presentation Format and List Length on Children's False Memories
Swannell, Ellen R.; Dewhurst, Stephen A.
2013-01-01
The effect of list length on children's false memories was investigated using list and story versions of the Deese/Roediger-McDermott procedure. Short (7 items) and long (14 items) sequences of semantic associates were presented to children aged 6, 8, and 10 years old either in lists or embedded within a story that emphasized the list theme.…
FRC formation studies in a field reversed theta pinch with a variable length coil
Maqueda, R.; Sobehart, J.; Rodrigo, A.B.
1987-01-01
The formation phase of field reversed configurations (FRC) produced using a theta pinch has received considerable attention lately in connection with the possibility of developing formation methods on time scales longer than the radial Alfvén time, which would permit the use of low-voltage technology and represent an important engineering simplification in the trend towards larger-scale machines (1). The mechanisms leading to the loss of trapped reversed flux during the preheating (2) and formation (3,4) stages are investigated, with the aim of maximizing this quantity in order to improve the stability and transport properties of the configuration in its final equilibrium state. As a result, semi-empirical scaling laws have been obtained relating the reversed-flux loss to the experimental operating parameters during the early stages of the formation process (1). (author)
Abdelkhalik Eladl
2018-01-01
This paper reports an investigation of the effects of process parameters on the quality characteristics of polymeric parts produced by micro injection moulding (μIM) with two different materials. Four injection moulding process parameters (injection velocity, holding pressure, melt temperature and mould temperature) were investigated using Polypropylene (PP) and Acrylonitrile Butadiene Styrene (ABS). Three key characteristics of the mouldings were evaluated with respect to process settings and the material employed: part mass, flow length and flash formation. The experimentation employs a test part with four micro fingers with different aspect ratios (from 21 up to 150) and was carried out according to the Design of Experiments (DOE) statistical technique. The results show that holding pressure and injection velocity are the most influential parameters on part mass, with a direct effect for both materials. Both parameters have a similar effect on flow length for both PP and ABS at all aspect ratios and have stronger effects as the feature thickness decreases below 300 μm. The study shows that for the investigated materials the injection speed and packing pressure were the most influential parameters for increasing the amount of flash formation, with relative effects consistent for both materials. Higher melt and mould temperature settings were less influential parameters for increasing the flash amount when moulding with both materials. Of the two investigated materials, PP was the one exhibiting more flash formation as compared with ABS, when corresponding injection moulding parameter settings for both materials were considered.
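The DOE layout described above, four factors each varied between levels, can be enumerated as a two-level full-factorial plan. A minimal sketch; the low/high level values below are placeholders, not the settings used in the study:

```python
from itertools import product

# Placeholder low/high levels for the four uIM parameters (illustrative, not the study's)
factors = {
    "injection_velocity_mm_s": (100, 200),
    "holding_pressure_bar": (400, 800),
    "melt_temperature_C": (200, 240),
    "mould_temperature_C": (30, 60),
}

# Every treatment combination of the 2^4 full-factorial design
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```

Each of the 16 runs would then be moulded and its part mass, flow length and flash measured, allowing main effects to be estimated by contrasting the low-level and high-level halves of the design for each factor.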
Devale, Madhuri R; Mahesh, M C; Bhandary, Shreetha
2017-01-01
Stresses generated during root canal instrumentation have been reported to cause apical cracks. The smaller, less pronounced defects like cracks can later propagate into vertical root fracture when the tooth is subjected to repeated stresses from endodontic or restorative procedures. This study evaluated the occurrence of apical cracks with stainless steel hand files and rotary NiTi RaCe and K3 files at two different instrumentation lengths. In the present in vitro study, 60 mandibular premolars were mounted in resin blocks with simulated periodontal ligament. The apical 3 mm of the root surfaces were exposed and stained using India ink. Preoperative images of root apices were obtained at 100x using a stereomicroscope. The teeth were divided into six groups of 10 each. The first two groups were instrumented with stainless steel files, the next two groups with rotary NiTi RaCe files and the last two groups with rotary NiTi K3 files. The instrumentation was carried out till the apical foramen (Working Length-WL) and 1 mm short of the apical foramen (WL-1) with each file system. After root canal instrumentation, postoperative images of root apices were obtained. Preoperative and postoperative images were compared and the occurrence of cracks was recorded. Descriptive statistical analysis and Chi-square tests were used to analyze the results. Apical root cracks were seen in 30%, 35% and 20% of teeth instrumented with K-files, RaCe files and K3 files respectively. There was no statistical significance among the three instrumentation systems in the formation of apical cracks (p=0.563). Apical cracks were seen in 40% and 20% of teeth instrumented with K-files; 60% and 10% of teeth with RaCe files and 40% and 0% of teeth with K3 files at WL and WL-1 respectively. For groups instrumented with hand files there was no statistical significance in the number of cracks at WL and WL-1 (p=0.628). But for teeth instrumented with RaCe files and K3 files, significantly more cracks were seen at WL than at WL-1.
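The three-system comparison above can be reproduced by hand: with 20 teeth per system, the reported 30%, 35% and 20% imply 6, 7 and 4 cracked teeth (counts inferred, not stated in the abstract), and for a 3×2 table (df = 2) the chi-square p-value reduces to exp(-χ²/2). A sketch under those inferred counts:

```python
import math

# Inferred counts: 20 teeth per system; 30%, 35%, 20% cracked -> 6, 7, 4 (assumption)
cracked = [6, 7, 4]
intact = [14, 13, 16]

total_c, total_i, k = sum(cracked), sum(intact), len(cracked)
chi2 = 0.0
for c, i in zip(cracked, intact):
    e_c, e_i = total_c / k, total_i / k     # expected counts under homogeneity (equal groups)
    chi2 += (c - e_c) ** 2 / e_c + (i - e_i) ** 2 / e_i

p = math.exp(-chi2 / 2)   # chi-square survival function, exact for df = (3-1)(2-1) = 2
print(round(p, 3))  # → 0.563, matching the reported p-value
```

That the recomputed p-value lands exactly on the reported 0.563 supports the inferred 6/7/4 split.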
Mahesh, MC; Bhandary, Shreetha
2017-01-01
Introduction Stresses generated during root canal instrumentation have been reported to cause apical cracks. The smaller, less pronounced defects like cracks can later propagate into vertical root fracture when the tooth is subjected to repeated stresses from endodontic or restorative procedures. Aim This study evaluated the occurrence of apical cracks with stainless steel hand files and rotary NiTi RaCe and K3 files at two different instrumentation lengths. Materials and Methods In the present in vitro study, 60 mandibular premolars were mounted in resin blocks with simulated periodontal ligament. The apical 3 mm of the root surfaces were exposed and stained using India ink. Preoperative images of root apices were obtained at 100x using a stereomicroscope. The teeth were divided into six groups of 10 each. The first two groups were instrumented with stainless steel files, the next two groups with rotary NiTi RaCe files and the last two groups with rotary NiTi K3 files. The instrumentation was carried out till the apical foramen (Working Length-WL) and 1 mm short of the apical foramen (WL-1) with each file system. After root canal instrumentation, postoperative images of root apices were obtained. Preoperative and postoperative images were compared and the occurrence of cracks was recorded. Descriptive statistical analysis and Chi-square tests were used to analyze the results. Results Apical root cracks were seen in 30%, 35% and 20% of teeth instrumented with K-files, RaCe files and K3 files respectively. There was no statistical significance among the three instrumentation systems in the formation of apical cracks (p=0.563). Apical cracks were seen in 40% and 20% of teeth instrumented with K-files; 60% and 10% of teeth with RaCe files and 40% and 0% of teeth with K3 files at WL and WL-1 respectively. For groups instrumented with hand files there was no statistical significance in the number of cracks at WL and WL-1 (p=0.628). But for teeth instrumented with RaCe files and K3 files, significantly more cracks were seen at WL than at WL-1.
Ajrian, E A; Sidorenko, S N
2002-01-01
The effect of the ion-molecule and intermolecular interactions on the formation of inter-ion average force potentials is investigated within the framework of a classical ion-dipole model of electrolyte solutions. These potentials are shown to possess the Coulomb asymptotics at large distances, while in the region of mean distances they reveal the creation and disintegration of solvent-shared ion pairs. The calculation results provide a physical picture qualitatively consistent with what is experimentally observed in strong electrolyte solutions. In particular, an increased interaction between an ion and a molecule enhances the formation of ion pairs in which the ions are separated by one solvent molecule
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Lee, Kyoung-Soo; Glikman, Eilat; Dey, Arjun; Reddy, Naveen; Jannuzi, Buell T.; Brown, Michael J. I.; Gonzalez, Anthony H.; Cooper, Michael C.; Fan Xiaohui; Bian Fuyan; Stern, Daniel; Brodwin, Mark; Cooray, Asantha
2011-01-01
We investigate the average physical properties and star formation histories (SFHs) of the most UV-luminous star-forming galaxies at z ∼ 3.7. Our results are based on the average spectral energy distributions (SEDs), constructed from stacked optical-to-infrared photometry, of a sample of the 1913 most UV-luminous star-forming galaxies found in 5.3 deg² of the NOAO Deep Wide-Field Survey. We find that the shape of the average SED in the rest optical and infrared is fairly constant with UV luminosity, i.e., more UV-luminous galaxies are, on average, also more luminous at longer wavelengths. In the rest UV, however, the spectral slope β (≡ dlog F_λ/dlog λ; measured at 0.13 μm rest UV and thus star formation rates (SFRs) scale closely with stellar mass such that more UV-luminous galaxies are also more massive, (2) the median ages indicate that the stellar populations are relatively young (200-400 Myr) and show little correlation with UV luminosity, and (3) more UV-luminous galaxies are dustier than their less-luminous counterparts, such that L ∼ 4-5L* galaxies are extincted up to A(1600) = 2 mag while L ∼ L* galaxies have A(1600) = 0.7-1.5 mag. We argue that the average SFHs of UV-luminous galaxies are better described by models in which SFR increases with time in order to simultaneously reproduce the tight correlation between the UV-derived SFR and stellar mass and their universally young ages. We demonstrate the potential of measurements of the SFR-M* relation at multiple redshifts to discriminate between simple models of SFHs. Finally, we discuss the fate of these UV-brightest galaxies in the next 1-2 Gyr and their possible connection to the most massive galaxies at z ∼ 2.
Earnest, Arul; Chen, Mark I C; Seow, Eillyne
2006-01-22
It has been postulated that patients admitted on weekends or after office hours may experience delays in clinical management and consequently have longer length of stay (LOS). We investigated if day and time of admission is associated with LOS in Tan Tock Seng Hospital (TTSH), a 1,400 bed acute care tertiary hospital serving the central and northern regions of Singapore. This was a historical cohort study based on all admissions from TTSH from 1st September 2003 to 31st August 2004. Data was extracted from routinely available computerized hospital information systems for analysis by episode of care. LOS for each episode of care was log-transformed before analysis, and a multivariate linear regression model was used to study if sex, age group, type of admission, admission source, day of week admitted, admission on a public holiday or eve of public holiday, admission on a weekend and admission time were associated with an increased LOS. In the multivariate analysis, sex, age group, type of admission, source of admission, admission on the eve of public holiday and weekends and time of day admitted were independently and significantly associated with LOS. Patients admitted on Friday, Saturday or Sunday stayed on average 0.3 days longer than those admitted on weekdays, after adjusting for potential confounders; those admitted on the eve of public holidays, and those admitted in the afternoons and after office hours also had a longer LOS (differences of 0.71, 1.14 and 0.65 days respectively). Cases admitted over a weekend, eve of holiday, in the afternoons, and after office hours, do have an increased LOS. Further research is needed to identify processes contributing to the above phenomenon.
Chen Mark IC
2006-01-01
Background It has been postulated that patients admitted on weekends or after office hours may experience delays in clinical management and consequently have longer length of stay (LOS). We investigated if day and time of admission is associated with LOS in Tan Tock Seng Hospital (TTSH), a 1,400 bed acute care tertiary hospital serving the central and northern regions of Singapore. Methods This was a historical cohort study based on all admissions from TTSH from 1st September 2003 to 31st August 2004. Data was extracted from routinely available computerized hospital information systems for analysis by episode of care. LOS for each episode of care was log-transformed before analysis, and a multivariate linear regression model was used to study if sex, age group, type of admission, admission source, day of week admitted, admission on a public holiday or eve of public holiday, admission on a weekend and admission time were associated with an increased LOS. Results In the multivariate analysis, sex, age group, type of admission, source of admission, admission on the eve of public holiday and weekends and time of day admitted were independently and significantly associated with LOS. Patients admitted on Friday, Saturday or Sunday stayed on average 0.3 days longer than those admitted on weekdays, after adjusting for potential confounders; those admitted on the eve of public holidays, and those admitted in the afternoons and after office hours, also had a longer LOS (differences of 0.71, 1.14 and 0.65 days respectively). Conclusion Cases admitted over a weekend, eve of holiday, in the afternoons, and after office hours do have an increased LOS. Further research is needed to identify processes contributing to the above phenomenon.
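The modelling step described above, linear regression on log-transformed LOS with admission-time indicators, can be sketched on simulated data; the effect size, sample size and variable names below are invented for illustration:

```python
import numpy as np

# Simulated admissions: weekend stays carry a small multiplicative LOS effect (values invented)
rng = np.random.default_rng(2)
n = 5000
weekend = rng.integers(0, 2, n).astype(float)              # 1 = weekend admission
log_los = 1.6 + 0.08 * weekend + rng.normal(0.0, 0.3, n)   # log(days)

# OLS of log(LOS) on an intercept and the weekend indicator
X = np.column_stack([np.ones(n), weekend])
beta, *_ = np.linalg.lstsq(X, log_los, rcond=None)
effect = float(np.exp(beta[1]))   # multiplicative change in LOS for weekend admissions
```

Because the outcome is log-transformed, exponentiating the fitted coefficient gives the multiplicative LOS change associated with weekend admission, which is why the study reports additive day differences only after back-transformation and adjustment.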
Dexter, Franklin; Epstein, Richard H
2018-03-01
Diagnosis-related group (DRG) based reimbursement creates incentives for reduction in hospital length of stay (LOS). Such reductions might be accomplished by lesser incidences of discharges to home. However, we previously reported that, while controlling for DRG, each 1-day decrease in hospital median LOS was associated with lesser odds of transfer to a postacute care facility (P = .0008). The result, though, was limited to elective admissions, 15 common surgical DRGs, and the 2013 US National Readmission Database. We studied the same potential relationship between decreased LOS and postacute care using different methodology and over 2 different years. The observational study was performed using summary measures from the 2008 and 2014 US National Inpatient Sample, with 3 types of categories (strata): (1) Clinical Classifications Software's classes of procedures (CCS), (2) DRGs including a major operating room procedure during hospitalization, or (3) CCS limiting patients to those with US Medicare as the primary payer. Greater reductions in the mean LOS were associated with smaller percentages of patients with disposition to postacute care. Analyzed using 72 different CCSs, 174 DRGs, or 70 CCSs limited to Medicare patients, each pairwise reduction in the mean LOS by 1 day was associated with an estimated 2.6% ± 0.4%, 2.3% ± 0.3%, or 2.4% ± 0.3% (absolute) pairwise reduction in the mean incidence of use of postacute care, respectively. These 3 results obtained using bivariate weighted least squares linear regression were all P < .0001, as were the corresponding results obtained using unweighted linear regression or the Spearman rank correlation. In the United States, reductions in hospital LOS, averaged over many surgical procedures, are not accomplished through a greater incidence of use of postacute care.
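The bivariate weighted least squares step above can be sketched as follows; the per-category mean LOS values, postacute-care percentages and discharge-count weights are fabricated illustrations, not National Inpatient Sample values:

```python
import numpy as np

# Fabricated per-category summaries: mean LOS (days), % discharged to postacute care, weights
los = np.array([2.1, 3.4, 4.0, 5.2, 6.8, 8.5])
pac = np.array([8.0, 11.5, 12.9, 16.1, 19.8, 24.0])        # percent to postacute care
w = np.array([1200.0, 800.0, 650.0, 400.0, 300.0, 150.0])  # discharge counts as weights

# Weighted least squares via normal equations: minimize sum w_i*(pac_i - a - b*los_i)^2
X = np.column_stack([np.ones_like(los), los])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * pac))
slope = float(beta[1])   # percentage points of postacute-care use per extra day of mean LOS
```

Weighting each category by its discharge count keeps high-volume procedure groups from being swamped by rare ones, mirroring the study's use of both weighted and unweighted fits as a robustness check.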
Westbrook, J I; Li, L; Raban, M Z; Baysari, M T; Mumford, V; Prgomet, M; Georgiou, A; Kim, T; Lake, R; McCullagh, C; Dalla-Pozza, L; Karnon, J; O'Brien, T A; Ambler, G; Day, R; Cowell, C T; Gazarian, M; Worthington, R; Lehmann, C U; White, L; Barbaric, D; Gardo, A; Kelly, M; Kennedy, P
2016-10-21
Medication errors are the most frequent cause of preventable harm in hospitals. Medication management in paediatric patients is particularly complex, and consequently the potential for harm is greater than in adults. Electronic medication management (eMM) systems are heralded as a highly effective intervention to reduce adverse drug events (ADEs), yet internationally evidence of their effectiveness in paediatric populations is limited. This study will assess the effectiveness of an eMM system to reduce medication errors, ADEs and length of stay (LOS). The study will also investigate system impact on clinical work processes. A stepped-wedge cluster randomised controlled trial (SWCRCT) will measure changes pre-eMM and post-eMM system implementation in prescribing and medication administration error (MAE) rates, potential and actual ADEs, and average LOS. In stage 1, 8 wards within the first paediatric hospital will be randomised to receive the eMM system 1 week apart. In stage 2, the second paediatric hospital will randomise implementation of a modified eMM and outcomes will be assessed. Prescribing errors will be identified through record reviews, and MAEs through direct observation of nurses and record reviews. Actual and potential severity will be assigned. Outcomes will be assessed at the patient level using mixed models, taking into account correlation of admissions within wards and multiple admissions for the same patient, with adjustment for potential confounders. Interviews and direct observation of clinicians will investigate the effects of the system on workflow. Data from site 1 will be used to develop improvements in the eMM and implemented at site 2, where the SWCRCT design will be repeated (stage 2). The research has been approved by the Human Research Ethics Committee of the Sydney Children's Hospitals Network and Macquarie University. Results will be reported through academic journals and seminar and conference presentations. Australian New Zealand
Gap length distributions by PEPR
Warszawer, T.N.
1980-01-01
Conditions guaranteeing exponential gap length distributions are formulated and discussed. Exponential gap length distributions of bubble chamber tracks first obtained on a CRT device are presented. Distributions of resulting average gap lengths and their velocity dependence are discussed. (orig.)
Longo, Edoardo; Moretto, Alessandro; Formaggio, Fernando; Toniolo, Claudio
2011-10-01
Critical main-chain length for peptide helix formation in the crystal (solid) state and in organic solvents has already been reported. In this short communication, we describe our results aimed at assessing the aforementioned parameter in water solution. To this goal, we synthesized step-by-step, by solution procedures, a complete series of N-terminally acetylated, C-terminally methoxylated oligopeptides, characterized only by alternating Aib and Ala residues, from the dimer to the nonamer level. All these compounds were investigated by electronic circular dichroism in the far-UV region in water solution as a function of chemical structure, namely the presence/absence of an ester moiety or a negative charge at the C-terminus, and temperature. We find that in aqueous solution the critical main-chain lengths for 3₁₀- and α-helices, although these are still formed only to a limited extent, are six and eight residues, respectively. © 2011 Wiley-Liss, Inc.
Natacha Scarafone
Nine neurodegenerative disorders, called polyglutamine (polyQ) diseases, are characterized by the formation of intranuclear amyloid-like aggregates by nine proteins containing a polyQ tract above a threshold length. These insoluble aggregates and/or some of their soluble precursors are thought to play a role in the pathogenesis. The mechanism by which polyQ expansions trigger the aggregation of the relevant proteins remains, however, unclear. In this work, polyQ tracts of different lengths were inserted into a solvent-exposed loop of the β-lactamase BlaP and the effects of these insertions on the properties of BlaP were investigated by a range of biophysical techniques. The insertion of up to 79 glutamines does not modify the structure of BlaP; it does, however, significantly destabilize the enzyme. The extent of destabilization is largely independent of the polyQ length, allowing us to study independently the effects intrinsic to the polyQ length and those related to the structural integrity of BlaP on the aggregating properties of the chimeras. Only chimeras with 55Q and 79Q readily form amyloid-like fibrils; therefore, similarly to the proteins associated with diseases, there is a threshold number of glutamines above which the chimeras aggregate into amyloid-like fibrils. Most importantly, the chimera containing 79Q forms amyloid-like fibrils at the same rate whether BlaP is folded or not, whereas the 55Q chimera aggregates into amyloid-like fibrils only if BlaP is unfolded. The threshold value for amyloid-like fibril formation depends, therefore, on the structural integrity of the β-lactamase moiety and thus on the steric and/or conformational constraints applied to the polyQ tract. These constraints have, however, no significant effect on the propensity of the 79Q tract to trigger fibril formation. These results suggest that the influence of the protein context on the aggregating properties of polyQ disease-associated proteins could be
Whitaker, Katherine E.; Pope, Alexandra; Cybulski, Ryan; Casey, Caitlin M.; Popping, Gergo; Yun, Min; 3D-HST Collaboration
2018-01-01
The total star formation budget of galaxies consists of the sum of the unobscured star formation, as observed in the rest-frame ultraviolet (UV), together with the obscured component that is absorbed and re-radiated by dust grains in the infrared. We explore how the fraction of obscured star formation depends on star formation rate (SFR) and stellar mass for mass-complete samples of galaxies at 0 < z < 2.5, using MIPS 24 μm photometry in the five well-studied extragalactic CANDELS fields. We find a strong dependence of the fraction of obscured star formation (f_obscured=SFR_IR/SFR_UV+IR) on stellar mass, with remarkably little evolution in this fraction with redshift out to z=2.5. 50% of star formation is obscured for galaxies with log(M/M⊙)=9.4; although unobscured star formation dominates the budget at lower masses, there exists a tail of low-mass, extremely obscured star-forming galaxies at z > 1. For log(M/M⊙)>10.5, >90% of star formation is obscured at all redshifts. We also show that at fixed total SFR, f_obscured is lower at higher redshift. At fixed mass, high-redshift galaxies are observed to have more compact sizes and much higher star formation rates, gas fractions and hence surface densities (implying higher dust obscuration), yet we observe no redshift evolution in f_obscured with stellar mass. This poses a challenge for theoretical models to reproduce, as the observed compact sizes at high redshift seem in tension with the lack of evolution in dust obscuration.
Ohara, Masayuki; Lu, Huimei; Shiraki, Katsutomo; Ishimura, Yoshimasa; Uesaka, Toshihiro; Katoh, Osamu; Watanabe, Hiromitsu
2001-01-01
The radioprotective effect of miso, a fermentation product from soy bean, was investigated with reference to the survival time, crypt survival and jejunum crypt length in male B6C3F1 mice. Miso at three different fermentation stages (early-, medium- and long-term fermented miso) was mixed in MF diet into biscuits at 10% and was administered from 1 week before irradiation. Animal survival in the long-term fermented miso group was significantly prolonged as compared with the short-term fermented miso and MF cases after 8 Gy of ⁶⁰Co γ-ray irradiation at a dose rate of 2 Gy min⁻¹. Delay in mortality was evident in all three miso groups, with significantly increased survival. At doses of 10 and 12 Gy X-irradiation at a dose rate of 4 Gy min⁻¹, the treatment with long-term fermented miso significantly increased crypt survival. Also the protective influence against irradiation in terms of crypt lengths in the long-term fermented miso group was significantly greater than in the short-term or medium-term fermented miso and MF diet groups. Thus, prolonged fermentation appears to be very important for protection against radiation effects. (author)
Ohara, M; Lu, H; Shiraki, K; Ishimura, Y; Uesaka, T; Katoh, O; Watanabe, H
2001-12-01
The radioprotective effect of miso, a fermentation product from soy bean, was investigated with reference to the survival time, crypt survival and jejunum crypt length in male B6C3F1 mice. Miso at three different fermentation stages (early-, medium- and long-term fermented miso) was mixed in MF diet into biscuits at 10% and was administered from 1 week before irradiation. Animal survival in the long-term fermented miso group was significantly prolonged as compared with the short-term fermented miso and MF cases after 8 Gy of 60Co-gamma-ray irradiation at a dose rate of 2Gy min(-1). Delay in mortality was evident in all three miso groups, with significantly increased survival. At doses of 10 and 12 Gy X-irradiation at a dose rate of 4 Gy min(-1), the treatment with long-term fermented miso significantly increased crypt survival. Also the protective influence against irradiation in terms of crypt lengths in the long-term fermented miso group was significantly greater than in the short-term or medium-term fermented miso and MF diet groups. Thus, prolonged fermentation appears to be very important for protection against radiation effects.
Mitschker, F.; Wißing, J.; Hoppe, Ch; de los Arcos, T.; Grundmeier, G.; Awakowicz, P.
2018-04-01
The respective effect of average incorporated ion energy and impinging atomic oxygen flux on the deposition of silicon oxide (SiO x ) barrier coatings for polymers is studied in a microwave driven low pressure discharge with additional variable RF bias. Under consideration of plasma parameters, bias voltage, film density, chemical composition and particle fluxes, both are determined relative to the effective flux of Si atoms contributing to film growth. Subsequently, a correlation with barrier performance and chemical structure is achieved by measuring the oxygen transmission rate (OTR) and by performing x-ray photoelectron spectroscopy. It is observed that an increase in incorporated energy to 160 eV per deposited Si atom results in an enhanced cross-linking of the SiO x network and, therefore, an improved barrier performance by almost two orders of magnitude. Furthermore, independently increasing the number of oxygen atoms to 10 500 per deposited Si atom also leads to a comparable barrier improvement by an enhanced cross-linking.
Inoue, Tohru; Yamakawa, Haruka
2011-04-15
Micellization behavior was investigated for polyoxyethylene-type nonionic surfactants with varying chain length (C(n)E(m)) in a room temperature ionic liquid, 1-butyl-3-methylimidazolium tetrafluoroborate (bmimBF(4)). Critical micelle concentration (cmc) was determined from the variation of (1)H NMR chemical shift with the surfactant concentration. The logarithmic value of cmc decreased linearly with the number of carbon atoms in the surfactant hydrocarbon chain, similarly to the case observed in aqueous surfactant solutions. However, the slope of the straight line is much smaller in bmimBF(4) than in aqueous solution. Thermodynamic parameters for micelle formation estimated from the temperature dependence of cmc showed that the micellization in bmimBF(4) is an entropy-driven process around room temperature. This behavior is also similar to the case in aqueous solution. However, the magnitude of the entropic contribution to the overall micellization free energy in bmimBF(4) is much smaller compared with that in aqueous solution. These results suggest that the micellization in bmimBF(4) proceeds through a mechanism similar to the hydrophobic interaction in aqueous surfactant solutions, although the solvophobic effect in bmimBF(4) is much weaker than the hydrophobic effect. Copyright © 2011 Elsevier Inc. All rights reserved.
Fundamental length and relativistic length
Strel'tsov, V.N.
1988-01-01
It is noted that the introduction of a fundamental length contradicts the conventional representations concerning the contraction of the longitudinal size of fast-moving objects. The use of the concept of relativistic length and the ensuing ''elongation formula'' permits one to solve this problem
He, Minhui; Yang, Bao; Shishov, Vladimir; Rossi, Sergio; Bräuning, Achim; Ljungqvist, Fredrik Charpentier; Grießinger, Jussi
2018-04-01
The response of the growing season to the ongoing global warming has gained considerable attention. In particular, how and to which extent the growing season will change during this century is essential information for the Tibetan Plateau, where the observed warming trend has exceeded the global mean. In this study, the 1960-2014 mean length of the tree-ring growing season (LOS) on the Tibetan Plateau was derived from results of the Vaganov-Shashkin oscilloscope tree growth model, based on 20 composite study sites and more than 3000 trees. Bootstrap and partial correlations were used to evaluate the most significant climate factors determining the LOS in the study region. Based on this relationship, we predicted the future variability of the LOS under three emission scenarios (Representative Concentration Pathways (RCP) 2.6, 6.0, and 8.5, representing different concentrations of greenhouse gasses) derived from 17 Earth system models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). The averaged LOS on the Tibetan Plateau is 103 days during the period 1960-2014, and April-September minimum temperature is the strongest factor controlling the LOS. We detected a general increase in the LOS over the twenty-first century under all three selected scenarios. By the middle of this century, LOS will extend by about 3 to 4 weeks under the RCPs 2.6 and 6.0, and by more than 1 month (37 days) under the RCP 8.5, relative to the baseline period 1960-2014. From the middle to the end of the twenty-first century, LOS will further extend by about 3 to 4 weeks under the RCPs 6.0 and 8.5, respectively. Under the RCP 2.6 scenario, however, the extension reaches a plateau at around 2050, at about 2 weeks of LOS extension. In total, we found an average rate of 2.1, 3.6, and 5.0 days decade⁻¹ for the LOS extension from 2015 to 2100 under the RCPs 2.6, 6.0, and 8.5, respectively. However, such estimated LOS extensions may be offset by other ecological
Earth Data Analysis Center, University of New Mexico — Flame length was modeled using FlamMap, an interagency fire behavior mapping and analysis program that computes potential fire behavior characteristics. The tool...
Truncated cross-sectional average length of life
Canudas-Romo, Vladimir; Guillot, Michel
2015-01-01
Period life expectancies are commonly used to compare populations, but these correspond to simple juxtapositions of current mortality levels. In order to construct life expectancies for cohorts, a complete historical series of mortality rates is needed, and these are available for only a subset o...... for most of the disparity in mortality between the populations are identified. Supplementary material for this article is available at: http://dx.doi.org/10.1080/00324728.2015.1019955....
Relativistic distances, sizes, lengths
Strel'tsov, V.N.
1992-01-01
Such notions as light or retarded distance, field size, formation way, visible size of a body, relativistic or radar length and wave length of light from a moving atom are considered. The relation between these notions is cleared up, their classification is given. It is stressed that the formation way is defined by the field size of a moving particle. In the case of the electromagnetic field, longitudinal sizes increase proportionally to γ² with growing charge velocity (γ is the Lorentz factor). 18 refs
Hakvoort, T. B.; Spijkers, J. A.; Vermeulen, J. L.; Lamers, W. H.
1996-01-01
We have developed a fast and general method to obtain an enriched, full-length cDNA expression library with subtractively enriched cDNA fragments. The procedure relies on RecA-mediated triple-helix formation of single-stranded cDNA fragments with a double-stranded cDNA plasmid library. The complexes
Pradhan, T.
1975-01-01
The concept of fundamental length was first put forward by Heisenberg from purely dimensional reasons. From a study of the observed masses of the elementary particles known at that time, it is surmised that this length should be of the order of magnitude l ≈ 10⁻¹³ cm. It was Heisenberg's belief that introduction of such a fundamental length would eliminate the divergence difficulties from relativistic quantum field theory by cutting off the high energy regions of the 'proper fields'. Since the divergence difficulties arise primarily due to an infinite number of degrees of freedom, one simple remedy would be the introduction of a principle that limits these degrees of freedom by removing the effectiveness of the waves with a frequency exceeding a certain limit without destroying the relativistic invariance of the theory. The principle can be stated as follows: It is in principle impossible to invent an experiment of any kind that will permit a distinction between the positions of two particles at rest, the distance between which is below a certain limit. A more elegant way of introducing fundamental length into quantum theory is through commutation relations between two position operators. In quantum field theory such as quantum electrodynamics, it can be introduced through the commutation relation between two interpolating photon fields (vector potentials). (K.B.)
Wang, Wei
2013-01-01
Mixed-format tests containing both multiple-choice (MC) items and constructed-response (CR) items are now widely used in many testing programs. Mixed-format tests often are considered to be superior to tests containing only MC items although the use of multiple item formats leads to measurement challenges in the context of equating conducted under…
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs
Sharma, Kiran K; Razskazovskiy, Yuriy; Purkayastha, Shubhadeep; Bernhard, William A
2009-06-11
The question of how DNA base sequence influences the yield of DNA strand breaks produced by the direct effect of ionizing radiation was investigated in a series of oligodeoxynucleotides of the form (d(CG)(n))(2) and (d(GC)(n))(2). The yields of free base release from X-irradiated DNA films containing 2.5 waters/nucleotide were measured by HPLC as a function of oligomer length. For (d(CG)(n))(2), the ratio of the Gua yield to Cyt yield, R, was relatively constant at 2.4-2.5 for n = 2-4 and it decreased to 1.2 as n increased from 5 to 10. When Gua was moved to the 5' end, for example going from d(CG)(5) to d(GC)(5), R dropped from 1.9 +/- 0.1 to 1.1 +/- 0.1. These effects are poorly described if the chemistry at the oligomer ends is assumed to be independent of the remainder of the oligomer. A mathematical model incorporating charge transfer through the base stack was derived to explain these effects. In addition, EPR was used to measure the yield of trapped-deoxyribose radicals at 4 K following X-irradiation at 4 K. The yield of free base release was substantially greater, by 50-100 nmol/J, than the yield of trapped-deoxyribose radicals. Therefore, a large fraction of free base release stems from a nonradical intermediate. For this intermediate, a deoxyribose carbocation formed by two one-electron oxidations is proposed. This reaction pathway requires that the hole (electron loss site) transfers through the base stack and, upon encountering a deoxyribose hole, oxidizes that site to form a deoxyribose carbocation. This reaction mechanism provides a consistent way of explaining both the absence of trapped radical intermediates and the unusual dependence of free base release on oligomer length.
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a curved manifold. It is shown that the common approaches are first-order approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
Ichiguchi, Katsuji
1998-01-01
A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Watson, Jane; Chick, Helen
2012-01-01
This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…
Averaging operations on matrices
2014-07-03
Positive definite matrices play a role in several applications: in diffusion tensor imaging, 3 × 3 positive definite matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 positive definite matrices model stress tensors; and in machine learning, n × n positive definite matrices occur as kernel matrices. (Tanvi Jain)
Patricia Bouyer
2015-09-01
Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a curved manifold. It is shown that the common approaches are first-order approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
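A minimal sketch of the barycentric ("common") approach discussed above — not Gramkow's corrected estimator; the hemisphere-alignment step and the function name are our own assumptions:

```python
import math

def quat_barycenter(quats):
    """First-order rotation average: hemisphere-align unit quaternions,
    take their arithmetic mean, then renormalize to unit length."""
    ref = quats[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        # q and -q encode the same rotation; flip to agree with the reference
        if sum(a * b for a, b in zip(q, ref)) < 0:
            q = [-a for a in q]
        acc = [a + b for a, b in zip(acc, q)]
    norm = math.sqrt(sum(a * a for a in acc))
    return [a / norm for a in acc]
```

Because q and -q represent the same rotation, the sign alignment is what lets the naive arithmetic mean behave as a first-order approximation to the Riemannian mean for tightly clustered rotations.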
Brent, T.P.; Remack, J.S. (St. Jude Children' s Research Hospital, Memphis, TN (USA))
1988-07-25
Repair of chloroethylnitrosourea (CENU)-induced precursors of DNA interstrand cross-links by O⁶-alkylguanine-DNA alkyltransferase (GAT or GATase) appears to be a factor in tumor resistance to therapy with this class of antineoplastic drugs. Since human GAT is highly specific for O⁶-guanine, yet the probable cross-link structure is an N¹-guanine-N³-cytosine ethane bridge, rearrangement of the initial O⁶-guanine adduct via O⁶,N¹-ethanoguanine has been proposed. The authors suggested that GAT reaction with this intermediate would produce DNA covalently linked to protein through an ethane link from N¹-guanine to the alkyl-acceptor site on GAT. In preliminary studies they demonstrated a covalent complex between GAT and carmustine (BCNU)-treated DNA by a precipitation assay method. They have now developed a method for isolating the reaction product of BCNU-treated synthetic 14-mer ³²P-labeled oligodeoxynucleotide and GAT using polyacrylamide gel electrophoresis. This approach can be used to characterize the adducts induced by CENUs that lead to complex formation with GAT.
Average Bandwidth Allocation Model of WFQ
Tomáš Balogh
2012-01-01
We present a new iterative method for the calculation of the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We verify the model's outcomes with examples and simulation results using the NS2 simulator.
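As an illustration only (the paper's exact iteration is not reproduced here), the weighted max-min fair allocation that a WFQ scheduler converges to on average can be sketched as follows; the function name and the termination rule are assumptions:

```python
def wfq_average_bandwidth(link_speed, weights, input_rates):
    """Weighted max-min fair share, iterated until allocations settle.

    Flows demanding less than their weighted share are capped at their
    demand; leftover capacity is re-divided among the rest by weight.
    """
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))          # flows not yet capped by their demand
    remaining = link_speed
    while active:
        total_w = sum(weights[i] for i in active)
        # flows whose demand fits inside their current weighted share
        capped = {i for i in active
                  if input_rates[i] <= remaining * weights[i] / total_w}
        if not capped:
            # everyone is backlogged: split what is left by weight
            for i in active:
                alloc[i] = remaining * weights[i] / total_w
            break
        for i in capped:
            alloc[i] = input_rates[i]
            remaining -= input_rates[i]
        active -= capped
    return alloc
```

Each pass either finishes or removes at least one demand-limited flow, so the loop terminates after at most one pass per flow, which is the iterative step the abstract alludes to.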
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
Average nuclear surface properties
Groote, H. von.
1979-01-01
The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system no longer can maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which then was fitted to experimental masses. (orig.)
Americans' Average Radiation Exposure
2000-01-01
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body
Bacon, C L
2011-05-01
Previous studies have suggested that development of inhibitors in previously treated patients (PTPs) may be attributable to a switch in factor VIII (FVIII) therapeutic product. Consequently, it is widely recognized that inhibitor development must be assessed in PTPs following the introduction of any new FVIII product. Following a national tender process in 2006, all patients with haemophilia A in Ireland changed their FVIII treatment product en masse to a plasma and albumin-free recombinant full-length FVIII product (ADVATE(®)). In this study, we retrospectively reviewed the case records of Irish PTPs to evaluate risk of inhibitor formation following this treatment switch. One hundred and thirteen patients participated in the study. Most patients (89%) had severe haemophilia. Only one of 96 patients with no inhibitor history developed an inhibitor. Prior to the switch in his recombinant FVIII (rFVIII) treatment of choice, this child had only experienced three exposure days (EDs). Consequently, in total he had only received 6 EDs when his inhibitor was first diagnosed. In keeping with this lack of de novo inhibitor development, we observed no evidence of any recurrent inhibitor formation in any of 16 patients with previously documented inhibitors. Similarly, following a previous en masse switch, we have previously reported that changing from a Chinese hamster ovary cell-produced to a baby hamster kidney cell-produced rFVIII was also associated with a low risk of inhibitor formation in PTPs. Our cumulative findings from these two studies clearly emphasizes that the risk of inhibitor development for PTPs following changes in commercial rFVIII product is low, at least in the Irish population.
Kimura, Masayuki; Hjelmborg, Jacob V B; Gardner, Jeffrey P
2008-01-01
Leukocyte telomere length, representing the mean length of all telomeres in leukocytes, is ostensibly a bioindicator of human aging. The authors hypothesized that the shortest telomeres, rather than the mean leukocyte telomere length, might better forecast imminent mortality in elderly people. They performed mortality...
Improving consensus structure by eliminating averaging artifacts
KC Dukka B
2009-03-01
Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
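The Monte Carlo idea described in the abstract — driving a structure toward averaged coordinates under a harmonic pseudo-energy — can be sketched as below. This is a generic Metropolis loop with assumed parameter values, not the authors' implementation (which additionally preserves realistic bond geometry):

```python
import math
import random

def refine_toward_average(coords, target, steps=20000, step_size=0.05,
                          k=1.0, temperature=0.1, seed=0):
    """Metropolis Monte Carlo: perturb one atom at a time and accept or
    reject moves by the harmonic pseudo-energy k * sum ||x_i - target_i||^2."""
    rng = random.Random(seed)
    x = [list(p) for p in coords]

    def energy():
        # harmonic restraint pulling every atom toward the averaged position
        return k * sum((a - b) ** 2
                       for p, t in zip(x, target) for a, b in zip(p, t))

    e = energy()
    for _ in range(steps):
        i = rng.randrange(len(x))
        old = list(x[i])
        x[i] = [a + rng.uniform(-step_size, step_size) for a in old]
        e_new = energy()
        if e_new < e or rng.random() < math.exp((e - e_new) / temperature):
            e = e_new          # accept the move
        else:
            x[i] = old         # reject: restore the old position
    return x
```

In the paper's setting, extra energy terms keep bond lengths and angles physical, so the walk settles near the averaged structure without reproducing its geometric artifacts.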
An approach to averaging digitized plantagram curves.
Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B
1994-07-01
The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
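A simplified sketch of the ray-based averaging idea (using the centroid as a single ray centre instead of the paper's two ray centres and alignment axis; the names and the binning scheme are assumptions):

```python
import math

def average_radial_profile(curves, n_rays=36):
    """Average closed outlines by sampling radial distance from a common
    centre at equally spaced ray angles, then averaging per ray."""
    profiles = []
    for pts in curves:
        # centroid of the digitized outline serves as the ray centre
        cx = sum(x for x, _ in pts) / len(pts)
        cy = sum(y for _, y in pts) / len(pts)
        prof = [0.0] * n_rays
        count = [0] * n_rays
        for x, y in pts:
            ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
            k = int(ang / (2 * math.pi / n_rays)) % n_rays
            prof[k] += math.hypot(x - cx, y - cy)
            count[k] += 1
        profiles.append([p / c if c else 0.0 for p, c in zip(prof, count)])
    # per-ray mean across all outlines in the size group
    return [sum(p[k] for p in profiles) / len(profiles) for k in range(n_rays)]
```

In the study, outlines are first grouped by foot length (±2.25 mm, roughly half a shoe size) and the per-ray averages are taken within each group.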
Angly, Florent E; Willner, Dana; Prieto-Davó, Alejandra; Edwards, Robert A; Schmieder, Robert; Vega-Thurber, Rebecca; Antonopoulos, Dionysios A; Barott, Katie; Cottrell, Matthew T; Desnues, Christelle; Dinsdale, Elizabeth A; Furlan, Mike; Haynes, Matthew; Henn, Matthew R; Hu, Yongfei; Kirchman, David L; McDole, Tracey; McPherson, John D; Meyer, Folker; Miller, R Michael; Mundt, Egbert; Naviaux, Robert K; Rodriguez-Mueller, Beltran; Stevens, Rick; Wegley, Linda; Zhang, Lixin; Zhu, Baoli; Rohwer, Forest
2009-12-01
Metagenomic studies characterize both the composition and diversity of uncultured viral and microbial communities. BLAST-based comparisons have typically been used for such analyses; however, sampling biases, high percentages of unknown sequences, and the use of arbitrary thresholds to find significant similarities can decrease the accuracy and validity of estimates. Here, we present Genome relative Abundance and Average Size (GAAS), a complete software package that provides improved estimates of community composition and average genome length for metagenomes in both textual and graphical formats. GAAS implements a novel methodology to control for sampling bias via length normalization, to adjust for multiple BLAST similarities by similarity weighting, and to select significant similarities using relative alignment lengths. In benchmark tests, the GAAS method was robust to both high percentages of unknown sequences and to variations in metagenomic sequence read lengths. Re-analysis of the Sargasso Sea virome using GAAS indicated that standard methodologies for metagenomic analysis may dramatically underestimate the abundance and importance of organisms with small genomes in environmental systems. Using GAAS, we conducted a meta-analysis of microbial and viral average genome lengths in over 150 metagenomes from four biomes to determine whether genome lengths vary consistently between and within biomes, and between microbial and viral communities from the same environment. Significant differences between biomes and within aquatic sub-biomes (oceans, hypersaline systems, freshwater, and microbialites) suggested that average genome length is a fundamental property of environments driven by factors at the sub-biome level. The behavior of paired viral and microbial metagenomes from the same environment indicated that microbial and viral average genome sizes are independent of each other, but indicative of community responses to stressors and environmental conditions.
The difference between alternative averages
James Vaupel
2012-09-01
BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
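The covariance relationship stated in this abstract is easy to verify numerically. Below is a minimal sketch in Python; the data values and weighting functions are invented for illustration and do not come from the paper.

```python
# Check: difference between two weighted averages of x equals
# Cov_w1(x, r) / E_w1(r), where r is the ratio of the weighting functions.
# All numbers below are made up for illustration.

def weighted_mean(x, w):
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

x  = [70.1, 68.3, 75.9, 80.2, 66.4]   # variable, e.g. rates by age group
w1 = [1.0, 2.0, 3.0, 2.0, 1.0]        # first weighting (one age structure)
w2 = [2.0, 2.0, 1.0, 1.0, 3.0]        # alternative weighting

r = [b / a for a, b in zip(w1, w2)]   # ratio of the weighting functions

# covariance of x and r under the first weighting
cov = weighted_mean([xi * ri for xi, ri in zip(x, r)], w1) \
      - weighted_mean(x, w1) * weighted_mean(r, w1)

diff_direct   = weighted_mean(x, w2) - weighted_mean(x, w1)
diff_identity = cov / weighted_mean(r, w1)

assert abs(diff_direct - diff_identity) < 1e-12
```

The identity is exact: expanding the covariance under the first weighting cancels the normalizing sums, leaving precisely the difference of the two averages.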
Changing mortality and average cohort life expectancy
Robert Schoen
2005-10-01
Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of period life expectancy as the leading measure of survivorship. An aggregate measure of period mortality seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
How to average logarithmic retrievals?
B. Funke
2012-04-01
Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps involved in averaging mixing ratios obtained from logarithmic retrievals.
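The linear-versus-logarithmic averaging bias discussed above can be illustrated with a toy simulation. This sketch assumes lognormally distributed abundances as a stand-in for "large natural variability"; it is not the paper's system simulator, and the distribution parameters are invented.

```python
# For positive, highly variable quantities, averaging the logarithms and
# exponentiating (logarithmic averaging) yields the geometric mean, which
# by Jensen's inequality sits below the arithmetic (linear) average.
import math
import random

random.seed(1)
# hypothetical trace-gas mixing ratios with large natural variability
samples = [math.exp(random.gauss(0.0, 1.0)) for _ in range(10000)]

linear_avg = sum(samples) / len(samples)
log_avg    = math.exp(sum(math.log(s) for s in samples) / len(samples))

# geometric mean <= arithmetic mean, with a large gap for sigma = 1
assert log_avg < linear_avg
assert 1.3 < linear_avg / log_avg < 2.0
```

For a lognormal with log-standard-deviation 1, the arithmetic mean exceeds the geometric mean by a factor of roughly exp(0.5) ≈ 1.65, which is the kind of bias the abstract warns about.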
Lagrangian averaging with geodesic mean.
Oliver, Marcel
2017-11-01
This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
Averaging in spherically symmetric cosmology
Coley, A. A.; Pelavas, N.
2007-01-01
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis
Averaging models: parameters estimation with the R-Average procedure
S. Noventa
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
Canela, Andrés; Klatt, Peter; Blasco, María A
2007-01-01
Most somatic cells of long-lived species undergo telomere shortening throughout life. Critically short telomeres trigger loss of cell viability in tissues, which has been related to alteration of tissue function and loss of regenerative capabilities in aging and aging-related diseases. Hence, telomere length is an important biomarker for aging and can be used in the prognosis of aging diseases. These facts highlight the importance of developing methods for telomere length determination that can be employed to evaluate telomere length during the human aging process. Telomere length quantification methods have improved greatly in accuracy and sensitivity since the development of the conventional telomeric Southern blot. Here, we describe the different methodologies recently developed for telomere length quantification, as well as their potential applications for human aging studies.
MARD—A moving average rose diagram application for the geosciences
Munro, Mark A.; Blenkinsop, Thomas G.
2012-12-01
MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
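The core smoothing step such a program performs can be sketched as a normalized moving average over a circular histogram, with wrap-around at 360°. The bin width, aperture and counts below are assumptions for illustration, not MARD's actual code.

```python
# Unweighted moving-average smoothing of binned directional data with
# circular wrap-around, the low-pass filtering step behind a smoothed
# rose diagram. All parameters and counts are invented.

def smooth_rose(counts, aperture_bins):
    """Moving average over a circular histogram; aperture_bins is the
    half-width of the averaging window, in bins."""
    n = len(counts)
    k = aperture_bins
    return [sum(counts[(i + j) % n] for j in range(-k, k + 1)) / (2 * k + 1)
            for i in range(n)]

# 36 bins of 10 degrees each; a noisy bimodal strike distribution (made up)
counts = [0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
          0, 1, 4, 8, 6, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
smoothed = smooth_rose(counts, aperture_bins=2)

# a normalized filter preserves the total count and damps the peaks
assert abs(sum(smoothed) - sum(counts)) < 1e-9
assert max(smoothed) < max(counts)
```

The modulo index gives the circular wrap-around: the bin at 355° is averaged together with the bin at 5°, which a plain (non-circular) moving average would miss.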
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, it is extremely difficult, if not totally impossible, to detect a complete sequence of levels without mixing other parities. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables
Ergodic averages via dominating processes
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.
Telomere length and depression
Wium-Andersen, Marie Kim; Ørsted, David Dynnes; Rode, Line
2017-01-01
BACKGROUND: Depression has been cross-sectionally associated with short telomeres as a measure of biological age. However, the direction and nature of the association is currently unclear. AIMS: We examined whether short telomere length is associated with depression cross-sectionally as well as prospectively and genetically. METHOD: Telomere length and three polymorphisms, TERT, TERC and OBFC1, were measured in 67,306 individuals aged 20-100 years from the Danish general population and associated with register-based attendance at hospital for depression and purchase of antidepressant medication. RESULTS: Attendance at hospital for depression was associated with short telomere length cross-sectionally, but not prospectively. Further, purchase of antidepressant medication was not associated with short telomere length cross-sectionally or prospectively. Mean follow-up was 7.6 years (range 0 …
High average power supercontinuum sources
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.
High Average Power Fiber Laser for Satellite Communications, Phase I
National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...
Economic issues of broiler production length
Szőllősi László
2014-01-01
The length of the broiler production cycle is an important factor when profitability is measured. This paper determines the effects of different market ages, down-time periods and overall broiler production cycle lengths on performance and economic parameters, based on Hungarian production and financial circumstances. A deterministic model was constructed to manage the function-like correlations of age-related daily weight gain, daily feed intake and daily mortality data. The results show that broiler production cycle length has a significant effect on production and economic performance. Cycle length is determined by the length of the down-time and grow-out periods. If the down-time period is reduced by one day, an average net income of EUR 0.55 per m² is realizable. However, neither the emerging costs nor the obtainable revenues are directly proportional to the length of the production period. Profit maximization is attainable if the production period is 41-42 days.
How does harvest size vary with hunting season length?
Sunde, Peter; Asferg, Tommy
2014-01-01
… season length (population management/ethical/other). In non-sedentary species, changes in bag size correlated positively with changes in season length (overall response: b = 0.54, 95% CI: 0.14-0.95): reducing the hunting season to 50% of its initial length would on average result in a 31% reduction (95 …
When good = better than average
Don A. Moore
2007-10-01
People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.
Autoregressive Moving Average Graph Filtering
Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert
2016-01-01
One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
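The first-order recursion underlying such ARMA graph filters can be sketched in a few lines. The graph, coefficients and iteration count below are made up; this shows only the basic stable building block (a rational graph frequency response obtained by iteration), not the authors' full design.

```python
# First-order ARMA graph filter: iterate y <- psi * L @ y + phi * x.
# When |psi| * ||L|| < 1 this converges to phi * (I - psi L)^{-1} x,
# i.e. the rational response phi / (1 - psi * mu) per graph frequency mu.
# The graph and coefficients are invented for illustration.
import numpy as np

# Laplacian of a 4-node path graph
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

psi, phi = 0.2, 1.0                   # 0.2 * ||L|| < 1, so the loop is stable
x = np.array([1.0, 0.0, 0.0, 0.0])    # graph signal: impulse at node 0

y = np.zeros(4)
for _ in range(200):                  # each step uses only neighbor exchanges
    y = psi * (L @ y) + phi * x

# steady state matches the direct (centralized) solve
y_exact = np.linalg.solve(np.eye(4) - psi * L, phi * x)
assert np.allclose(y, y_exact, atol=1e-10)
```

The point of the recursion is that each iteration needs only local (one-hop) exchanges of the current value, yet the limit implements a filter with an infinite impulse response on the graph.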
Averaging Robertson-Walker cosmologies
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-01-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.
Bruyere, M.; Vallee, A.; Collette, C.
1986-09-01
Extended fuel cycle length and burnup are currently offered by Framatome and Fragema in order to satisfy the needs of the utilities in terms of fuel cycle cost and of overall systems cost optimization. We intend to point out the consequences of an increased fuel cycle length and burnup on reactor safety, in order to determine whether the bounding safety analyses presented in the Safety Analysis Report are applicable and to evaluate the effect on plant licensing. This paper presents the results of this examination. The first part indicates the consequences of increased fuel cycle length and burnup on the nuclear data used in the bounding accident analyses. In the second part of this paper, the required safety reanalyses are presented and the impact on the safety margins of different fuel management strategies is examined. In addition, systems modifications which can be required are indicated
Bivariate copulas on the exponentially weighted moving average control chart
Sasigarn Kuvattana
2016-10-01
This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
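The EWMA statistic and its ARL can be estimated by Monte Carlo as sketched below. This simplified version uses independent exponential observations and invented chart parameters (smoothing constant, control limit, replication count), not the copula-dependent setup of the paper.

```python
# Monte Carlo estimate of the Average Run Length (ARL) of a one-sided
# EWMA chart for exponential observations. All parameters are made up.
import random

def run_length(mean, lam=0.1, limit=1.5, z0=1.0, max_t=100000):
    """Steps until the EWMA statistic z exceeds the upper control limit."""
    z = z0
    for t in range(1, max_t + 1):
        z = lam * random.expovariate(1.0 / mean) + (1 - lam) * z
        if z > limit:
            return t
    return max_t

random.seed(7)
reps = 300
arl_in_control = sum(run_length(mean=1.0) for _ in range(reps)) / reps
arl_shifted    = sum(run_length(mean=1.5) for _ in range(reps)) / reps

# an upward shift in the process mean should be signalled sooner
assert arl_shifted < arl_in_control
```

The in-control ARL (how long the chart runs before a false alarm) versus the out-of-control ARL (how quickly a real shift is flagged) is exactly the trade-off the abstract compares across copulas.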
Holland, Brendan J; Adcock, Jacqui L; Nesterenko, Pavel N; Peristyy, Anton; Stevenson, Paul G; Barnett, Neil W; Conlan, Xavier A; Francis, Paul S
2014-09-09
Sodium polyphosphate is commonly used to enhance chemiluminescence reactions with acidic potassium permanganate through a dual enhancement mechanism, but commercially available polyphosphates vary greatly in composition. We have examined the influence of polyphosphate composition and concentration on both the dual enhancement mechanism of chemiluminescence intensity and the stability of the reagent under analytically useful conditions. The average chain length (n) provides a convenient characterisation, but materials with similar values can exhibit markedly different distributions of phosphate oligomers. There is a minimum polyphosphate chain length (∼6) required for a large enhancement of the emission intensity, but no further advantage was obtained using polyphosphate materials with much longer average chain lengths. Provided there is a sufficient average chain length, the optimum concentration of polyphosphate is dependent on the analyte and, in some cases, may be lower than the quantities previously used in routine detection. However, the concentration of polyphosphate should not be lowered in permanganate reagents that have been partially reduced to form high concentrations of the key manganese(III) co-reactant, as this intermediate needs to be stabilised to prevent formation of insoluble manganese(IV).
Lifetime and Path Length of the Virtual Particle
Lyuboshitz, V.L.; Lyuboshitz, V.V.
2005-01-01
The concepts of the lifetime and path length of a virtual particle are introduced. It is shown that, near the mass surface of the real particle, these quantities constitute a 4-vector. At very high energies, the virtual particle can propagate over considerable (even macroscopic) distances. The formulas for the lifetime and path length of an ultrarelativistic virtual electron in the process of bremsstrahlung in the Coulomb field of a nucleus are obtained. The lifetime and path length of the virtual photon at its conversion into an electron-positron pair are discussed. The connection between the path length of the virtual particle and the coherence length (formation length) is analyzed
The average crossing number of equilateral random polygons
Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A
2003-01-01
In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n − n₀)ln(n − n₀) + b(n − n₀) + c, where a, b and c are constants depending on K and n₀ is the minimal number of segments required to form K. The profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length nₑ(K) at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type K') does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.
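Because the knot-type profile a(n − n₀)ln(n − n₀) + b(n − n₀) + c is linear in the constants a, b and c once n₀ is fixed, it can be fitted by ordinary least squares. The sketch below recovers invented coefficients from noiseless synthetic data; none of the numbers come from the paper.

```python
# Least-squares fit of the three-parameter ACN profile
#   acn(n) = a*(n - n0)*ln(n - n0) + b*(n - n0) + c
# which is linear in (a, b, c) for a given n0. Coefficients are invented.
import numpy as np

n0 = 3                                    # hypothetical minimal segment number
a_true, b_true, c_true = 0.19, 0.31, 4.2  # invented profile coefficients

n = np.arange(n0 + 1, 200)
m = n - n0
acn = a_true * m * np.log(m) + b_true * m + c_true  # noiseless synthetic data

# design matrix with one column per unknown coefficient
A = np.column_stack([m * np.log(m), m, np.ones_like(m, dtype=float)])
coef, *_ = np.linalg.lstsq(A, acn, rcond=None)

assert np.allclose(coef, [a_true, b_true, c_true], atol=1e-8)
```

In practice one would scan candidate n₀ values and keep the one minimizing the residual, since n₀ enters the model nonlinearly.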
Topological quantization of ensemble averages
Prodan, Emil
2009-01-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schrödinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states
Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which can improve the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
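Conventional time domain averaging, the comb-filter baseline that FTDA improves on, can be sketched as synchronous averaging of successive periods. The signal parameters below are invented for illustration; this is plain TDA, not the paper's FTDA.

```python
# Synchronous (time domain) averaging: fold a noisy periodic signal into
# its periods and average them, attenuating uncorrelated noise by ~1/sqrt(N).
# Signal and noise parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
period, n_periods = 128, 64
t = np.arange(period) / period
clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

# noisy record containing n_periods exact repetitions of the pattern
noisy = np.tile(clean, n_periods) + rng.normal(0.0, 1.0, period * n_periods)

# fold into an (n_periods x period) array and average down the columns
averaged = noisy.reshape(n_periods, period).mean(axis=0)

err_single = np.sqrt(np.mean((noisy[:period] - clean) ** 2))
err_avg    = np.sqrt(np.mean((averaged - clean) ** 2))
assert err_avg < err_single / 4   # noise reduced roughly by sqrt(64) = 8
```

The reshape step is where the PCE discussed in the abstract enters in practice: if the assumed period does not divide the record exactly, each row is cut slightly wrong and the averaged waveform is smeared.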
Pion nucleus scattering lengths
Huang, W.T.; Levinson, C.A.; Banerjee, M.K.
1971-09-01
Soft pion theory and the Fubini-Furlan mass dispersion relations have been used to analyze the pion nucleon scattering lengths and obtain a value for the sigma commutator term. With this value, and using the same principles, scattering lengths have been predicted for nuclei with mass number ranging from 6 to 23. Agreement with experiment is very good. For those who believe in the Gell-Mann-Levy sigma model, the evaluation of the commutator yields the value 0.26(m_σ/m_π)² for the sigma nucleon coupling constant. The large dispersive corrections for the isosymmetric case imply that the basic idea behind many of the soft pion calculations, namely, slow variation of matrix elements from the soft pion limit to the physical pion mass, is not correct. 11 refs., 1 fig., 3 tabs
Relativistic length agony continued
Redžić D.V.
2014-01-01
We made an attempt to remedy recent confusing treatments of some basic relativistic concepts and results. Following the argument presented in an earlier paper (Redžić 2008b), we discussed the misconceptions that are recurrent points in the literature devoted to teaching relativity, such as: there is no change in the object in Special Relativity, the illusory character of relativistic length contraction, stresses and strains induced by Lorentz contraction, and related issues. We gave several examples of the traps of everyday language that lurk in Special Relativity. To remove a possible conceptual and terminological muddle, we made a distinction between relativistic length reduction and relativistic FitzGerald-Lorentz contraction, corresponding to a passive and an active aspect of length contraction, respectively; we pointed out that both aspects have fundamental dynamical content. As an illustration of our considerations, we discussed briefly the Dewan-Beran-Bell spaceship paradox and the 'pole in a barn' paradox. [Projekat Ministarstva nauke Republike Srbije, br. 171028]
Inverse methods for estimating primary input signals from time-averaged isotope profiles
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
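The minimum-length solution of Am = d described above can be sketched numerically. The following is a minimal, hypothetical 1-D illustration (not the authors' tooth-enamel averaging matrix): a boxcar smoothing matrix A stands in for amelogenesis and sampling, and the Moore-Penrose pseudoinverse recovers the minimum-norm input that exactly reproduces the averaged profile.

```python
import numpy as np

# Hypothetical 1-D example: a true input signal is smeared by a
# moving-average kernel (a stand-in for the amelogenesis/sampling
# matrix A described above), then recovered as the minimum-length
# solution of A m = d via the Moore-Penrose pseudoinverse.
n = 12
true_m = np.zeros(n)
true_m[3:7] = 1.0                       # a square-wave "dietary switch"

# Averaging matrix: each measurement mixes 3 adjacent input values.
A = np.zeros((n - 2, n))
for i in range(n - 2):
    A[i, i:i + 3] = 1.0 / 3.0

d = A @ true_m                          # time-averaged "isotope profile"
m_est = np.linalg.pinv(A) @ d           # minimum-norm least-squares solution

# The estimate reproduces the measured profile exactly...
assert np.allclose(A @ m_est, d)
# ...and among all exact solutions it has the smallest Euclidean norm.
assert np.linalg.norm(m_est) <= np.linalg.norm(true_m) + 1e-9
```

As the abstract notes, in practice the accuracy of the reconstructed input depends on measurement error and the isotopic structure of the profile; this sketch omits noise entirely.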
The average Indian female nose.
Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh
2011-12-01
This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.
Smarandache, Florentin
2013-09-01
Let's denote by V_E the speed of the Earth and by V_R the speed of the rocket. Both travel in the same direction on parallel trajectories. We consider the Earth as a moving (at a constant speed V_E − V_R) spacecraft of almost spherical form, whose radius is r and thus the diameter 2r, and the rocket as standing still. The non-proper length of Earth's diameter, as measured by the astronaut, is L = 2r√(1 − |V_E − V_R|²/c²), which is less than 2r as seen from the rocket! Also, let's assume that the astronaut is lying down in the direction of motion. Therefore, he would also shrink, or he would die!
P. R. Parthasarathy
2001-01-01
The transient solution is obtained analytically using continued fractions for a state-dependent birth-death queue in which potential customers are discouraged by the queue length. This queueing system is then compared with the well-known infinite server queueing system which has the same steady state solution as the model under consideration, whereas their transient solutions are different. A natural measure of speed of convergence of the mean number in the system to its stationarity is also computed.
Kidney Length in Normal Korean Children
Kim, In One; Cheon, Jung Eun; Lee, Young Seok; Lee, Sun Wha; Kim, Ok Hwa; Kim, Ji Hye; Kim, Hong Dae; Sim, Jung Suk
2010-01-01
Renal length offers important information for detecting or following up various renal diseases. The purpose of this study was to determine the kidney length of normal Korean children in relation to age, height, weight, body surface area (BSA), and body mass index (BMI). Children between 1 month and 15 years of age without urological abnormality were recruited. Children below the 3rd percentile and over the 97th percentile for height or weight were excluded. Both renal lengths were measured in the prone position three times and then averaged by experienced radiologists. The mean length and standard deviation for each age group were obtained, and a regression equation was calculated between renal length and age, weight, height, BSA, and BMI, respectively. Renal length was measured in 550 children. Renal length grows rapidly until 24 months, while the growth rate is reduced thereafter. The regression equation for age is: renal length (mm) = 45.953 + 1.064 × age (months, ≤ 24 months) (R² = 0.720) or 62.173 + 0.203 × age (months, > 24 months) (R² = 0.711). The regression equation for height is: renal length (mm) = 24.494 + 0.457 × height (cm) (R² = 0.894). The regression equation for weight is: renal length (mm) = 38.342 + 2.117 × weight (kg, ≤ 18 kg) (R² = 0.852) or 64.498 + 0.646 × weight (kg, > 18 kg) (R² = 0.651). The regression equation for BSA is: renal length (mm) = 31.622 + 61.363 × BSA (m², ≤ 0.7) (R² = 0.857) or 52.717 + 29.959 × BSA (m², > 0.7) (R² = 0.715). The regression equation for BMI is: renal length (mm) = 44.474 + 1.163 × BMI (R² = 0.079). This study provides data on normal renal length and its association with age, weight, height, BSA and BMI. The results of this study will guide the detection and follow-up of renal diseases in Korean children.
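The piecewise regression equations above can be transcribed directly into code. The sketch below implements three of them (age, height, weight); the coefficients are taken verbatim from the abstract and apply only to the cited Korean cohort.

```python
# Direct transcription of the piecewise regression equations above
# (renal length in mm; age in months, height in cm, weight in kg).
# Illustrative only: the coefficients come from the cited Korean cohort.

def renal_length_from_age(age_months: float) -> float:
    if age_months <= 24:
        return 45.953 + 1.064 * age_months
    return 62.173 + 0.203 * age_months

def renal_length_from_height(height_cm: float) -> float:
    return 24.494 + 0.457 * height_cm

def renal_length_from_weight(weight_kg: float) -> float:
    if weight_kg <= 18:
        return 38.342 + 2.117 * weight_kg
    return 64.498 + 0.646 * weight_kg

# Example: expected renal length for a 12-month-old.
print(round(renal_length_from_age(12), 1))   # 45.953 + 1.064*12 = 58.721 -> 58.7
```

Note the breakpoints (24 months, 18 kg) mirror the two growth regimes the study reports: rapid renal growth up to 24 months, slower growth thereafter.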
Correlation length estimation in a polycrystalline material model
Simonovski, I.; Cizelj, L.
2005-01-01
This paper deals with the correlation length estimated from a mesoscopic model of a polycrystalline material. The correlation length can be used in some macroscopic material models as a material parameter that describes the internal length. It can be estimated directly from the strain and stress fields calculated with a finite-element model that explicitly accounts for selected mesoscopic features such as the random orientation, shape and size of the grains. A crystal plasticity material model was applied in the finite-element analysis. Different correlation lengths were obtained depending on the set of crystallographic orientations used. We found that the different sets of crystallographic orientations affect the overall level of the correlation length; however, as the external load is increased, the behaviour of the correlation length is similar in all the analyzed cases. The correlation lengths also changed with the macroscopic load. If the load is below the yield strength, the correlation lengths are constant and slightly higher than the average grain size. The correlation length can therefore be considered an indicator of the first plastic deformations in the material. Increasing the load above the yield strength creates shear bands that temporarily increase the values of the correlation lengths calculated from the strain fields. With a further load increase the correlation lengths decrease slightly but stay above the average grain size. (author)
Cervical length measurement: comparison of transabdominal and transvaginal approach
Westerway, Sue C; Pedersen, Lars Henning; Hyett, Jon
2015-01-01
to external cervical os. Bland-Altman plots and Wilcoxon signed rank test were used to evaluate differences between TA and TV measurements. Results: The validity of the TA method depended on cervical length. Although the TA method underestimated cervical length by 2.0 mm on average (P ... plots showed an inverse trend with shorter cervixes. In women with a cervix test to detect cervical length
Averaging of nonlinearity-managed pulses
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-01-01
We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons
Role of spatial averaging in multicellular gradient sensing.
Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew
2016-05-20
Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
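The subtraction argument above can be checked with a toy Monte Carlo calculation (illustrative numbers only, not the LEGI model): the error of a difference is Var(a − b) = Var(a) + Var(b) − 2 Cov(a, b), so averaging that lowers both the variances and the covariance can leave the difference noisier than before.

```python
import numpy as np

# Toy illustration of the subtraction argument: gradient sensing
# compares two measured concentrations, so its error is
# Var(a - b) = Var(a) + Var(b) - 2 Cov(a, b).
# Spatial averaging lowers the individual variances, but if it also
# destroys the covariance, the difference can become *noisier*.
rng = np.random.default_rng(0)

def var_of_difference(var, cov, n=200_000):
    cov_mat = [[var, cov], [cov, var]]
    a, b = rng.multivariate_normal([0.0, 0.0], cov_mat, size=n).T
    return np.var(a - b)

# Unaveraged detector: high variance, high covariance.
v1 = var_of_difference(var=1.0, cov=0.8)    # ~ 1 + 1 - 1.6 = 0.4
# Averaged detector: halved variance, but covariance lost.
v2 = var_of_difference(var=0.5, cov=0.0)    # ~ 0.5 + 0.5 = 1.0
assert v2 > v1   # averaging made the subtraction less precise
```

The numerical values (variance 1.0 vs 0.5, covariance 0.8 vs 0) are arbitrary; the point is only the sign of the comparison, which mirrors the mechanism the abstract attributes to local excitation-global inhibition.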
Understanding coastal morphodynamic patterns from depth-averaged sediment concentration
Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.
This review highlights the important role of the depth-averaged sediment concentration (DASC) in understanding the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand ridges.
The Distribution of Lightning Channel Lengths in Northern Alabama Thunderstorms
Peterson, H. S.; Koshak, W. J.
2010-01-01
Lightning is well known to be a major source of tropospheric NOx, and in most cases is the dominant natural source (Huntrieser et al 1998, Jourdain and Hauglustaine 2001). Production of NOx by a segment of a lightning channel is a function of channel segment energy density and channel segment altitude. A first estimate of NOx production by a lightning flash can be found by multiplying production per segment [typically 10⁴ J/m; Hill (1979)] by the total length of the flash's channel. The purpose of this study is to determine the average channel length for lightning flashes near NALMA in 2008, and to compare the average channel length of ground flashes to that of cloud flashes.
The average size of ordered binary subgraphs
van Leeuwen, J.; Hartel, Pieter H.
To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels, daily and hourly; the real-time model was also used at 5 min intervals. The results showed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sperm length evolution in the fungus-growing ants
Baer, B.; Dijkstra, M. B.; Mueller, U. G.
2009-01-01
-growing ants, representing 9 of the 12 recognized genera, and mapped these onto the ant phylogeny. We show that average sperm length across species is highly variable and decreases with mature colony size in basal genera with singly mated queens, suggesting that sperm production or storage constraints affect the evolution of sperm length. Sperm length does not decrease further in multiply mating leaf-cutting ants, despite substantial further increases in colony size. In a combined analysis, sexual dimorphism explained 63.1% of the variance in sperm length between species. As colony size was not a significant predictor in this analysis, we conclude that sperm production trade-offs in males have been the major selective force affecting sperm length across the fungus-growing ants, rather than storage constraints in females. The relationship between sperm length and sexual dimorphism remained robust...
Averaging for solitons with nonlinearity management
Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.
2003-01-01
We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations
DSCOVR Magnetometer Level 2 One Minute Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data
DSCOVR Magnetometer Level 2 One Second Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data
Spacetime averaging of exotic singularity universes
Dabrowski, Mariusz P.
2011-01-01
Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
NOAA Average Annual Salinity (3-Zone)
California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
40 CFR 76.11 - Emissions averaging.
2010-07-01
40 CFR 76.11 (2010-07-01): Protection of Environment, Environmental Protection Agency (Continued), Air Programs (Continued), Acid Rain Nitrogen Oxides Emission Reduction Program, § 76.11 Emissions averaging. (a) General...
Determinants of College Grade Point Averages
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
A virtual pebble game to ensemble average graph rigidity.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2015-01-01
The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure with an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is an MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG quantitatively estimates the ensemble-average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
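The Maxwell constraint counting bound mentioned above is simple enough to state in a few lines. The sketch below is a minimal body-bar version (not the PG or VPG algorithms themselves): each rigid body carries 6 DOF, each bar removes at most one, and the 6 trivial rigid-body motions are subtracted; ignoring where the bars sit is exactly the mean-field simplification the abstract describes.

```python
# Minimal sketch of Maxwell constraint counting (MCC) for a body-bar
# network. Each rigid body contributes 6 degrees of freedom; each bar
# removes at most one; 6 global rigid-body motions are excluded. MCC
# ignores where the bars are placed, so this is only a lower bound on
# the internal DOF, not the Pebble Game's exact count.

def maxwell_dof_lower_bound(n_bodies: int, n_bars: int) -> int:
    return max(0, 6 * n_bodies - n_bars - 6)

# Example: 10 bodies chained by 5-bar hinges (9 hinges, 45 bars).
# Each hinge leaves one rotational DOF, so 9 floppy modes survive.
print(maxwell_dof_lower_bound(10, 45))   # 60 - 45 - 6 = 9
```

A negative raw count (clamped to zero here) flags a globally over-constrained network, which is how MCC is used as a quick under-/over-constrained test before running the full PG.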
Computation of the bounce-average code
Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.
1977-01-01
The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
The relationship of protein conservation and sequence length
Panchenko Anna R
2002-11-01
Background: In general, the length of a protein sequence is determined by its function, and the wide variance in the lengths of an organism's proteins reflects the diversity of specific functional roles for these proteins. However, additional evolutionary forces that affect the length of a protein may be revealed by studying the length distributions of proteins evolving under weaker functional constraints. Results: We performed sequence comparisons to distinguish highly conserved and poorly conserved proteins from the bacterium Escherichia coli, the archaeon Archaeoglobus fulgidus, and the eukaryotes Saccharomyces cerevisiae, Drosophila melanogaster, and Homo sapiens. For all organisms studied, the conserved and nonconserved proteins have strikingly different length distributions. The conserved proteins are, on average, longer than the poorly conserved ones, and the length distributions for the poorly conserved proteins have a relatively narrow peak, in contrast to the conserved proteins, whose lengths spread over a wider range of values. For the two prokaryotes studied, the poorly conserved proteins approximate the minimal length distribution expected for a diverse range of structural folds. Conclusions: There is a relationship between protein conservation and sequence length. For all the organisms studied, there seems to be a significant evolutionary trend favoring shorter proteins in the absence of other, more specific functional constraints.
Rotational averaging of multiphoton absorption cross sections
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic
Should the average tax rate be marginalized?
Feldman, N. E.; Katuščák, Peter
No. 304 (2006), pp. 1-65. ISSN 1211-3298. Institutional research plan: CEZ:MSM0021620846. Keywords: tax, labor supply, average tax. Subject RIV: AH - Economics. http://www.cerge-ei.cz/pdf/wp/Wp304.pdf
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Tree Diameter Effects on Cost and Productivity of Cut-to-Length Systems
Matthew A. Holtzscher; Bobby L. Lanford
1997-01-01
Currently, there is a lack of economic information concerning cut-to-length harvesting systems. This study examined and measured the different costs of operating cut-to-length logging equipment over a range of average stand diameters at breast height. Three different cut-to-length logging systems were examined in this study. Systems included: 1) feller-buncher/manual/...
Nonequilibrium statistical averages and thermo field dynamics
Marinaro, A.; Scarpetta, Q.
1984-01-01
An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles
An approximate analytical approach to resampling averages
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
Sadhukhan, Debasis; Roy, Sudipto Singha; Rakshit, Debraj; Prabhu, R; Sen De, Aditi; Sen, Ujjwal
2016-01-01
Classical correlation functions of ground states typically decay exponentially and polynomially, respectively, for gapped and gapless short-range quantum spin systems. In such systems, entanglement decays exponentially even at the quantum critical points. However, quantum discord, an information-theoretic quantum correlation measure, survives over long lattice distances. We investigate the effects of quenched disorder on the quantum correlation lengths of quenched-averaged entanglement and quantum discord in the anisotropic XY and XYZ spin glass and random field chains. We find that there is virtually neither reduction nor enhancement in the entanglement length, while the quantum discord length increases significantly with the introduction of quenched disorder.
Improved averaging for non-null interferometry
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
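The rejection-and-masking idea described above can be sketched briefly. The following is a simplified stand-in for the authors' algorithm, not a reproduction of it: whole phase maps whose valid area falls below an assumed threshold (`min_valid_fraction`) are rejected, invalid pixels (NaNs) in the survivors are masked, and the per-pixel mean and standard deviation are computed over what remains.

```python
import numpy as np

# Simplified sketch of robust phase-map averaging: reject maps whose
# valid (finite) area is too small, mask NaN pixels in the rest, and
# compute per-pixel mean and standard deviation over the survivors.
# min_valid_fraction is an assumed tuning parameter, analogous to the
# run-time rejection criteria mentioned in the abstract.

def robust_phase_average(maps, min_valid_fraction=0.9):
    kept = [m for m in maps
            if np.isfinite(m).mean() >= min_valid_fraction]
    stack = np.stack(kept)                # shape (n_maps, H, W)
    mean = np.nanmean(stack, axis=0)      # per-pixel average phase
    std = np.nanstd(stack, axis=0)        # per-pixel variability estimate
    return mean, std, len(kept)

good = [np.zeros((4, 4)) + i for i in range(3)]   # three clean maps: 0, 1, 2
bad = np.full((4, 4), np.nan)                     # one map that is all voids
mean, std, n_used = robust_phase_average(good + [bad])
assert n_used == 3 and np.allclose(mean, 1.0)     # mean of 0, 1, 2 per pixel
```

This captures only the large-defect rejection step; the published algorithm additionally prunes small unreliable regions and removes alignment drift from the variance estimate.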
Does length or neighborhood size cause the word length effect?
Jalbert, Annie; Neath, Ian; Surprenant, Aimée M
2011-10-01
Jalbert, Neath, Bireta, and Surprenant (2011) suggested that past demonstrations of the word length effect, the finding that words with fewer syllables are recalled better than words with more syllables, included a confound: The short words had more orthographic neighbors than the long words. The experiments reported here test two predictions that would follow if neighborhood size is a more important factor than word length. In Experiment 1, we found that concurrent articulation removed the effect of neighborhood size, just as it removes the effect of word length. Experiment 2 demonstrated that this pattern is also found with nonwords. For Experiment 3, we factorially manipulated length and neighborhood size, and found only effects of the latter. These results are problematic for any theory of memory that includes decay offset by rehearsal, but they are consistent with accounts that include a redintegrative stage that is susceptible to disruption by noise. The results also confirm the importance of lexical and linguistic factors on memory tasks thought to tap short-term memory.
Keeping disease at arm's length
Lassen, Aske Juul
2015-01-01
active ageing change everyday life with chronic disease, and how do older people combine an active life with a range of chronic diseases? The participants in the study use activities to keep their diseases at arm’s length, and this distancing of disease at the same time enables them to engage in social and physical activities at the activity centre. In this way, keeping disease at arm’s length is analysed as an ambiguous health strategy. The article shows the importance of looking into how active ageing is practised, as active ageing seems to work well in the everyday life of the older people by not giving emphasis to disease. The article is based on ethnographic fieldwork and uses vignettes of four participants to show how they each keep diseases at arm’s length.
Continuously variable focal length lens
Adams, Bernhard W; Chollet, Matthieu C
2013-12-17
A material preferably in crystal form having a low atomic number such as beryllium (Z=4) provides for the focusing of x-rays in a continuously variable manner. The material is provided with plural spaced curvilinear, optically matched slots and/or recesses through which an x-ray beam is directed. The focal length of the material may be decreased or increased by increasing or decreasing, respectively, the number of slots (or recesses) through which the x-ray beam is directed, while fine tuning of the focal length is accomplished by rotation of the material so as to change the path length of the x-ray beam through the aligned cylindrical slots. X-ray analysis of a fixed point in a solid material may be performed by scanning the energy of the x-ray beam while rotating the material to maintain the beam's focal point at a fixed point in the specimen undergoing analysis.
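The inverse scaling of focal length with the number of refracting elements can be sketched with the standard thin-lens formula for parabolic compound refractive lenses, f = R/(2Nδ); the apex radius and refractive decrement below are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch (not from the patent): for a stack of N identical
# parabolic refractive x-ray lenses with apex radius R and refractive
# decrement delta, the thin-lens focal length is f = R / (2 * N * delta),
# so adding refracting elements (slots) shortens the focal length.

def crl_focal_length(radius_m, n_elements, delta):
    """Focal length (m) of a compound refractive lens stack."""
    return radius_m / (2 * n_elements * delta)

# Hypothetical numbers: 0.5 mm apex radius; the refractive decrement of
# beryllium near 10 keV is on the order of 3.4e-6.
delta_be = 3.4e-6
f1 = crl_focal_length(0.5e-3, 10, delta_be)
f2 = crl_focal_length(0.5e-3, 20, delta_be)
# Doubling the element count halves the focal length (f ∝ 1/N).
```

Rotating the stack to lengthen the beam path plays the role of fine tuning; the formula above only captures the coarse dependence on element count.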
CEBAF Upgrade Bunch Length Measurements
Ahmad, Mahmoud [Old Dominion Univ., Norfolk, VA (United States)]
2016-05-01
Many accelerators use short electron bunches and measuring the bunch length is important for efficient operations. CEBAF needs a suitable bunch length because bunches that are too long will result in beam interruption to the halls due to excessive energy spread and beam loss. In this work, bunch length is measured by invasive and non-invasive techniques at different beam energies. Two new measurement techniques have been commissioned; a harmonic cavity showed good results compared to expectations from simulation, and a real time interferometer is commissioned and first checkouts were performed. Three other techniques were used for measurements and comparison purposes without modifying the old procedures. Two of them can be used when the beam is not compressed longitudinally while the other one, the synchrotron light monitor, can be used with compressed or uncompressed beam.
Kondo length in bosonic lattices
Giuliano, Domenico; Sodano, Pasquale; Trombettoni, Andrea
2017-09-01
Motivated by the fact that the low-energy properties of the Kondo model can be effectively simulated in spin chains, we study the realization of the effect with bond impurities in ultracold bosonic lattices at half filling. After presenting a discussion of the effective theory and of the mapping of the bosonic chain onto a lattice spin Hamiltonian, we provide estimates for the Kondo length as a function of the parameters of the bosonic model. We point out that the Kondo length can be extracted from the integrated real-space correlation functions, which are experimentally accessible quantities in experiments with cold atoms.
Continuous lengths of oxide superconductors
Kroeger, Donald M.; List, III, Frederick A.
2000-01-01
A layered oxide superconductor prepared by depositing a superconductor precursor powder on a continuous length of a first substrate ribbon. A continuous length of a second substrate ribbon is overlaid on the first substrate ribbon. Sufficient pressure is applied to form a bound layered superconductor precursor powder between the first substrate ribbon and the second substrate ribbon. The layered superconductor precursor is then heat treated to establish the oxide superconducting phase. The layered oxide superconductor has a smooth interface between the substrate and the oxide superconductor.
Summary of neutron scattering lengths
Koester, L.
1981-12-01
All available neutron-nuclei scattering lengths are collected together with their error bars in a uniform way. Bound scattering lengths are given for the elements, the isotopes, and the various spin-states. They are discussed in the sense of their use as basic parameters for many investigations in the field of nuclear and solid state physics. The data bank is available on magnetic tape, too. Recommended values and a map of these data serve for an uncomplicated use of these quantities. (orig.)
Overview of bunch length measurements
Lumpkin, A. H.
1999-01-01
An overview of particle and photon beam bunch length measurements is presented in the context of free-electron laser (FEL) challenges. Particle-beam peak current is a critical factor in obtaining adequate FEL gain for both oscillators and self-amplified spontaneous emission (SASE) devices. Since measurement of charge is a standard measurement, the bunch length becomes the key issue for ultrashort bunches. Both time-domain and frequency-domain techniques are presented in the context of using electromagnetic radiation over eight orders of magnitude in wavelength. In addition, the measurement of microbunching in a micropulse is addressed.
Asynchronous Gossip for Averaging and Spectral Ranking
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged description of atomic systems [1]. In a recent experiment on Fe[2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of details/averaging. We will take advantage of this feature to check the effect of averaging with comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3]. M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20(th) century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
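The "moving average of the previous decade" is a plain trailing mean; a minimal sketch with hypothetical misery-index values (the function and data are our own illustration, not the authors' code):

```python
# Trailing moving average: for each year, average the misery index over
# the preceding `window` years (the paper's best fit used an 11-year window).

def trailing_average(series, window):
    """Map year -> mean of the `window` years strictly before it."""
    out = {}
    years = sorted(series)
    for i, year in enumerate(years):
        if i >= window:
            prev = [series[y] for y in years[i - window:i]]
            out[year] = sum(prev) / window
    return out

# Hypothetical misery values (inflation + unemployment, in %):
misery = {1970 + k: 10 + (k % 3) for k in range(15)}
ma11 = trailing_average(misery, 11)
```

Only years with a full preceding decade of data receive a value, which mirrors why the correlation can only be evaluated from the Depression onward.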
Exploiting scale dependence in cosmological averaging
Mattsson, Teppo; Ronkainen, Maria
2008-01-01
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion.
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
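As a rough illustration of gradient-based extremum seeking (a toy sketch under our own assumptions, not an algorithm taken from the book), one can dither the input, demodulate the measured output by the dither sign to estimate the gradient, and climb it:

```python
import random

# Toy dither-based extremum seeking for a single parameter: a random +/-1
# dither perturbs the input; demodulating the two measured outputs by the
# dither sign yields a gradient estimate that drives a gradient-ascent step.
# (For a quadratic map this demodulated estimate happens to be exact.)

def extremum_seek(f, theta0, amp=0.1, gain=0.1, steps=500, seed=0):
    rng = random.Random(seed)
    theta = theta0
    for _ in range(steps):
        d = rng.choice((-1.0, 1.0))   # dither sign
        grad = d * (f(theta + amp * d) - f(theta - amp * d)) / (2 * amp)
        theta += gain * grad          # climb the estimated gradient
    return theta

# Seek the maximizer of a toy "performance map" J(theta) = -(theta - 2)^2:
theta_star = extremum_seek(lambda t: -(t - 2.0) ** 2, theta0=0.0)
```

The book's algorithms replace the deterministic step size with stochastic perturbations and give convergence guarantees; this sketch only conveys the perturb-demodulate-update loop.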
Aperture averaging in strong oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.
Regional averaging and scaling in relativistic cosmology
Buchert, Thomas; Carfora, Mauro
2002-01-01
Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Average-case analysis of numerical problems
2000-01-01
The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.
Grassmann Averages for Scalable Robust PCA
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—"big data" implies "big outliers". While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can … to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements…
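The element-wise trimmed average mentioned above can be sketched as follows (a simplified stand-in for illustration only, not the TGA algorithm itself): for each coordinate, discard the largest and smallest fraction of values and average the rest, which is what confers robustness to pixel outliers.

```python
# Element-wise trimmed average: per coordinate, cut the extreme `trim`
# fraction from each end of the sorted values, then take the mean.

def trimmed_mean(values, trim=0.2):
    v = sorted(values)
    k = int(len(v) * trim)                 # number to cut from each end
    kept = v[k:len(v) - k] if k else v
    return sum(kept) / len(kept)

def elementwise_trimmed_average(vectors, trim=0.2):
    """Trimmed average of a list of equal-length vectors, coordinate-wise."""
    return [trimmed_mean(col, trim) for col in zip(*vectors)]

data = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [100.0, -50.0]]  # one outlier
robust = elementwise_trimmed_average(data, trim=0.25)
# The gross outlier [100.0, -50.0] is discarded coordinate-wise.
```

An ordinary mean of the same data would be dragged to roughly [25.75, -11.0]; the trimmed version stays near the inlier cluster.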
Evaluation of Efficient Line Lengths for Better Readability
Zahid Hussain
2012-01-01
In this paper the major findings of a formal experiment on on-screen text line lengths are presented. The experiment examined the effects of four different line lengths on reading speed and reading efficiency, where efficiency is defined as a combination of reading speed and accuracy. Sixteen people between the ages of 24 and 36 participated in the experiment. The subjects had to read four different texts with an average length of around 2000 characters. The texts contained substitution words, which had to be detected by the subjects to measure reading accuracy. Besides objective measures like reading speed and accuracy, the subjects were asked to vote subjectively on their reading experience. The results from our objective measures show strong similarities to those of previous work by different researchers. The absolute reading speed grows as the line length grows from 30 to 120 CPL (characters per line). The measured reading efficiency, however, does not grow steadily, although a growing trend can be seen. This is due to the fact that the test persons found, on average, more substitution words in the 60 CPL text than in the 30 and 90 CPL texts. Reading speed seems to increase as line length increases, but overall comprehension seems to peak at medium line lengths. As in previous studies, our test persons also prefer the medium (60 and 90 CPL) line lengths, although they perform better when reading longer lines. In the overall subjective opinion, 13 out of 16 test persons selected the 60 or 90 CPL line length as their favourite. The literature does not truly provide a scientific explanation for the difference between objective performance and subjective preference. A natural hypothesis would be that the line length that is fastest to read would also feel most comfortable to readers, but in light of this and the earlier research it seems that this is not the case.
Dorsal Phalloplasty to Preserve Penis Length after Penile Prosthesis Implantation
Osama Shaeer
2017-03-01
Objectives: Following penile prosthesis implantation (PPI), patients may complain of a decrease in visible penis length. A dorsal phalloplasty defines the penopubic junction by tacking pubic skin to the pubis, revealing the base of the penis. This study aimed to evaluate the efficacy of a dorsal phalloplasty in increasing the visible penis length following PPI. Methods: An inflatable penile prosthesis was implanted in 13 patients with severe erectile dysfunction (ED) at the Kamal Shaeer Hospital, Cairo, Egypt, from January 2013 to May 2014. During the surgery, nonabsorbable tacking sutures were used to pin the pubic skin to the pubis through the same penoscrotal incision. Intraoperative penis length was measured before and after the dorsal phalloplasty. Overall patient satisfaction was measured on a 5-point rating scale, and patients were asked to subjectively compare their postoperative penis length with memories of their penis length before the onset of ED. Results: Intraoperatively, the dorsal phalloplasty increased the visible length of the erect penis by an average of 25.6%. The average length before and after tacking was 10.2 ± 2.9 cm and 13.7 ± 2.8 cm, respectively (P < 0.002). Postoperatively, seven patients (53.8%) reported a longer penis, five patients (38.5%) reported no change in length and one patient (7.7%) reported a slightly shorter penis. The mean overall patient satisfaction score was 4.9 ± 0.3. None of the patients developed postoperative complications. Conclusion: A dorsal phalloplasty during PPI is an effective method of increasing visible penis length, thereby minimising the impression of a shorter penis after implantation.
Diet, nutrition and telomere length.
Paul, Ligi
2011-10-01
The ends of human chromosomes are protected by DNA-protein complexes termed telomeres, which prevent the chromosomes from fusing with each other and from being recognized as a double-strand break by DNA repair proteins. Due to the incomplete replication of linear chromosomes by DNA polymerase, telomeric DNA shortens with repeated cell divisions until the telomeres reach a critical length, at which point the cells enter senescence. Telomere length is an indicator of biological aging, and dysfunction of telomeres is linked to age-related pathologies like cardiovascular disease, Parkinson disease, Alzheimer disease and cancer. Telomere length has been shown to be positively associated with nutritional status in human and animal studies. Various nutrients influence telomere length potentially through mechanisms that reflect their role in cellular functions including inflammation, oxidative stress, DNA integrity, DNA methylation and activity of telomerase, the enzyme that adds the telomeric repeats to the ends of the newly synthesized DNA. Copyright © 2011 Elsevier Inc. All rights reserved.
Ben Ruktantichoke
2011-06-01
In this study, water flowed through a straight horizontal plastic tube placed at the bottom of a large tank of water. The effect of changing the length of the tubing on the velocity of flow was investigated. It was found that the Hagen-Poiseuille equation is valid when the effect of water entering the tube is accounted for.
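The Hagen-Poiseuille law referred to above, Q = πr⁴ΔP/(8μL), can be checked numerically; the fluid and tube parameters below are hypothetical, not the study's measured values:

```python
import math

# Hagen-Poiseuille law for laminar flow through a straight tube:
# volumetric flow rate Q = pi * r^4 * dP / (8 * mu * L).

def poiseuille_flow(radius, pressure_drop, viscosity, length):
    """Laminar volumetric flow rate (m^3/s)."""
    return math.pi * radius ** 4 * pressure_drop / (8 * viscosity * length)

# Water (mu ~ 1.0e-3 Pa.s), 1 mm tube radius, 100 Pa pressure drop:
q1 = poiseuille_flow(1e-3, 100.0, 1.0e-3, 0.5)   # 0.5 m tube
q2 = poiseuille_flow(1e-3, 100.0, 1.0e-3, 1.0)   # doubling length halves Q
```

The inverse proportionality between flow rate and tube length is exactly the dependence the experiment probes, before correcting for entrance effects.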
Finite length Taylor Couette flow
Streett, C. L.; Hussaini, M. Y.
1987-01-01
Axisymmetric numerical solutions of the unsteady Navier-Stokes equations for flow between concentric rotating cylinders of finite length are obtained by a spectral collocation method. These representative results pertain to the two-cell/one-cell exchange process, and are compared with recent experiments.
Chaotic behaviour of a pendulum with variable length
Bartuccelli, M; Christiansen, P L; Muto, V; Soerensen, M P; Pedersen, N F
1987-08-01
The Melnikov function for the prediction of Smale horseshoe chaos is applied to a driven damped pendulum with variable length. Depending on the parameters, it is shown that this dynamical system undertakes heteroclinic bifurcations which are the source of the unstable chaotic motion. The analytical results are illustrated by new numerical simulations. Furthermore, using the averaging theorem, the stability of the subharmonics is studied.
Generalized Jackknife Estimators of Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...
Average beta measurement in EXTRAP T1
Hedin, E.R.
1988-12-01
Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS
2005-01-01
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with an average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam than a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Bayesian Averaging is Well-Temperated
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...
Gibbs equilibrium averages and Bogolyubov measure
Sankovich, D.P.
2011-01-01
Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose systems is considered. We show that Gibbs equilibrium averages of Bose operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure.
High average-power induction linacs
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
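The duty-factor arithmetic behind "high average power" can be illustrated as follows; the repetition rate below is a hypothetical figure for illustration, not a machine parameter from the paper:

```python
# Average beam power of a pulsed linac: energy per pulse times repetition
# rate, i.e. the peak beam power scaled by the duty factor.

def average_beam_power(current_a, energy_mev, pulse_s, rep_rate_hz):
    peak_power_w = current_a * energy_mev * 1e6   # I * V, volts from MeV
    duty_factor = pulse_s * rep_rate_hz           # fraction of time "on"
    return peak_power_w * duty_factor

# Hypothetical operating point: 2 kA, 50 ns pulses at 100 MeV, 1 kHz rep rate.
p_avg = average_beam_power(2e3, 100.0, 50e-9, 1e3)
# Peak power is 2e11 W (200 GW); a duty factor of 5e-5 brings the
# average down to 1e7 W (10 MW).
```

This is why duty factor management is singled out in the abstract: the peak capability is enormous, and the average power is set almost entirely by how often the machine can pulse.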
Function reconstruction from noisy local averages
Chen Yu; Huang Jianguo; Han Weimin
2008-01-01
A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in L 2 -norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies
A singularity theorem based on spatial averages
Pramana – Journal of Physics, July 2007, pp. 31–47. In this paper I would like to present a result which confirms – at least partially – … A detailed analysis of how the model fits in with the … Further, the statement that the spatial average … Financial support under grants FIS2004-01626 and …
Multiphase averaging of periodic soliton equations
Forest, M.G.
1979-01-01
The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
Essays on model averaging and political economics
Wang, W.
2013-01-01
This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple
2010-01-01
7 CFR § 1209.12 (2010) – On average. Title 7: Agriculture, Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements…), Mushroom Promotion, Research, and Consumer Information Order, Definitions § 1209…
Average Costs versus Net Present Value
E.A. van der Laan (Erwin); R.H. Teunter (Ruud)
2000-01-01
While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well-known EOQ model it can be verified that (under certain conditions) the AC approach gives
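The contrast between the two frameworks can be made concrete with a toy cost stream (our own illustration, not the paper's model): the average cost ignores the timing of payments, while the NPV discounts later payments.

```python
# Average cost per period versus net present value of the same cost stream.

def average_cost(costs):
    """Mean cost per period; insensitive to when costs occur."""
    return sum(costs) / len(costs)

def npv(costs, rate):
    """Present value of per-period costs discounted at `rate` per period."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs, start=1))

costs = [100.0] * 5            # five equal yearly cost payments
ac = average_cost(costs)       # 100.0 regardless of timing
pv = npv(costs, rate=0.10)     # discounting makes later costs cheaper
```

Shifting a payment later leaves `ac` unchanged but lowers `pv`, which is the wedge between the two approaches that the paper examines for the EOQ model.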
Average beta-beating from random errors
Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department
2018-01-01
The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.
Reliability Estimates for Undergraduate Grade Point Average
Westrick, Paul A.
2017-01-01
Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…
Tendon surveillance requirements - average tendon force
Fulton, J.F.
1982-01-01
Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower-bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
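The normalize-then-average step described above can be sketched as follows (the correction factors and force values are hypothetical illustrations, not numbers from the Reg. Guides):

```python
# Group-average tendon force check: normalize each measured lift-off force
# by its correction factor, average the corrected sample, and compare the
# result against the required average force for the group.

def group_average_force(liftoff_kips, correction_factors, required_avg):
    corrected = [f / c for f, c in zip(liftoff_kips, correction_factors)]
    avg = sum(corrected) / len(corrected)
    return avg, avg >= required_avg   # does the group meet design prestress?

measured = [1510.0, 1480.0, 1530.0]   # sample of tendons in one group (kips)
factors = [1.01, 0.99, 1.00]          # hypothetical correction factors
avg, ok = group_average_force(measured, factors, required_avg=1450.0)
```

The paper's contribution is the derivation of those correction factors; the averaging and comparison steps themselves are the simple arithmetic shown here.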
Averaging processes in granular flows driven by gravity
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the more promising theoretical frames for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. Under this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which is usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
Dependence of paracentric inversion rate on tract length
York, Thomas L; Durrett, Rick; Nielsen, Rasmus
2007-01-01
BACKGROUND: We develop a Bayesian method based on MCMC for estimating the relative rates of pericentric and paracentric inversions from marker data from two species. The method also allows estimation of the distribution of inversion tract lengths. RESULTS: We apply the method to data from Drosophila melanogaster and D. yakuba. We find that pericentric inversions occur at a much lower rate compared to paracentric inversions. The average paracentric inversion tract length is approx. 4.8 Mb, with small inversions being more frequent than large inversions. If the two breakpoints defining a paracentric inversion tract are uniformly and independently distributed over chromosome arms, there will be more short tract-length inversions than long; we find an even greater preponderance of short tract lengths than this would predict. Thus there appears to be a correlation between the positions…
The screening length of interatomic potential in atomic collisions
Yamamura, Y.; Takeuchi, W.; Kawamura, T.
1998-03-01
In computer studies of the interaction of charged particles with solids, many authors treat the nuclear collision with the Thomas-Fermi screened Coulomb potential. For better agreement with experiment, the screening length is sometimes modified. We investigate the theoretical background for the correction factor of the screening length in the interatomic potential, which can be deduced in two steps. The first step is to select the correction factor of an isolated atom so as to match the average radius of the Thomas-Fermi electron distribution with that of the Hartree-Fock electron distribution, where we use Clementi and Roetti's table. The second step is to determine the correction factor of the screening length of the interatomic potential by using a combination rule. The correction factors obtained for the screening length are in good agreement with those determined by the computer analysis of Impact Collision Ion Scattering Spectroscopy (ICISS) data. (author)
Electron Cloud Cyclotron Resonances in the Presence of a Short-bunch-length Relativistic Beam
Celata, Christine; Celata, C.M.; Furman, Miguel A.; Vay, J.-L.; Wu, Jennifer W.
2008-01-01
Computer simulations using the 2D code 'POSINST' were used to study the formation of the electron cloud in the wiggler section of the positron damping ring of the International Linear Collider. In order to simulate an x-y slice of the wiggler (i.e., a slice perpendicular to the beam velocity), each simulation assumed a constant vertical magnetic field. At values of the magnetic field where the cyclotron frequency was an integral multiple of the bunch frequency, and where the field strength was less than approximately 0.6 T, equilibrium average electron densities were up to three times the density found at other neighboring field values. Effects of this resonance between the bunch and cyclotron frequency are expected to be non-negligible when the beam bunch length is much less than the product of the electron cyclotron period and the beam
Statistics on exponential averaging of periodograms
Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
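The recursive estimator described above can be sketched numerically. This is a minimal illustration of exponential averaging of periodograms, not the paper's derivation; the segment length, forgetting factor λ, and white-noise input are all assumptions chosen for the demo. With λ close to 1 (a long time constant), the averaged estimate has a much smaller relative spread than a single periodogram, consistent with the approach to a Gaussian PDF.

```python
import numpy as np

rng = np.random.default_rng(0)

def periodogram(x):
    """Periodogram estimate of the PSD of one data segment."""
    X = np.fft.rfft(x)
    return (np.abs(X) ** 2) / len(x)

def exp_averaged_psd(segments, lam=0.95):
    """Exponentially average periodograms of successive segments:
    S_k = lam * S_{k-1} + (1 - lam) * P_k."""
    S = periodogram(segments[0])
    for seg in segments[1:]:
        S = lam * S + (1.0 - lam) * periodogram(seg)
    return S

# White noise: the true PSD is flat, so averaging should mainly
# reduce the variance of the estimate, not change its level.
segments = rng.standard_normal((500, 256))
single = periodogram(segments[0])
avg = exp_averaged_psd(segments)

spread_single = np.std(single) / np.mean(single)
spread_avg = np.std(avg) / np.mean(avg)
print(spread_single, spread_avg)  # averaging shrinks the relative spread
```

For a single periodogram of white noise the bin values are roughly χ²-distributed with 2 degrees of freedom (relative spread near 1); the exponentially averaged estimate behaves like an average over about (1+λ)/(1−λ) segments.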
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Carmen BOGHEAN
2013-12-01
Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.
Eric Costello
2011-01-01
The shape of a cable hanging under its own weight and uniform horizontal tension between two power poles is a catenary. The catenary is a curve which has an equation defined by a hyperbolic cosine function and a scaling factor. The scaling factor for power cables hanging under their own weight is equal to the horizontal tension on the cable divided by the weight of the cable. Both of these values are unknown for this problem. Newton's method was used to approximate the scaling factor and the arc length function to determine the length of the cable. A script was written using the Python programming language in order to quickly perform several iterations of Newton's method to get a good approximation for the scaling factor.
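The approach described above can be sketched in Python. This is a hypothetical reconstruction, not the authors' script: it assumes the span and the measured cable length are known and solves for the scaling factor a in the arc-length relation L = 2a·sinh(span/(2a)) of the catenary y = a·cosh(x/a) by Newton's method.

```python
import math

def solve_scaling_factor(span, cable_length, a0=None, tol=1e-12, max_iter=200):
    """Newton's method for the catenary scaling factor a satisfying
    cable_length = 2*a*sinh(span/(2*a)), the arc length of
    y = a*cosh(x/a) over a symmetric span."""
    a = a0 if a0 is not None else span  # rough starting guess
    for _ in range(max_iter):
        u = span / (2.0 * a)
        f = 2.0 * a * math.sinh(u) - cable_length
        fp = 2.0 * math.sinh(u) - (span / a) * math.cosh(u)  # df/da
        step = f / fp
        a -= step
        if abs(step) < tol * a:
            return a
    raise RuntimeError("Newton iteration did not converge")

# Check against a cable with known scaling factor a = 100:
span = 80.0
L = 2 * 100.0 * math.sinh(span / 200.0)  # arc length for a = 100
a = solve_scaling_factor(span, L)
print(round(a, 6))  # → 100.0
```

The function f(a) = 2a·sinh(span/(2a)) − L is monotone in a and has a unique root whenever the cable is longer than the span, so Newton's method converges quickly from a reasonable starting guess.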
Weighted estimates for the averaging integral operator
Opic, Bohumír; Rákosník, Jiří
2010-01-01
Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231
Average Transverse Momentum Quantities Approaching the Lightfront
Boer, Daniel
2015-01-01
In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
Average configuration of the geomagnetic tail
Fairfield, D.H.
1979-01-01
Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed.
Unscrambling The "Average User" Of Habbo Hotel
Mikael Johnson
2007-01-01
The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
Minimal Length, Measurability and Gravity
Alexander Shalyt-Margolin
2016-03-01
The present work is a continuation of the previous papers written by the author on the subject. In terms of the measurability (or measurable quantities) notion introduced in a minimal length theory, first the consideration is given to a quantum theory in the momentum representation. The same terms are used to consider the Markov gravity model that here illustrates the general approach to studies of gravity in terms of measurable quantities.
Positron lifetimes at the initial stage of pore formation in Vycor glass
Jasinska, B; Goworek, T
2000-01-01
The formation of narrow pores during leaching of Vycor glass by sulphuric acid was investigated using the positron lifetime technique. During the leaching process the pore diameter remained roughly constant (except for the case of cold leaching). The time of processing changed the total length of capillaries, but not their number; at 50 °C, during 20 min of leaching the average leaching depth was 24 μm.
Telomere Length – a New Biomarker in Medicine
Agnieszka Kozłowska
2015-12-01
A number of xenobiotics in the environment and workplace influence our health and lives. Biomarkers are tools for measuring such exposures and their effects in the organism. Nowadays, telomere length, epigenetic changes, mutations and changes in gene expression patterns have become new molecular biomarkers. Telomeres play the role of a molecular clock, which influences the life expectancy of cells and thus aging, the formation of damage, the development of diseases and carcinogenesis. Telomere length depends on the mechanisms of replication and the activity of telomerase. Telomere length is currently used as a biomarker of susceptibility and/or exposure. This paper describes the role of telomere length as a biomarker of cell aging and oxidative stress, a marker of many diseases including cancer, and a marker of environmental and occupational exposure.
Volkov, M.K.; Osipov, A.A.
1983-01-01
The scattering lengths m_π a_0^(1/2) = 0.1, m_π a_0^(3/2) = -0.1, m_π a_0^(-) = 0.07, m_π^3 a_1^(1/2) = 0.018, m_π^3 a_1^(3/2) = 0.002, m_π^3 a_1^(-) = 0.0044, m_π^5 a_2^(1/2) = 2.4×10^-4 and m_π^5 a_2^(3/2) = -1.2×10^-4 are calculated in the framework of the composite meson model which is based on four-quark interaction. The decay form factors of (rho, epsilon, S*) → 2π and (K̃, K*) → Kπ are used. The q^2-terms of the quark box diagrams are taken into account. It is shown that the q^2-terms of the box diagrams give the main contribution to the s-wave scattering lengths. The diagrams with the intermediate vector mesons begin to play an essential role in the calculation of the p- and d-wave scattering lengths
Finnigan, Bradley; Halley, Peter; Jack, Kevin; McDowell, Alasdair; Truss, Rowan; Casey, Phil; Knott, Robert; Martin, Darren
2006-01-01
Two organically modified layered silicates (with small and large diameters) were incorporated into three segmented polyurethanes with various degrees of microphase separation. Microphase separation increased with the molecular weight of the poly(hexamethylene oxide) soft segment. The molecular weight of the soft segment did not influence the amount of polyurethane intercalating the interlayer spacing. Small-angle neutron scattering and differential scanning calorimetry data indicated that the layered silicates did not affect the microphase morphology of any host polymer, regardless of the particle diameter. The stiffness enhancement on filler addition increased as the microphase separation of the polyurethane decreased, presumably because a greater number of urethane linkages were available to interact with the filler. For comparison, the small nanofiller was introduced into a polyurethane with a poly(tetramethylene oxide) soft segment, and a significant increase in the tensile strength and a sharper upturn in the stress-strain curve resulted. No such improvement occurred in the host polymers with poly(hexamethylene oxide) soft segments. It is proposed that the nanocomposite containing the more hydrophilic and mobile poly(tetramethylene oxide) soft segment is capable of greater secondary bonding between the polyurethane chains and the organosilicate surface, resulting in improved stress transfer to the filler and reduced molecular slippage.
Full length prototype SSC dipole test results
Strait, J.; Brown, B.C.; Carson, J.
1987-01-01
Results are presented from tests of the first full length prototype SSC dipole magnet. The cryogenic behavior of the magnet during a slow cooldown to 4.5K and a slow warmup to room temperature has been measured. Magnetic field quality was measured at currents up to 2000 A. Averaged over the body field, all harmonics with the exception of b_2 and b_8 are at or within the tolerances specified by the SSC Central Design Group. (The values of b_2 and b_8 result from known design and construction defects which will be corrected in later magnets.) Using an NMR probe the average body field strength is measured to be 10.283 G/A with point to point variations on the order of one part in 1000. Data are presented on quench behavior of the magnet up to 3500 A (approximately 55% of full field) including longitudinal and transverse velocities for the first 250 msec of the quench
The average inter-crossing number of equilateral random walks and polygons
Diao, Y; Dobay, A; Stasiak, A
2005-01-01
In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ICN between two equilateral random walks of the same length n is approximately linear in terms of n, and we were able to determine the prefactor of the linear term, which is a = 3 ln 2/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ICN is also linear, but the prefactor of the linear term is different from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance ρ apart and ρ is small compared to n. We propose a fitting model that would capture the theoretical asymptotic behaviour of the mean average ICN for large values of ρ. Our simulation result shows that the model in fact works very well for the entire range of ρ. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) would still approach infinity if the length of the other random walk (polygon) approached infinity. The data provided by our simulations match our theoretical predictions very well.
Operator product expansion and its thermal average
Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)
1998-05-01
QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case of finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules when the temperature is not too low. (orig.) 7 refs.
Fluctuations of wavefunctions about their classical average
Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H
2003-01-01
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Baseline-dependent averaging in radio interferometry
Wijnholds, S. J.; Willis, A. G.; Salvini, S.
2018-05-01
This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
Multistage parallel-serial time averaging filters
Theodosiou, G.E.
1980-01-01
Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter as a result. The main advantages of such a filter over a serial one are much lower electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)
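The statistical effect such averaging filters exploit can be illustrated with a toy Monte Carlo (this is not the paper's circuit; the Gaussian jitter model and all parameter values are assumptions for the demo): averaging N independent time measurements reduces the random timing jitter by roughly a factor of √N.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: each time measurement is the true instant plus gate jitter.
true_time = 1000.0      # arbitrary time units
jitter_sigma = 5.0      # single-measurement jitter
n_average = 16          # number of measurements combined by the filter
n_trials = 20000        # Monte Carlo repetitions

samples = true_time + jitter_sigma * rng.standard_normal((n_trials, n_average))
averaged = samples.mean(axis=1)

print(np.std(samples[:, 0]))  # single-measurement jitter, ~5
print(np.std(averaged))       # after averaging, ~5/sqrt(16) = 1.25
```

The parallel arrangement in the abstract aims at the same √N reduction while avoiding the accumulated gate jitter and delay of a long serial chain.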
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis S
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
Time-dependent angularly averaged inverse transport
Bal, Guillaume; Jollivet, Alexandre
2009-01-01
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain
Independence, Odd Girth, and Average Degree
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233-237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n-m-1)/7.
Bootstrapping Density-Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust…
Average Nuclear properties based on statistical model
El-Jaick, L.J.
1974-01-01
The rough properties of nuclei were investigated with a statistical model, in systems with equal and different numbers of protons and neutrons, treated separately, considering the Coulomb energy in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exerted by the Coulomb energy and nuclear compressibility was verified. To obtain a good fit of the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.)
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis S.
2012-07-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
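The quantity studied above can be computed directly from simulated trajectories. This is a definition-level sketch, not the paper's Laplace-transform analysis; the diffusion coefficient, time step, and lag are assumed values. For Brownian motion the mean TAMSD at lag Δ equals 2DΔ·dt, while individual trajectories fluctuate around it, which is exactly why the TAMSD's full distribution is of interest.

```python
import numpy as np

rng = np.random.default_rng(2)

def tamsd(x, lag):
    """Time-averaged mean-square displacement of a single trajectory:
    (1/(N-lag)) * sum_n (x[n+lag] - x[n])**2."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

D, dt, n_steps = 0.5, 0.01, 10000
# 1D Brownian trajectories: independent increments with variance 2*D*dt.
steps = np.sqrt(2 * D * dt) * rng.standard_normal((200, n_steps))
trajs = np.cumsum(steps, axis=1)

lag = 10
values = np.array([tamsd(x, lag) for x in trajs])

# The ensemble mean of the TAMSD is 2*D*lag*dt = 0.1,
# but the TAMSD of each trajectory is itself a random variable.
print(values.mean())  # close to 0.1
print(values.std())   # nonzero scatter across trajectories
```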
De Luca, G.; Magnus, J.R.
2011-01-01
In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
Calculating ensemble averaged descriptions of protein rigidity without sampling.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2012-01-01
Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has grown into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
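The trajectory-averaging idea above can be illustrated outside the MCMC setting on a toy Robbins-Monro mean-finding problem; the step-size schedule and target below are illustrative assumptions, not the paper's SAMCMC construction:

```python
import random

def robbins_monro_with_averaging(sample, theta0=0.0, n_steps=20000):
    """Robbins-Monro iteration theta_{k+1} = theta_k + a_k (X_k - theta_k)
    for the root of h(theta) = E[X] - theta, with trajectory averaging."""
    theta = theta0
    running_sum = 0.0
    for k in range(1, n_steps + 1):
        a_k = 1.0 / k ** 0.75        # step sizes: sum a_k = inf, sum a_k^2 < inf
        theta += a_k * (sample() - theta)
        running_sum += theta
    # last iterate vs. the (typically more efficient) trajectory average
    return theta, running_sum / n_steps

random.seed(0)
# Noisy observations of a hypothetical unknown mean 3.0
last, averaged = robbins_monro_with_averaging(lambda: random.gauss(3.0, 1.0))
```

Both estimates approach 3.0, but the averaged trajectory smooths out the step-size noise left in the final iterate.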
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
Beta-energy averaging and beta spectra
Stamatelatos, M.G.; England, T.R.
1976-07-01
A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. It should therefore be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by 'exact' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
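The spectrum-averaged energy is a weighted mean, <E> = ∫ E N(E) dE / ∫ N(E) dE. A minimal quadrature sketch is below; the spectral shape used is a crude hypothetical stand-in, not the Fermi-function treatment the abstract refers to:

```python
def average_energy(spectrum, q_max, n=10000):
    """Spectrum-averaged energy <E> = int E N(E) dE / int N(E) dE,
    computed with the trapezoid rule (the step h cancels in the ratio)."""
    h = q_max / n
    num = den = 0.0
    for i in range(n + 1):
        e = i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        num += w * e * spectrum(e)
        den += w * spectrum(e)
    return num / den

# Illustrative shape N(E) ~ E (Q - E)^2 on [0, Q] (an assumption, chosen
# because its average is analytically 2Q/5, handy for checking the code):
Q = 1.0  # hypothetical endpoint energy, MeV
shape = lambda e: e * (Q - e) ** 2
print(average_energy(shape, Q))  # analytic value is 2Q/5 = 0.4
```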
Asymptotic Time Averages and Frequency Distributions
Muhammad El-Taha
2016-01-01
Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
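For a finite discrete-time path the two quantities in the abstract coincide by construction; the paper's contribution concerns when this equality survives in the long-run limit. A small sketch of the finite-path identity:

```python
from collections import Counter

def time_average(path, f):
    """Time average (1/n) * sum of f(X_t) along a discrete-time sample path."""
    return sum(f(x) for x in path) / len(path)

def frequency_average(path, f):
    """Expectation of f under the path's empirical frequency distribution."""
    freq = Counter(path)
    n = len(path)
    return sum(f(x) * count / n for x, count in freq.items())

# A short illustrative sample path and test function
path = [1, 2, 2, 3, 1, 2, 3, 3, 3, 1]
f = lambda x: x * x
assert abs(time_average(path, f) - frequency_average(path, f)) < 1e-12
```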
Chaotic Universe, Friedmannian on the average 2
Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij
1980-11-01
The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes that remain finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.
Averaging in the presence of sliding errors
Yost, G.P.
1991-08-01
In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms.
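The core idea, re-evaluating each experiment's error at the pooled estimate rather than at its own reported value, can be sketched with a simple iteration; the relative-error model below is a hypothetical example, not the paper's prescription:

```python
def average_with_sliding_errors(measurements, sigma_of, n_iter=20):
    """Iteratively reweighted average: after a first pass using each
    experiment's own reported error, weights use the error model
    evaluated at the current pooled estimate."""
    # Naive (potentially biased) start: sigma evaluated at each measured value
    mu = sum(y / sigma_of(y) ** 2 for y in measurements) / \
         sum(1.0 / sigma_of(y) ** 2 for y in measurements)
    for _ in range(n_iter):
        s = sigma_of(mu)             # common error estimate at the pooled value
        mu = sum(y / s ** 2 for y in measurements) / \
             sum(1.0 / s ** 2 for y in measurements)
    return mu

# Hypothetical 10% relative-error model: sigma(y) = 0.1 * y.  The naive
# average is biased low (smaller values report smaller errors and get
# larger weights); the reweighted average removes that bias.
data = [9.0, 10.0, 11.5]
print(average_with_sliding_errors(data, lambda y: 0.1 * y))
```

With a purely relative error model the corrected weights become equal, so the iteration converges to the plain arithmetic mean.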
Introducing Modified Degree 4 Chordal Rings with Two Chord Lengths
Pedersen, Jens Myrup
2007-01-01
In this paper an analysis of modified degree 4 Chordal Rings with two chord lengths, named CHRm, is presented and compared to similar topologies: Chordal Rings, N2R and modified N2R. Formulas for approximating diameters and average path lengths are provided and verified, and it is shown that the distances in CHRm are significantly smaller than in traditional Chordal Rings and N2R, and also smaller than modified N2R for topologies with up to 1500 nodes. Despite the proposed CHRm being of degree 4, and the modified N2R of degree 3, CHRm may be better suited for the optical level of fiber rings, due...
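Average path lengths of ring-plus-chord topologies like these are easy to measure exactly by breadth-first search; the sketch below handles a generic chordal ring (node i also linked to i + c mod n for each chord length c), not the specific CHRm construction of the paper:

```python
from collections import deque

def average_path_length(n, chords=()):
    """Exact average shortest-path length of an n-node ring plus chords,
    computed by BFS from every node."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for step in (1,) + tuple(chords):       # ring link plus each chord
            adj[i].add((i + step) % n)
            adj[(i + step) % n].add(i)
    total = 0
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

print(average_path_length(5))         # plain 5-ring: 1.5
print(average_path_length(16, (4,)))  # degree-4 chordal ring, chord length 4
```

Adding chords can only shorten distances, so the chordal ring's APL is strictly below the plain ring's.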
[Renal length measured by ultrasound in adult Mexican population].
Oyuela-Carrasco, J; Rodríguez-Castellanos, F; Kimura, E; Delgado-Hernández, R; Herrera-Félix, J P
2009-01-01
Renal length estimation by ultrasound is an important parameter in the clinical evaluation of kidney disease and healthy donors. Changes in renal volume may be a sign of kidney disease. Correct interpretation of renal length requires knowledge of the normal limits, which have not been described for Latin American populations. The aims were to describe normal renal length (RL) by ultrasonography in a group of Mexican adults, measured in 153 healthy subjects stratified by age, and to describe the association of RL with several anthropometric variables. A total of 77 males and 76 females were scanned. The average age of the group was 44.12 +/- 15.44 years. The mean weight, body mass index (BMI) and height were 68.87 +/- 11.69 kg, 26.77 +/- 3.82 kg/m2 and 160 +/- 8.62 cm respectively. Dividing the population by gender showed a height of 166 +/- 6.15 cm for males and 154.7 +/- 5.97 cm for females (p = 0.000). Left renal length (LRL) in the whole group was 105.8 +/- 7.56 mm and right renal length (RRL) was 104.3 +/- 6.45 mm (p = 0.000). The LRL for males was 107.16 +/- 6.97 mm and for females 104.6 +/- 7.96 mm. The average RRL for males was 105.74 +/- 5.74 mm and for females 102.99 +/- 6.85 mm (p = 0.008). We noted that RL decreased with age and that the rate of decline accelerates after 60 years of age. Both lengths correlated significantly and positively with weight, BMI and height. RL was significantly larger in males than in females for both kidneys (p = 0.036) in this Mexican population. Renal length declines after 60 years of age and especially after 70 years.
Relationships between average depth and number of misclassifications for decision trees
Chikalov, Igor
2014-02-14
This paper presents a new tool for the study of relationships between the total path length or the average depth and the number of misclassifications for decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from the UCI ML Repository [9] and datasets representing Boolean functions with 10 variables.
Relationships Between Average Depth and Number of Nodes for Decision Trees
Chikalov, Igor
2013-07-24
This paper presents a new tool for the study of relationships between the total path length or average depth and the number of nodes of decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from the UCI ML Repository [1]. © Springer-Verlag Berlin Heidelberg 2014.
Relationships Between Average Depth and Number of Nodes for Decision Trees
Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail
2013-01-01
This paper presents a new tool for the study of relationships between the total path length or average depth and the number of nodes of decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from the UCI ML
Relationships between average depth and number of misclassifications for decision trees
Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail
2014-01-01
This paper presents a new tool for the study of relationships between the total path length or the average depth and the number of misclassifications for decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from the UCI ML Repository [9] and datasets representing Boolean functions with 10 variables.
High average power linear induction accelerator development
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs
FEL system with homogeneous average output
Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph
2018-01-16
A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest, then accelerating the particles to full energy to produce distinct and independently controlled (by the choice of phase offset) phase-energy correlations, or chirps, on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy-recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL, with higher order terms managed.
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorship. We also show the utility of our approach in devising a ratings scheme that we apply to the data from the NetFlix prize, finding a significant improvement using our method over a baseline.
Angle-averaged Compton cross sections
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m_0c^2; α_s = scattered photon energy in units of m_0c^2; β = initial electron velocity in units of c; φ = angle between the photon direction and the electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
Average Gait Differential Image Based Human Recognition
Jinyan Chen
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
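The accumulation step described above is simple to state: average the absolute pixel differences of consecutive binary silhouettes. A minimal sketch (the tiny frames are made-up toy data; real use would follow this with 2DPCA feature extraction):

```python
def agdi(frames):
    """Average gait differential image: per-pixel mean absolute difference
    of consecutive binary silhouette frames."""
    n_pairs = len(frames) - 1
    rows, cols = len(frames[0]), len(frames[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for prev, curr in zip(frames, frames[1:]):
        for r in range(rows):
            for c in range(cols):
                out[r][c] += abs(curr[r][c] - prev[r][c]) / n_pairs
    return out

# Three tiny 2x2 "silhouettes" of a moving blob (illustrative only)
frames = [[[1, 0], [0, 0]],
          [[0, 1], [0, 0]],
          [[0, 0], [0, 1]]]
print(agdi(frames))  # [[0.5, 1.0], [0.0, 0.5]]
```

Pixels that change in every frame pair accumulate large values, so the AGDI highlights the moving parts of the silhouette while static regions stay near zero.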
Reynolds averaged simulation of unsteady separated flow
Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.
2003-01-01
The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around a square cylinder and over a wall-mounted cube is simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better concurrence with the available experimental data than has been achieved with steady computation.
An average salary: approaches to the index determination
T. M. Pozdnyakova
2017-01-01
The article "An average salary: approaches to the index determination" is devoted to studying various methods of calculating this index, both those used by the official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research comprises laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section "Socio-economic indexes: living standards of the population", as well as materials of scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. In the process of conducting the research, the following methods were used: analytical, statistical, calculated-mathematical and graphical. The main result of the research is an option for supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions in a wide range of organizations, when an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, the situation is frequent when the average salary at the enterprise is difficult to assess objectively because it consists of calculating multiple rates per staff member. In other words, the average salary of
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-05-07
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
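The balanced-SACE estimator described above compares mean outcomes between equivalent fractions of the longest-surviving patients in each arm. A toy sketch with made-up (survival time, outcome) pairs; the real estimator comes with bias expressions, sensitivity analyses, and bootstrap inference:

```python
def balanced_sace(treated, control, q=0.5):
    """Difference in mean longitudinal outcome between the top-q fractions
    of longest-surviving patients in the treatment and control arms.
    Each arm is a list of (survival_time, outcome) pairs."""
    def top_fraction_mean(arm):
        arm = sorted(arm, key=lambda p: p[0], reverse=True)   # longest survivors first
        k = max(1, int(q * len(arm)))
        return sum(y for _, y in arm[:k]) / k
    return top_fraction_mean(treated) - top_fraction_mean(control)

# Hypothetical data: (survival time, longitudinal outcome)
treated = [(10, 5.0), (8, 4.0), (3, 1.0), (1, 0.5)]
control = [(9, 3.0), (7, 2.0), (2, 1.0), (1, 0.0)]
print(balanced_sace(treated, control))  # (5.0+4.0)/2 - (3.0+2.0)/2 = 2.0
```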
Scale dependence of the average potential around the maximum in Φ^4 theories
Tetradis, N.; Wetterich, C.
1992-04-01
The average potential describes the physics at a length scale k^{-1} by averaging out the degrees of freedom with characteristic momenta larger than k. The dependence on k can be described by differential evolution equations. We solve these equations for the nonconvex part of the potential around the origin in φ^4 theories, in the phase with spontaneous symmetry breaking. The average potential is real and approaches the convex effective potential in the limit k → 0. Our calculation is relevant for processes for which the shape of the potential at a given scale is important, such as tunneling phenomena or inflation. (orig.)
National Oceanic and Atmospheric Administration, Department of Commerce — The invention implements a run-length file format with improved space-saving qualities. The file starts with a header in ASCII format and includes information such as...
Stalk-length-dependence of the contractility of Vorticella convallaria
Gul Chung, Eun; Ryu, Sangjin
2017-12-01
Vorticella convallaria is a sessile protozoan whose spasmoneme contracts on a millisecond timescale. Because this contraction is induced and powered by the binding of calcium ions (Ca2+), the spasmoneme showcases Ca2+-powered cellular motility. Because the isometric tension of V. convallaria increases linearly with its stalk length, it is hypothesized that the contractility of V. convallaria during unhindered contraction depends on the stalk length. In this study, the contractile force and energetics of V. convallaria cells of different stalk lengths were evaluated using a fluid dynamic drag model which accounts for the unsteadiness and finite Reynolds number of the water flow caused by the contracting V. convallaria and for the wall effect of the no-slip substrate. It was found that the contraction displacement, peak contraction speed, peak contractile force, total mechanical work, and peak power depended on the stalk length. The observed stalk-length dependencies were simulated using a damped spring model, and the model estimated that the average spring constant of the contracting stalk was 1.34 nN/µm. These observed length dependencies of Vorticella's key contractility parameters reflect the biophysical mechanism of the spasmonemal contraction, and thus they should be considered in developing a theoretical model of the Vorticella spasmoneme.
A low complexity method for the optimization of network path length in spatially embedded networks
Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li; Ming, Yong; Chen, Sheng-Yong; Wang, Wan-Liang
2014-01-01
The average path length of a network is an important index reflecting the network's transmission efficiency. In this paper, we propose a new method of decreasing the average path length by adding edges. A new indicator is presented, incorporating traffic flow demand, to assess the decrease in the average path length when a new edge is added during the optimization process. With the help of the indicator, edges are selected and added into the network one by one. The new method has a relatively low computational complexity in comparison with some traditional methods. In numerical simulations, the new method is applied to some synthetic spatially embedded networks. The results show that the method performs competitively in decreasing the average path length. Then, as an example of an application of this new method, it is applied to the road network of Hangzhou, China. (paper)
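The brute-force baseline that such an indicator tries to avoid is easy to sketch: evaluate the APL for every candidate edge and add the best one. The following toy version ignores traffic flow demand and spatial embedding, so it illustrates the objective, not the paper's indicator:

```python
from collections import deque
from itertools import combinations

def apl(n, edges):
    """Average shortest-path length of an undirected graph, via BFS."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    total = 0
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def best_edge_to_add(n, edges):
    """Exhaustively pick the missing edge whose addition lowers APL most."""
    present = {frozenset(e) for e in edges}
    candidates = [e for e in combinations(range(n), 2)
                  if frozenset(e) not in present]
    return min(candidates, key=lambda e: apl(n, edges + [e]))

path = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(best_edge_to_add(5, path))  # (0, 4): closing the path into a ring
```

This exhaustive search costs O(n^2) APL evaluations per added edge, which is exactly the expense a cheap per-edge indicator is designed to sidestep.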
Total Path Length and Number of Terminal Nodes for Decision Trees
Hussain, Shahid
2014-01-01
This paper presents a new tool for the study of relationships between the total path length (average depth) and the number of terminal nodes for decision trees. These relationships are important from the point of view of optimization of decision trees.
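Both quantities in the abstract are simple recursive tree statistics. The sketch below takes total path length to be the sum of terminal-node depths (in the literature it is often defined per classified object, so treat this as an illustrative simplification):

```python
def tree_stats(node):
    """Return (total path length, number of terminal nodes) of a decision
    tree given as nested tuples: ('leaf', label) or ('split', left, right).
    Total path length here = sum of the depths of all terminal nodes."""
    def walk(t, depth):
        if t[0] == 'leaf':
            return depth, 1
        length = leaves = 0
        for child in t[1:]:
            l, c = walk(child, depth + 1)
            length += l
            leaves += c
        return length, leaves
    return walk(node, 0)

# A tiny example tree with three terminal nodes at depths 1, 2 and 2
tree = ('split',
        ('leaf', 0),
        ('split', ('leaf', 1), ('leaf', 0)))
print(tree_stats(tree))  # (5, 3)
```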
Industrial Applications of High Average Power FELS
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
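The free energy profile follows from the mean force by thermodynamic integration, F(ξ) = -∫ ⟨f⟩ dξ. A minimal numerical sketch, using an assumed harmonic test potential rather than either of the paper's molecular examples:

```python
def free_energy_profile(mean_force, xs):
    """Thermodynamic integration F(x) = -int <f>(x') dx' along the grid xs,
    using the trapezoid rule; F is fixed to zero at xs[0]."""
    F = [0.0]
    for a, b in zip(xs, xs[1:]):
        F.append(F[-1] - 0.5 * (mean_force(a) + mean_force(b)) * (b - a))
    return F

# For a harmonic potential U(x) = 0.5*k*x^2 the mean constraint force is
# -k*x, so integration should recover F(x) = 0.5*k*x^2 (up to a constant).
k = 2.0  # illustrative spring constant
xs = [i * 0.01 for i in range(101)]            # grid from 0.0 to 1.0
F = free_energy_profile(lambda x: -k * x, xs)
print(F[-1])  # 0.5 * k * 1.0**2 = 1.0
```

In a real simulation ⟨f⟩ at each grid point would itself be a statistical average over constrained or unconstrained sampling, which is where the three methods compared in the abstract differ.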
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
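The standard gossip baseline that the paper improves on is easy to sketch: nodes repeatedly average with a random neighbour, and all values converge to the global mean while the sum is conserved. This toy uses a ring topology, not the geographic routing of the paper:

```python
import random

def gossip_average(values, n_rounds=2000, seed=1):
    """Standard pairwise gossip on a ring: each round one random node
    averages its value with a random ring neighbour."""
    rng = random.Random(seed)
    v = list(values)
    n = len(v)
    for _ in range(n_rounds):
        i = rng.randrange(n)
        j = (i + rng.choice((-1, 1))) % n   # left or right ring neighbour
        v[i] = v[j] = 0.5 * (v[i] + v[j])   # pairwise average conserves the sum
    return v

vals = gossip_average([float(x) for x in range(10)])   # true mean is 4.5
print(max(abs(x - 4.5) for x in vals))                 # small after enough rounds
```

On the ring this takes many rounds per unit of accuracy (slow random-walk mixing), which is precisely the inefficiency the geographic scheme attacks.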
High-average-power solid state lasers
Summers, M.A.
1989-01-01
In 1987, a broad-based, aggressive R&D program was begun, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs; understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs; and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs
The concept of average LET values determination
Makarewicz, M.
1981-01-01
The concept of average LET (linear energy transfer) values determination, i.e. ordinary moments of LET in absorbed dose distribution vs. LET of ionizing radiation of any kind and any spectrum (even the unknown ones) has been presented. The method is based on measurement of ionization current with several values of voltage supplying an ionization chamber operating in conditions of columnar recombination of ions or ion recombination in clusters while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)
On spectral averages in nuclear spectroscopy
Verbaarschot, J.J.M.
1982-01-01
In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space defined by specific quantum numbers. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)
Voter dynamics on an adaptive network with finite average connectivity
Mukhopadhyay, Abhishek; Schmittmann, Beate
2009-03-01
We study a simple model for voter dynamics in a two-party system. The opinion formation process is implemented in a random network of agents in which interactions are not restricted by geographical distance. In addition, we incorporate the rapidly changing nature of the interpersonal relations in the model. At each time step, agents can update their relationships, so that there is no history dependence in the model. This update is determined by their own opinion, and by their preference to make connections with individuals sharing the same opinion and with opponents. Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. In contrast to earlier studies, the average connectivity (``degree'') of each agent is constant here, independent of the system size. This has significant consequences for the long-time behavior of the model.
Correlation between length and tilt of lipid tails
Kopelevich, Dmitry I.; Nagle, John F.
2015-10-01
It is becoming recognized from simulations, and to a lesser extent from experiment, that the classical Helfrich-Canham membrane continuum mechanics model can be fruitfully enriched by the inclusion of molecular tilt, even in the fluid, chain disordered, biologically relevant phase of lipid bilayers. Enriched continuum theories then add a tilt modulus κθ to accompany the well recognized bending modulus κ. Different enrichment theories largely agree for many properties, but it has been noticed that there is considerable disagreement in one prediction; one theory postulates that the average length of the hydrocarbon chain tails increases strongly with increasing tilt and another predicts no increase. Our analysis of an all-atom simulation favors the latter theory, but it also shows that the overall tail length decreases slightly with increasing tilt. We show that this deviation from continuum theory can be reconciled by consideration of the average shape of the tails, which is a descriptor not obviously includable in continuum theory.
Measurement of the diffusion length of thermal neutrons inside graphite
Ertaud, A.; Beauge, R.; Fauquez, H.; De Laboulay, H.; Mercier, C.; Vautrey, L.
1948-11-01
The diffusion length of thermal neutrons inside a given industrial graphite is determined by measuring the neutron density inside a parallelepipedal stack of graphite bricks (2.10 x 2.10 x 2.442 m). A 3.8-curie (Ra α → Be) source is placed inside the parallelepipedal block of graphite and thin manganese detectors are used. Corrections are applied to the raw measurements to take into account the damping of supra-thermal neutrons in the measurement area. These corrections are experimentally deduced from differential measurements made with a cadmium screen interposed between the source and the first plane of measurement. An error analysis completes the report. The diffusion length obtained is L = 45.7 ± 0.3 cm. The average density of the graphite used is 1.76 and the average apparent density of the stack is 1.71. (J.S.)
Interobserver Variation of the Renal Length Measurement on Ultrasonography
Jeong, Yoong Ki; Chung, Hye Weon; Kim, Tae Sung; Ryoo, Jae Wook; Kim, Tae Kyoung; Kim, Seung Hyup
1995-01-01
We assessed interobserver variation in the measurement of renal length on ultrasonography. Ultrasonographic examinations were performed in 50 randomly selected patients. The maximal lengths of both kidneys were measured with calipers during scanning from frozen images by three observers in a blinded fashion. There was a relatively constant tendency of an observer to measure a renal length either longer or shorter than the other observers (Kendall coefficient>0.05). Average interobserver variations were 0.51 cm (±0.42 cm) for the right kidney and 0.53 cm (±0.41 cm) for the left kidney, and were within 1 cm in 91% of right and 89% of left kidneys. Interobserver variation of about 1 cm should be considered in the measurement of renal length on ultrasonography.
Modelling length of hospital stay in motor victims
Mercedes Ayuso-Gutiérrez
2015-03-01
Objective. To analyze which socio-demographic and other factors related to motor injuries affect the length of hospital recovery stay. Materials and methods. A sample of 17 932 motor accidents was used in the study. All the crashes occurred in Spain between 2000 and 2007. Different regression models were fitted to the data to identify and measure the impact of a set of explanatory regressors. Results. Time of hospital stay for men is on average 41% longer than for women. When the victim has a fracture as a consequence of the accident, the mean time of hospital stay is multiplied by five. Injuries located in the lower extremities, the head and the abdomen are associated with greater hospitalization lengths. Conclusions. Gender, age and type of victim, as well as the location and nature of injuries, are found to be factors that have a significant impact on the expected length of hospital stay.
Childhood adversity, social support, and telomere length among perinatal women.
Mitchell, Amanda M; Kowalsky, Jennifer M; Epel, Elissa S; Lin, Jue; Christian, Lisa M
2018-01-01
Adverse perinatal health outcomes are heightened among women with psychosocial risk factors, including childhood adversity and a lack of social support. Biological aging could be one pathway by which such outcomes occur. However, data examining links between psychosocial factors and indicators of biological aging among perinatal women are limited. The current study examined the associations of childhood socioeconomic status (SES), childhood trauma, and current social support with telomere length in peripheral blood mononuclear cells (PBMCs) in a sample of 81 women assessed in early, mid, and late pregnancy as well as 7-11 weeks postpartum. Childhood SES was defined as perceived childhood social class and parental educational attainment. Measures included the Childhood Trauma Questionnaire, Center for Epidemiologic Studies-Depression Scale, Multidimensional Scale of Perceived Social Support, and average telomere length in PBMCs. Per a linear mixed model, telomere length did not change across pregnancy and postpartum visits; thus, subsequent analyses defined telomere length as the average across all available timepoints. ANCOVAs showed group differences by perceived childhood social class, maternal and paternal educational attainment, and current family social support, with lower values corresponding with shorter telomeres, after adjustment for possible confounds. No effects of childhood trauma or social support from significant others or friends on telomere length were observed. Findings demonstrate that while current SES was not related to telomeres, low childhood SES, independent of current SES, and low family social support were distinct risk factors for cellular aging in women. These data have relevance for understanding potential mechanisms by which early life deprivation of socioeconomic and relationship resources affect maternal health. In turn, this has potential significance for intergenerational transmission of telomere length. The predictive value of
Length scales for the Navier-Stokes equations on a rotating sphere
Kyrychko, Yuliya N.; Bartuccelli, Michele V.
2004-01-01
In this Letter we obtain the dissipative length scale for the Navier-Stokes equations on a two-dimensional rotating sphere S². This system is a fundamental model of large-scale atmospheric dynamics. Using the equations of motion in their vorticity form, we construct the ladder inequalities from which a set of time-averaged length scales is obtained.
Increasing amperometric biosensor sensitivity by length fractionated single-walled carbon nanotubes
Tasca, Federico; Gorton, Lo; Wagner, Jakob Birkedal
2008-01-01
In this work the sensitivity-increasing effect of single-walled carbon nanotubes (SWCNTs) in amperometric biosensors, depending on their average length distribution, was studied. For this purpose the SWCNTs were oxidatively shortened and subsequently length separated by size exclusion...
Prompt fission neutron spectra and average prompt neutron multiplicities
Madland, D.G.; Nix, J.R.
1983-01-01
We present a new method for calculating the prompt fission neutron spectrum N(E) and average prompt neutron multiplicity ν̄_p as functions of the fissioning nucleus and its excitation energy. The method is based on standard nuclear evaporation theory and takes into account (1) the motion of the fission fragments, (2) the distribution of fission-fragment residual nuclear temperature, (3) the energy dependence of the cross section σ_c for the inverse process of compound-nucleus formation, and (4) the possibility of multiple-chance fission. We use a triangular distribution in residual nuclear temperature based on the Fermi-gas model. This leads to closed expressions for N(E) and ν̄_p when σ_c is assumed constant, and to readily computed quadratures when the energy dependence of σ_c is determined from an optical model. Neutron spectra and average multiplicities calculated with an energy-dependent cross section agree well with experimental data for the neutron-induced fission of ²³⁵U and the spontaneous fission of ²⁵²Cf. For the latter case, there are some significant inconsistencies between the experimental spectra that need to be resolved. 29 references
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A. S., and F. T.-C. Tsai (2014), Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under the Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi:10.1016/j.jhydrol.2014.05.027.
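The variance segregation performed at each node of an HBMA tree follows the law of total variance. A minimal sketch with invented weights and moments (not values from the cited study):

```python
def bma_moments(weights, means, variances):
    """Model-averaged mean plus the within-model / between-model variance
    split (law of total variance) used at each level of a BMA tree."""
    mean = sum(w * m for w, m in zip(weights, means))
    within = sum(w * v for w, v in zip(weights, variances))
    between = sum(w * (m - mean) ** 2 for w, m in zip(weights, means))
    return mean, within, between

# Three candidate propositions for one uncertain model component
# (posterior probabilities, predictions, and prediction variances are invented):
mean, within, between = bma_moments([0.5, 0.3, 0.2],
                                    [1.0, 2.0, 4.0],
                                    [0.1, 0.2, 0.3])
total = within + between  # total model variance at this node of the tree
```

Applying the same split level by level is what lets HBMA attribute shares of `total` to geological architecture, formation dip, boundary conditions, and parameters.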
Examining Hurricane Track Length and Stage Duration Since 1980
Fandrich, K. M.; Pennington, D.
2017-12-01
Each year, tropical systems impact thousands of people worldwide. Current research shows a correlation between the intensity and frequency of hurricanes and the changing climate. However, little is known about other prominent hurricane features. This includes information about hurricane track length (the total distance traveled from tropical depression through a hurricane's final category assignment) and how this distance may have changed with time. Also unknown is the typical duration of a hurricane stage, such as tropical storm to category one, and if the time spent in each stage has changed in recent decades. This research aims to examine changes in hurricane stage duration and track lengths for the 319 storms in NOAA's National Ocean Service Hurricane Reanalysis dataset that reached Category 2 - 5 from 1980 - 2015. Based on evident ocean warming, it is hypothesized that a general increase in track length with time will be detected, thus modern hurricanes are traveling a longer distance than past hurricanes. It is also expected that stage durations are decreasing with time so that hurricanes mature faster than in past decades. For each storm, coordinates are acquired at 4-times daily intervals throughout its duration and track lengths are computed for each 6-hour period. Total track lengths are then computed and storms are analyzed graphically and statistically by category for temporal track length changes. The stage durations of each storm are calculated as the time difference between two consecutive stages. Results indicate that average track lengths for Cat 2 and 3 hurricanes are increasing through time. These findings show that these hurricanes are traveling a longer distance than earlier Cat 2 and 3 hurricanes. In contrast, average track lengths for Cat 4 and 5 hurricanes are decreasing through time, showing less distance traveled than earlier decades. Stage durations for all Cat 2, 4 and 5 storms decrease through the decades but Cat 3 storms show a
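The track-length computation described above (summing distances between consecutive 6-hourly positions) can be sketched as follows; the storm fixes are invented for illustration, and the spherical-Earth haversine distance is one common choice for the per-leg distance:

```python
from math import radians, sin, cos, asin, sqrt

R_EARTH_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * R_EARTH_KM * asin(sqrt(a))

def track_length_km(fixes):
    """Total track length: sum of great-circle legs between
    consecutive 6-hourly position fixes."""
    return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

# Hypothetical 6-hourly (lat, lon) fixes for a westward-moving Atlantic storm:
fixes = [(15.0, -45.0), (15.5, -47.0), (16.2, -49.0), (17.0, -51.0)]
total = track_length_km(fixes)
```

Stage durations follow the same bookkeeping: the difference between the timestamps at which consecutive category assignments first appear.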
Correlation length of magnetosheath fluctuations: Cluster statistics
O. Gutynska
2008-09-01
Magnetosheath parameters are usually described by gasdynamic or magnetohydrodynamic (MHD) models, but these models cannot account for one of the most important sources of magnetosheath fluctuations – the foreshock. Earlier statistical processing of a large amount of magnetosheath observations has shown that the magnetosheath magnetic field and plasma flow fluctuations downstream of the quasi-parallel shock are much larger than those at the opposite flank. These studies were based on the observations of a single spacecraft and thus could not provide full information on the propagation of the fluctuations through the magnetosheath.
We present the results of a statistical survey of the magnetosheath magnetic field fluctuations using two years of Cluster observations. We discuss the dependence of the cross-correlation coefficients between different spacecraft pairs on the orientation of the separation vector with respect to the average magnetic field and plasma flow vectors and other parameters. We have found that the correlation length does not exceed ~1 R_E in the analyzed frequency range (0.001–0.125 Hz) and does not depend significantly on the magnetic field or plasma flow direction. A close connection between cross-correlation coefficients computed in the magnetosheath and the cross-correlation coefficients between a solar wind monitor and a magnetosheath spacecraft suggests that solar wind structures persist on the background of magnetosheath fluctuations.
Relation between axial length and ocular parameters
Xue-Qiu Yang
2013-09-01
AIM: To investigate the relation between axial length (AL), age and ocular parameters. METHODS: A total of 360 subjects (360 eyes) with emmetropia or myopia were recruited. Refraction, central corneal thickness (CCT), AL and intraocular pressure (IOP) were measured by automatic refractor, pachymeter, A-mode ultrasound and non-contact tonometer, respectively. Corneal curvature (CC), anterior chamber depth (ACD) and white-to-white distance (WWD) were measured by Orbscan II. Three-dimensional frequency-domain optical coherence tomography (3D-OCT) was used to examine the retinal nerve fiber layer thickness (RNFLT). The Pearson correlation coefficient (r) and multiple regression analysis were performed to evaluate the relationship between AL, age and ocular parameters. RESULTS: The average AL was 24.15±1.26 mm. With elongation of the AL, spherical equivalent (SE) (r=-0.742, P r=-0.395, P r=-0.374, P r=0.411, P r=0.099, P=0.060) and WWD (r=0.061, P=0.252). There was also a significant correlation between AL and age (P=0.001), SE (P CONCLUSION: In longer eyes, there is a tendency toward myopia, a flatter cornea, a deeper ACD and a thinner RNFLT. Age is an influencing factor for the AL as well.
Aarthi, G.; Ramachandra Reddy, G.
2018-03-01
In our paper, the impact of adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with on-off keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. To further enhance the ASE, we incorporate aperture-averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA excels the other modulation schemes and achieves an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture-averaging effect we achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.
Constraints on food chain length arising from regional metacommunity dynamics
Calcagno, Vincent; Massol, François; Mouquet, Nicolas; Jarne, Philippe; David, Patrice
2011-01-01
Classical ecological theory has proposed several determinants of food chain length, but the role of metacommunity dynamics has not yet been fully considered. By modelling patchy predator–prey metacommunities with extinction–colonization dynamics, we identify two distinct constraints on food chain length. First, finite colonization rates limit predator occupancy to a subset of prey-occupied sites. Second, intrinsic extinction rates accumulate along trophic chains. We show how both processes concur to decrease maximal and average food chain length in metacommunities. This decrease is mitigated if predators track their prey during colonization (habitat selection) and can be reinforced by top-down control of prey vital rates (especially extinction). Moreover, top-down control of colonization and habitat selection can interact to produce a counterintuitive positive relationship between perturbation rate and food chain length. Our results show how novel limits to food chain length emerge in spatially structured communities. We discuss the connections between these constraints and the ones commonly discussed, and suggest ways to test for metacommunity effects in food webs. PMID:21367786
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
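A minimal EM loop for the BMA training problem discussed above, assuming Gaussian kernels with a single shared variance (a simplification of the Raftery et al. setup; the two-member synthetic ensemble below is invented):

```python
import math
import random

def bma_em(forecasts, obs, iters=200):
    """EM estimation of BMA weights w_k and a shared kernel variance s2 for
    Gaussian predictive kernels N(f_k, s2). forecasts[k][t] is ensemble
    member k's forecast at time t; obs[t] is the verifying observation."""
    K, T = len(forecasts), len(obs)
    w = [1.0 / K] * K
    s2 = 1.0
    for _ in range(iters):
        # E-step: responsibility of member k for observation t
        z = [[0.0] * T for _ in range(K)]
        for t in range(T):
            dens = [w[k] * math.exp(-(obs[t] - forecasts[k][t]) ** 2 / (2 * s2))
                    / math.sqrt(2 * math.pi * s2) for k in range(K)]
            total = sum(dens)
            for k in range(K):
                z[k][t] = dens[k] / total
        # M-step: update weights and the shared variance
        w = [sum(z[k]) / T for k in range(K)]
        s2 = sum(z[k][t] * (obs[t] - forecasts[k][t]) ** 2
                 for k in range(K) for t in range(T)) / T
    return w, s2

# Synthetic example: member 0 tracks the observations, member 1 is biased by +5.
random.seed(1)
obs = [random.gauss(20.0, 1.0) for _ in range(300)]
forecasts = [[o + random.gauss(0, 0.5) for o in obs],
             [o + 5.0 + random.gauss(0, 0.5) for o in obs]]
w, s2 = bma_em(forecasts, obs)
```

DREAM replaces this deterministic loop with MCMC sampling of (w, s2), which is what yields the posterior uncertainty on the weights that EM cannot provide.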
On a Bayesian estimation procedure for determining the average ore grade of a uranium deposit
Heising, C.D.; Zamora-Reyes, J.A.
1996-01-01
A Bayesian procedure is applied to estimate the average ore grade of a specific uranium deposit (the Morrison formation in New Mexico). Experimental data taken from drilling tests for this formation constitute deposit-specific information, E_2. This information is combined, through a single-stage application of Bayes' theorem, with the more extensive and well-established information on all similar formations in the region, E_1. It is assumed that the best estimate for the deposit-specific case should include the relevant experimental evidence collected from other like formations giving incomplete information on the specific deposit. This follows traditional methods for resource estimation, which presume that previous collective experience obtained from similar formations in the geological region can be used to infer the geologic characteristics of a less well characterized formation. (Author)
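A single-stage application of Bayes' theorem of the kind described can be sketched with a normal prior (regional information E_1) and a normal likelihood (deposit-specific assays E_2); all numbers below are illustrative, not the Morrison formation data:

```python
def posterior_grade(prior_mean, prior_var, sample_mean, sample_var, n):
    """Normal-normal conjugate update: the regional prior (E_1) is combined
    with the mean of n deposit-specific assays (E_2). Precisions add; the
    posterior mean is the precision-weighted average of the two estimates."""
    like_var = sample_var / n                      # variance of the sample mean
    post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
    post_mean = post_var * (prior_mean / prior_var + sample_mean / like_var)
    return post_mean, post_var

# E_1: regional prior grade 0.20% with sd 0.05 (hypothetical);
# E_2: 25 assays averaging 0.15% with sample variance 0.01 (hypothetical).
m, v = posterior_grade(0.20, 0.05 ** 2, 0.15, 0.01, 25)
```

With 25 assays the data dominate, so the posterior mean sits much closer to the deposit-specific average than to the regional prior, and the posterior variance is smaller than either source alone.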
Reddish, V C
1978-01-01
Stellar Formation brings together knowledge about the formation of stars. In seeking to determine the conditions necessary for star formation, this book examines questions such as how, where, and why stars form, and at what rate and with what properties. This text also considers whether the formation of a star is an accident or an integral part of the physical properties of matter. This book consists of 13 chapters divided into two sections and begins with an overview of theories that explain star formation as well as the state of knowledge of star formation in comparison to stellar structure
Silk, J.; Di Cintio, A.; Dvorkin, I.
2014-01-01
Galaxy formation is at the forefront of observation and theory in cosmology. An improved understanding is essential for improving our knowledge of the cosmological parameters, the contents of the universe, and our origins. In these lectures, intended for graduate students, galaxy formation theory is reviewed and confronted with recent observational issues. In lecture 1, the following topics are presented: star formation considerations, including the IMF, star formation efficiency and star formation rate; the origin of the galaxy luminosity function; and feedback in dwarf galaxies. In lecture 2, we describe the formation of disks and massive spheroids, including the growth of supermassive black holes, negative feedback in spheroids, the AGN-star formation connection, star formation rates at high redshift and the baryon fraction in galaxies.
To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space
Khrennikov, Andrei
2007-01-01
We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
Length-dependent optical properties of single-walled carbon nanotube samples
Naumov, Anton V.; Tsyboulski, Dmitri A.; Bachilo, Sergei M.; Weisman, R. Bruce
2013-01-01
Highlights: ► Length-independent absorption per atom in single-walled carbon nanotubes. ► Reduced fluorescence quantum yield for short nanotubes. ► Exciton quenching at nanotube ends and sidewall defects probably limits the quantum yield. Abstract: Contradictory findings have been reported on the length dependence of optical absorption cross sections and fluorescence quantum yields in single-walled carbon nanotubes (SWCNTs). To clarify these points, studies have been made on bulk SWCNT dispersions subjected to length fractionation by electrophoretic separation or by ultrasonication-induced scission. Fractions ranged from ca. 120 to 760 nm in mean length. Samples prepared by shear-assisted dispersion were subsequently shortened by ultrasonic processing. After accounting for processing-induced changes in the surfactant absorption background, SWCNT absorption was found constant within ±11% as average nanotube length changed by a factor of 3.8. This indicates that the absorption cross-section per carbon atom is not length dependent. By contrast, in length fractions prepared by both methods, the bulk fluorescence efficiency or average quantum yield increased with SWCNT average length and approached an apparent asymptotic limit near 1 μm. This result is interpreted as reflecting the combined contributions of exciton quenching by sidewall defects and by the ends of shorter nanotubes.
Averaged emission factors for the Hungarian car fleet
Haszpra, L. [Inst. for Atmospheric Physics, Budapest (Hungary); Szilagyi, I. [Central Research Inst. for Chemistry, Budapest (Hungary)
1995-12-31
The vehicular emission of non-methane hydrocarbon (NMHC) is one of the largest anthropogenic sources of NMHC in Hungary and in most industrialized countries. Non-methane hydrocarbon plays a key role in the formation of photochemical air pollution, usually characterized by the ozone concentration, which seriously endangers the environment and human health. The ozone-forming potential of different NMHCs varies significantly, while the NMHC composition of car exhaust is influenced by the fuel and engine type, the technical condition of the vehicle, vehicle speed and several other factors. In Hungary the majority of the cars are still of Eastern European origin. They represent the technological standard of the 1970s, although there have been changes recently. Due to the long-term economic decline in Hungary, the average age of the cars was about 9 years in 1990 and reached 10 years by 1993. The condition of the majority of the cars is poor. In addition, almost one third (31.2%) of the cars are equipped with two-stroke engines, which emit less NO_x but much more hydrocarbon. The number of cars equipped with catalytic converters was negligible in 1990 and has been slowly increasing only recently. As a consequence of these facts, the traffic emission in Hungary may differ from that measured in or estimated for Western European countries, and the differences should be taken into account in air pollution models. To estimate the average emission of the Hungarian car fleet, a one-day roadway tunnel experiment was performed in downtown Budapest in summer 1991. (orig.)
Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)
2016-05-01
Subject to code matrices that follow the structure given by (113):
[y_R; y_I] = sqrt(E_s / 2L) [G_R1, −G_I1; G_I2, G_R2] [Q_R, −Q_I; Q_I, Q_R] [b_R; b_I] + [n_R; n_I]
[...] [b_+; b_−] + [n_+; n_−]   (115)
The average likelihood for type 4 CDMA (116) is a special case of type 1 CDMA with twice the code length. Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA), May 2016, final technical report, approved for public release.
Li, Qingchen; Cao, Guangxi; Xu, Wei
2018-01-01
Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the fractionally autoregressive integrated moving average process (ARFIMA) to demonstrate the effectiveness of MFDMA in the detection of auto-correlation at different sample lengths and to simulate some artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation to determine whether the fluctuations of financial markets caused by meteorological disasters are derived from the normal evolution of the financial system itself or not. We also propose several reasonable recommendations.
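The long-memory behavior that ARFIMA processes provide for such tests comes from fractional differencing. The sketch below is an illustrative reconstruction, not the study's code: it builds the MA(∞) coefficients of (1−B)^(−d) from the standard recurrence ψ_k = ψ_{k−1}(k−1+d)/k and convolves them with Gaussian white noise to simulate an ARFIMA(0, d, 0) series (function names are my own).

```python
import math
import random

def frac_coeffs(d, n):
    """MA(infinity) coefficients of (1-B)^(-d): psi_k = psi_{k-1}*(k-1+d)/k."""
    psi = [1.0]
    for k in range(1, n):
        psi.append(psi[-1] * (k - 1 + d) / k)
    return psi

def simulate_arfima(d, n, seed=0):
    """Simulate ARFIMA(0, d, 0) by convolving white noise with the
    truncated psi sequence; the first n noise draws serve as burn-in."""
    rng = random.Random(seed)
    psi = frac_coeffs(d, n)
    eps = [rng.gauss(0.0, 1.0) for _ in range(2 * n)]
    return [sum(psi[k] * eps[t - k] for k in range(n)) for t in range(n, 2 * n)]
```

For 0 < d < 0.5 the resulting series has slowly decaying autocorrelations, which is exactly the controlled long memory the study needs when benchmarking MFDMA at different sample lengths.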
Gated cardiac blood pool studies in atrial fibrillation: Role of cycle length windowing
Wallis, J W; Juni, J E; Wu, L [Michigan Univ., Ann Arbor (USA). Div. of Nuclear Medicine
1991-01-01
Cycle length windowing is gaining increasing acceptance in gated blood pool imaging of patients with atrial fibrillation (AF). The goals of this study were: to assess differences of ejection fraction (EF) in AF with and without windowing and to determine how EF varied with cycle length in patients with AF. Twenty patients with AF were prospectively studied by gated blood pool imaging, with simultaneous collection in each patient of 5-7 studies with cycle length windows spanning the cycle length histogram. Each window accepted beats of only a narrow range of cycle lengths. EF was determined for each of the narrow cycle length windows as well as for the entire gated blood pool study without cycle length windowing. For every patient an average of the windowed EFs was compared with the non-windowed EF. EF values were similar (mean windowed: 46.6; non-windowed: 45.5; P=0.16), and there was a good correlation between the two techniques (r=0.97). The data were then examined for a relationship of EF with cycle length. The difference from average windowed EF (ΔEF) was calculated for each window and plotted vs. the cycle length of the center of each window. No predictable linear or nonlinear relationship of ΔEF with window position was observed. Lack of predictable variation of EF with cycle length is likely due to lack of a predictable amount of ventricular filling for a given cycle length, as the amount of diastolic filling in AF depends on the random cycle length of the preceding beat. In summary, windowing in AF does not provide a clinically significant difference in EF determination. If cycle length windowing is used, the exact location of the window is not critical. (orig.).
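The comparison at the heart of the study — mean of per-window EFs versus the pooled, non-windowed EF — is simple to state computationally. The helper below is a hypothetical sketch with invented names and synthetic (rr, EF) beat pairs, not the clinical analysis pipeline: beats are binned into narrow fixed-width cycle-length windows, then both summaries are returned.

```python
def windowed_vs_pooled_ef(beats, width_ms=50):
    """beats: list of (rr_ms, ef) pairs for one patient.
    Bin beats into cycle-length windows of width width_ms, then return
    (mean of per-window EFs, pooled EF over all beats)."""
    windows = {}
    for rr, ef in beats:
        windows.setdefault(int(rr // width_ms), []).append(ef)
    per_window = [sum(v) / len(v) for v in windows.values()]
    pooled = sum(ef for _, ef in beats) / len(beats)
    return sum(per_window) / len(per_window), pooled
```

In the study the two summaries were close (46.6 vs. 45.5), consistent with the conclusion that windowing changes little in practice.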
[Myopia: frequency of lattice degeneration and axial length].
Martín Sánchez, M D; Roldán Pallarés, M
2001-05-01
To evaluate the relationship between lattice retinal degeneration and axial length of the eye in different grades of myopia. A sample of 200 eyes from 124 myopic patients was collected at random. The average age was 34.8 years (20-50 years) and the myopia was between 0.5 and 20 diopters (D). The eyes were grouped according to the degree of refraction defect, and the mean axial length of each group (A-scan), the frequency of lattice retinal degeneration, and the relationship between these variables were studied. The possible influence of age on the results was also considered. For the statistical analysis, the SAS 6.07 program was used, with analysis of variance for quantitative variables and the chi(2) test for qualitative variables at 5% significance. A multivariable linear regression model was also fitted. The highest frequency of lattice retinal degeneration occurred in those myopia patients having more than 15 D, and also in the group of myopia patients between 3 and 6 D, but this did not reach statistical significance when compared with the other myopic groups. If the axial length is assessed, a greater frequency of lattice retinal degeneration is also found when the axial length is 25-27 mm and 29-30 mm, which correspond, respectively, to myopias between 3-10 D and more than 15 D. When the multivariable linear regression model was fitted, axial length predicted the existence of lattice retinal degeneration (β 0.41 mm; p=0.08) adjusted by the number of diopters (β 0.38 mm). A greater frequency of lattice retinal degeneration was found for myopias with axial eye length between 29-30 mm (more than 15 D), and 25-27 mm (between 3-10 D).
The impact of machine geometries on the average torque of dual ...
HOD
Keywords: average torque, dual start, machine geometry, optimal value, PM machines.
Short Rayleigh Length Free Electron Lasers
Crooker, P P; Armstead, R L; Blau, J
2004-01-01
Conventional free electron laser (FEL) oscillators minimize the optical mode volume around the electron beam in the undulator by making the resonator Rayleigh length about one third of the undulator length. This maximizes gain and beam-mode coupling. In compact configurations of high-power infrared FELs or moderate power UV FELs, the resulting optical intensity can damage the resonator mirrors. To increase the spot size and thereby reduce the optical intensity at the mirrors below the damage threshold, a shorter Rayleigh length can be used, but the FEL interaction is significantly altered. A new FEL interaction is described and analyzed with a Rayleigh length that is only one tenth the undulator length, or less. The effect of mirror vibration and positioning are more critical in the short Rayleigh length design, but we find that they are still within normal design tolerances.
Length dependent properties of SNS microbridges
Sauvageau, J.E.; Jain, R.K.; Li, K.; Lukens, J.E.; Ono, R.H.
1985-01-01
Using an in-situ, self-aligned deposition scheme, arrays of variable-length SNS junctions in the range of 0.05 μm to 1 μm have been fabricated. Arrays of SNS microbridges of lead-copper and niobium-copper fabricated using this technique have been used to study the length dependence, at constant temperature, of the critical current I_c and bridge resistance R_d. For bridges with lengths L greater than the normal-metal coherence length ξ_n(T), the dependence of I_c on L is consistent with an exponential dependence on the reduced length l = L/ξ_n(T). For shorter bridges, deviations from this behavior are seen. It was also found that the bridge resistance R_d does not vary linearly with the geometric bridge length but appears to approach a finite value as L→0.
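An exponential dependence I_c ∝ exp(−L/ξ_n(T)) is typically checked with a log-linear least-squares fit. The helper below is an illustrative sketch of that standard check, not the authors' analysis; it returns the prefactor I0 and decay length ξ from (L, I_c) data.

```python
import math

def fit_exponential_decay(lengths, currents):
    """Least-squares fit of ln(Ic) = ln(I0) - L/xi.
    Returns (I0, xi) from paired length and critical-current data."""
    ys = [math.log(c) for c in currents]
    n = len(lengths)
    mx = sum(lengths) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(lengths, ys))
             / sum((x - mx) ** 2 for x in lengths))
    return math.exp(my - slope * mx), -1.0 / slope
```

Deviations of short-bridge data from the fitted line are then the signature of the non-exponential regime the abstract reports for L ≲ ξ_n(T).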
Kocsis, E; Trus, B L; Steer, C J; Bisher, M E; Steven, A C
1991-08-01
We have developed computational techniques that allow image averaging to be applied to electron micrographs of filamentous molecules that exhibit tight and variable curvature. These techniques, which involve straightening by cubic-spline interpolation, image classification, and statistical analysis of the molecules' curvature properties, have been applied to purified brain clathrin. This trimeric filamentous protein polymerizes, both in vivo and in vitro, into a wide range of polyhedral structures. Contrasted by low-angle rotary shadowing, dissociated clathrin molecules appear as distinctive three-legged structures, called "triskelions" (E. Ungewickell and D. Branton (1981) Nature 289, 420). We find triskelion legs to vary from 35 to 62 nm in total length, according to an approximately bell-shaped distribution (μ = 51.6 nm). Peaks in averaged curvature profiles mark hinges or sites of enhanced flexibility. Such profiles, calculated for each length class, show that triskelion legs are flexible over their entire lengths. However, three curvature peaks are observed in every case: their locations define a proximal segment of systematically increasing length (14.0-19.0 nm), a mid-segment of fixed length (approximately 12 nm), and a rather variable end-segment (11.6-19.5 nm), terminating in a hinge just before the globular terminal domain (approximately 7.3 nm diameter). Thus, two major factors contribute to the overall variability in leg length: (1) stretching of the proximal segment and (2) stretching of the end-segment and/or scrolling of the terminal domain. The observed elasticity of the proximal segment may reflect phosphorylation of the clathrin light chains.
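A curvature profile along a digitized filament can be approximated from turning angles along the traced polyline. The sketch below is a simplified stand-in for the paper's pipeline (the cubic-spline straightening and image-classification steps are omitted, and the discrete turning-angle estimator is my own choice): curvature at each interior vertex is the signed turning angle divided by the local arc length.

```python
import math

def curvature_profile(points):
    """Discrete curvature at interior vertices of a 2-D polyline:
    signed turning angle divided by the mean adjacent segment length."""
    ks = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        # wrap the angle difference into (-pi, pi]
        dtheta = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
        ds = 0.5 * (math.hypot(x1 - x0, y1 - y0) + math.hypot(x2 - x1, y2 - y1))
        ks.append(dtheta / ds)
    return ks
```

Peaks in profiles like this one, averaged over many traced molecules, are what the study reads as hinges along the triskelion leg.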
20 CFR 404.221 - Computing your average monthly wage.
2010-04-01
20 Employees' Benefits 2, 2010-04-01. Computing your average monthly wage. 404.221... Disability Insurance (1950- ), Computing Primary Insurance Amounts, Average-Monthly-Wage Method of Computing Primary Insurance Amounts. § 404.221 Computing your average monthly wage. (a) General. Under the average...
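The arithmetic behind the average-monthly-wage method reduces to dividing total creditable earnings in the computation years by the number of months in those years and dropping the result to the next lower dollar. The sketch below shows only that arithmetic, as I understand it; the statutory rules for selecting which years count are omitted.

```python
def average_monthly_wage(yearly_wages_usd):
    """Sketch of the average-monthly-wage arithmetic: total creditable
    earnings over the computation years divided by the number of months
    in those years, dropped to the next lower dollar. The rules for
    choosing the computation years themselves are not modeled here."""
    months = 12 * len(yearly_wages_usd)
    return int(sum(yearly_wages_usd) // months)
```

For example, two computation years totaling $25,000 give 25000 / 24 months, dropped to $1,041 per month.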
Measuring Crack Length in Coarse Grain Ceramics
Salem, Jonathan A.; Ghosn, Louis J.
2010-01-01
Due to a coarse grain structure, crack lengths in precracked spinel specimens could not be measured optically, so the crack lengths and fracture toughness were estimated from strain gage measurements. An expression was developed via finite element analysis to correlate the measured strain with crack length in four-point flexure. The fracture toughness estimated from the strain-gaged samples agreed with that obtained by another standardized method.
Dither Cavity Length Controller with Iodine Locking
Lawson Marty
2016-01-01
A cavity length controller for a seeded Q-switched frequency-doubled Nd:YAG laser is constructed. The cavity length controller uses a piezo-mirror dither voltage to find the optimum length for the seeded cavity. The piezo-mirror dither also dithers the optical frequency of the output pulse [1]. This dither in optical frequency is then used to lock to an iodine absorption line.
Average and local structure of α-CuI by configurational averaging
Mohn, Chris E; Stoelen, Svein
2007-01-01
Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion pathways is expected due to the large variation in the local motifs.
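Pair radial distribution functions like those computed here follow a standard recipe: histogram minimum-image pair distances and normalize each bin by the ideal-gas shell count. The function below is a generic illustration of that recipe for a cubic periodic box, not the authors' DFT post-processing.

```python
import math

def pair_rdf(positions, box, dr, r_max):
    """g(r) for 3-D points in a cubic box of side `box` with periodic
    boundaries (minimum-image convention). Bins of width dr up to r_max."""
    n = len(positions)
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                d = positions[i][a] - positions[j][a]
                d -= box * round(d / box)  # minimum image
                d2 += d * d
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2  # count i-j and j-i
    rho = n / box ** 3
    g = []
    for b, h in enumerate(hist):
        # ideal-gas count in the spherical shell [b*dr, (b+1)*dr)
        shell = (4.0 / 3.0) * math.pi * (((b + 1) * dr) ** 3 - (b * dr) ** 3)
        g.append(h / (n * rho * shell))
    return g
```

Applied to configurational snapshots, peaks in g(r) (e.g. the reported Cu-Cu peak near 2.7 Å) appear as bins well above the ideal-gas baseline of 1.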
Declining average daily census. Part 1: Implications and options.
Weil, T P
1985-12-01
A national trend toward declining average daily (inpatient) census (ADC) started in late 1982, even before the Medicare prospective payment system began. The decrease in total days will continue despite an increasing number of aged persons in the U.S. population. This decline could have been predicted from trends during 1978 to 1983, such as increasing available beds but decreasing occupancy, 100 percent increases in hospital expenses, and declining lengths of stay. Assuming that health care costs will remain a relatively fixed part of the gross national product and no major medical advances will occur in the next five years, certain implications and options exist for facilities experiencing a declining ADC. This article discusses several considerations: Attempts to improve market share; Reduction of full-time equivalent employees; Impact of greater acuity of illness among remaining inpatients; Implications of increasing the number of physicians on medical staffs; Option of a closed medical staff by clinical specialty; Unbundling with not-for-profit and profit-making corporations; Review of mergers, consolidations, and multihospital systems to decide when this option is most appropriate; Sale of a not-for-profit hospital to an investor-owned chain, with implications facing Catholic hospitals choosing this option; Impact and difficulty of developing meaningful alternative health care systems with the hospital's medical staff; Special problems of teaching hospitals; The social issue of the hospital shifting from the community's health center to a cost center; Increased turnover of hospital CEOs. With these in mind, institutions can then focus on solutions that can sometimes be used in tandem to resolve this problem's impact. The second part of this article will discuss some of them.
Information, polarization and term length in democracy
Schultz, Christian
2008-01-01
This paper considers term lengths in a representative democracy where the political issue divides the population on the left-right scale. Parties are ideologically different and better informed about the consequences of policies than voters are. A short term length makes the government more accountable, but the re-election incentive leads to policy distortion as the government seeks to manipulate swing voters' beliefs to make its ideology more popular. This creates a trade-off: a short term length improves accountability but gives distortions. A short term length is best for swing voters when...
Area, length and thickness conservation: Dogma or reality?
Moretti, Isabelle; Callot, Jean Paul
2012-08-01
The basic assumption of quantitative structural geology is the preservation of material during deformation. However the hypothesis of volume conservation alone does not help to predict past or future geometries and so this assumption is usually translated into bed length in 2D (or area in 3D) and thickness conservation. When subsurface data are missing, geologists may extrapolate surface data to depth using the kink-band approach. These extrapolations, preserving both thicknesses and dips, lead to geometries which are restorable but often erroneous, due to both disharmonic deformation and internal deformation of layers. First, the Bolivian Sub-Andean Zone case is presented to highlight the evolution of the concepts on which balancing is based, and the important role played by a decoupling level in enhancing disharmony. Second, analogue models are analyzed to test the validity of the balancing techniques. Chamberlin's excess area approach is shown to be on average valid. However, neither the length nor the thicknesses are preserved. We propose that in real cases, the length preservation hypothesis during shortening could also be a wrong assumption. If the data are good enough to image the decollement level, the Chamberlin excess area method could be used to compute the bed length changes.
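Chamberlin's excess-area relation, which the analogue models show to be valid on average, ties the area uplifted above the regional level to shortening times depth to the detachment, so shortening can be estimated as S = A_excess / h under plane strain with no volume loss. A one-line sketch (illustrative helper name):

```python
def chamberlin_shortening(excess_area_m2, depth_to_detachment_m):
    """Chamberlin's excess-area relation: the cross-sectional area
    uplifted above regional equals shortening times depth to the
    detachment, so S = A_excess / h (plane strain, no volume loss)."""
    return excess_area_m2 / depth_to_detachment_m
```

For example, 5,000 m² of excess area above a detachment 2,500 m deep implies about 2 m of shortening; this is the computation the authors suggest using in place of bed-length balancing when the decollement is imaged.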
Components of genetic variability of ear length of silage maize
Sečanski Mile
2006-01-01
The objective of this study was to evaluate the following parameters of the ear length of silage maize: variability of inbred lines and their diallel hybrids, superior-parent heterosis, and genetic components of variability and heritability on the basis of a diallel set. The analysis of genetic variance shows that the additive component (D) was lower than the dominant (H1 and H2) genetic variances, while the frequency of dominant genes (u) for this trait was greater than the frequency of recessive genes (v). Furthermore, this is also confirmed by the dominant-to-recessive gene ratio in parental inbreds for ear length (Kd/Kr > 1), which is greater than unity in both investigation years. The calculated value of the average degree of dominance √(H1/D) is greater than unity, pointing to superdominance in inheritance of this trait in both years of investigation, which is also confirmed by the results of Vr/Wr regression analysis of inheritance of the ear length. As a presence of non-allelic interaction was established, it is necessary to study effects of epistasis, as it can have a greater significance in certain hybrids. A greater value of dominant than additive variance resulted in high broad-sense heritability for ear length in both investigation years.
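The average degree of dominance used above is simply √(H1/D), with values above unity read as superdominance. A trivial helper makes the check explicit (illustrative only; D and H1 come from Hayman's diallel analysis, which is not reproduced here):

```python
import math

def degree_of_dominance(D, H1):
    """Average degree of dominance sqrt(H1 / D) from the additive (D)
    and dominant (H1) components of a diallel analysis; a value > 1
    indicates superdominance for the trait."""
    return math.sqrt(H1 / D)
```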
Leukocyte Telomere Length and Cognitive Function in Older Adults
Emily Frith
2018-04-01
We evaluated the specific association between leukocyte telomere length and cognitive function among a national sample of the broader U.S. older adult population. Data from the 1999-2002 National Health and Nutrition Examination Survey (NHANES) were used to identify 1,722 adults, between 60-85 years, with complete data on selected study variables. DNA was extracted from whole blood via the LTL assay, which is administered using quantitative polymerase chain reaction to measure telomere length relative to standard reference DNA (T/S ratio). Average telomere length was recorded, with two to three assays performed to control for individual variability. The DSST (Digit Symbol Substitution Test) was used to assess participant executive cognitive functioning tasks of pairing and free recall. Individuals were excluded if they had been diagnosed with coronary artery disease, congestive heart failure, heart attack or stroke at the baseline assessment. Leukocyte telomere length was associated with higher cognitive performance, independent of gender, race-ethnicity, physical activity status, body mass index and other covariates. In this sample, there was a strong association between LTL and cognition; for every 1 T/S ratio increase in LTL, there was a corresponding 9.9 unit increase in the DSST (β = 9.9; 95% CI: 5.6-14.2).
Water Erosion in Different Slope Lengths on Bare Soil
Bárbara Bagio
Water erosion degrades the soil and contaminates the environment, and one influential factor on erosion is slope length. The aim of this study was to quantify losses of soil (SL) and water (WL) in a Humic Cambisol in a field experiment under natural rainfall conditions from July 4, 2014 to June 18, 2015, in individual events of 41 erosive rains, in the Southern Plateau of Santa Catarina, and to estimate soil losses through the USLE and RUSLE models. The treatments consisted of slope lengths of 11, 22, 33, and 44 m, with an average slope of 8 %, on bare and uncropped soil that had been cultivated with corn prior to the study. At the end of the corn cycle, the stalk residue was removed from the surface, leaving the roots of the crop in the soil. Soil loss by water erosion is related linearly and positively to the increase in slope length in the span between 11 and 44 m. Soil losses were related to water losses and the Erosivity Index (EI30), while water losses were related to rain depth. Soil losses estimated by the USLE and RUSLE models showed lower values than the values observed experimentally in the field, especially the values estimated by the USLE. The values of factor L calculated for slope lengths of 11, 22, 33, and 44 m for the two versions (USLE and RUSLE) of the soil loss prediction model showed satisfactory results in relation to the values of soil losses observed.
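The slope-length factor L discussed above has, in the classic USLE form, the expression L = (λ/22.13)^m, with λ the slope length in metres and m an exponent chosen from the slope class. A minimal sketch (the default m = 0.5, the classic value for slopes of 5% or more, is an assumption; RUSLE computes m from the rill/interrill ratio instead):

```python
def usle_L_factor(slope_length_m, m=0.5):
    """USLE topographic slope-length factor L = (lambda / 22.13)**m,
    with lambda in metres. m = 0.5 is the classic USLE exponent for
    slopes of 5% or more; RUSLE derives m differently."""
    return (slope_length_m / 22.13) ** m
```

For the plot lengths of this study, L grows from about 0.70 at 11 m to about 1.41 at 44 m, which is the monotone increase with slope length that the field soil losses also show.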
Sedimentological regimes for turbidity currents: Depth-averaged theory
Halsey, Thomas C.; Kumar, Amit; Perillo, Mauricio M.
2017-07-01
Turbidity currents are one of the most significant means by which sediment is moved from the continents into the deep ocean; their properties are interesting both as elements of the global sediment cycle and due to their role in contributing to the formation of deep water oil and gas reservoirs. One of the simplest models of the dynamics of turbidity current flow was introduced three decades ago, and is based on depth-averaging of the fluid mechanical equations governing the turbulent gravity-driven flow of relatively dilute turbidity currents. We examine the sedimentological regimes of a simplified version of this model, focusing on the role of the Richardson number Ri [dimensionless inertia] and Rouse number Ro [dimensionless sedimentation velocity] in determining whether a current is net depositional or net erosional. We find that for large Rouse numbers, the currents are strongly net depositional due to the disappearance of local equilibria between erosion and deposition. At lower Rouse numbers, the Richardson number also plays a role in determining the degree of erosion versus deposition. The currents become more erosive at lower values of the product Ro × Ri, due to the effect of clear water entrainment. At higher values of this product, the turbulence becomes insufficient to maintain the sediment in suspension, as first pointed out by Knapp and Bagnold. We speculate on the potential for two-layer solutions in this insufficiently turbulent regime, which would comprise substantial bedload flow with an overlying turbidity current.
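The two controlling groups can be written down directly. The definitions below are the standard bulk forms (the paper's exact nondimensionalization may differ): a bulk Richardson number built from reduced gravity, current thickness and depth-averaged speed, and a Rouse number comparing settling velocity to shear velocity.

```python
def richardson(g_prime, h, U):
    """Bulk Richardson number Ri = g' h / U**2 for a depth-averaged
    current of thickness h, speed U and reduced gravity g'."""
    return g_prime * h / U ** 2

def rouse(w_s, u_star, kappa=0.41):
    """Rouse number Ro = w_s / (kappa * u*): particle settling velocity
    over shear velocity, scaled by von Karman's constant."""
    return w_s / (kappa * u_star)
```

In the regime picture of the abstract, large Ro currents are strongly depositional, while small values of the product Ro × Ri mark the erosive, entrainment-dominated corner of parameter space.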
Yépez, L.D.; Carrillo, J.L.; Donado, F.; Sausedo-Solorio, J.M.; Miranda-Romagnoli, P.
2016-01-01
The dynamical pattern formation of clusters of magnetic particles in a low-concentration magnetorheological fluid, under the influence of a superposition of two perpendicular sinusoidal fields, is studied experimentally. By varying the frequency and phase shift of the perpendicular fields, this configuration enables us to experimentally analyze a wide range of field configurations, including the case of a pure rotating field and the case of an oscillating unidirectional field. The fields are applied parallel to the horizontal plane where the fluid lies or in the vertical plane. For fields applied in the horizontal plane, we observed that, when the ratio of the frequencies increases, the average cluster size exhibits a kind of periodic resonance. When the phase shift between the fields is varied, the average chain length reaches maximal values for the cases of the rotating field and the unidirectional case. We analyze and discuss these results in terms of a weighted average of the time-dependent Mason number. In the case of a rotating field on the vertical plane, we also observe that the competition between the magnetic and the viscous forces determines the average cluster size. We show that this configuration generates a series of physically meaningful self-organization of clusters and transport phenomena. - Highlights: • A weighted average of the time-dependent Mason number is proposed. • The self-propelling clusters appear when a vertical rotating magnetic field is applied. • The largest average chain lengths are reached when frequencies are multiples of one another. • Rotating and unidirectional alternating fields produce the largest average chain length values.
Shelley Mo
To assess the effect of image registration and averaging on the visualization and quantification of the radial peripapillary capillary (RPC) network on optical coherence tomography angiography (OCTA). Twenty-two healthy controls were imaged with a commercial OCTA system (AngioVue, Optovue, Inc.). Ten 10x10° scans of the optic disc were obtained, and the most superficial layer (50-μm slab extending from the inner limiting membrane) was extracted for analysis. Rigid registration was achieved using ImageJ, and averaging of each 2 to 10 frames was performed in five ~2x2° regions of interest (ROI) located 1° from the optic disc margin. The ROI were automatically skeletonized. Signal-to-noise ratio (SNR), number of endpoints and mean capillary length from the skeleton, capillary density, and mean intercapillary distance (ICD) were measured for the reference and each averaged ROI. Repeated-measures analysis of variance was used to assess statistical significance. Three patients with primary open-angle glaucoma were also imaged to compare RPC density to controls. Qualitatively, vessels appeared smoother and closer to histologic descriptions with increasing number of averaged frames. Quantitatively, the number of endpoints decreased by 51%, and SNR, mean capillary length, capillary density, and ICD increased by 44%, 91%, 11%, and 4.5% from single-frame to 10-frame averaged, respectively. The 10-frame averaged images from the glaucomatous eyes revealed decreased density correlating to visual field defects and retinal nerve fiber layer thinning. OCTA image registration and averaging is a viable and accessible method to enhance the visualization of RPCs, with significant improvements in image quality and RPC quantitative parameters. With this technique, we will be able to non-invasively and reliably study RPC involvement in diseases such as glaucoma.
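The core of the averaging step is a pixel-wise mean over co-registered frames, which suppresses uncorrelated noise roughly as √N. A minimal sketch (registration itself, done in ImageJ in the study, is assumed already applied; frames are flattened pixel lists):

```python
def average_frames(frames):
    """Pixel-wise mean of co-registered frames of equal size.
    For uncorrelated zero-mean noise, the SNR of the mean improves
    roughly as sqrt(len(frames))."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]
```

Skeleton metrics such as endpoint counts are then computed on the averaged image, where spurious breaks in vessels (false endpoints) are less frequent, matching the 51% endpoint reduction the study reports.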
s-wave scattering length of a Gaussian potential
Jeszenszki, Peter; Cherny, Alexander Yu.; Brand, Joachim
2018-04-01
We provide accurate expressions for the s-wave scattering length for a Gaussian potential well in one, two, and three spatial dimensions. The Gaussian potential is widely used as a pseudopotential in the theoretical description of ultracold-atomic gases, where the s-wave scattering length is a physically relevant parameter. We first describe a numerical procedure to compute the value of the s-wave scattering length from the parameters of the Gaussian, but find that its accuracy is limited in the vicinity of singularities that result from the formation of new bound states. We then derive simple analytical expressions that capture the correct asymptotic behavior of the s-wave scattering length near the bound states. Expressions that are increasingly accurate in wide parameter regimes are found by a hierarchy of approximations that capture an increasing number of bound states. The small number of numerical coefficients that enter these expressions is determined from accurate numerical calculations. The approximate formulas combine the advantages of the numerical and approximate expressions, yielding an accurate and simple description from the weakly to the strongly interacting limit.
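The kind of numerical procedure described can be sketched in a few lines: integrate the zero-energy radial equation u″(r) = 2V(r)u(r) (units ħ = m = 1) outward from u(0) = 0, u′(0) = 1, and read off a = R − u(R)/u′(R) at large R. The Gaussian parameterization V(r) = −V0 exp(−r²/2σ²) and the simple Verlet-style integrator below are my own illustrative choices, not the paper's code.

```python
import math

def scattering_length(V0, sigma, r_max=30.0, n=30000):
    """s-wave scattering length of V(r) = -V0*exp(-r^2/(2*sigma^2))
    from the zero-energy radial equation u'' = 2*V(r)*u (hbar = m = 1),
    integrated outward with a velocity-Verlet-style step; at large R
    the free solution u ~ (R - a)*u' gives a = R - u(R)/u'(R)."""
    h = r_max / n
    u, up, r = 0.0, 1.0, 0.0
    for _ in range(n):
        acc = 2.0 * (-V0 * math.exp(-r * r / (2 * sigma * sigma))) * u
        u += h * up + 0.5 * h * h * acc
        r += h
        acc_new = 2.0 * (-V0 * math.exp(-r * r / (2 * sigma * sigma))) * u
        up += 0.5 * h * (acc + acc_new)
    return r_max - u / up
```

As a sanity check, a vanishing well gives a = 0, and a weak attractive well gives a small negative scattering length (the Born limit), while a diverges each time V0 crosses the threshold for a new bound state, which is exactly where the paper's numerical procedure loses accuracy.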
Asymmetry of limbic structures (hippocampal formation and amygdaloid complex) in PTSD
Aida Sarač-Hadžihalilović
2003-05-01
Defining the exact anatomical substrate underlying neurological and psychiatric disorders has become the subject of intensive research interest. For this purpose it is important to apply structural and functional MRI techniques, which also serve to further illuminate the subject of this work, specifically PTSD. MRI provides the most sensitive volumetric measurement of the hippocampal formation and amygdaloid complex. The goal of this work was to investigate asymmetry of the hippocampal formation and amygdaloid complex in PTSD patients. The results showed that, on axial slices, the lengths of the left and right hippocampal formation were significantly asymmetric in all patients. On sagittal slices the left hippocampal formation was in many cases about 50% longer than the right. On coronal slices there were no significant differences in the proportion of patients with symmetric versus asymmetric hippocampal-formation width between the right and left sides. The difference in average hippocampal-formation volume between the right and left sides was not statistically significant for axial and coronal slices, but was significant for sagittal slices. Regarding the amygdaloid complex, patients with PTSD showed right-left asymmetry on axial and sagittal slices in all three measurements, most markedly on sagittal slices. The difference in average amygdaloid-complex length between the right and left sides was not statistically significant for any slice. Recent MRI studies have likewise shown reduced hippocampal volume in PTSD (Van der Kolk 1996; Pitman 1996; Bremner et al., 1995). We recommend the MRI-based assessment of asymmetry of the hippocampal formation and amygdaloid complex used in our study as a template for future research aimed at illuminating the anatomical function that is
Length scale for configurational entropy in microemulsions
Reiss, H.; Kegel, W.K.; Groenewold, J.
1996-01-01
In this paper we study the length scale that must be used in evaluating the mixing entropy in a microemulsion. The central idea involves the choice of a length scale in configuration space that is consistent with the physical definition of entropy in phase space. We show that this scale may be
Proofs of Contracted Length Non-covariance
Strel'tsov, V.N.
1994-01-01
Different proofs of contracted-length non-covariance are discussed. The approach based on establishing interval inconstancy (dependence on velocity) seems to be the most convincing one. It is stressed that the known non-covariance of the electromagnetic field energy and momentum of a moving charge (the '4/3 problem') is a direct consequence of contracted-length non-covariance. 8 refs
The length of the male urethra
Tobias. S. Kohler
2008-08-01
PURPOSE: Catheter-based medical devices are an important component of the urologic armamentarium. To our knowledge, there is no population-based data regarding normal male urethral length. We evaluated the length of the urethra in men with normal genitourinary anatomy undergoing either Foley catheter removal or standard cystoscopy. MATERIALS AND METHODS: Male urethral length was obtained in 109 men. After study permission was obtained, the subject's penis was placed on gentle stretch and the catheter was marked at the tip of the penis. The catheter was then removed and the distance from the mark to the beginning of the re-inflated balloon was measured. Alternatively, urethral length was measured at the time of cystoscopy, on removal of the cystoscope. Data on age, weight, and height were obtained in patients when possible. RESULTS: The mean urethral length was 22.3 cm with a standard deviation of 2.4 cm. Urethral length varied between 15 cm and 29 cm. No statistically significant correlation was found between urethral length and height, weight, body mass index (BMI), or age. CONCLUSIONS: Literature documenting the length of the normal male adult urethra is scarce. Our data add to basic anatomic information of the male urethra and may be used to optimize genitourinary device design.
Analysis of ureteral length in adult cadavers
Hugo F. F. Novaes
2013-04-01
Introduction: On some occasions, correlations between human structures can help in planning intra-abdominal surgical interventions. Prior determination of ureteral length aids pre-operative surgical planning, reduces the cost of auxiliary exams, supports the correct choice of double-J catheter (with lower morbidity and fewer symptoms) and promotes adequate adherence to treatment. Objective: To evaluate ureteral length in adult cadavers and to analyze its correlation with anthropometric measures. Materials and Methods: From April 2009 to January 2012 we determined the ureteral length of adult cadavers submitted to necropsy and obtained the following measures: height, distance from shoulder to wrist, elbow to wrist, xiphoid appendix to umbilicus, umbilicus to pubis, xiphoid appendix to pubis and between iliac spines. We analyzed the correlations between ureteral length and these anthropometric measures. Results: We dissected 115 ureters from 115 adult corpses. Median ureteral length did not vary between sexes or according to height. No correlation was observed between ureteral length and any of the considered anthropometric measures, either in the analyzed subgroups or in the general population. There were no significant differences between right and left ureteral measures. Conclusions: There is no difference in ureteral length in relation to height or gender. There is no significant correlation between ureteral length and the considered anthropometric measures.
Influence of mandibular length on mouth opening
Dijkstra, PU; Hof, AL; Stegenga, B; De Bont, LGM
Theoretically, mouth opening not only reflects the mobility of the temporomandibular joints (TMJs) but also the mandibular length. Clinically, the exact relationship between mouth opening, mandibular length, and mobility of TMJs is unclear. To study this relationship 91 healthy subjects, 59 women
Stubbs, Peter W; Walsh, Lee D; D'Souza, Arkiev; Héroux, Martin E; Bolsterlee, Bart; Gandevia, Simon C; Herbert, Robert D
2018-06-01
In reduced muscle preparations, the slack length and passive stiffness of muscle fibres have been shown to be influenced by previous muscle contraction or stretch. In human muscles, such behaviours have been inferred from measures of muscle force, joint stiffness and reflex magnitudes and latencies. Using ultrasound imaging, we directly observed that isometric contraction of the vastus lateralis muscle at short lengths reduces the slack lengths of the muscle-tendon unit and muscle fascicles. The effect is apparent 60 s after the contraction. These observations imply that muscle contraction at short lengths causes the formation of bonds which reduce the effective length of structures that generate passive tension in muscles. In reduced muscle preparations, stretch and muscle contraction change the properties of relaxed muscle fibres. In humans, effects of stretch and contraction on properties of relaxed muscles have been inferred from measurements of time taken to develop force, joint stiffness and reflex latencies. The current study used ultrasound imaging to directly observe the effects of stretch and contraction on muscle-tendon slack length and fascicle slack length of the human vastus lateralis muscle in vivo. The muscle was conditioned by (a) strong isometric contractions at long muscle-tendon lengths, (b) strong isometric contractions at short muscle-tendon lengths, (c) weak isometric contractions at long muscle-tendon lengths and (d) slow stretches. One minute after conditioning, ultrasound images were acquired from the relaxed muscle as it was slowly lengthened through its physiological range. The ultrasound image sequences were used to identify muscle-tendon slack angles and fascicle slack lengths. Contraction at short muscle-tendon lengths caused a mean 13.5 degree (95% CI 11.8-15.0 degree) shift in the muscle-tendon slack angle towards shorter muscle-tendon lengths, and a mean 5 mm (95% CI 2-8 mm) reduction in fascicle slack length, compared to the
Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John
2018-03-01
To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) error at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later.
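The study above compares methods via odds ratios of out-of-range examinations with 95% confidence intervals. A minimal sketch of that computation (Woolf's log-based interval; the counts below are made up for illustration, not taken from the study):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for the 2x2 table [[a, b], [c, d]] (e.g. out-of-range vs
    in-range exams for two dating methods), with Woolf's log-based CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 120/880 out-of-range for one method, 100/900 for the other.
or_, lo, hi = odds_ratio_ci(120, 880, 100, 900)
print(round(or_, 2))  # 1.23
```

An odds ratio whose interval excludes 1 would indicate a significant difference between the two methods, which is the form of the comparison reported above.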
Roentgenologic investigations for the anterior tooth length
Cho, Won Pyo; Ahn, Hyung Kyu [College of Dentistry, Seoul National University , Seoul (Korea, Republic of)
1972-11-15
The author measured the length of crown, root and tooth on the films which was taken by intraoral bisecting technic with mesh plate on the films. The films were taken from the dry skulls, dentiform, same patients who had to be removed their upper incisors, and the other patients who admitted for dental care. From this serial experiment the results were made as follows: 1. By using the film and mesh plate in the oral cavity, the real tooth length can be measured easily on the film surfaces. 2. The film distortion in the oral cavity can be avoided when taking the film using the mesh plate and film together. 3. When measuring the film, length of crown was elongated and length of root was shortened. 4. When using the well-trained bisecting technic, the real tooth length can be measured directly on the intraoral film.
Screening length in dusty plasma crystals
Nikolaev, V S; Timofeev, A V
2016-01-01
Particle interactions and the value of the screening length in dusty plasma systems are of great interest in the dusty plasma field. Three inter-particle potentials (the Debye potential, the Gurevich potential and the interaction potential in the weakly collisional regime) are used to solve the equilibrium equations for two dust particles suspended in a parabolic trap. The dependence of the inter-particle distance on screening length, trap parameter and particle charge is obtained. The functional form of the inter-particle distance dependence on ion temperature is investigated and compared with experimental data at 200-300 K in order to test the applicability of the potentials used to dusty plasma systems at room temperature. Preference is given to the Yukawa-type potential including effective values of particle charge and screening length. The estimated effective value of the screening length is 5-15 times larger than the Debye length. (paper)
Microcomputer system for controlling fuel rod length
Meyer, E.R.; Bouldin, D.W.; Bolfing, B.J.
1979-01-01
A system is being developed at the Oak Ridge National Laboratory (ORNL) to automatically measure and control the length of fuel rods for use in a high temperature gas-cooled reactor (HTGR). The system utilizes an LSI-11 microcomputer for monitoring fuel rod length and for adjusting the primary factor affecting length. Preliminary results indicate that the automated system can maintain fuel rod length within the specified limits of 1.940 ± 0.040 in. This system provides quality control documentation and eliminates the dependence of the current fuel rod molding process on manual length control. In addition, the microcomputer system is compatible with planned efforts to extend control to fuel rod fissile and fertile material contents.
Telomere length and early severe social deprivation: linking early adversity and cellular aging
Drury, SS; Theall, K; Gleason, MM; Smyke, AT; De Vivo, I; Wong, JYY; Fox, NA; Zeanah, CH; Nelson, CA
2012-01-01
Accelerated telomere length attrition has been associated with psychological stress and early adversity in adults; however, no studies have examined whether telomere length in childhood is associated with early experiences. The Bucharest Early Intervention Project is a unique randomized controlled trial of foster care placement compared with continued care in institutions. As a result of the study design, participants were exposed to a quantified range of time in institutional care, and represented an ideal population in which to examine the association between a specific early adversity, institutional care, and telomere length. We examined the association between average relative telomere length, measured as the telomere repeat copy number to single-gene copy number (T/S) ratio, and exposure to institutional care, quantified as the percentage of time at baseline (mean age 22 months) and at 54 months of age that each child lived in the institution. A significant negative correlation between T/S ratio and percentage of time was observed. Children with greater exposure to institutional care had significantly shorter relative telomere length in middle childhood. Gender modified this main effect. The percentage of time in institutional care at baseline significantly predicted telomere length in females, whereas the percentage of institutional care at 54 months was strongly predictive of telomere length in males. This is the first study to demonstrate an association between telomere length and institutionalization, the first study to find an association between adversity and telomere length in children, and it contributes to the growing literature linking telomere length and early adversity. PMID:21577215
A RED modified weighted moving average for soft real-time application
Domanśka Joanna
2014-09-01
The popularity of TCP/IP has resulted in increased use of best-effort networks for real-time communication. Much effort has been spent on ensuring quality of service for soft real-time traffic over IP networks. The Internet Engineering Task Force has proposed architecture components such as Active Queue Management (AQM). The paper investigates the influence of the weighted moving average on packet waiting time reduction for an AQM mechanism: the RED algorithm. The proposed method for computing the average queue length is based on a difference equation (a recursive equation). Depending on a particular optimality criterion, proper parameters of the modified weighted moving average function can be chosen. This change allows reducing the number of violations of timing constraints and makes better use of this mechanism for soft real-time transmissions. The optimization problem is solved through simulations performed in OMNeT++ and later verified experimentally on a Linux implementation
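The recursive difference equation behind RED's average queue length is the standard exponentially weighted moving average. A minimal sketch (the weight w and the sample queue lengths are illustrative values, not parameters from the paper):

```python
def red_avg(samples, w=0.002, avg0=0.0):
    """Recursive (EWMA) average queue length used by RED:
    avg_k = (1 - w) * avg_{k-1} + w * q_k."""
    avg = avg0
    out = []
    for q in samples:
        avg = (1.0 - w) * avg + w * q
        out.append(avg)
    return out

# A large weight makes the average track the instantaneous queue quickly:
print(red_avg([10, 10, 10], w=0.5))  # [5.0, 7.5, 8.75]
```

Smaller weights smooth out bursts (RED's usual regime), at the cost of responding more slowly; tuning w against an optimality criterion is exactly the trade-off the paper studies.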
Radon and radon daughters indoors, problems in the determination of the annual average
Swedjemark, G.A.
1984-01-01
The annual average concentration of radon and radon daughters in indoor air is required both in studies such as determining the collective dose to a population and for comparison with limits. For practical reasons, measurements are often carried out over a period shorter than a year. Methods are presented for estimating the uncertainties due to temporal variations in an annual average calculated from measurements carried out over sampling periods of various lengths. These methods have been applied to the results of long-term measurements of radon-222 in a few houses. The possibility of using correction factors to obtain a more adequate annual average has also been studied and some examples are given. (orig.)
Paulius Palevicius
2014-01-01
Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even harmonic excitation of a non-linear microsystem may result in unpredictable chaotic motion. Analytical relationships between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into the computational and experimental interpretation of time-averaged MEMS holograms.
Analytical expressions for conditional averages: A numerical test
Pécseli, H.L.; Trulsen, J.
1991-01-01
Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...
Experimental demonstration of squeezed-state quantum averaging
Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...
Zero-point length from string fluctuations
Fontanini, Michele; Spallucci, Euro; Padmanabhan, T.
2006-01-01
One of the leading candidates for quantum gravity, viz. string theory, has the following features incorporated in it. (i) The full spacetime is higher-dimensional, with (possibly) compact extra dimensions; (ii) there is a natural minimal length below which the concept of continuum spacetime needs to be modified by some deeper concept. On the other hand, the existence of a minimal length (zero-point length) in four-dimensional spacetime, with obvious implications as a UV regulator, has often been conjectured as a natural aftermath of any correct quantum theory of gravity. We show that one can incorporate the apparently unrelated pieces of information (zero-point length, extra dimensions, string T-duality) in a consistent framework. This is done in terms of a modified Kaluza-Klein theory that interpolates between (high-energy) string theory and (low-energy) quantum field theory. In this model, the zero-point length in four dimensions is a 'virtual memory' of the length scale of the compact extra dimensions. Such a scale turns out to be determined by T-duality inherited from the underlying fundamental string theory. From a low-energy perspective, short-distance infinities are cut off by a minimal length which is proportional to the square root of the string slope, i.e., √α′. Thus, we bridge the gap between the string theory domain and the low-energy arena of point-particle quantum field theory.
Penile length and circumference: an Indian study.
Promodu, K; Shanmughadas, K V; Bhat, S; Nair, K R
2007-01-01
Apprehension about the normal size of the penis is a major concern for men. The aim of the present investigation is to estimate the penile length and circumference of Indian males and to compare the results with data from other countries. The results will help in counseling patients worried about penile size and seeking penis enlargement surgery. Penile length in flaccid and stretched conditions and circumference were measured in a group of 301 physically normal men. Erect length and circumference were measured for 93 subjects. Mean flaccid length was found to be 8.21 cm, mean stretched length 10.88 cm and circumference 9.14 cm. Mean erect length was 13.01 cm and erect circumference 11.46 cm. Penile dimensions were found to be correlated with anthropometric parameters. Insight into the normative data of penile size of Indian males was obtained. There are significant differences in the mean penile length and circumference of the Indian sample compared to data reported from other countries. The study needs to be continued with a large sample to establish normative data applicable to the general population.
The flattening of the average potential in models with fermions
Bornholdt, S.
1993-01-01
The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)
20 CFR 404.220 - Average-monthly-wage method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...
A time-averaged cosmic ray propagation theory
Klimas, A.J.
1975-01-01
An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de
7 CFR 51.2561 - Average moisture content.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...
Averaging in SU(2) open quantum random walk
Ampadu Clement
2014-01-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT
Automatic Control Of Length Of Welding Arc
Iceland, William F.
1991-01-01
Nonlinear relationships among current, voltage, and length stored in electronic memory. Conceptual microprocessor-based control subsystem maintains constant length of welding arc in gas/tungsten arc-welding system, even when welding current varied. Uses feedback of current and voltage from welding arc. Directs motor to set position of torch according to previously measured relationships among current, voltage, and length of arc. Signal paths marked "calibration" or "welding" used during those processes only. Other signal paths used during both processes. Control subsystem added to existing manual or automatic welding system equipped with automatic voltage control.
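The control idea described above — map measured current and voltage to arc length via stored calibration data, then drive the torch toward the set point — can be sketched as follows. The nearest-neighbour lookup and proportional gain are illustrative simplifications, not the subsystem's actual scheme, and the calibration values are hypothetical:

```python
def arc_length(current, voltage, table):
    """Estimate arc length from measured (current, voltage) using a stored
    calibration table {(I, V): length}; nearest neighbour for illustration."""
    _, best = min(
        table.items(),
        key=lambda kv: (kv[0][0] - current) ** 2 + (kv[0][1] - voltage) ** 2)
    return best

def torch_correction(measured, target, gain=0.5):
    """Proportional command moving the torch to restore the target length."""
    return gain * (target - measured)

calib = {(100, 10.0): 2.0, (100, 12.0): 3.0}  # hypothetical calibration points
est = arc_length(100, 11.8, calib)            # nearest calibrated point -> 3.0
print(torch_correction(est, 2.0))             # -0.5 (retract toward target)
```

In the real subsystem the calibration data are gathered during the "calibration" process and reused during "welding"; this sketch only shows why the stored current-voltage-length relationship is sufficient to close the loop.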
Bunch Length Measurements in SPEAR3
Corbett, W.J.; Fisher, A.; Huang, X.; Safranek, J.; Sebek, J.; /SLAC; Lumpkin, A.; /Argonne; Sannibale, F.; /LBL, Berkeley; Mok, W.; /Unlisted
2007-11-28
A series of bunch length measurements were made in SPEAR3 for two different machine optics. In the achromatic optics the bunch length increases from the low-current value of 16.6 ps rms to about 30 ps at 25 mA/bunch, yielding an inductive impedance of -0.17 Ω. Reducing the momentum compaction factor by a factor of ~60 [1] yields a low-current bunch length of ~4 ps rms. In this paper we review the experimental setup and results.
OPERATIONAL ANALYSIS OF MECHANICAL CUT-TO-LENGTH FOREST HARVESTING SYSTEM
Nilton Cesar Fiedler
2017-08-01
The objective of this research was to conduct an operational analysis of forest harvesting activities in a mechanized cut-to-length system in eucalypt plantations in the south of Bahia, in order to determine the distribution of operation times, productivity, operational efficiency and mechanical availability of two harvester models and two forwarder models, evaluating these machines in three harvesting modules through time and motion studies. Auxiliary activities accounted for the lowest percentages of operating time (mean 1.9% for the harvester and 1.8% for the forwarder), while operational activities had the highest percentages. The first shift presented the worst operational results for the harvester (average 66.3%) and the third shift for the forwarder (55.5%). For the harvester, module 1 showed the best productive-time result (average 70.36%); for the forwarder, the same module showed the worst unproductive-time result (average 22.17%). Mechanical availability and productivity were higher for the forwarder (mean 82.31% and 51.33 m3/h, respectively), while the harvester was superior in degree of utilization and operational efficiency (average 85.01% and 66.41%, respectively).
Galliero, Guillaume; Medvedev, Oleg; Shapiro, Alexander
2005-01-01
A 322 (2004) 151). In the current study, a fast molecular dynamics scheme has been developed to determine the values of the penetration lengths in Lennard-Jones binary systems. Results deduced from computations provide a new insight into the concept of penetration lengths. It is shown for four different...... fluctuation theory and molecular dynamics scheme exhibit consistent trends and average deviations from experimental data around 10-20%. (c) 2004 Elsevier B.V. All rights reserved....
Li, Bo; Ling, Zongcheng; Zhang, Jiang; Chen, Jian; Ni, Yuheng; Liu, Chunli
2018-04-01
Wrinkle ridges are complex thrust faults commonly found in lunar mare basalts, caused by compressional stresses of both local basin and global origin. In this paper, we select 59 single wrinkle ridges in Mare Serenitatis and 39 in Mare Tranquillitatis according to the WAC mosaic image. For each wrinkle ridge, several topographic profiles near its midpoint are generated to measure its height and maximum displacement (Dmax) from LOLA DEM data. We then make 2D displacement-length (L) plots for the ridge populations in the two maria. The Dmax/L ratios (γ) are derived by a linear fit to the D-L data. The γ value of ridges in Mare Tranquillitatis (2.13 × 10^-2) is higher than that of ridges in Mare Serenitatis (1.73 × 10^-2). Finally, the contractional strains (ε) in Mare Serenitatis and Mare Tranquillitatis are estimated to be ~0.36% and 0.14%, respectively (assuming a fault plane dip θ of 25°). The free-air gravity anomalies in Mare Serenitatis, ranging from 78 to 358 mGal, are higher than those in Mare Tranquillitatis, which range from -70 to 120 mGal. The average thickness of basalts in Mare Tranquillitatis is 400 m, while that in Mare Serenitatis is 798 m. Moreover, the average age of the ridge group in Mare Serenitatis is greater than that of the ridges in Mare Tranquillitatis, so the formation of the ridge group in Mare Serenitatis took longer than that in Mare Tranquillitatis. We therefore think that the higher gravity anomalies, thicker basaltic units and longer formation time of the wrinkle ridges in Mare Serenitatis may result in the higher contractional strain, although the Tranquillitatis basin formed earlier than the Serenitatis basin.
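The Dmax-L ratio γ described above comes from a linear fit of maximum displacement against ridge length. A minimal sketch of a zero-intercept least-squares fit, with synthetic ridge data (not the paper's measurements), plus the horizontal component of displacement for an assumed fault dip:

```python
import math

def dmax_l_ratio(lengths, dmaxes):
    """Zero-intercept least-squares fit of Dmax = gamma * L:
    gamma = sum(L * D) / sum(L**2)."""
    num = sum(l * d for l, d in zip(lengths, dmaxes))
    den = sum(l * l for l in lengths)
    return num / den

def horizontal_shortening(dmax, dip_deg=25.0):
    """Horizontal component of fault displacement for dip theta: D * cos(theta)."""
    return dmax * math.cos(math.radians(dip_deg))

# Synthetic ridges following D = 0.02 * L exactly (illustration only):
L = [10e3, 20e3, 40e3]       # ridge lengths, m
D = [0.02 * l for l in L]    # maximum displacements, m
print(round(dmax_l_ratio(L, D), 6))  # 0.02
```

With real data the fit is scattered and the choice of fit (free vs zero intercept) matters; this sketch only shows the arithmetic behind a γ value and the role of the assumed 25° dip in converting Dmax to shortening.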
Fassott, G.; Henseler, Jörg; Cooper, C.; Lee, N.; Farrell, A.
2015-01-01
When using measurement models with multiple indicators, researchers need to decide about the epistemic relationship between the latent variable and its indicators. In this article, we describe the nature, the estimation, the characteristics, and the validity assessment of formative measurement
Efficacy of spatial averaging of infrasonic pressure in varying wind speeds
DeWolf, Scott; Walker, Kristoffer T.; Zumberge, Mark A.; Denis, Stephane
2013-01-01
Wind noise reduction (WNR) is important in the measurement of infrasound. Spatial averaging theory led to the development of rosette pipe arrays. The efficacy of rosettes decreases with increasing wind speed, and a maximum size limitation caps the achievable WNR at 20 dB. An Optical Fiber Infrasound Sensor (OFIS) reduces wind noise by instantaneously averaging infrasound along the sensor's length. In this study two experiments quantify the WNR achieved by rosettes and OFISs of various sizes and configurations. Specifically, it is shown that the WNR for a circular OFIS 18 m in diameter is the same as that of a collocated 32-inlet pipe array of the same diameter. However, linear OFISs ranging in length from 30 to 270 m provide a WNR of up to 30 dB in winds up to 5 m/s. The measured WNR is a logarithmic function of the OFIS length and depends on the orientation of the OFIS with respect to wind direction. OFISs oriented parallel to the wind direction achieve 4 dB greater WNR than those oriented perpendicular to the wind. Analytical models for the rosette and OFIS are developed that predict the general observed relationships between wind noise reduction, frequency, and wind speed. (authors)
The benefits of longer fuel cycle lengths
Kesler, D.C.
1986-01-01
Longer fuel cycle lengths have been found to increase generation and improve outage management. A study at Duke Power Company has shown that longer fuel cycles offer both increased scheduling flexibility and increased capacity factors
Atomic frequency-time-length standards
Gheorghiu, O.C.; Mandache, C.
1987-01-01
The principles of operation of atomic frequency-time-length standards and their principal characteristics are described. The role of quartz crystal oscillators, which are slaved to active or passive standards, is presented. (authors)
The analysis of projected fission track lengths
Laslett, G.M.; Galbraith, R.F.; Green, P.F.
1994-01-01
This article deals with the question of how features of the thermal history can be estimated from projected track length measurements, i.e. lengths of the remaining parts of tracks that have intersected a surface, projected onto that surface. The appropriate mathematical theory is described and used to provide a sound basis both for understanding the nature of projected length measurements and for analysing observed data. The estimation of thermal history parameters corresponding to the current temperature, the maximum palaeotemperature and the time since cooling, is studied using laboratory data and simulations. In general the information contained in projected track lengths and angles is fairly limited, compared, for example, with that from a much smaller number of confined tracks, though we identify some circumstances when such measurements may be useful. Also it is not straightforward to extract the information and simple ad hoc estimation methods are generally inadequate. (author)
Complementary DNA-amplified fragment length polymorphism ...
Complementary DNA-amplified fragment length polymorphism (AFLP-cDNA) analysis of differential gene expression from the xerophyte Ammopiptanthus mongolicus in response to cold, drought and cold together with drought.
Impedance of finite length resistive cylinder
S. Krinsky
2004-11-01
We determine the impedance of a cylindrical metal tube (resistor) of radius a, length g, and conductivity σ attached at each end to perfect conductors of semi-infinite length. Our main interest is in the asymptotic behavior of the impedance at high frequency (k≫1/a). In the equilibrium regime, ka^{2}≪g, the impedance per unit length is accurately described by the well-known result for an infinite length tube with conductivity σ. In the transient regime, ka^{2}≫g, where the contribution of transition radiation arising from the discontinuity in conductivity is important, we derive an analytic expression for the impedance and compute the short-range wakefield. The analytic results are shown to agree with numerical evaluation of the impedance.
Characteristic length of the knotting probability revisited
Uehara, Erica; Deguchi, Tetsuo
2015-01-01
We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate, through simulation, the probability that a generated SAP with N segments has a given knot K. We call this the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is the exponentially decaying part exp(−N/N_K), where the estimates of the parameter N_K are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius r_ex, i.e. the screening length of double-stranded DNA. (paper)
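Extracting a characteristic length N_K from simulated knotting probabilities amounts to fitting the decay rate of ln P against N. A minimal sketch on synthetic data (the decay form exp(−N/N_K) is the one quoted above; the numbers are illustrative, not simulation output):

```python
import math

def fit_characteristic_length(Ns, probs):
    """Least-squares slope of ln P vs N; for P ~ exp(-N/N_K),
    slope = -1/N_K, so N_K = -1/slope."""
    n = len(Ns)
    xbar = sum(Ns) / n
    ys = [math.log(p) for p in probs]
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(Ns, ys))
             / sum((x - xbar) ** 2 for x in Ns))
    return -1.0 / slope

# Synthetic probabilities decaying with N_K = 300:
Ns = [100, 200, 400, 800]
ps = [math.exp(-N / 300.0) for N in Ns]
print(round(fit_characteristic_length(Ns, ps), 6))  # 300.0
```

Real simulation data also carry an algebraic prefactor in P, which shifts the intercept of the log-linear fit but not the slope, which is why the exponential part dominates the estimate of N_K at large N.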
Chord length distribution for a compound capsule
Pitřík, Pavel
2017-01-01
Chord length distribution is an important factor in the calculation of ionisation chamber responses. This article describes Monte Carlo calculations of the chord length distribution for a non-convex compound capsule. A Monte Carlo code was set up for the generation of random chords and the calculation of their lengths, based on the input number of generations and the cavity dimensions. The code was written in JavaScript and can be executed in the majority of HTML viewers. The plot of the occurrence of chords of different lengths has 3 peaks. It was found that the compound capsule cavity cannot simply be replaced with a spherical cavity of a triangular design. Furthermore, the compound capsule cavity is directionally dependent, which must be taken into account in calculations involving non-isotropic fields of primary particles in the beam, unless equilibrium of the secondary charged particles is attained. (orig.)
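As a hedged illustration of the Monte Carlo approach (in Python rather than the paper's JavaScript, and for a plain sphere rather than the compound capsule): chords are generated from pairs of uniform random surface points, and the sample mean can be checked against the known value 4r/3 for this chord measure.

```python
import math
import random

def random_surface_point(r=1.0):
    """Uniform random point on a sphere of radius r."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (r * s * math.cos(phi), r * s * math.sin(phi), r * z)

def chord_lengths(n, r=1.0):
    """Chords defined by pairs of random surface points."""
    out = []
    for _ in range(n):
        out.append(math.dist(random_surface_point(r), random_surface_point(r)))
    return out

random.seed(1)
mean = sum(chord_lengths(100_000)) / 100_000
print(round(mean, 3))  # close to 4/3 for a unit sphere
```

For the non-convex compound capsule the paper studies, the chord generation and intersection tests are more involved and the distribution is multi-peaked, so this sphere sketch is only a sanity-check template, not a substitute for the actual cavity geometry.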
Study on the Connecting Length of CFRP
Liu, Xiongfei; Li, Yue; Li, Zhanguo
2018-05-01
The paper studied the variation of shear stress in the connecting zone of CFRP. Using epoxy resin (EP) as the bonding material, the performance of specimens with different connecting lengths of CFRP was tested. A CFRP-confined concrete column was tested subsequently to verify the conclusion. The results show that: (1) the bonding properties of the modified epoxy resin with CFRP are good; (2) as the connecting length increased, the ultimate tensile strength of CFRP increased as well, within the range of the experimental parameters; (3) the tensile strength of CFRP reaches the ultimate strength when the connecting length is 90 mm; (4) a connecting length of 90 mm of CFRP meets the reinforcement requirements.
Fragment Length of Circulating Tumor DNA.
Underhill, Hunter R; Kitzman, Jacob O; Hellwig, Sabine; Welker, Noah C; Daza, Riza; Baker, Daniel N; Gligorich, Keith M; Rostomily, Robert C; Bronner, Mary P; Shendure, Jay
2016-07-01
Malignant tumors shed DNA into the circulation. The transient half-life of circulating tumor DNA (ctDNA) may afford the opportunity to diagnose, monitor recurrence, and evaluate response to therapy solely through a non-invasive blood draw. However, detecting ctDNA against the normally occurring background of cell-free DNA derived from healthy cells has proven challenging, particularly in non-metastatic solid tumors. In this study, distinct differences in fragment length size between ctDNAs and normal cell-free DNA are defined. Human ctDNA in rat plasma derived from human glioblastoma multiforme stem-like cells in the rat brain and human hepatocellular carcinoma in the rat flank were found to have a shorter principal fragment length than the background rat cell-free DNA (134-144 bp vs. 167 bp, respectively). Subsequently, a similar shift in the fragment length of ctDNA in humans with melanoma and lung cancer was identified compared to healthy controls. Comparison of fragment lengths from cell-free DNA between a melanoma patient and healthy controls found that the BRAF V600E mutant allele occurred more commonly at a shorter fragment length than the fragment length of the wild-type allele (132-145 bp vs. 165 bp, respectively). Moreover, size-selecting for shorter cell-free DNA fragment lengths substantially increased the EGFR T790M mutant allele frequency in human lung cancer. These findings provide compelling evidence that experimental or bioinformatic isolation of a specific subset of fragment lengths from cell-free DNA may improve detection of ctDNA.
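The in-silico size-selection idea can be illustrated with a toy fragment pool (the numbers below merely echo the reported 134-144 bp vs. 167 bp peaks; they are not the study's data, and the helper name is hypothetical):

```python
def allele_fraction(fragments):
    """Fraction of fragments carrying the mutant allele."""
    return sum(1 for _, mutant in fragments if mutant) / len(fragments)

# Hypothetical pool: tumor-derived (mutant) fragments near 140 bp, healthy
# background near 167 bp, plus some short wild-type fragments so the
# enrichment is not trivially perfect.
pool = ([(140, True)] * 20      # ctDNA, short principal fragment length
        + [(167, False)] * 160  # background cell-free DNA
        + [(145, False)] * 20)  # short wild-type fragments

before = allele_fraction(pool)
selected = [f for f in pool if f[0] < 150]  # in-silico size selection
after = allele_fraction(selected)
print(before, after)  # → 0.1 0.5
```

Selecting the shorter fragments raises the mutant allele fraction, which is the enrichment effect the study reports for EGFR T790M.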
Electron Effective-Attenuation-Length Database
SRD 82 NIST Electron Effective-Attenuation-Length Database (PC database, no charge). This database provides values of electron effective attenuation lengths (EALs) in solid elements and compounds at selected electron energies between 50 eV and 2,000 eV. The database was designed mainly to provide EALs (to account for effects of elastic-electron scattering) for applications in surface analysis by Auger-electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS).
Length and coverage of inhibitory decision rules
Alsolami, Fawaz
2012-01-01
The authors present algorithms for optimization of inhibitory rules relative to length and coverage. Inhibitory rules have a relation "attribute ≠ value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. The paper also contains a comparison of the length and coverage of inhibitory rules constructed by a greedy algorithm and by the dynamic programming algorithm. © 2012 Springer-Verlag.
The SME gauge sector with minimum length
Belich, H.; Louzada, H.L.C. [Universidade Federal do Espirito Santo, Departamento de Fisica e Quimica, Vitoria, ES (Brazil)
2017-12-15
We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory. (orig.)
Correlation between length and tilt of lipid tails
Kopelevich, Dmitry I., E-mail: dkopelevich@che.ufl.edu [Department of Chemical Engineering, University of Florida, Gainesville, Florida 32611 (United States); Nagle, John F., E-mail: nagle@cmu.edu [Department of Physics, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 (United States)
2015-10-21
It is becoming recognized from simulations, and to a lesser extent from experiment, that the classical Helfrich-Canham membrane continuum mechanics model can be fruitfully enriched by the inclusion of molecular tilt, even in the fluid, chain disordered, biologically relevant phase of lipid bilayers. Enriched continuum theories then add a tilt modulus κ{sub θ} to accompany the well recognized bending modulus κ. Different enrichment theories largely agree for many properties, but it has been noticed that there is considerable disagreement in one prediction; one theory postulates that the average length of the hydrocarbon chain tails increases strongly with increasing tilt and another predicts no increase. Our analysis of an all-atom simulation favors the latter theory, but it also shows that the overall tail length decreases slightly with increasing tilt. We show that this deviation from continuum theory can be reconciled by consideration of the average shape of the tails, which is a descriptor not obviously includable in continuum theory.
Mobile Stride Length Estimation With Deep Convolutional Neural Networks.
Hannink, Julius; Kautz, Thomas; Pasluosta, Cristian F; Barth, Jens; Schülein, Samuel; Gaßmann, Karl-Günter; Klucken, Jochen; Eskofier, Bjoern M
2018-03-01
Accurate estimation of spatial gait characteristics is critical to assess motor impairments resulting from neurological or musculoskeletal disease. Currently, however, methodological constraints limit the clinical applicability of state-of-the-art double integration approaches to gait patterns with a clear zero-velocity phase. We describe a novel approach to stride length estimation that uses deep convolutional neural networks to map stride-specific inertial sensor data to the resulting stride length. The model is trained on a publicly available and clinically relevant benchmark dataset consisting of 1220 strides from 101 geriatric patients. Evaluation is done in a tenfold cross validation and for three different stride definitions. Even though best results are achieved with strides defined from midstance to midstance with average accuracy and precision of , performance does not strongly depend on stride definition. The achieved precision outperforms state-of-the-art methods evaluated on the same benchmark dataset by . Due to the independence of stride definition, the proposed method is not subject to the methodological constraints that limit the applicability of state-of-the-art double integration methods. Furthermore, it was possible to improve precision on the benchmark dataset. With more precise mobile stride length estimation, new insights into the progression of neurological disease or early indications might be gained. Due to the independence of stride definition, diseases previously uncharted in terms of mobile gait analysis can now be investigated by retraining and applying the proposed method.
Formation of topological defects
Vachaspati, T.
1991-01-01
We consider the formation of point and line topological defects (monopoles and strings) from a general point of view by allowing the probability of formation of a defect to vary. To investigate the statistical properties of the defects at formation we give qualitative arguments that are independent of any particular model in which such defects occur. These arguments are substantiated by numerical results in the case of strings and for monopoles in two dimensions. We find that the network of strings at formation undergoes a transition at a certain critical density below which there are no infinite strings and the closed-string (loop) distribution is exponentially suppressed at large lengths. The results are contrasted with the results of statistical arguments applied to a box of strings in dynamical equilibrium. We argue that if point defects were to form with smaller probability, the distance between monopoles and antimonopoles would decrease while the monopole-to-monopole distance would increase. We find that monopoles are always paired with antimonopoles but the pairing becomes clean only when the number density of defects is small. A similar reasoning would also apply to other defects.
Average and local structure of selected metal deuterides
Soerby, Magnus H.
2005-07-01
deuterides at 1 bar D2 and elevated temperatures (373-573 K) is presented in Paper 1. Deuterium atoms occupy chiefly three types of tetrahedral interstitial sites; two coordinated by 4 Zr atoms and one coordinated by 3 Zr and 1 Ni atoms. The site preference is predominantly ruled by sample composition and less by temperature. On the other hand, the spatial deuterium distribution among the preferred sites is strongly temperature dependent, as the long-range correlations break down on heating. The sample is fully decomposed into tetragonal ZrD2 and Zr7Ni10 at 873 K. Th2AlD4 was the only metal deuteride with a reported D-D separation substantially below 2 Å (1.79 Å) prior to the discovery of RENiInD1.33. However, as the first ternary deuteride ever studied by PND, the original structure solution was based on very low-resolution data. The present reinvestigation (Paper 2) shows that the site preference was correctly determined, but the deuterium atoms are slightly shifted compared to the earlier report, now yielding acceptable interatomic separations. Solely Th4 tetrahedra are occupied in the various Th2Al deuterides. Th8Al4D11 (Th2AlD2.75) takes a superstructure with a tripled c-axis due to deuterium ordering. Th2AlD2.3 is disordered and the average distance between partly occupied sites appears as just 1.55 Å in Rietveld refinements. However, short-range order is expected to prevent D-D distances under 2 Å. Paper 3 presents the first Reverse Monte Carlo (RMC) study of a metal deuteride. RMC is used in combination with total neutron scattering to model short-range deuterium correlations in disordered c-VD0.77. A practically complete blocking of interstitial sites closer than 2 Å from any occupied deuterium site is observed. The short-range correlations resemble those of the fully ordered low-temperature phase c-VD0.75 at length scales up to about 3 Å, i.e. for the first two coordination spheres. Paper 4 concerns RMC modelling of short-range deuterium correlations in ZrCr2D4
Exact run length distribution of the double sampling x-bar chart with estimated process parameters
Teoh, W. L.
2016-05-01
Since the run length distribution is generally highly skewed, a significant concern about focusing too much on the average run length (ARL) criterion is that we may miss some crucial information about a control chart's performance. Thus it is important to investigate the entire run length distribution of a control chart for an in-depth understanding before implementing the chart in process monitoring. In this paper, the percentiles of the run length distribution for the double sampling (DS) X̄ chart with estimated process parameters are computed. Knowledge of the percentiles of the run length distribution provides a more comprehensive understanding of the expected behaviour of the run length. This additional information includes the early false alarm, the skewness of the run length distribution, and the median run length (MRL). A comparison of the run length distribution between the optimal ARL-based and MRL-based DS X̄ charts with estimated process parameters is presented in this paper. Examples of applications are given to aid practitioners in selecting the best design scheme of the DS X̄ chart with estimated process parameters, based on their specific purpose.
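Why percentiles add information beyond the ARL is easiest to see in the simplest case: a chart whose run length is geometric (the DS chart's actual run-length distribution is more involved, so this is only a sketch). With a textbook 3-sigma false-alarm probability per sample, the median run length sits well below the ARL, reflecting the skewness the abstract describes:

```python
import math

def run_length_percentile(p, q):
    """Smallest n with P(RL <= n) = 1 - (1-p)^n >= q, for a run length
    that is geometric with per-sample signal probability p."""
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - p))

p = 0.0027                            # false-alarm rate of a 3-sigma chart
arl = 1.0 / p                         # average run length
mrl = run_length_percentile(p, 0.5)   # median run length
p10 = run_length_percentile(p, 0.1)   # 10th percentile: early false alarms
print(round(arl), mrl, p10)  # → 370 257 39
```

Ten percent of in-control runs signal within 39 samples even though the ARL is about 370, which is exactly the "early false alarm" information the percentiles expose.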
Simulations of rf-driven sheath formation in two dimensions
Riyopoulos, S.; Grossmann, W.; Drobot, A.; Kress, M.
1992-01-01
The results from two-dimensional particle simulations of sheath formation around periodic metal arrays placed inside magnetized plasmas and driven by oscillating voltages are reported. The main goal is the modeling of the plasma interaction with the Faraday bars surrounding the antennas during ion cyclotron tokamak heating. The study of the time-averaged potentials shows that the two-dimensional sheath structure depends on both the sheath length-to-thickness ratio and the inclination of the magnetic lines. The equipotential surfaces form closed, nested cells between adjacent bars. When the magnetic lines are nearly perpendicular to the potential gradients, the ion motion is dominated by the ExB drift, and ion streamlines form vortices around the equipotentials. At larger inclinations of the magnetic lines, the flow decouples from the equipotentials and ion transport is mainly along the potential gradients. The critical angle for the transition from vortex circulation to field aligned flow is computed. The effects of the cross-field ion transport on the sheath properties are discussed. It is shown that the sheath length and the magnetic line inclination affect the sheath scaling in the two-dimensional case. The one-dimensional theory results are recovered in the limit of high length-to-thickness ratio and large inclination of the magnetic lines
A Method for Determining Skeletal Lengths from DXA Images
Fogelman Ignac
2007-11-01
Background: Skeletal ratios and bone lengths are widely used in anthropology and forensic pathology, and hip axis length is a useful predictor of fracture. The aim of this study was to show that skeletal ratios, such as the length of the femur to height, can be accurately measured from a DXA (dual-energy X-ray absorptiometry) image. Methods: 90 normal Caucasian females, 18–80 years old, with whole-body DXA data were used as subjects. Two methods, linear pixel count (LPC) and reticule and ruler (RET), were used to measure skeletal sizes on DXA images and compared with real clinical measures from 20 subjects and 20 X-rays of the femur and tibia taken in 2003. Results: Although both methods were highly correlated, the LPC inter- and intra-observer error was lower at 1.6% compared to that of RET at 2.3%. Both methods correlated positively with real clinical measures, with LPC having a marginally stronger correlation coefficient (r² = 0.94; r² = 0.84; average r² = 0.89) than RET (r² = 0.86; r² = 0.84; average r² = 0.85) with X-rays and real measures respectively. Also, the time taken to use LPC was half that of RET, at 5 minutes per scan. Conclusion: Skeletal ratios can be accurately and precisely measured from DXA total-body scan images. The LPC method is easy to use and relatively rapid. This new phenotype will be useful for osteoporosis research, whether for individuals or for large-scale epidemiological or genetic studies.
Two independent measurements of Debye lengths in doped nonpolar liquids.
Prieve, D C; Hoggard, J D; Fu, R; Sides, P J; Bethea, R
2008-02-19
Electric current measurements were performed between 2.5 cm x 7.5 cm parallel-plate electrodes separated by 1.2 mm of heptane doped with 0-15% w/w poly(isobutylene succinimide) (PIBS) having a molecular weight of about 1700. The rapid (microsecond) initial charging of the capacitor can be used to infer the dielectric constant of the solution. The much slower decay of current arising from the polarization of electrodes depends on the differential capacitance of the diffuse clouds of charge carriers accumulating next to each electrode and on the ohmic resistance of the fluid. Using the Gouy-Chapman model for the differential capacitance, Debye lengths of 80-600 nm were deduced that decrease with increasing concentration of PIBS. Values of the Debye lengths were confirmed by performing independent measurements of double-layer repulsion between a 6 μm polystyrene (PS) latex sphere and a PS-coated glass plate using total internal reflection microscopy in the same solutions. The charge carriers appear to be inverted PIBS micelles having apparent Stokes diameters of 20-40 nm. Dynamic light scattering reveals a broad distribution of sizes having an intensity-averaged diameter of 15 nm. This smaller size might arise (1) from overestimating the electrophoretic mobility of micelles by treating them as point charges or (2) because charged micelles are larger on average than uncharged micelles. When Faradaic reactions and zeta potentials on the electrodes can be neglected, such current versus time experiments yield values for the Debye length and ionic strength with less effort than force measurements. To obtain the concentration of charge carriers from measurements of conductivity, the mobility of the charge carriers must be known.
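The Debye lengths quoted above follow from the standard screening-length formula for a symmetric electrolyte; the sketch below evaluates it in SI units (the carrier densities are illustrative assumptions, not the paper's fitted values):

```python
import math

def debye_length(eps_r, temp_k, n_carriers_m3, z=1):
    """Debye screening length for a symmetric z:z electrolyte (SI units)."""
    eps0 = 8.854187817e-12   # vacuum permittivity, F/m
    kb = 1.380649e-23        # Boltzmann constant, J/K
    q = 1.602176634e-19      # elementary charge, C
    return math.sqrt(eps_r * eps0 * kb * temp_k /
                     (2.0 * n_carriers_m3 * (z * q) ** 2))

# Heptane-like medium (eps_r about 1.92); carrier densities are assumptions
for n in (1e19, 1e20, 1e21):
    lam = debye_length(1.92, 298.0, n)
    print(f"n = {n:.0e} m^-3 -> Debye length = {lam * 1e9:.0f} nm")
```

The inverse-square-root dependence on carrier density reproduces the reported trend of Debye lengths decreasing with increasing PIBS concentration.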
Kim, Seung Hoi; Shin, Ueon Sang [Dankook University, Cheonan (Korea, Republic of)
2016-10-15
Tunable cutting of multi-walled carbon nanotubes (CNTs) using a high-pressure homogenizer and/or an HNO{sub 3}/H{sub 2}SO{sub 4} solution was accomplished, resulting in the production of short CNTs with a minimum length of 35 nm. Field emission scanning electron microscopy (FE-SEM) and Zeta sizer analysis showed significant reduction of CNT length from this tunable cutting (e.g. from long and entangled pristine CNTs at about 20 μm to ≥1000 nm, ~400 nm, ~200 nm, and ~100 nm via high-pressure jet-spraying cutting within 5 h, while the chemical cutting process, despite much longer times (48 h), showed a reduction only to about 1000 nm). When a CNT sample of average 1000 nm length previously shortened by HNO{sub 3}/H{sub 2}SO{sub 4} was subjected to the high-pressure jet-spraying cutting process, the reduction progressed faster (≤1 h), producing ≥35 nm. Fourier transform infrared spectra and thermogravimetric analysis (TGA) indicated restricted formation of hydrophilic functional groups such as carboxylic groups and hydroxyl groups in the high-pressure jet-spraying cutting, whereas an intensive formation of hydrophilic functional groups on the surface of shortened CNT samples was found after chemical cutting. Such short CNT samples would fulfill the requirements for carbonaceous materials with various lengths between small spheroidal fullerenes and long CNTs. The short CNTs produced are promising for scientific and technological applications in many fields such as electronics, diagnostics, pharmaceuticals, biomedical engineering, and environmental or energy industries.
Anufriieva, Elena V; Shadrin, Nickolai V
2014-03-01
Arctodiaptomus salinus inhabits water bodies across Eurasia and North Africa. Based on our own data and data from the literature, we analyzed the influence of several factors on the intra- and inter-population variability of this species. A strong negative linear correlation between temperature and average body size was found in the Crimean and African populations, in which the parameters might be influenced by salinity. Meanwhile, a significant negative correlation between female body size and the altitude of habitats was found by comparing body size in populations from different regions. Individuals from environments with highly varying abiotic parameters, e.g. temporary reservoirs, had a larger body size than individuals from permanent water bodies. Average body mass varied among populations by a factor of 11.4, whereas individual metabolic activities varied by a factor of 6.2. Moreover, two size groups of A. salinus were observed in the Crimean and Siberian lakes. The ratio of female length to male length fluctuated between 1.02 and 1.30. The average size of A. salinus in populations, and its variations, were determined by both genetic and environmental factors; however, the relative contributions of these factors were unequal on spatial and temporal scales.
Averaging and sampling for magnetic-observatory hourly data
J. J. Love
2010-11-01
A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
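The two standard hourly-value types are simple to construct from 1-min data; the sketch below does so for a synthetic series (the signal period is chosen to make the trade-off visible and is not observatory data):

```python
import math

def hourly_spot(minute_values):
    """Instantaneous sample at the top of each hour."""
    return minute_values[::60]

def hourly_boxcar(minute_values):
    """Simple 1-h average of 60 consecutive 1-min values."""
    return [sum(minute_values[i:i + 60]) / 60.0
            for i in range(0, len(minute_values) - 59, 60)]

# 24 h of synthetic 1-min variation with a 50-min period: too fast for
# hourly data, so each sample type misrepresents it in its own way.
signal = [math.sin(2.0 * math.pi * i / 50.0) for i in range(24 * 60)]
spot = hourly_spot(signal)
box = hourly_boxcar(signal)
print(max(abs(v) for v in spot), max(abs(v) for v in box))
```

The spot series aliases the fast oscillation into a large spurious hourly amplitude, while the boxcar average strongly attenuates it, mirroring the aliasing versus amplitude-distortion behaviour described in the abstract.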
Wolthers, K. C.; Noest, A. J.; Otto, S. A.; Miedema, F.; de Boer, R. J.
1999-01-01
To study CD4+ T cell productivity during HIV-1 infection, CD4+ T cell telomere lengths were measured. Cross-sectional and longitudinal analysis of HIV-1-infected individuals with CD4+ T cells counts >300 cells/mm3 showed normal average telomeric restriction fragment (TRF) length and normal
Kukita, Kentaro; Uechi, Tadayoshi; Shimokawa, Junji; Goto, Masakazu; Yokota, Yoshinori; Kawanaka, Shigeru; Tanamoto, Tetsufumi; Tanimoto, Hiroyoshi; Takagi, Shinichi
2018-04-01
Planar single-gate (SG) silicon (Si) tunnel field-effect transistors (TFETs) are attracting interest for ultra-low-voltage operation and CMOS applications. To achieve a subthreshold swing (S.S.) below the thermal limit of Si MOSFETs (S.S. = 60 mV/decade at 300 K), previous studies have proposed the formation of a pocket region, which requires a very difficult implantation process. In this work, a planar SG Si TFET without a pocket was proposed by using technology computer-aided design (TCAD) simulations. An average S.S. of less than 60 mV/decade for 0.3 V (= V_gs = V_ds) operation was obtained. It is found that both a low average S.S. (= 27.8 mV/decade) and a high on-current I_on (= 3.8 µA/µm) are achieved without pocket doping by scaling the equivalent oxide thickness (EOT) and increasing the gate-to-source overlap length L_ov.
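The average S.S. figure is, by definition, the gate-voltage swing per decade of drain current; a minimal sketch with hypothetical transfer-curve points (not the simulated device's data):

```python
import math

def average_subthreshold_swing(vg_low, i_low, vg_high, i_high):
    """Average S.S. in mV/decade between two points of the transfer curve."""
    decades = math.log10(i_high / i_low)
    return (vg_high - vg_low) * 1000.0 / decades  # V -> mV

# Hypothetical points: current rises from 1 pA/um to 1 uA/um (6 decades)
# over a 0.3 V gate swing.
print(round(average_subthreshold_swing(0.0, 1e-12, 0.3, 1e-6), 1))  # → 50.0
```

A value below 60 mV/decade over the full 0.3 V swing, as here, is what distinguishes tunnel-FET behaviour from the thermal limit of a MOSFET.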
Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.
Jacinta Chan Phooi M'ng
The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA'), in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.
Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah
2016-01-01
The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
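The abstract does not reproduce the AMA' formula itself. As a hedged sketch of the same idea, the code below implements Kaufman's adaptive moving average, a well-known related construction in which an efficiency ratio (trend strength relative to whipsaw-prone noise) moves the smoothing constant between fast and slow limits; the parameters and series are illustrative:

```python
def kama(prices, er_window=10, fast=2, slow=30):
    """Kaufman-style adaptive moving average: the smoothing constant moves
    between fast and slow EMA limits according to an efficiency ratio."""
    fast_sc = 2.0 / (fast + 1)
    slow_sc = 2.0 / (slow + 1)
    out = [prices[0]]
    for i in range(1, len(prices)):
        j = max(0, i - er_window)
        change = abs(prices[i] - prices[j])
        volatility = sum(abs(prices[k] - prices[k - 1])
                         for k in range(j + 1, i + 1))
        er = change / volatility if volatility > 0 else 1.0
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out.append(out[-1] + sc * (prices[i] - out[-1]))
    return out

# A steadily trending series has efficiency ratio 1, so the adaptive
# average tracks the trend closely instead of lagging like a slow MA.
trend = [100.0 + 0.5 * i for i in range(40)]
k = kama(trend)
print(round(trend[-1] - k[-1], 3))  # → 0.625
```

In a sideways (noisy) market the efficiency ratio falls, the smoothing constant shrinks toward the slow limit, and the indicator flattens, which is the whipsaw-avoidance behaviour the paper attributes to its Efficacy Ratio.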
Shape and depth determinations from second moving average residual self-potential anomalies
Abdelrahman, E M; El-Araby, T M; Essa, K S
2009-01-01
We have developed a semi-automatic method to determine the depth and shape (shape factor) of a buried structure from second moving average residual self-potential anomalies obtained from observed data using filters of successive window lengths. The method uses a relationship between the depth and the shape factor of the source, together with a combination of windowed observations. The relationship represents a parametric family of curves (window curves). For a fixed window length, the depth is determined for each shape factor. The computed depths are plotted against the shape factors, representing a continuous monotonically increasing curve. The solution for the shape and depth is read at the common intersection of the window curves. The validity of the method is tested on a synthetic example, with and without random errors, and on two field examples from Turkey and Germany. In all cases examined, the depth and shape solutions obtained are in very good agreement with the true ones.
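A windowed moving-average residual of the kind described can be sketched as follows; the operator definition and the test profile are assumptions for illustration, not the authors' exact filter. The key property is that applying the windowed residual twice annihilates a linear regional trend, isolating the residual anomaly:

```python
def moving_average_residual(v, s):
    """First-order residual R(x_i) = v_i - (v_{i+s} + v_{i-s}) / 2."""
    return [v[i] - 0.5 * (v[i + s] + v[i - s]) for i in range(s, len(v) - s)]

def second_moving_average_residual(v, s):
    """Apply the windowed residual twice; a linear regional trend cancels."""
    return moving_average_residual(moving_average_residual(v, s), s)

# Hypothetical profile: a sphere-like SP anomaly plus a linear regional
# trend a + b*x that the second residual should remove.
xs = [i - 50 for i in range(101)]
anomaly = [-100.0 / (x * x + 25.0) ** 1.5 for x in xs]
regional = [3.0 + 0.2 * x for x in xs]
profile = [a + r for a, r in zip(anomaly, regional)]

r2_profile = second_moving_average_residual(profile, 5)
r2_anomaly = second_moving_average_residual(anomaly, 5)
residual_of_trend = max(abs(p - a) for p, a in zip(r2_profile, r2_anomaly))
print(residual_of_trend < 1e-9)  # → True
```

Repeating the computation for successive window lengths s yields the family of window curves from which the depth and shape-factor solution is read off.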
Length-weight relationship of freshwater wild fish species
Dr Naeem
2012-06-21
Length-weight (LWR) and length-length relationships (LLR) were determined for a freshwater catfish ... Key words: Mystus bleekeri, length-weight relationship, length-length relationship, predictive equations.
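Length-weight relationships of this kind are conventionally fitted as W = aL^b by linear regression on log-transformed data; a sketch on synthetic values (the numbers are hypothetical, not the study's measurements):

```python
import math

def fit_length_weight(lengths_cm, weights_g):
    """Fit W = a * L^b by least squares on log W = log a + b log L."""
    xs = [math.log(l) for l in lengths_cm]
    ys = [math.log(w) for w in weights_g]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic check with an isometric-growth exponent b = 3
lengths = [10.0, 15.0, 20.0, 25.0, 30.0]
weights = [0.01 * l ** 3 for l in lengths]
a, b = fit_length_weight(lengths, weights)
print(round(a, 4), round(b, 4))  # → 0.01 3.0
```

An estimated exponent near 3 indicates isometric growth; values away from 3 indicate allometric growth, which is the usual interpretation of fitted LWR parameters.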
Telschow, Samira; Jappe Frandsen, Flemming; Theisen, Kirsten
2012-01-01
Cement production has been subject to several technological changes, each of which requires detailed knowledge about the high multiplicity of processes, especially the high-temperature processes involved in the rotary kiln. This article gives an introduction to the topic of cement, including an overview of cement production, selected cement properties, and clinker phase relations. An extended summary of laboratory-scale investigations on clinkerization reactions, the most important reactions in cement production, is provided. Clinker formation by solid-state reactions, solid–liquid and liquid–liquid reactions is discussed, as are the influences of particle sizes on clinker phase formation. Furthermore, a mechanism for clinker phase formation in an industrial rotary kiln reactor is outlined.
Woodward, P.R.
1978-01-01
Theoretical models of star formation are discussed beginning with the earliest stages and ending in the formation of rotating, self-gravitating disks or rings. First a model of the implosion of very diffuse gas clouds is presented which relies upon a shock at the edge of a galactic spiral arm to drive the implosion. Second, models are presented for the formation of a second generation of massive stars in such a cloud once a first generation has formed. These models rely on the ionizing radiation from massive stars or on the supernova shocks produced when these stars explode. Finally, calculations of the gravitational collapse of rotating clouds are discussed with special focus on the question of whether rotating disks or rings are the result of such a collapse. 65 references
Sparre, Martin
Galaxy formation is an enormously complex discipline due to the many physical processes that play a role in shaping galaxies. The objective of this thesis is to study galaxy formation with two different approaches: first, numerical simulations are used to study the structure of dark matter and how galaxies form stars throughout the history of the Universe, and secondly it is shown that observations of gamma-ray bursts (GRBs) can be used to probe galaxies with active star formation in the early Universe. A conclusion from the hydrodynamical simulations is that the galaxies from the state-of... is important, since it helps constrain chemical evolution models at high redshift. A new project studying how the population of galaxies hosting GRBs relates to other galaxy populations is outlined in the conclusion of this thesis. The core of this project will be to quantify how the stellar mass function...
Blum, J.
2014-07-01
There has been vast progress in our understanding of planetesimal formation over the past decades, owing to a number of laboratory experiments as well as to refined models of dust and ice agglomeration in protoplanetary disks. Coagulation rapidly forms cm-sized ''pebbles'' by direct sticking in collisions at low velocities (Güttler et al. 2010; Zsom et al. 2010). For the further growth, two model approaches are currently being discussed: (1) local concentration of pebbles in nebular instabilities until gravitational instability occurs (Johansen et al. 2007); (2) a competition between fragmentation and mass transfer in collisions among the dusty bodies, in which a few ''lucky winners'' make it to planetesimal sizes (Windmark et al. 2012a,b; Garaud et al. 2013). Predictions of the physical properties of the resulting bodies in both models allow a distinction between the two formation scenarios of planetesimals. In particular, the tensile strength (i.e., the inner cohesion) of the planetesimals differs widely between the two models (Skorov & Blum 2012; Blum et al. 2014). While model (1) predicts tensile strengths on the order of ~1 Pa, model (2) results in rather compactified dusty bodies with tensile strengths in the kPa regime. If comets are km-sized survivors of the planetesimal-formation era, they should in principle hold the secret of their formation process. Water ice is the prime volatile responsible for the activity of comets. Thermophysical models of the heat and mass transport close to the comet-nucleus surface predict water-ice sublimation temperatures that relate to maximum sublimation pressures well below the kPa regime predicted for formation scenario (2). Model (1), however, is in agreement with the observed dust and gas activity of comets. Thus, a formation scenario for cometesimals involving gravitational instability is favored (Blum et al. 2014).
Safety Impact of Average Speed Control in the UK
Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert
2016-01-01
... of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.
on the performance of Autoregressive Moving Average Polynomial
Timothy Ademakinwa
Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ... Global Journal of Mathematics and Statistics, Vol. 1. ... Business and Economic Research Center.
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
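The stated optimum is easy to evaluate: averaging the total depth 620160 over the 8! possible orderings gives the minimum average number of comparisons, which sits just above the information-theoretic lower bound log2(8!):

```python
from math import factorial, log2

avg_depth = 620160 / factorial(8)   # minimum average depth from the paper
print(avg_depth)                    # 15.380952380952381 comparisons on average
print(log2(factorial(8)))           # information-theoretic lower bound, ~15.3
```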
Comparison of Interpolation Methods as Applied to Time Synchronous Averaging
Decker, Harry
1999-01-01
Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
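Time synchronous averaging itself can be sketched as follows: the signal is interpolated onto a uniform shaft-angle grid for each revolution (plain linear interpolation here, one of several schemes such a comparison would cover), and the revolutions are averaged so asynchronous noise cancels. The signal and tachometer times below are invented for illustration:

```python
import numpy as np

def tsa(signal, t, rev_times, samples_per_rev=64):
    """Time synchronous average: linearly interpolate each revolution onto a
    uniform angle grid, then average the frames across revolutions."""
    frames = [np.interp(np.linspace(t0, t1, samples_per_rev, endpoint=False),
                        t, signal)
              for t0, t1 in zip(rev_times[:-1], rev_times[1:])]
    return np.mean(frames, axis=0)

# Synthetic example: a shaft-locked gear-mesh tone (8 cycles/rev at 50 rev/s)
# buried in unit-variance noise
t = np.linspace(0.0, 1.0, 10000)
rev_times = np.linspace(0.0, 1.0, 51)          # 50 revolutions, constant speed
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 400 * t) + rng.normal(0.0, 1.0, t.size)
avg = tsa(signal, t, rev_times)                # noise shrinks ~ sqrt(50)-fold
```

The choice of interpolation scheme matters because interpolation error is coherent with the shaft angle and therefore does not average out, which is exactly what such a comparison study probes.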
Light-cone averaging in cosmology: formalism and applications
Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.
2011-01-01
We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe.
Length expectation values in quantum Regge calculus
Khatsymovsky, V.M.
2004-01-01
Regge calculus configuration superspace can be embedded into a more general superspace where the length of any edge is defined ambiguously, depending on the 4-tetrahedron containing the edge. Moreover, the latter superspace can be extended further so that even the edge lengths in each 4-tetrahedron are not defined; only the area tensors of its 2-faces are. We make use of our previous result concerning quantization of the area tensor Regge calculus, which gives finite expectation values for areas. We also use our result showing that the quantum measure in Regge calculus can be uniquely fixed once we know the quantum measure on (the space of the functionals on) the superspace of the theory with ambiguously defined edge lengths. We find that in this framework quantization of the usual Regge calculus is defined up to a parameter. The theory may possess nonzero (of the order of the Planck scale) or zero length expectation values depending on whether this parameter is larger or smaller than a certain value. Vanishing length expectation values mean that the theory becomes continuous, here dynamically, in the originally discrete framework.
Explaining the length threshold of polyglutamine aggregation
De Los Rios, Paolo; Hafner, Marc; Pastore, Annalisa
2012-01-01
The existence of a length threshold, of about 35 residues, above which polyglutamine repeats can give rise to aggregation and to pathologies, is one of the hallmarks of polyglutamine neurodegenerative diseases such as Huntington’s disease. The reason why such a minimal length exists at all has remained one of the main open issues in research on the molecular origins of such classes of diseases. Following the seminal proposals of Perutz, most research has focused on the hunt for a special structure, attainable only above the minimal length, able to trigger aggregation. Such a structure has remained elusive and there is growing evidence that it might not exist at all. Here we review some basic polymer and statistical physics facts and show that the existence of a threshold is compatible with the modulation that the repeat length imposes on the association and dissociation rates of polyglutamine polypeptides to and from oligomers. In particular, their dramatically different functional dependence on the length rationalizes the very presence of a threshold and hints at the cellular processes that might be at play, in vivo, to prevent aggregation and the consequent onset of the disease. (paper)
Effects of dietary protein levels on length-weight relationships and ...
Feeding trial involving different protein levels on length–weight relationships and condition factor of Clarias gariepinus was conducted in floating hapa system. Fingerlings (average weight, 4.50± 0.01g and average length, 8.0±0.2 cm) were randomly stocked at 20 fish/1m3. Five diets with crude protein: 40.0, 42.5, 45.0, 47.5 ...
Measurement of the average B hadron lifetime in Z0 decays using reconstructed vertices
Abe, K.; Abt, I.; Ahn, C.J.; Akagi, T.; Allen, N.J.; Ash, W.W.; Aston, D.; Baird, K.G.; Baltay, C.; Band, H.R.; Barakat, M.B.; Baranko, G.; Bardon, O.; Barklow, T.; Bazarko, A.O.; Ben-David, R.; Benvenuti, A.C.; Bilei, G.M.; Bisello, D.; Blaylock, G.; Bogart, J.R.; Bolton, T.; Bower, G.R.; Brau, J.E.; Breidenbach, M.; Bugg, W.M.; Burke, D.; Burnett, T.H.; Burrows, P.N.; Busza, W.; Calcaterra, A.; Caldwell, D.O.; Calloway, D.; Camanzi, B.; Carpinelli, M.; Cassell, R.; Castaldi, R.; Castro, A.; Cavalli-Sforza, M.; Church, E.; Cohn, H.O.; Coller, J.A.; Cook, V.; Cotton, R.; Cowan, R.F.; Coyne, D.G.; D'Oliveira, A.; Damerell, C.J.S.; Daoudi, M.; De Sangro, R.; De Simone, P.; Dell'Orso, R.; Dima, M.; Du, P.Y.C.; Dubois, R.; Eisenstein, B.I.; Elia, R.; Falciai, D.; Fan, C.; Fero, M.J.; Frey, R.; Furuno, K.; Gillman, T.; Gladding, G.; Gonzalez, S.; Hallewell, G.D.; Hart, E.L.; Hasegawa, Y.; Hedges, S.; Hertzbach, S.S.; Hildreth, M.D.; Huber, J.; Huffer, M.E.; Hughes, E.W.; Hwang, H.; Iwasaki, Y.; Jackson, D.J.; Jacques, P.; Jaros, J.; Johnson, A.S.; Johnson, J.R.; Johnson, R.A.; Junk, T.; Kajikawa, R.; Kalelkar, M.; Kang, H.J.; Karliner, I.; Kawahara, H.; Kendall, H.W.; Kim, Y.; King, M.E.; King, R.; Kofler, R.R.; Krishna, N.M.; Kroeger, R.S.; Labs, J.F.; Langston, M.; Lath, A.; Lauber, J.A.; Leith, D.W.G.S.; Liu, M.X.; Liu, X.; Loreti, M.; Lu, A.; Lynch, H.L.; Ma, J.; Mancinelli, G.; Manly, S.; Mantovani, G.; Markiewicz, T.W.; Maruyama, T.; Massetti, R.; Masuda, H.; Mazzucato, E.; McKemey, A.K.; Meadows, B.T.; Messner, R.; Mockett, P.M.; Moffeit, K.C.; Mours, B.; Mueller, G.; Muller, D.; Nagamine, T.; Nauenberg, U.; Neal, H.; Nussbaum, M.; Ohnishi, Y.; Osborne, L.S.; Panvini, R.S.; Park, H.; Pavel, T.J.; Peruzzi, I.; Piccolo, M.; Piemontese, L.; Pieroni, E.; Pitts, K.T.; Plano, R.J.; Prepost, R.; Prescott, C.Y.; Punkar, G.D.; Quigley, J.; Ratcliff, B.N.; Reeves, T.W.; Reidy, J.; Rensing, P.E.; Rochester, L.S.; Rothberg, J.E.; Rowson, P.C.; Russell, J.J.
1995-01-01
We report a measurement of the average B hadron lifetime using data collected with the SLD detector at the SLAC Linear Collider in 1993. An inclusive analysis selected three-dimensional vertices with B hadron lifetime information in a sample of 50×10³ Z⁰ decays. A lifetime of 1.564±0.030(stat)±0.036(syst) ps was extracted from the decay length distribution of these vertices using a binned maximum likelihood method. Copyright 1995 The American Physical Society.
Delineation of facial archetypes by 3d averaging.
Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G
2004-10-01
The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created for two ethnic groups, European and Japanese, and for children with three genetic disorders, Williams syndrome, achondroplasia and Sotos syndrome, as well as a normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face averaging techniques there was no warping or filling-in of spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
Nuclear reactor with scrammable part length rod
Bevilacqua, F.
1979-01-01
A new part length rod is provided. It may be used both to control xenon-induced power oscillations and to contribute to shutdown reactivity when a rapid shutdown of the reactor is required. The part length rod consists of a control rod with three regions. The lower control region is a longer, weaker active portion separated from an upper, stronger and shorter poison section by an intermediate section which is a relative non-absorber of neutrons. The combination of the longer, weaker control section with the upper high-worth poison section permits the part length rod to be scrammed into the core when a reactor shutdown is required, and also permits the control rod to be used as a tool to control power distribution in both the axial and radial directions during normal operation.
Resonance effects in neutron scattering lengths
Lynn, J.E.
1989-06-01
The nature of neutron scattering lengths is described and the nuclear effects giving rise to their variation are discussed. Some examples of the shortcomings of the available nuclear data base, particularly for heavy nuclei, are given. Methods are presented for improving this data base, in particular for obtaining the energy variation of the complex coherent scattering length from long to sub-angstrom wavelengths from the available sources of slow neutron cross section data. Examples of this information are given for several of the rare earth nuclides. Some examples of the effect of resonances in neutron reflection and diffraction are discussed. This report documents a seminar given at Argonne National Laboratory in March 1989. 18 refs., 18 figs.
Aminophylline increases seizure length during electroconvulsive therapy.
Stern, L; Dannon, P N; Hirschmann, S; Schriber, S; Amytal, D; Dolberg, O T; Grunhaus, L
1999-12-01
Electroconvulsive therapy (ECT) is considered to be one of the most effective treatments for patients with major depression and persistent psychosis. Seizure characteristics probably determine the therapeutic effect of ECT; as a consequence, short seizures are accepted as one of the factors of poor outcome. During most ECT courses seizure threshold increases and seizure duration decreases. Methylxanthine preparations, caffeine, and theophylline have been used to prolong seizure duration. The use of aminophylline, more readily available than caffeine, has not been well documented. The objective of this study was to test the effects of aminophylline on seizure length. Fourteen drug-free patients with diagnoses of affective disorder or psychotic episode receiving ECT participated in this study. Seizure length was assessed clinically and per EEG. Statistical comparisons were done using paired t tests. A significant increase (p < 0.04) in seizure length was achieved and maintained on three subsequent treatments with aminophylline. No adverse events were noted from the addition of aminophylline.
Minimal Length Scale Scenarios for Quantum Gravity.
Hossenfelder, Sabine
2013-01-01
We review the question of whether the fundamental laws of nature limit our ability to probe arbitrarily short distances. First, we examine what insights can be gained from thought experiments for probes of shortest distances, and summarize what can be learned from different approaches to a theory of quantum gravity. Then we discuss some models that have been developed to implement a minimal length scale in quantum mechanics and quantum field theory. These models have entered the literature as the generalized uncertainty principle or the modified dispersion relation, and have allowed the study of the effects of a minimal length scale in quantum mechanics, quantum electrodynamics, thermodynamics, black-hole physics and cosmology. Finally, we touch upon the question of ways to circumvent the manifestation of a minimal length scale in short-distance physics.
Extending electronic length frequency analysis in R
Taylor, M. H.; Mildenberger, Tobias K.
2017-01-01
Electronic length frequency analysis (ELEFAN) is a system of stock assessment methods using length-frequency (LFQ) data. One step is the estimation of growth from the progression of LFQ modes through time using the von Bertalanffy growth function (VBGF). The option to fit a seasonally oscillating VBGF (soVBGF) requires a more intensive search due to two additional parameters. This work describes the implementation of two optimisation approaches ("simulated annealing" and "genetic algorithm") for growth function fitting using the open-source software "R." Using a generated LFQ data set ... of the asymptotic length parameter (L-infinity) are found to have significant effects on parameter estimation error. An outlook provides context as to the significance of the R-based implementation for further testing and development, as well as the general relevance of the method for data-limited stock assessment.
Stride length: measuring its instantaneous value
Campiglio, G C; Mazzeo, J R
2007-01-01
Human gait has been studied from different viewpoints: kinematics, dynamics, sensitivity and others. Many of its characteristics still remain open to research, both for normal and for pathological gait. Objective measures of some of its most significant spatial and temporal parameters are important in this context. Stride length, one of these parameters, is defined as the distance between two consecutive contacts of one foot with the ground. In this work we present a device designed to provide automatic measures of stride length. Its features make it particularly appropriate for the evaluation of pathological gait.
Interpreting Bivariate Regression Coefficients: Going beyond the Average
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
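The note's observation can be illustrated concretely: regressing a variable (or a transform of it) on a constant returns each average as the fitted coefficient. The data here are arbitrary:

```python
import numpy as np

y = np.array([2.0, 4.0, 8.0])
X = np.ones((y.size, 1))                 # intercept-only design matrix

def fit(z):
    """OLS coefficient of z regressed on a constant, i.e., its mean."""
    return np.linalg.lstsq(X, z, rcond=None)[0][0]

arithmetic = fit(y)                      # mean of y: 4.666...
geometric = np.exp(fit(np.log(y)))       # exp(mean of logs): 4.0
harmonic = 1.0 / fit(1.0 / y)            # reciprocal of mean reciprocal: 24/7
print(arithmetic, geometric, harmonic)
```

Weighted averages follow the same pattern by switching from ordinary to weighted least squares on the constant regressor.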
Average stress in a Stokes suspension of disks
Prosperetti, Andrea
2004-01-01
The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is
47 CFR 1.959 - Computation of average terrain elevation.
2010-10-01
47 CFR 1.959 (2010-10-01): Computation of average terrain elevation. Title 47, Telecommunication; Federal Communications Commission; General Practice and Procedure; Wireless Radio Services Applications and Proceedings; Application Requirements and Procedures. § 1.959 Computation of average terrain elevation. Except a...
47 CFR 80.759 - Average terrain elevation.
2010-10-01
47 CFR 80.759 (2010-10-01): Average terrain elevation. Title 47, Telecommunication; Federal Communications Commission (continued); Safety and Special Radio Services; Stations in the Maritime Services; Standards for Computing Public Coast Station VHF Coverage. § 80.759 Average terrain elevation. (a)(1) Draw radials...
The average covering tree value for directed graph games
Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf
We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering
The Average Covering Tree Value for Directed Graph Games
Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.
2012-01-01
We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all...
18 CFR 301.7 - Average System Cost methodology functionalization.
2010-04-01
18 CFR 301.7 (2010-04-01): Average System Cost methodology functionalization. Title 18, Conservation of Power and Water Resources; Federal Energy Regulatory Commission, Department of Energy; Regulations for Federal Power Marketing Administrations; Average System Cost Methodology for Sales from Utilities to Bonneville Power Administration under Northwest Power...
Analytic computation of average energy of neutrons inducing fission
Clark, Alexander Rich
2016-01-01
The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
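The quantity in question is a reaction-rate-weighted mean, Ē = ∫E σ_f(E)φ(E)dE / ∫σ_f(E)φ(E)dE. A minimal numerical sketch with toy spectrum and cross-section shapes (illustrative assumptions, not BeRP-ball or MCNP data):

```python
import numpy as np

E = np.linspace(0.01, 10.0, 2000)        # energy grid, MeV
phi = E * np.exp(-E / 1.3)               # toy fission-like flux spectrum
sigma_f = 1.0 + 0.1 * np.log(E / 0.01)   # toy, slowly varying cross section

# On a uniform grid the quadrature weights cancel in the ratio, so plain
# sums suffice for the weighted average
w = sigma_f * phi                        # fission reaction-rate weight
E_avg = np.sum(E * w) / np.sum(w)        # average energy of fission-inducing neutrons
print(round(E_avg, 2))
```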
An alternative scheme for Bogolyubov's averaging method
Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.
1990-01-01
In this paper the average energy and magnetic moment conservation laws in the drift theory of charged-particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and afterwards the average is performed. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail
2015-01-01
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth.
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
Word length, set size, and lexical factors: Re-examining what causes the word length effect.
Guitard, Dominic; Gabel, Andrew J; Saint-Aubin, Jean; Surprenant, Aimée M; Neath, Ian
2018-04-19
The word length effect, better recall of lists of short (fewer syllables) than long (more syllables) words, has been termed a benchmark effect of working memory. Despite this, experiments on the word length effect can yield quite different results depending on set size and stimulus properties. Seven experiments are reported that address these 2 issues. Experiment 1 replicated the finding of a preserved word length effect under concurrent articulation for large stimulus sets, which contrasts with the abolition of the word length effect by concurrent articulation for small stimulus sets. Experiment 2, however, demonstrated that when the short and long words are equated on more dimensions, concurrent articulation abolishes the word length effect for large stimulus sets. Experiment 3 shows a standard word length effect when output time is equated, but Experiments 4-6 show no word length effect when short and long words are equated on increasingly more dimensions that previous demonstrations have overlooked. Finally, Experiment 7 compared recall of small- and large-neighborhood words that were equated on all the dimensions used in Experiment 6 (except those directly related to neighborhood size), and a neighborhood size effect was still observed. We conclude that lexical factors, rather than word length per se, are better predictors of when the word length effect will occur. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Sighting optics including an optical element having a first focal length and a second focal length
Crandall, David Lynn [Idaho Falls, ID
2011-08-01
One embodiment of sighting optics according to the teachings provided herein may include a front sight and a rear sight positioned in spaced-apart relation. The rear sight includes an optical element having a first focal length and a second focal length. The first focal length is selected so that it is about equal to a distance separating the optical element and the front sight and the second focal length is selected so that it is about equal to a target distance. The optical element thus brings into simultaneous focus, for a user, images of the front sight and the target.
Cutting Whole Length or Partial Length of Internal Anal Sphincter in Management of Fissure in Ano
Furat Shani Aoda
2017-12-01
A chronic anal fissure is a common painful perianal condition. The main operative procedure to treat this painful condition is lateral internal sphincterotomy (LIS). The aim of this study is to compare the outcome and complications of closed LIS performed up to the dentate line (whole length of the internal sphincter) or up to the fissure apex (partial length of the internal sphincter) in the treatment of anal fissure. It is a prospective comparative study including 100 patients with chronic fissure in ano. All patients were assigned to undergo closed LIS. These patients were randomly divided into two groups: 50 patients underwent LIS to the level of the dentate line (whole length) and the other 50 patients underwent LIS to the level of the fissure apex (partial length). Patients were followed up weekly in the first month, twice monthly in the second month, then monthly for the next 2 months, and finally after 1 year. There was satisfactory relief of pain in all patients in both groups, and complete healing of the fissure occurred. Regarding postoperative incontinence, no major degree of incontinence occurred in either group, but a minor degree of incontinence persisted in 7 patients after whole-length LIS after one year. In conclusion, both whole-length and partial-length LIS are associated with improvement of pain and a good chance of healing, but whole-length LIS is associated with a higher chance of long-term flatus incontinence. Hence, we recommend partial-length LIS as the treatment for chronic anal fissure.
Modelling lidar volume-averaging and its significance to wind turbine wake measurements
Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.
2017-05-01
Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is "volume averaging", which refers to lidars not sampling in a single, distinct point but along their entire beam length. Especially in regions with large velocity gradients, like the rotor wake, it can be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using detached eddy simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume averaging. Even with very few points discretising the lidar beam, volume averaging is captured accurately. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3 D.
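The sampling effect can be mimicked with a simple weighted along-beam average. The Gaussian weight and wake-like velocity profile below are illustrative assumptions, not the paper's actual weighting functions:

```python
import numpy as np

def lidar_sample(u, s, focus, width):
    """Weighted average of line-of-sight velocity u(s) along the beam, with a
    normalized Gaussian weight centred on the focus distance (a toy stand-in
    for pulsed or continuous-wave lidar weighting functions)."""
    w = np.exp(-0.5 * ((s - focus) / width) ** 2)
    return np.sum(w * u) / np.sum(w)

s = np.linspace(0.0, 200.0, 4001)                     # distance along beam (m)
u = 8.0 - 3.0 * np.exp(-(((s - 100.0) / 15.0) ** 2))  # wake-like velocity deficit

point = u[np.argmin(np.abs(s - 100.0))]               # true point value: 5.0 m/s
probe = lidar_sample(u, s, focus=100.0, width=20.0)   # smeared lidar estimate
print(point, round(probe, 2))                         # the deficit is underestimated
```

Because the weight averages over the sharp deficit, the lidar estimate is pulled toward the free-stream velocity, which is why the discrepancy is largest where gradients are strongest.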
Self-similarity of higher-order moving averages
Arianos, Sergio; Carbone, Anna; Türk, Christian
2011-10-01
In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. These asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
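The first-order case (standard moving average) can be sketched as follows; the variance of the series around its moving average scales as n^(2H), so a log-log fit recovers H. The window sizes and test series below are illustrative, not from the paper:

```python
import numpy as np

def dma_variance(x, n):
    """Variance of the series around its first-order (standard) moving
    average of window n -- the core quantity of the detrending moving
    average (DMA) method, scaling as n^(2H)."""
    ma = np.convolve(x, np.ones(n) / n, mode="valid")
    return np.mean((x[n - 1:] - ma) ** 2)

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(100_000))  # Brownian series, H = 0.5
windows = np.array([10, 20, 40, 80, 160])
sigma2 = np.array([dma_variance(walk, n) for n in windows])
H_est = np.polyfit(np.log(windows), 0.5 * np.log(sigma2), 1)[0]
print(round(H_est, 2))  # typically close to 0.5 for Brownian motion
```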
Anomalous behavior of q-averages in nonextensive statistical mechanics
Abe, Sumiyoshi
2009-01-01
A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, however, it has been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, from the viewpoint of thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in any of these cases.
Bootstrapping pre-averaged realized volatility under market microstructure noise
Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour
The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...
[Ultrasonographic evaluation of the uterine cervix length remaining after LOOP-excision].
Robert, A-L; Nicolas, F; Lavoué, V; Henno, S; Mesbah, H; Porée, P; Levêque, J
2014-04-01
To assess whether there is a correlation between the length of a conization specimen and the length of the cervix measured by vaginal ultrasonography after the operation. Prospective observational study including patients less than 45 years old, with measurement of cervical length before and on the day of the conization, and measurement of the histological length of the specimen. Among the 40 patients enrolled, the average ultrasound measurement before conization was 26.9 mm (± 4.9 mm) against 18.1 mm (± 4.4 mm) after conization, with a mean difference of 8.8 mm (± 2.4 mm) (difference statistically significant). The proportion of cervix length removed by loop-excision in our series is 33% (± 8.5%). A good correlation between the measurements of the specimen and the cervical ultrasound length before and after conization was found, as was a significant reduction in cervical length after conization. The precise length of the specimen should be known in case of pregnancy, and the prevention of prematurity due to conization rests on selected indications and efficient surgical technique. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Whitcher, Ralph
2007-01-01
1 - Description of program or function: SACALC2B calculates the average solid angle subtended by a rectangular or circular detector window to a coaxial or non-coaxial rectangular, circular or point source, including where the source and detector planes are not parallel. SACALC_CYL calculates the average solid angle subtended by a cylinder to a rectangular or circular source, plane or thick, at any location and orientation. This is needed, for example, in calculating the intrinsic gamma efficiency of a detector such as a GM tube. The program also calculates the number of hits on the cylinder side and on each end, and the average path length through the detector volume (assuming no scattering or absorption). Point sources can be modelled by using a circular source of zero radius. NEA-1688/03: Documentation has been updated (January 2006). 2 - Methods: The program uses a Monte Carlo method to calculate the average solid angle for source-detector geometries that are difficult to analyse by analytical methods. The values of solid angle are calculated to accuracies of typically better than 0.1%. The calculated values from the Monte Carlo method agree closely with those produced by polygon approximation and numerical integration by Gardner and Verghese, and others. 3 - Restrictions on the complexity of the problem: The program models a circular or rectangular detector in planes that are not necessarily coaxial, nor parallel. Point sources can be modelled by using a circular source of zero radius. The sources are assumed to be uniformly distributed. NEA-1688/04: In SACALC_CYL, to avoid rounding errors, differences less than 1E-12 are assumed to be zero.
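The Monte Carlo idea can be illustrated in the simplest analytically checkable case: a point source coaxial with a circular detector window. This toy sketch is not the SACALC program itself; names and sample geometry are assumptions:

```python
import math
import random

def mc_solid_angle_fraction(d, radius, n=200_000, seed=42):
    """Monte Carlo estimate of the fraction of 4*pi subtended by a circular
    disk of given radius, coaxial with a point source at distance d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)  # isotropic: cos(theta) uniform
        if cos_t <= 0.0:
            continue  # ray points away from the detector plane
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        r_plane = d * sin_t / cos_t  # radius where the ray crosses the plane
        if r_plane <= radius:
            hits += 1
    return hits / n

# Exact coaxial result: (1 - d / sqrt(d^2 + R^2)) / 2
exact = 0.5 * (1.0 - 2.0 / math.sqrt(5.0))
print(mc_solid_angle_fraction(2.0, 1.0), exact)  # both ≈ 0.0528
```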
An averaging method for nonlinear laminar Ekman layers
Andersen, Anders Peter; Lautrup, B.; Bohr, T.
2003-01-01
-similar ansatz for the velocity profile, which assumes that a single length scale describes the boundary layer structure, and a new non-self-similar ansatz in which the decay and the oscillations of the boundary layer are described by two different length scales. For both profiles we calculate the up-flow in a vortex core in solid-body rotation analytically. We compare the quantitative predictions of the model with the family of exact similarity solutions due to von Kármán and find that the results for the non-self-similar profile are in almost perfect quantitative agreement with the exact solutions...
Neutron scattering lengths of 3He
Alfimenkov, V.P.; Akopian, G.G.; Wierzbicki, J.; Govorov, A.M.; Pikelner, L.B.; Sharapov, E.I.
1976-01-01
The total neutron scattering cross-section of 3He has been measured in the neutron energy range from 20 meV to 2 eV. Together with the known value of the coherent scattering amplitude, it leads to the two sets of n-3He scattering lengths.
Phonological length, phonetic duration and aphasia
Gilbers, D.G.; Bastiaanse, Y.R.M.; van der Linde, K.J.
1997-01-01
This study discusses an error type that is expected to occur in aphasics suffering from a phonological disorder, i.e. Wernicke's and conduction aphasics, but not in aphasics suffering from a phonetic disorder, i.e. Broca's aphasics. The critical notion is 'phonological length'. It will be argued
Information-theoretic lengths of Jacobi polynomials
Guerrero, A; Dehesa, J S [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, Granada (Spain); Sanchez-Moreno, P, E-mail: agmartinez@ugr.e, E-mail: pablos@ugr.e, E-mail: dehesa@ugr.e [Instituto ' Carlos I' de Fisica Teorica y Computacional, Universidad de Granada, Granada (Spain)
2010-07-30
The information-theoretic lengths of the Jacobi polynomials P_n^(α,β)(x), which are information-theoretic measures (Rényi, Shannon and Fisher) of their associated Rakhmanov probability density, are investigated. They quantify the spreading of the polynomials along the orthogonality interval [-1, 1] in a complementary but different way to the root-mean-square or standard deviation because, contrary to that measure, they do not refer to any specific point of the interval. The explicit expressions of the Fisher length are given. The Rényi lengths are found by use of the combinatorial multivariable Bell polynomials in terms of the polynomial degree n and the parameters (α, β). The Shannon length, which cannot be exactly calculated because of its logarithmic functional form, is bounded from below by using sharp upper bounds to general densities on [-1, +1] given in terms of various expectation values; moreover, its asymptotics is also pointed out. Finally, several computational issues relative to these three quantities are carefully analyzed.
Context quantization by minimum adaptive code length
Forchhammer, Søren; Wu, Xiaolin
2007-01-01
Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....
Asymptotic Translation Length in the Curve Complex
Valdivia, Aaron D.
2013-01-01
We show that when the genus and number of punctures of a surface are proportional by some rational number, the minimal asymptotic translation length in the curve complex behaves inversely to the square of the Euler characteristic. We also show that when the genus is fixed and the number of punctures varies, the behavior is inverse to the Euler characteristic.
Minimum Description Length Shape and Appearance Models
Thodberg, Hans Henrik
2003-01-01
The Minimum Description Length (MDL) approach to shape modelling is reviewed. It solves the point correspondence problem of selecting points on shapes defined as curves so that the points correspond across a data set. An efficient numerical implementation is presented and made available as open source...
Hydrodynamic slip length as a surface property
Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.
2016-02-01
Equilibrium and nonequilibrium molecular dynamics simulations were conducted in order to evaluate the hypothesis that the hydrodynamic slip length is a surface property. The system under investigation was water confined between two graphite layers to form nanochannels of different sizes (3-8 nm). The water-carbon interaction potential was calibrated by matching wettability experiments of graphitic-carbon surfaces free of airborne hydrocarbon contamination. Three equilibrium theories were used to calculate the hydrodynamic slip length. It was found that one of the recently reported equilibrium theories for the calculation of the slip length featured confinement effects, while the others resulted in calculations significantly hindered by the large margin of error observed between independent simulations. The hydrodynamic slip length was found to be channel-size independent using equilibrium calculations, i.e., suggesting a consistency with the definition of a surface property, for 5-nm channels and larger. The analysis of the individual trajectories of liquid particles revealed that the reason for observing confinement effects in 3-nm nanochannels is the high mobility of the bulk particles. Nonequilibrium calculations were not consistently affected by size but by noisiness in the smallest systems.
2010-04-01
... 23 Highways 1 2010-04-01 2010-04-01 false Length. 658.13 Section 658.13 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ENGINEERING AND TRAFFIC OPERATIONS TRUCK SIZE AND WEIGHT, ROUTE... Network or in transit between these highways and terminals or service locations pursuant to § 658.19. (b...
Link lengths and their growth powers
Huh, Youngsik; No, Sungjong; Oh, Seungsang; Rawdon, Eric J
2015-01-01
For a certain infinite family F of knots or links, we study the growth power ratios of their stick number, lattice stick number, minimum lattice length and minimum ropelength compared with their minimum crossing number c(K) for every K ∈ F. It is known that the stick number and lattice stick number grow between the (1/2) and linear power of the crossing number, and minimum lattice length and minimum ropelength grow with at least the (3/4) power of crossing number (which is called the four-thirds power law). Furthermore, the minimum lattice length and minimum ropelength grow at most as O(c(K)[ln(c(K))]^5), but it is unknown whether any family exhibits superlinear growth. For any real number r between (1/2) and 1, we give an infinite family of non-splittable prime links in which the stick number and lattice stick number grow exactly as the r-th power of crossing number. Furthermore, for any real number r between (3/4) and 1, we give another infinite family of non-splittable prime links in which the minimum lattice length and minimum ropelength grow exactly as the r-th power of crossing number. (paper)
Exciton diffusion length in narrow bandgap polymers
Mikhnenko, O.V.; Azimi, H.; Morana, M.; Blom, P.W.M.; Loi, M.A.
2012-01-01
We developed a new method to accurately extract the singlet exciton diffusion length in organic semiconductors by blending them with a low concentration of methanofullerene[6,6]-phenyl-C61-butyric acid methyl ester (PCBM). The dependence of photoluminescence (PL) decay time on the fullerene
Scale Length of the Galactic Thin Disk
thin disk density scale length, hR, is rather short (2.7 ± 0.1 kpc). Key words. ... The 2MASS near infrared data provide, for the first time, deep star counts on a ... peaks allows to adjust the spatial extinction law in the model. ... probability that fi.
Axial length and keratometry readings in Nigeria - a guide to biometry
Dr Adio
though not ideal may be useful where biometric machines are either not available or faulty. Surgeries are increasingly being done in the rural areas and in camp settings and a working figure for the axial length and/or keratometry values in the locality where the procedure is taking place are useful. The average A-scan ...
Shan, Juan; Cornelissen, Ludo Johannes; Liu, Jing; Ben Youssef, J.; Liang, Lei; van Wees, Bart
2017-01-01
The nonlocal transport of thermally generated magnons not only unveils the underlying mechanism of the spin Seebeck effect, but also allows for the extraction of the magnon relaxation length (λm) in a magnetic material, the average distance over which thermal magnons can propagate. In this study, we
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log_2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
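For the optimal prefix code problem mentioned above, the entropy lower bound and the "exceeds by at most one" property are easy to check numerically. A minimal sketch (the example distribution is illustrative):

```python
import heapq
import math

def huffman_avg_depth(probs):
    """Average codeword length (i.e. average decision-tree depth) of an
    optimal prefix code, computed by summing merged weights in
    Huffman's algorithm."""
    heap = list(probs)
    heapq.heapify(heap)
    avg = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        avg += a + b  # every symbol below this merge gains one level
        heapq.heappush(heap, a + b)
    return avg

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

p = [0.5, 0.25, 0.125, 0.125]
print(entropy(p), huffman_avg_depth(p))  # dyadic probabilities: both 1.75
```

For dyadic probabilities the bound is tight; in general H(p) ≤ average depth < H(p) + 1.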
Lateral dispersion coefficients as functions of averaging time
Sheih, C.M.
1980-01-01
Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results can represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate between various processes in studies of plume dispersion.
2010-07-01
... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
2010-07-01
... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
Average inactivity time model, associated orderings and reliability properties
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.
Average L-shell fluorescence, Auger, and electron yields
Krause, M.O.
1980-01-01
The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization.
Simultaneous inference for model averaging of derived parameters
Jensen, Signe Marie; Ritz, Christian
2015-01-01
Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
Salecker-Wigner-Peres clock and average tunneling times
Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.
2011-01-01
The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).
Time average vibration fringe analysis using Hilbert transformation
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-01-01
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
Average multiplications in deep inelastic processes and their interpretation
Kiselev, A.V.; Petrov, V.A.
1983-01-01
Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of a nonperturbative character. As the energy of the final hadron state increases, the leading contribution to the average multiplicity comes from a parton subprocess due to the production of massive quark and gluon jets and their further fragmentation, while the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e- annihilation tends to unity at high energies.
Fitting a function to time-dependent ensemble averaged data
Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders
2018-01-01
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software...
Average wind statistics for SRP area meteorological towers
Laurinat, J.E.
1987-01-01
A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982-1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975-1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated from the averaged statistics.
The Length of Maternity Leave and Family Health
Beuchert-Pedersen, Louise Voldby; Humlum, Maria Knoth; Vejlin, Rune Majlund
We study the relationship between the length of maternity leave and the physical and psychological health of the family. Using a reform of the parental leave scheme in Denmark that increased the number of weeks of leave with full benefit compensation, we estimate the effect of the length of maternity leave on a range of health indicators including the number of hospital admissions for both mother and child and the probability of the mother receiving antidepressants. The reform led to an increase in average post-birth maternity leave... matters for child or maternal health outcomes, and thus we complement the existing evidence on maternity leave expansions that tends to find limited effects on children's later developmental, educational, and labor market outcomes. Our results suggest that any beneficial effects of increasing the length of maternity leave are greater for low-resource families...
Explicit analytical solution of a pendulum with periodically varying length
Yang Tianzhi; Fang Bo; Li Song; Huang Wenhu
2010-01-01
A pendulum with periodically varying length is an interesting physical system. It has been studied by some researchers using traditional perturbation methods (for example, the averaging method). But due to the limitation of the conventional perturbation methods, the solutions are not valid for long-term prediction of the pendulum. In this paper, we use the homotopy analysis method to explore the approximate solution to this system. The method can easily self-adjust and control the convergence region. By applying the method to the governing equation of the pendulum, we obtain the approximation solution in a closed form. It is shown by the numerical method that the homotopy analysis method supplies a more accurate analytical solution for predicting the long-term behaviour of the pendulum. We believe that this system may be a good example for undergraduate and graduate students for better understanding of nonlinear oscillations.
Expected value of finite fission chain lengths of pulse reactors
Liu Jianjun; Zhou Zhigao; Zhang Ben'ai
2007-01-01
The average neutron population necessary for sponsoring a persistent fission chain in a multiplying system is discussed. In the point reactor model, the probability function θ(n, t0, t) of a source neutron at time t0 leading to n neutrons at time t is dealt with. The non-linear partial differential equation for the probability generating function G(z; t0, t) is derived. By solving the equation, we have obtained an approximate analytic solution for a slightly prompt-supercritical system. For the pulse reactor Godiva-II, the mean value of finite fission chain lengths is estimated in this work, and the estimated value is shown to be reasonable for the experimental analysis. (authors)
Real-time traffic signal optimization model based on average delay time per person
Pengpeng Jiao
2015-10-01
Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models were formulated using an optimization approach, with objective functions minimizing vehicle delay time. To improve people's trip efficiency, this article aims to minimize delay time per person. Based on time-varying traffic flow data at intersections, the article first fits curves of cumulative arriving and departing vehicles, as well as the corresponding functions. Moreover, this article transfers vehicle delay time to personal delay time using the average passenger loads of cars and buses, employs such time as the objective function, and proposes a signal timing optimization model for intersections to achieve real-time signal parameters, including cycle length and green time. This research further implements a case study based on practical data collected at an intersection in Beijing, China. The average delay time per person and queue length are employed as evaluation indices to show the performance of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
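The conversion from vehicle delay to person delay can be sketched as an occupancy-weighted average. This is an illustrative toy, not the article's model; the occupancy values, delays, and function name are assumptions:

```python
def person_delay(veh_delays, veh_types, occupancy):
    """Average delay per person: weight each vehicle's delay (s) by its
    assumed average passenger load."""
    total_person_delay = sum(d * occupancy[t] for d, t in zip(veh_delays, veh_types))
    total_people = sum(occupancy[t] for t in veh_types)
    return total_person_delay / total_people

occ = {"car": 1.5, "bus": 30.0}  # assumed average passenger loads
delays = [40, 25, 60]            # seconds of delay per vehicle
types = ["car", "car", "bus"]
print(round(person_delay(delays, types, occ), 2))  # → 57.5
```

Note how the single bus dominates: a person-based objective shifts green time toward high-occupancy movements.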
Effect of TiO2 nanotube length and lateral tubular spacing on ...
Abstract. The main objective of this study is to show the effect of TiO2 nanotube length, diameter and intertubular ... formation of nanotube arrays spread uniformly over a large area. ... 36, 48 and 72 h at an applied voltage of 40 V. The anodized ... and phase analysis for the obtained nanotubes were done .... Using an extra-.
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
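The atmospheric-evaporative-demand step uses the Hargreaves model, which can be sketched in its common 1985 Hargreaves-Samani form (coefficient and sample inputs are illustrative, not values from the study):

```python
def hargreaves_et0(ra, tmax, tmin):
    """Hargreaves reference evapotranspiration (mm/day).
    ra: extraterrestrial (exoatmospheric) radiation expressed as mm/day
    of evaporation equivalent; temperatures in degrees C."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * (tmax - tmin) ** 0.5

# Illustrative monthly averages: ra = 15 mm/day, tmax = 25 C, tmin = 10 C
et0 = hargreaves_et0(15.0, 25.0, 10.0)
print(round(et0, 2))
```

Subtracting this demand from monthly precipitation, cell by cell, gives the water balance described above.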
Medicare Part B Drug Average Sales Pricing Files
U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturers ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...
A time averaged background compensator for Geiger-Mueller counters
Bhattacharya, R.C.; Ghosh, P.K.
1983-01-01
The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel providing time averaged compensation. The method suits portable instruments. (orig.)
Time averaging, ageing and delay analysis of financial time series
Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf
2017-06-01
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
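The central observable, the time averaged MSD of a single trajectory, can be sketched as a sliding-window average. The geometric-Brownian-motion parameters below are illustrative, not fitted to any index:

```python
import numpy as np

def time_averaged_msd(x, lag):
    """Time averaged mean squared displacement of a single trajectory x
    at a given lag, averaged over all overlapping windows."""
    disp = x[lag:] - x[:-lag]
    return float(np.mean(disp ** 2))

rng = np.random.default_rng(1)
# Log-price of geometric Brownian motion (Black-Scholes-Merton model),
# with illustrative volatility sigma = 0.2 and time step dt = 0.01.
sigma, dt, n = 0.2, 0.01, 100_000
log_x = np.cumsum(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
print([time_averaged_msd(log_x, lag) for lag in (10, 100, 1000)])
```

For Brownian dynamics the time averaged MSD grows linearly with the lag, which is the baseline against which the ageing and delay analyses are compared.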
Historical Data for Average Processing Time Until Hearing Held
Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...
GIS Tools to Estimate Average Annual Daily Traffic
2012-06-01
This project presents five tools that were created for a geographical information system to estimate Annual Average Daily : Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...
A high speed digital signal averager for pulsed NMR
Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.
1978-01-01
A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
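A minimal sketch of such stable (running) averaging, assuming independent Gaussian noise per sweep: after every sweep the buffer holds the correctly scaled mean, and with 2^12 = 4096 sweeps the expected S/N gain is 10·log10(4096) ≈ 36 dB, matching the quoted figure.

```python
import math
import random

def stable_average(sweeps):
    """Running ('stable') average: after every sweep the buffer holds the
    calibrated mean, so the display is correctly scaled at all times."""
    avg = [0.0] * len(sweeps[0])
    for k, sweep in enumerate(sweeps, start=1):
        for i, v in enumerate(sweep):
            avg[i] += (v - avg[i]) / k   # incremental mean update
    return avg

random.seed(7)
n_channels, n_sweeps = 256, 4096          # 2**12 sweeps, as in the instrument
signal = [math.sin(2 * math.pi * i / n_channels) for i in range(n_channels)]
sweeps = [[s + random.gauss(0, 1.0) for s in signal] for _ in range(n_sweeps)]
avg = stable_average(sweeps)

# residual noise after averaging; input noise had unit RMS
noise_rms = math.sqrt(sum((a - s) ** 2 for a, s in zip(avg, signal)) / n_channels)
print(20 * math.log10(1.0 / noise_rms))   # S/N gain in dB, ~36 dB for 4096 sweeps
```

The incremental update is what makes the display "calibrated at all times": the buffer never holds an unnormalized sum.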
The average-shadowing property and topological ergodicity for flows
Gu Rongbao; Guo Wenjing
2005-01-01
In this paper, the transitivity of a flow without sensitive dependence on initial conditions is studied, and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic.
Application of Bayesian approach to estimate average level spacing
Huang Zhongfu; Zhao Zhixiang
1991-01-01
A method is given for estimating the average level spacing from a set of resolved resonance parameters using a Bayesian approach. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out.
Annual average equivalent dose of workers from the health area
Daltro, T.F.L.; Campos, L.L.
1992-01-01
Personnel monitoring data for workers in the health area during 1985 and 1991 were studied, giving a general overview of the change in the annual average equivalent dose. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses for the same sectors in different hospitals. (C.G.C.)
A precise measurement of the average b hadron lifetime
Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; 
Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; 
Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G
1996-01-01
An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
Tellier Yoann
2018-01-01
The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien
2018-04-01
The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
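The origin of such averaging bias can be shown with a toy model: applying a nonlinear (here logarithmic) retrieval to averaged signals differs from averaging the nonlinearly processed single shots. The ratio and noise level below are assumptions for illustration, not MERLIN's actual processing chain:

```python
import math
import random

random.seed(3)
true_ratio = 2.0                      # assumed "off"/"on" power ratio
n = 50_000
# noisy single-shot power ratios (multiplicative Gaussian noise, toy model)
shots = [true_ratio * (1 + random.gauss(0, 0.3)) for _ in range(n)]
# discard the rare non-physical negative samples of this toy noise model
shots = [s for s in shots if s > 0]

log_of_mean = math.log(sum(shots) / len(shots))              # average, then log
mean_of_logs = sum(math.log(s) for s in shots) / len(shots)  # log, then average

print(log_of_mean)    # ~ ln(2) = 0.693
print(mean_of_logs)   # biased low by ~ sigma^2/2 (Jensen's inequality)
```

The gap between the two estimates is the kind of nonlinearity-induced bias that the correction algorithms in the paper are designed to remove.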
The average action for scalar fields near phase transitions
Wetterich, C.
1991-08-01
We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)
Wave function collapse implies divergence of average displacement
Marchewka, A.; Schuss, Z.
2005-01-01
We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.
Average geodesic distance of skeleton networks of Sierpinski tetrahedron
Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao
2018-04-01
The average distance is of interest in research on complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique of finite patterns for the integral of geodesic distance with respect to the self-similar measure on the Sierpinski tetrahedron.
Driving force for hydrophobic interaction at different length scales.
Zangi, Ronen
2011-03-17
We study by molecular dynamics simulations the driving force for the hydrophobic interaction between graphene sheets of different sizes down to the atomic scale. Similar to the prediction by Lum, Chandler, and Weeks for hard-sphere solvation [J. Phys. Chem. B 1999, 103, 4570-4577], we find the driving force to be length-scale dependent, despite the fact that our model systems do not exhibit dewetting. For small hydrophobic solutes, the association is purely entropic, while enthalpy favors dissociation. The latter is demonstrated to arise from the enhancement of hydrogen bonding between the water molecules around small hydrophobes. On the other hand, the attraction between large graphene sheets is dominated by enthalpy which mainly originates from direct solute-solute interactions. The crossover length is found to be inside the range of 0.3-1.5 nm² of the surface area of the hydrophobe that is eliminated in the association process. In the large-scale regime, different thermodynamic properties are scalable with this change of surface area. In particular, upon dimerization, a total and a water-induced stabilization of approximately 65 and 12 kJ/mol/nm² are obtained, respectively, and on average around one hydrogen bond is gained per 1 nm² of graphene sheet association. Furthermore, the potential of mean force between the sheets is also scalable except for interplate distances smaller than 0.64 nm which corresponds to the region around the barrier for removing the last layer of water. It turns out that, as the surface area increases, the relative height of the barrier for association decreases and the range of attraction increases. It is also shown that, around small hydrophobic solutes, the lifetime of the hydrogen bonds is longer than in the bulk, while around large hydrophobes it is the same. Nevertheless, the rearrangement of the hydrogen-bond network for both length-scale regimes is slower than in bulk water. © 2011 American Chemical Society
Beyond Mixing-length Theory: A Step Toward 321D
Arnett, W. David; Meakin, Casey; Viallet, Maxime; Campbell, Simon W.; Lattanzio, John C.; Mocák, Miroslav
2015-08-01
We examine the physical basis for algorithms to replace mixing-length theory (MLT) in stellar evolutionary computations. Our 321D procedure is based on numerical solutions of the Navier-Stokes equations. These implicit large eddy simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent, including the Kolmogorov cascade. We use the Reynolds-averaged Navier-Stokes (RANS) formulation to make concise the 3D simulation data, and use the 3D simulations to give closure for the RANS equations. We further analyze this data set with a simple analytical model, which is non-local and time-dependent, and which contains both MLT and the Lorenz convective roll as particular subsets of solutions. A characteristic length (the damping length) again emerges in the simulations; it is determined by an observed balance between (1) the large-scale driving, and (2) small-scale damping. The nature of mixing and convective boundaries is analyzed, including dynamic, thermal and compositional effects, and compared to a simple model. We find that (1) braking regions (boundary layers in which mixing occurs) automatically appear beyond the edges of convection as defined by the Schwarzschild criterion, (2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux (unlike MLT), (3) the effects of composition gradients on flow can be comparable to thermal effects, and (4) convective boundaries in neutrino-cooled stages differ in nature from those in photon-cooled stages (different Péclet numbers). The algorithms are based upon ILES solutions to the Navier-Stokes equations, so that, unlike MLT, they do not require any calibration to astronomical systems in order to predict stellar properties. Implications for solar abundances, helioseismology, asteroseismology, nucleosynthesis yields, supernova progenitors and core collapse are indicated.
BEYOND MIXING-LENGTH THEORY: A STEP TOWARD 321D
Arnett, W. David; Meakin, Casey; Viallet, Maxime; Campbell, Simon W.; Lattanzio, John C.; Mocák, Miroslav
2015-01-01
We examine the physical basis for algorithms to replace mixing-length theory (MLT) in stellar evolutionary computations. Our 321D procedure is based on numerical solutions of the Navier–Stokes equations. These implicit large eddy simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent, including the Kolmogorov cascade. We use the Reynolds-averaged Navier–Stokes (RANS) formulation to make concise the 3D simulation data, and use the 3D simulations to give closure for the RANS equations. We further analyze this data set with a simple analytical model, which is non-local and time-dependent, and which contains both MLT and the Lorenz convective roll as particular subsets of solutions. A characteristic length (the damping length) again emerges in the simulations; it is determined by an observed balance between (1) the large-scale driving, and (2) small-scale damping. The nature of mixing and convective boundaries is analyzed, including dynamic, thermal and compositional effects, and compared to a simple model. We find that (1) braking regions (boundary layers in which mixing occurs) automatically appear beyond the edges of convection as defined by the Schwarzschild criterion, (2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux (unlike MLT), (3) the effects of composition gradients on flow can be comparable to thermal effects, and (4) convective boundaries in neutrino-cooled stages differ in nature from those in photon-cooled stages (different Péclet numbers). The algorithms are based upon ILES solutions to the Navier–Stokes equations, so that, unlike MLT, they do not require any calibration to astronomical systems in order to predict stellar properties. Implications for solar abundances, helioseismology, asteroseismology, nucleosynthesis yields, supernova progenitors and core collapse are indicated
Axial length of atomic bomb survivors in Nagasaki
Wakiyama, Harumi; Kishikawa, Yasuhiro; Imamura, Naoki; Amemiya, Tsugio
2002-01-01
We reviewed a series of 778 patients who had cataract surgery during the past 4 years at the Nagasaki Atomic Bomb Memorial Hospital. We evaluated the history of exposure to radiation from the atomic bomb in 1945, axial length, and state of refraction. All were born before 1945. The series comprised 263 males and 515 females. Their ages averaged 76.5±8.6 years. A history of exposure to radiation was present in 356 patients. The remaining 422 patients served as controls. There was no difference in the type of cataract between the two groups. High myopia was present in 11 irradiated patients (3.2%) and in 24 patients in the control group (6.0%). The difference was not significant (p=0.083). There was no high myopia among 24 patients who were aged 18 years or less at the time of irradiation and who were within 2 km of the epicenter. The present result is not definitive because the 'irradiated group' would include those with little or no exposure and because precise data have not been available on the radiation dose. No difference was present regarding the axial length between the two groups or between the sexes. (author)
Problems with Excessive Residual Lower Leg Length in Pediatric Amputees
Osebold, William R; Lester, Edward L; Christenson, Donald M
2001-01-01
We studied six pediatric amputees with long below-knee residual limbs, in order to delineate their functional and prosthetic situations, specifically in relation to problems with fitting for dynamic-response prosthetic feet. Three patients had congenital pseudoarthrosis of the tibia secondary to neurofibromatosis, one had fibular hemimelia, one had a traumatic amputation, and one had amputation secondary to burns. Five patients had Syme's amputations, one had a Boyd amputation. Ages at amputation ranged from nine months to five years (average age 3 years 1 month). After amputation, the long residual below-knee limbs allowed fitting with only the lowest-profile prostheses, such as deflection plates. In three patients, the femoral dome to tibial plafond length was greater on the amputated side than on the normal side. To allow room for more dynamic-response (and larger) foot prostheses, two patients have undergone proximal and distal tibial-fibular epiphyseodeses (one at age 5 years 10 months, the other at 3 years 7 months) and one had a proximal tibial-fibular epiphyseodesis at age 7 years 10 months. (All three patients are still skeletally immature.) The families of two other patients are considering epiphyseodeses, and one patient is not a candidate (skeletally mature). Scanogram data indicate that at skeletal maturity the epiphyseodesed patients will have adequate length distal to their residual limbs to fit larger and more dynamic-response prosthetic feet. PMID:11813953
Concentration and length dependence of DNA looping in transcriptional regulation.
Lin Han
2009-05-01
In many cases, transcriptional regulation involves the binding of transcription factors at sites on the DNA that are not immediately adjacent to the promoter of interest. This action at a distance is often mediated by the formation of DNA loops: binding at two or more sites on the DNA results in the formation of a loop, which can bring the transcription factor into the immediate neighborhood of the relevant promoter. These processes are important in settings ranging from the historic bacterial examples (bacterial metabolism and the lytic-lysogeny decision in bacteriophage), which gave rise to the modern concept of gene regulation, to regulatory processes central to pattern formation during development of multicellular organisms. Though there have been a variety of insights into the combinatorial aspects of transcriptional control, the mechanism of DNA looping as an agent of combinatorial control in both prokaryotes and eukaryotes remains unclear. We use single-molecule techniques to dissect DNA looping in the lac operon. In particular, we measure the propensity for DNA looping by the Lac repressor as a function of the concentration of repressor protein and as a function of the distance between repressor binding sites. As with earlier single-molecule studies, we find (at least) two distinct looped states and demonstrate that the presence of these two states depends both upon the concentration of repressor protein and the distance between the two repressor binding sites. We find that loops form even at interoperator spacings considerably shorter than the DNA persistence length, without the intervention of any other proteins to prebend the DNA. The concentration measurements also permit us to use a simple statistical mechanical model of DNA loop formation to determine the free energy of DNA looping or, equivalently, the J factor for looping.
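A minimal two-state sketch of how a looping free energy can be read off from a measured looping probability. The thermal-energy value and the probability below are assumed for illustration; the authors' actual model also accounts for repressor concentration:

```python
import math

KT = 4.11e-21          # thermal energy k_B * T at ~300 K, in joules

def looping_free_energy(p_loop):
    """Two-state estimate: if the DNA spends a fraction p_loop of the time
    looped, then Delta G_loop = -kT * ln(p_loop / (1 - p_loop))."""
    return -KT * math.log(p_loop / (1 - p_loop))

# hypothetical looping probability from a tethered-bead trace
p = 0.3
dg = looping_free_energy(p)
print(dg / KT)   # free energy in units of kT: ln(7/3) ≈ 0.85 kT
```

A positive value means the unlooped state is favored; at p = 0.5 the free energy difference vanishes, as expected for a two-state system.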
Vocal tract length and formant frequency dispersion correlate with body size in rhesus macaques.
Fitch, W T
1997-08-01
Body weight, length, and vocal tract length were measured for 23 rhesus macaques (Macaca mulatta) of various sizes using radiographs and computer graphic techniques. Linear predictive coding analysis of tape-recorded threat vocalizations was used to determine vocal tract resonance frequencies ("formants") for the same animals. A new acoustic variable is proposed, "formant dispersion," which should theoretically depend upon vocal tract length. Formant dispersion is the averaged difference between successive formant frequencies, and was found to be closely tied to both vocal tract length and body size. Despite the common claim that voice fundamental frequency (F0) provides an acoustic indication of body size, repeated investigations have failed to support such a relationship in many vertebrate species including humans. Formant dispersion, unlike voice pitch, is proposed to be a reliable predictor of body size in macaques, and probably many other species.
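Formant dispersion as defined here is straightforward to compute; a small sketch with hypothetical formant values:

```python
def formant_dispersion(formants):
    """Averaged difference between successive formant frequencies (Hz).
    For formants F1..FN this telescopes to (FN - F1) / (N - 1)."""
    diffs = [b - a for a, b in zip(formants, formants[1:])]
    return sum(diffs) / len(diffs)

# hypothetical formant frequencies (Hz) for a single vocalization
f = [900, 2300, 3800, 5200]
print(formant_dispersion(f))  # (5200 - 900) / 3 ≈ 1433.3 Hz
```

Because the average of successive differences telescopes, only the first and last formants actually matter, which is part of what makes the measure robust.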
Bond-length fluctuations in the copper oxide superconductors
Goodenough, John B [Texas Materials Institute, ETC 9.102, University of Texas at Austin, Austin, TX 78712 (United States)
2003-02-26
Superconductivity in the copper oxides occurs at a crossover from localized to itinerant electronic behaviour, a transition that is first order. A spinodal phase segregation is normally accomplished by atomic diffusion; but where it occurs at too low a temperature for atomic diffusion, it may be realized by cooperative atomic displacements. Locally cooperative, fluctuating atomic displacements may stabilize a distinguishable phase lying between a localized-electron phase and a Fermi-liquid phase; this intermediate phase exhibits quantum-critical-point behaviour with strong electron-lattice interactions making charge transport vibronic. Ordering of the bond-length fluctuations at lower temperatures would normally stabilize a charge-density wave (CDW), which suppresses superconductivity. It is argued that in the copper oxide superconductors, crossover occurs at an optimal doping concentration for the formation of ordered two-electron/two-hole bosonic bags of spin S = 0 in a matrix of localized spins; the correlation bags contain two holes in a linear cluster of four copper centres ordered within alternate Cu-O-Cu rows of a CuO₂ sheet. This ordering is optimal at a hole concentration per Cu atom of p ≈ 1/6, but it is not static. Hybridization of the vibronic electrons with the phonons that define long-range order of the fluctuating (Cu-O) bond lengths creates barely itinerant, vibronic quasiparticles of heavy mass. The heavy itinerant vibrons form Cooper pairs having a coherence length of the dimension of the bosonic bags. It is the hybridization of electrons and phonons that, it is suggested, stabilizes the superconductive state relative to a CDW state. (topical review)
String matching with variable length gaps
Bille, Philip; Gørtz, Inge Li; Vildhøj, Hjalte Wedel
2012-01-01
primitive in computational biology applications. Let m and n be the lengths of P and T, respectively, and let k be the number of strings in P. We present a new algorithm achieving time O(n log k + m + α) and space O(m + A), where A is the sum of the lower bounds of the lengths of the gaps in P and α is the total number of occurrences of the strings in P within T. Compared to the previous results this bound essentially achieves the best known time and space complexities simultaneously. Consequently, our algorithm obtains the best known bounds for almost all combinations of m, n, k, A, and α. Our algorithm...
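For orientation only, matching with variable-length gaps can be expressed as a bounded-repetition regular expression. This is a naive sketch of the problem being solved, not the paper's O(n log k + m + α) algorithm:

```python
import re

def vlg_pattern(strings, gaps):
    """Build a regex for a pattern P = s1 g1 s2 g2 ... sk, where each gap
    gi = (lo, hi) allows between lo and hi arbitrary characters."""
    parts = [re.escape(strings[0])]
    for (lo, hi), s in zip(gaps, strings[1:]):
        parts.append(".{%d,%d}" % (lo, hi))  # bounded-length gap
        parts.append(re.escape(s))
    return "".join(parts)

# pattern "AT", gap of 1-3 characters, then "GG"
pat = vlg_pattern(["AT", "GG"], [(1, 3)])
m = re.search(pat, "CCATxGGTT")
print(m.group(0) if m else None)  # 'ATxGG'
```

A backtracking regex engine can take exponential time on such patterns in the worst case, which is exactly why dedicated algorithms like the one in this record matter for biological sequence data.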
Distance and Cable Length Measurement System
Hernández, Sergio Elias; Acosta, Leopoldo; Toledo, Jonay
2009-01-01
A simple, economic and successful design for distance and cable length detection is presented. The measurement system is based on the continuous repetition of a pulse that endlessly travels along the distance to be detected. There is a pulse repeater at both ends of the distance or cable to be measured. The endless repetition of the pulse generates a frequency that varies almost inversely with the distance to be measured. The resolution and the distance or cable length range can be adjusted by varying the repetition time delay introduced at both ends and the measurement time. With this design a distance can be measured with centimeter resolution using an electronic system with microsecond resolution, simplifying classical time-of-flight designs which require electronics with picosecond resolution. This design was also applied to position measurement. PMID:22303169
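The inversion from measured repetition frequency back to distance can be sketched as follows. The propagation speed and delay values are assumptions, and the model simply charges one fixed repeater delay per traversal:

```python
C = 2.0e8   # assumed propagation speed in the cable, m/s (~0.66 c)

def distance_from_frequency(freq_hz, repeater_delay_s):
    """Invert the round-trip relation: the pulse completes one round trip
    per period, T = 2 * (d / v + t_delay), so d = v * (T / 2 - t_delay)."""
    period = 1.0 / freq_hz
    return C * (period / 2 - repeater_delay_s)

# forward check: a 100 m cable with 1 microsecond repeater delays
d_true, t_delay = 100.0, 1e-6
freq = 1.0 / (2 * (d_true / C + t_delay))
print(distance_from_frequency(freq, t_delay))  # recovers 100.0 m
```

The near-inverse relation between frequency and distance quoted in the abstract is visible in the forward formula: for t_delay small compared to the transit time, freq ≈ v / (2d).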
Investigations on quantum mechanics with minimal length
Chargui, Yassine
2009-01-01
We consider a modified quantum mechanics where the coordinates and momenta are assumed to satisfy a non-standard commutation relation of the form [X_i, P_j] = iħ(δ_ij(1 + βP²) + β′P_iP_j). Such an algebra results in a generalized uncertainty relation which leads to the existence of a minimal observable length. Moreover, it incorporates a UV/IR mixing and a noncommutative position space. We analyse the possible representations in terms of differential operators. The latter are used to study the low-energy effects of the minimal length by considering different quantum systems: the harmonic oscillator, the Klein-Gordon oscillator, the spinless Salpeter Coulomb problem, and the Dirac equation with a linear confining potential. We also discuss whether such effects are observable in precision measurements on a relativistic electron trapped in a strong magnetic field.
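The minimal observable length follows directly from the β term of such an algebra; a short sketch of the standard derivation in one dimension (taking β′ = 0 for simplicity):

```latex
[\hat{X},\hat{P}] = i\hbar\,(1+\beta \hat{P}^2)
\;\Rightarrow\;
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left(1+\beta\,\Delta p^2\right)
\;\Rightarrow\;
\Delta x \;\ge\; \frac{\hbar}{2}\left(\frac{1}{\Delta p}+\beta\,\Delta p\right),
```

and minimizing the right-hand side at $\Delta p = 1/\sqrt{\beta}$ gives a nonzero minimal position uncertainty $\Delta x_{\min} = \hbar\sqrt{\beta}$.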
Aberrant leukocyte telomere length in Birdshot Uveitis.
Vazirpanah, Nadia; Verhagen, Fleurieke H; Rothova, Anna; Missotten, Tom O A R; van Velthoven, Mirjam; Den Hollander, Anneke I; Hoyng, Carel B; Radstake, Timothy R D J; Broen, Jasper C A; Kuiper, Jonas J W
2017-01-01
Birdshot Uveitis (BU) is an archetypical chronic inflammatory eye disease, with poor visual prognosis, that provides an excellent model for studying chronic inflammation. BU typically affects patients in the fifth decade of life. This suggests that it may represent an age-related chronic inflammatory disease, which has been linked to increased erosion of the telomere length of leukocytes. To study this in detail, we exploited a sensitive standardized quantitative real-time polymerase chain reaction to determine the peripheral blood leukocyte telomere length (LTL) in 91 genotyped Dutch BU patients and 150 unaffected Dutch controls. Although LTL erosion rates were very similar between BU patients and healthy controls, we observed that BU patients displayed longer LTL, with a median of log(LTL) = 4.87 (= 74131 base pairs) compared to 4.31 (= 20417 base pairs) in unaffected controls. These findings suggest that BU is accompanied by significantly longer LTL.
B. Azzouz
2007-01-01
The textile fibre mixture, as a multicomponent blend of variable fibres, requires an appropriate method for predicting the characteristics of the final blend. The length diagram and the fibrogram of cotton are generated. Then the length distribution, the length diagram, and the fibrogram of a blend of different categories of cotton are determined. The length distributions by weight of five different categories of cotton (Egyptian, USA Pima, Brazilian, USA Upland, and Uzbekistani) are measured by AFIS. From these distributions, the length distribution, the length diagram, and the fibrogram by weight of four binary blends are expressed. The length parameters of these cotton blends are calculated and their variations are plotted against the mass fraction x of one component in the blend. These calculated parameters are compared to those of real blends. Finally, the selection of the optimal blends using the linear programming method, based on the hypothesis that the cotton blend parameters vary linearly as a function of the component ratios, is proved insufficient.
Tranvåg, Eirik Joakim; Ali, Merima; Norheim, Ole Frithjof
2013-07-11
Most studies on health inequalities use average measures, but describing the distribution of health can also provide valuable knowledge. In this paper, we estimate and compare within-group and between-group inequalities in length of life for population groups in Ethiopia in 2000 and 2011. We used data from the 2011 and 2000 Ethiopia Demographic and Health Survey and the Global Burden of Disease study 2010, and the MODMATCH modified logit life table system developed by the World Health Organization to model mortality rates, life expectancy, and length of life for Ethiopian population groups stratified by wealth quintiles, gender and residence. We then estimated and compared within-group and between-group inequality in length of life using the Gini index and absolute length of life inequality. Length of life inequality has decreased and life expectancy has increased for all population groups between 2000 and 2011. Length of life inequality within wealth quintiles is about three times larger than the between-group inequality of 9 years. Total length of life inequality in Ethiopia was 27.6 years in 2011. Longevity has increased and the distribution of health in Ethiopia is more equal in 2011 than 2000, with length of life inequality reduced for all population groups. Still there is considerable potential for further improvement. In the Ethiopian context with a poor and highly rural population, inequality in length of life within wealth quintiles is considerably larger than between them. This suggests that other factors than wealth substantially contribute to total health inequality in Ethiopia and that identification and quantification of these factors will be important for identifying proper measures to further reduce length of life inequality.
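The within-group inequality measure can be sketched with a small Gini-index routine. The sample ages below are hypothetical, and the study's WHO life-table machinery is not reproduced here:

```python
def gini(ages_at_death):
    """Gini index of the length-of-life distribution: the mean absolute
    difference between all pairs, divided by twice the mean."""
    x = sorted(ages_at_death)
    n = len(x)
    mean = sum(x) / n
    # O(n) form using the sorted order: sum_i (2i - n + 1) * x_i
    weighted = sum((2 * i - n + 1) * v for i, v in enumerate(x))
    return weighted / (n * n * mean)

# hypothetical ages at death: one early death pulls inequality up sharply
sample = [2, 55, 60, 70, 80]
print(gini(sample))  # ≈ 0.256
```

A Gini of 0 would mean everyone dies at the same age; the index rises toward 1 as deaths spread toward the extremes, which is why reductions in child mortality compress length-of-life inequality so strongly.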
Allometry of sexual size dimorphism in turtles: a comparison of mass and length data.
Regis, Koy W; Meik, Jesse M
2017-01-01
The macroevolutionary pattern of Rensch's Rule (positive allometry of sexual size dimorphism) has had mixed support in turtles. Using the largest carapace length dataset and only large-scale body mass dataset assembled for this group, we determine (a) whether turtles conform to Rensch's Rule at the order, suborder, and family levels, and (b) whether inferences regarding allometry of sexual size dimorphism differ based on choice of body size metric used for analyses. We compiled databases of mean body mass and carapace length for males and females for as many populations and species of turtles as possible. We then determined scaling relationships between males and females for average body mass and straight carapace length using traditional and phylogenetic comparative methods. We also used regression analyses to evaluate sex-specific differences in the variance explained by carapace length on body mass. Using traditional (non-phylogenetic) analyses, body mass supports Rensch's Rule, whereas straight carapace length supports isometry. Using phylogenetic independent contrasts, both body mass and straight carapace length support Rensch's Rule with strong congruence between metrics. At the family level, support for Rensch's Rule is more frequent when mass is used and in phylogenetic comparative analyses. Turtles do not differ in slopes of sex-specific mass-to-length regressions and more variance in body size within each sex is explained by mass than by carapace length. Turtles display Rensch's Rule overall and within families of Cryptodires, but not within Pleurodire families. Mass and length are strongly congruent with respect to Rensch's Rule across turtles, and discrepancies are observed mostly at the family level (the level where Rensch's Rule is most often evaluated). At macroevolutionary scales, the purported advantages of length measurements over weight are not supported in turtles.
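The core allometry test can be sketched as an ordinary least-squares regression on log-transformed sizes (the study itself also uses phylogenetic comparative methods; the species means below are hypothetical):

```python
import math

def allometry_slope(male_sizes, female_sizes):
    """OLS slope of log(male size) on log(female size) across species.
    A slope > 1 indicates positive allometry of SSD (Rensch's Rule)."""
    xs = [math.log(f) for f in female_sizes]
    ys = [math.log(m) for m in male_sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# hypothetical species means: males grow disproportionately in larger species
females = [1.0, 2.0, 4.0, 8.0]
males = [0.9, 2.0, 4.6, 10.5]
print(allometry_slope(males, females))  # > 1, consistent with Rensch's Rule
```

A slope of exactly 1 (isometry) means dimorphism does not change with body size; slopes below 1 would indicate the converse pattern, with dimorphism shrinking in larger species.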
Quark ensembles with infinite correlation length
Molodtsov, S. V.; Zinovjev, G. M.
2014-01-01
By studying quark ensembles with infinite correlation length we formulate the quantum field theory model that, as we show, is exactly integrable and develops an instability of its standard vacuum ensemble (the Dirac sea). We argue such an instability is rooted in high ground state degeneracy (for 'realistic' space-time dimensions) featuring a fairly specific form of energy distribution, and with the cutoff parameter going to infinity this inherent energy distribution becomes infinitely narrow...
Summary of coherent neutron scattering length
Rauch, H.
1981-07-01
Experimental values of neutron-nuclei bound scattering lengths for some 354 isotopes and elements and the various spin-states are compiled in a uniform way together with their error bars as quoted in the original literature. Recommended values are also given. The definitions of the relevant quantities presented in the data tables and the basic principles of measurements are explained in the introductory chapters. The data is also available on a magnetic tape
Asymptotic safety, emergence and minimal length
Percacci, Roberto; Vacca, Gian Paolo
2010-01-01
There seems to be a common prejudice that asymptotic safety is either incompatible with, or at best unrelated to, the other topics in the title. This is not the case. In fact, we show that (1) the existence of a fixed point with suitable properties is a promising way of deriving emergent properties of gravity, and (2) there is a sense in which asymptotic safety implies a minimal length. In doing so we also discuss possible signatures of asymptotic safety in scattering experiments.
Minimal length uncertainty relation and ultraviolet regularization
Kempf, Achim; Mangano, Gianpiero
1997-06-01
Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx₀ to the possible resolution of distances, at the latest on the scale of the Planck length of 10⁻³⁵ m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.
Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing
2017-09-01
The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders the timing to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which a fuzzy logic rule is used to determine the strength of trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show, first, that the fuzzy moving average strategy obtains a more stable rate of return than the plain moving average strategies. Second, the holding-amount series is highly sensitive to the price series. Third, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.
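As background for the strategy described above, a plain (non-fuzzy) moving-average crossover, which supplies only the buy/sell timing and not the volume, can be sketched as follows. The window lengths and price series are illustrative; the paper's fuzzy logic rules and genetic algorithm layer are not reproduced here:

```python
import numpy as np

def sma(prices, n):
    """Simple moving average with a length-n window."""
    prices = np.asarray(prices, dtype=float)
    kernel = np.ones(n) / n
    return np.convolve(prices, kernel, mode="valid")

def crossover_signals(prices, short=3, long=5):
    """+1 (buy) where the short SMA is above the long SMA, -1 otherwise.

    Both averages are aligned so that each signal corresponds to the
    same ending day of the long window.
    """
    s = sma(prices, short)[long - short:]
    l = sma(prices, long)
    return np.where(s > l, 1, -1)

prices = [10, 11, 12, 13, 12, 11, 10, 9, 10, 12]
print(crossover_signals(prices))  # [ 1  1 -1 -1 -1 -1]
```

The fuzzy layer in the paper replaces the binary +1/-1 output with a graded rating level that also fixes the trading volume.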
The Effective Coherence Length in Anisotropic Superconductors
Polturak, E.; Koren, G.; Nesher, O
1999-01-01
If electrons are transmitted from a normal conductor (N) into a superconductor (S), common wisdom has it that the electrons are converted into Cooper pairs within a coherence length from the interface. This is true in conventional superconductors with an isotropic order parameter. We have established experimentally that the situation is rather different in high-Tc superconductors having an anisotropic order parameter. We used epitaxial thin film S/N bilayers having different interface orientations in order to inject carriers from S into N along different directions. The distance to which these carriers penetrate was determined through their effect on the Tc of the bilayers. We found that the effective coherence length is only 20 Å along the a or b directions, while in other directions we find a length of 250±20 Å out of plane, and an even larger value for in-plane, off-high-symmetry directions. These observations can be explained using the Blonder-Tinkham-Klapwijk model adapted to anisotropic superconductivity. Several implications of our results on outstanding problems with high-Tc junctions will be discussed.
FTO associations with obesity and telomere length.
Zhou, Yuling; Hambly, Brett D; McLachlan, Craig S
2017-09-01
This review examines the biology of the Fat mass- and obesity-associated gene (FTO), and the implications of genetic association of FTO SNPs with obesity and genetic aging. Notably, we focus on the role of FTO in the regulation of methylation status as possible regulators of weight gain and genetic aging. We present a theoretical review of the FTO gene with a particular emphasis on associations with UCP2, AMPK, RBL2, IRX3, CUX1, mTORC1 and hormones involved in hunger regulation. These associations are important for dietary behavior regulation and cellular nutrient sensing via amino acids. We suggest that these pathways may also influence telomere regulation. Telomere length (TL) attrition may be influenced by obesity-related inflammation and oxidative stress, and FTO gene-involved pathways. There is additional emerging evidence to suggest that telomere length and obesity are bi-directionally associated. However, the role of obesity risk-related genotypes and associations with TL are not well understood. The FTO gene may influence pathways implicated in regulation of TL, which could help to explain some of the non-consistent relationship between weight phenotype and telomere length that is observed in population studies investigating obesity.
Development of the Heated Length Correction Factor
Park, Ho-Young; Kim, Kang-Hoon; Nahm, Kee-Yil; Jung, Yil-Sup; Park, Eung-Jun
2008-01-01
The Critical Heat Flux (CHF) on a nuclear fuel is defined as a function of flow channel geometry and flow condition. According to the selection of the explanatory variable, there are three hypotheses to explain CHF at a uniformly heated vertical rod (inlet condition hypothesis, exit condition hypothesis, local condition hypothesis). For the inlet condition hypothesis, CHF is characterized as a function of system pressure, rod diameter, rod length, mass flow, and inlet subcooling. For the exit condition hypothesis, exit quality substitutes for inlet subcooling. Generally the heated length effect on CHF in the exit condition hypothesis is smaller than that of the other variables. Heated length is usually excluded in the local condition hypothesis, which describes CHF with only local fluid conditions. Most commercial plants currently use an empirical CHF correlation based on the local condition hypothesis. An empirical CHF correlation is developed by fitting the selected sensitive local variables to CHF test data using multiple non-linear regression. Because this kind of fitting cannot capture the physical mechanisms, it is difficult to reflect the proper effect of complex geometry. The recent CHF correlation development strategy of nuclear fuel vendors is therefore to first build a basic CHF correlation consisting of basic flow variables (local fluid conditions), and then to compensate with additional geometrical correction factors. Because the functional forms of the correction factors are determined separately from independent test data representing the corresponding geometry, they can be applied directly to other CHF correlations with only minor coefficient modification
Slip length crossover on a graphene surface
Liang, Zhi, E-mail: liangz3@rpi.edu [Rensselaer Nanotechnology Center, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Keblinski, Pawel, E-mail: keplip@rpi.edu [Rensselaer Nanotechnology Center, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States)
2015-04-07
Using equilibrium and non-equilibrium molecular dynamics simulations, we study the flow of argon fluid above the critical temperature in a planar nanochannel delimited by graphene walls. We observe that, as a function of pressure, the slip length first decreases due to the decreasing mean free path of gas molecules, reaches the minimum value when the pressure is close to the critical pressure, and then increases with further increase in pressure. We demonstrate that the slip length increase at high pressures is due to the fact that the viscosity of fluid increases much faster with pressure than the friction coefficient between the fluid and the graphene. This behavior is clearly exhibited in the case of graphene due to a very smooth potential landscape originating from a very high atomic density of graphene planes. By contrast, on surfaces with lower atomic density, such as an (100) Au surface, the slip length for high fluid pressures is essentially zero, regardless of the nature of interaction between fluid and the solid wall.
Short Rayleigh length free electron lasers
W. B. Colson
2006-03-01
Conventional free electron laser (FEL) oscillators minimize the optical mode volume around the electron beam in the undulator by making the resonator Rayleigh length about one third to one half of the undulator length. This maximizes gain and beam-mode coupling. In compact configurations of high-power infrared FELs or moderate power UV FELs, the resulting optical intensity can damage the resonator mirrors. To increase the spot size and thereby reduce the optical intensity at the mirrors below the damage threshold, a shorter Rayleigh length can be used, but the FEL interaction is significantly altered. We model this interaction using a coordinate system that expands with the rapidly diffracting optical mode from the ends of the undulator to the mirrors. Simulations show that the interaction of the strongly focused optical mode with a narrow electron beam inside the undulator distorts the optical wave front so it is no longer in the fundamental Gaussian mode. The simulations are used to study how mode distortion affects the single-pass gain in weak fields, and the steady-state extraction in strong fields.
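The geometric argument above (a shorter Rayleigh length enlarges the spot at the mirrors) follows from the Gaussian-beam relations w₀ = sqrt(λ z_R / π) and w(z) = w₀ sqrt(1 + (z/z_R)²). A quick numeric check with illustrative numbers, not taken from the paper:

```python
import math

def spot_radius(wavelength, z_R, z):
    """Gaussian-beam 1/e^2 radius at distance z from the waist.

    Uses w0 = sqrt(lambda * z_R / pi) and
    w(z) = w0 * sqrt(1 + (z / z_R)**2).
    """
    w0 = math.sqrt(wavelength * z_R / math.pi)
    return w0 * math.sqrt(1.0 + (z / z_R) ** 2)

# Illustrative values (not from the paper): a 1 um infrared FEL with
# mirrors 5 m from the waist, comparing Rayleigh lengths of 1 m and 0.2 m.
lam, z_mirror = 1e-6, 5.0
for z_R in (1.0, 0.2):
    print(f"z_R = {z_R} m -> w(mirror) = {spot_radius(lam, z_R, z_mirror):.2e} m")
```

Although the waist itself shrinks with z_R, the far-field divergence grows faster, so the mirror spot (and hence the damage margin) increases as the Rayleigh length is shortened.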
Average Soil Water Retention Curves Measured by Neutron Radiography
Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD
2011-01-01
Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
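The pixel-wise conversion can be sketched with Beer-Lambert's law: transmitted intensity falls as I = I_dry · exp(-μ_w θ d), so θ = -ln(I/I_dry)/(μ_w d), and normalizing by the saturated image gives relative saturation. The attenuation coefficient, thickness, and images below are illustrative placeholders, not the calibrated BT-2 beamline values, and the beam-hardening and geometric corrections are omitted:

```python
import numpy as np

def water_content(I, I_dry, mu_w=3.5, thickness=1.0):
    """Per-pixel volumetric water content from neutron transmission.

    Beer-Lambert: I = I_dry * exp(-mu_w * theta * d), hence
    theta = -ln(I / I_dry) / (mu_w * d). mu_w (cm^-1) and d (cm)
    are made-up values for illustration.
    """
    return -np.log(I / I_dry) / (mu_w * thickness)

def relative_saturation(I, I_dry, I_sat):
    """Normalize by the fully saturated image to suppress scattering bias."""
    return water_content(I, I_dry) / water_content(I_sat, I_dry)

# Synthetic 2x2 "images": dry column, saturated column (theta = 0.35),
# and a partially drained state (theta = 0.20).
I_dry = np.full((2, 2), 1000.0)
I_sat = I_dry * np.exp(-3.5 * 0.35)
I = I_dry * np.exp(-3.5 * 0.20)
print(relative_saturation(I, I_dry, I_sat))
```

Averaging such per-pixel saturations at each imposed matric potential yields one point of the average retention curve.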
Ueno, Hiromasa; Suga, Tadashi; Takao, Kenji; Tanaka, Takahiro; Misaki, Jun; Miyake, Yuto; Nagano, Akinori; Isaka, Tadao
2018-02-01
This study aimed to determine the relationship between Achilles tendon (AT) length and running performance, including running economy, in well-trained endurance runners. We also examined which portion of the AT is most related to running performance among AT lengths measured at three different portions. The AT lengths at three portions and the cross-sectional area (CSA) of 30 endurance runners were measured using magnetic resonance imaging. Each AT length was calculated as the distance from the calcaneal tuberosity to the muscle-tendon junction of the soleus, gastrocnemius medialis (GM-AT), and gastrocnemius lateralis, respectively. These AT lengths were normalized to shank length. The AT CSA was calculated as the average of 10, 20, and 30 mm above the distal insertion of the AT and normalized to body mass. Running economy was evaluated by measuring energy cost during three 4-minute submaximal treadmill running trials at 14, 16, and 18 km/h, respectively. Among the three AT lengths, only GM-AT correlated significantly with personal best 5000-m race time (r=-.376, P=.046). Furthermore, GM-AT correlated significantly with energy cost during submaximal treadmill running trials at 14 km/h and 18 km/h (r=-.446 and -.429, respectively) and with running performance. These findings suggest that a longer AT, especially GM-AT, may be advantageous to achieve superior running performance, with better running economy, in endurance runners. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Robles, Theodore F; Carroll, Judith E; Bai, Sunhye; Reynolds, Bridget M; Esquivel, Stephanie; Repetti, Rena L
2016-01-01
Conceptualizations of links between stress and cellular aging in childhood suggest that accumulating stress predicts shorter leukocyte telomere length (LTL). At the same time, several models suggest that emotional reactivity to stressors may play a key role in predicting cellular aging. Using intensive repeated measures, we tested whether exposure or emotional "reactivity" to conflict and warmth in the family were related to LTL. Children (N=39; 30 target children and 9 siblings) between 8 and 13 years of age completed daily diary questionnaires for 56 consecutive days assessing daily warmth and conflict in the marital and the parent-child dyad, and daily positive and negative mood. To assess exposure to conflict and warmth, diary scale scores were averaged over the 56 days. Mood "reactivity" was operationalized by using multilevel modeling to generate estimates of the slope of warmth or conflict scores (marital and parent-child, separately) predicting same-day mood for each individual child. After diary collection, a blood sample was collected to determine LTL. Among children aged 8-13 years, a stronger association between negative mood and marital conflict, suggesting greater negative mood reactivity to marital conflict, was related to shorter LTL (B=-1.51). The diaries captured family and marital conflict and warmth, and positive and negative mood over a two-month period. To our knowledge, these findings, although cross-sectional, represent the first evidence showing that the link between children's affective responses and daily family interactions may have implications for telomere length. Copyright © 2015 Elsevier Ltd. All rights reserved.
Negi, Sujay, E-mail: negi.sujay@gmail.com [Indian Institute of Technology, Roorkee 247667 (India); Kumar, Ravi, E-mail: ravikfme@gmail.com [Indian Institute of Technology, Roorkee 247667 (India); Majumdar, P., E-mail: pmajum@barc.gov.in [Bhabha Atomic Research Centre, Mumbai 400085 (India); Mukopadhyay, D., E-mail: dmukho@barc.gov.in [Bhabha Atomic Research Centre, Mumbai 400085 (India)
2017-03-15
Highlights: • At 16 kW/m input, thermal stability was attained at 595 °C, without PT-CT contact. • At 20 kW/m step input, PT-CT contact occurred at 637 °C near the bottom-center of the tube. • PT integrity was maintained throughout the experiment. - Abstract: An experimental investigation was conducted to simulate the sagging behavior of a full-length pressure tube of a channel of a 220 MWe Indian PHWR. The investigation aimed to recreate a condition resembling a Loss of Coolant Accident (LOCA) with Emergency Core Cooling System (ECCS) failure in a nuclear power plant. A full-length channel assembly immersed in moderator was subjected to electrical resistance heating of the pressure tube (PT) to simulate the residual heat after reactor shutdown. The temperature of the PT rose until contact between the PT and the calandria tube (CT) was established at the center of the tube, where the average bottom temperature was 637 °C. The integrity of the PT was maintained throughout the experiment, and the PT heat-up was arrested on contact with the CT due to transfer of heat to the moderator.
Podolak, Morris
2018-04-01
Modern observational techniques are still not powerful enough to directly view planet formation, and so it is necessary to rely on theory. However, observations do give two important clues to the formation process. The first is that the most primitive form of material in interstellar space exists as a dilute gas. Some of this gas is unstable against gravitational collapse, and begins to contract. Because the angular momentum of the gas is not zero, it contracts along the spin axis, but remains extended in the plane perpendicular to that axis, so that a disk is formed. Viscous processes in the disk carry most of the mass into the center where a star eventually forms. In the process, almost as a by-product, a planetary system is formed as well. The second clue is the time required. Young stars are indeed observed to have gas disks, composed mostly of hydrogen and helium, surrounding them, and observations tell us that these disks dissipate after about 5 to 10 million years. If planets like Jupiter and Saturn, which are very rich in hydrogen and helium, are to form in such a disk, they must accrete their gas within 5 million years of the time of the formation of the disk. Any formation scenario one proposes must produce Jupiter in that time, although the terrestrial planets, which don't contain significant amounts of hydrogen and helium, could have taken longer to build. Modern estimates for the formation time of the Earth are of the order of 100 million years. To date there are two main candidate theories for producing Jupiter-like planets. The core accretion (CA) scenario supposes that any solid materials in the disk slowly coagulate into protoplanetary cores with progressively larger masses. If the core remains small enough it won't have a strong enough gravitational force to attract gas from the surrounding disk, and the result will be a terrestrial planet. If the core grows large enough (of the order of ten Earth masses), and the disk has not yet dissipated, then
Dynamical and Radiative Properties of X-Ray Pulsar Accretion Columns: Phase-averaged Spectra
West, Brent F. [Department of Electrical and Computer Engineering, United States Naval Academy, Annapolis, MD (United States); Wolfram, Kenneth D. [Naval Research Laboratory (retired), Washington, DC (United States); Becker, Peter A., E-mail: bwest@usna.edu, E-mail: kswolfram@gmail.com, E-mail: pbecker@gmu.edu [Department of Physics and Astronomy, George Mason University, Fairfax, VA (United States)
2017-02-01
The availability of the unprecedented spectral resolution provided by modern X-ray observatories is opening up new areas for study involving the coupled formation of the continuum emission and the cyclotron absorption features in accretion-powered X-ray pulsar spectra. Previous research focusing on the dynamics and the associated formation of the observed spectra has largely been confined to the single-fluid model, in which the super-Eddington luminosity inside the column decelerates the flow to rest at the stellar surface, while the dynamical effect of gas pressure is ignored. In a companion paper, we have presented a detailed analysis of the hydrodynamic and thermodynamic structure of the accretion column obtained using a new self-consistent model that includes the effects of both gas and radiation pressures. In this paper, we explore the formation of the associated X-ray spectra using a rigorous photon transport equation that is consistent with the hydrodynamic and thermodynamic structure of the column. We use the new model to obtain phase-averaged spectra and partially occulted spectra for Her X-1, Cen X-3, and LMC X-4. We also use the new model to constrain the emission geometry, and compare the resulting parameters with those obtained using previously published models. Our model sheds new light on the structure of the column, the relationship between the ionized gas and the photons, the competition between diffusive and advective transport, and the magnitude of the energy-averaged cyclotron scattering cross-section.
Jasper Foolen
Generating and maintaining gradients of cell density and extracellular matrix (ECM) components is a prerequisite for the development of functionality of healthy tissue. Therefore, gaining insights into the drivers of spatial organization of cells and the role of ECM during tissue morphogenesis is vital. In a 3D model system of tissue morphogenesis, a fibronectin-FRET sensor recently revealed the existence of two separate fibronectin populations with different conformations in microtissues, i.e. 'compact and adsorbed to collagen' versus 'extended and fibrillar' fibronectin that does not colocalize with the collagen scaffold. Here we asked how the presence of fibronectin might drive this cell-induced tissue morphogenesis, more specifically the formation of gradients in cell density and ECM composition. Microtissues were engineered in a high-throughput model system containing rectangular microarrays of 12 posts, which constrained fibroblast-populated collagen gels, remodeled by the contractile cells into trampoline-shaped microtissues. Fibronectin's contribution during the tissue maturation process was assessed using fibronectin-knockout mouse embryonic fibroblasts (Fn-/- MEFs) and floxed equivalents (Fnf/f MEFs), in fibronectin-depleted growth medium with and without exogenously added plasma fibronectin (full-length, or various fragments). In the absence of full-length fibronectin, Fn-/- MEFs remained homogenously distributed throughout the cell-contracted collagen gels. In contrast, in the presence of full-length fibronectin, both cell types produced shell-like tissues with a predominantly cell-free compacted collagen core and a peripheral surface layer rich in cells. Single cell assays then revealed that Fn-/- MEFs applied lower total strain energy on nanopillar arrays coated with either fibronectin or vitronectin when compared to Fnf/f MEFs, but that the presence of exogenously added plasma fibronectin rescued their contractility. While collagen
Estimating average glandular dose by measuring glandular rate in mammograms
Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru
2003-01-01
The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
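The pipeline above, calibrating a pixel-value-to-glandular-rate curve from phantoms, converting each pixel, then estimating the dose, can be sketched as follows. The calibration points and dose model here are hypothetical stand-ins: the paper fits the conversion with a neural network and computes dose via the quality-control dosimetry protocol, neither of which is reproduced:

```python
import numpy as np

# Hypothetical calibration: mean pixel values measured on
# breast-equivalent phantoms of known glandular rate (%).
phantom_pixels = np.array([80.0, 110.0, 140.0, 170.0, 200.0])
phantom_gland = np.array([0.0, 25.0, 50.0, 75.0, 100.0])

def glandular_rate(image):
    """Convert pixel values to glandular rate (%) by interpolating the
    phantom calibration curve (the paper uses a neural network fit)."""
    return np.interp(image, phantom_pixels, phantom_gland)

def average_glandular_dose(image, dose_coeff=0.03, base_dose=0.5):
    """Toy dose model: dose (mGy) rises linearly with mean glandular rate.

    dose_coeff and base_dose are placeholder constants, not the
    dosimetry method used for quality control in mammography.
    """
    g = glandular_rate(image).mean()
    return base_dose + dose_coeff * g

image = np.array([[110.0, 140.0], [140.0, 170.0]])  # toy mammogram patch
print(glandular_rate(image).mean())   # mean glandular rate, %
print(average_glandular_dose(image))  # estimated dose, mGy
```

The key point is that the conversion is applied per pixel before averaging, so patient-specific breast composition enters the dose estimate directly.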
Accurate phenotyping: Reconciling approaches through Bayesian model averaging.
Carla Chia-Ming Chen
Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
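The combination step can be sketched as evidence-weighted averaging of cluster-membership probabilities: each model's posterior weight comes from its marginal likelihood, and the weighted memberships are then summed. The membership matrices and log evidences below are invented for illustration and do not reproduce the paper's latent class or grade-of-membership fits:

```python
import numpy as np

def bma_memberships(memberships, log_evidences):
    """Average cluster-membership probabilities over models, weighting
    each model by its posterior probability derived from the (log)
    marginal likelihood. Shifting by the max log evidence avoids underflow.
    """
    log_ev = np.asarray(log_evidences, dtype=float)
    w = np.exp(log_ev - log_ev.max())
    w /= w.sum()
    return sum(wi * np.asarray(m) for wi, m in zip(w, memberships))

# Two hypothetical models (e.g. latent class analysis vs grade of
# membership), each giving 3 subjects x 2 phenotype classes:
m1 = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]
m2 = [[0.7, 0.3], [0.4, 0.6], [0.5, 0.5]]
combined = bma_memberships([m1, m2], log_evidences=[-100.0, -101.0])
print(combined.round(3))
```

The combined memberships remain valid probabilities (rows sum to 1), so they can feed directly into a downstream linkage analysis.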
Yearly, seasonal and monthly daily average diffuse sky radiation models
Kassem, A.S.; Mujahid, A.M.; Turner, D.W.
1993-01-01
A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. 35.9°N, Long. 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and a 0.092 standard error of estimate. The data were also analyzed for a seasonal dependence, and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed, which has a 0.91 coefficient of determination and a 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs
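A regression model of this kind can be sketched as an ordinary least-squares fit of the daily diffuse fraction on a clearness-index-style predictor. The functional form, synthetic data, and coefficients below are illustrative assumptions, not the paper's regressors or fitted values:

```python
import numpy as np

def fit_diffuse_model(K_t, diffuse_fraction):
    """Least-squares fit of H_d/H = a + b * K_t, where K_t is a daily
    clearness index. Returns the intercept a and slope b."""
    A = np.column_stack([np.ones_like(K_t), K_t])
    coeffs, *_ = np.linalg.lstsq(A, diffuse_fraction, rcond=None)
    return coeffs

# Synthetic data generated from H_d/H = 1.0 - 1.1 * K_t plus noise,
# standing in for the two years of Blytheville measurements.
rng = np.random.default_rng(0)
K_t = rng.uniform(0.2, 0.8, 100)
frac = 1.0 - 1.1 * K_t + rng.normal(0, 0.02, 100)
a, b = fit_diffuse_model(K_t, frac)
print(round(a, 2), round(b, 2))
```

Seasonal and monthly variants follow by refitting the same form on the corresponding data subsets, which is how the paper obtains its per-season coefficients of determination.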
Food Security and Leukocyte Telomere Length in Adult Americans
Mohsen Mazidi
2017-01-01
Background and Purpose. Leukocyte telomere length (LTL) is a biomarker of biologic age. Whether food security status modulates LTL is still unknown. We investigated the association between food security and LTL in participants of the 1999–2002 US National Health and Nutrition Examination Survey (NHANES). Methods. Analysis of covariance (ANCOVA) was used to evaluate the association between food security categories and LTL, controlling for sex, race, and education and accounting for the survey design and sample weights. Results. We included 10,888 participants, of whom 5228 (48.0%) were men, aged 44.1 years on average. In all, 2362 (21.7%) had less than high school education, 2787 (25.6%) had completed high school, while 5705 (52.5%) had gone beyond high school. In sex-, race-, and education-adjusted ANCOVA, average LTL (T/S ratio) for participants with high food security versus those with marginal, low, or very low food security was 1.32 versus 1.20 for the 25–35-year age group and 1.26 versus 1.11 for the 35–45-year age group (p<0.001). Conclusion. The association between food insecurity and LTL shortening in young adults suggests that some of the future effects of food insecurity on chronic disease risk in this population could be mediated by telomere shortening.
Length change of the alloys Waspaloy and Inconel 718 after long-term annealing
Kinzel, Svenja
2016-01-01
Within the scope of this work the contraction behavior of the Ni-based superalloy Waspaloy could be attributed in detail to a combination of different microstructural changes, and the results could partially be transferred to the Ni-Fe-based alloy Inconel 718. Isothermal annealing of sample rods at temperatures between 450 °C and 750 °C induces an average relative length contraction of about −2·10⁻⁴. The contraction is more pronounced at lower temperatures (−3·10⁻⁴ at 550 °C) than at higher ones (−1·10⁻⁴ at 750 °C). Within the first 300 hours of annealing the contraction reaches about 70-75% of the value measured after 10,000 hours. The major part of the effect thus takes place at the beginning of long-term annealing, but even after 10,000 hours no saturation occurs. On the basis of lattice parameter measurements it was found that within the first 300 hours a significant lattice parameter decrease of the matrix and the γ′ phase emerged. Longer annealing does not cause further lattice contraction. This behavior can be explained by the temperature dependence of phase fractions and phase compositions. Thermodynamic calculations as well as stereological analysis of micrographs show a decrease of the stable γ′-phase content with increasing temperature. In parallel, TEM-EDS measurements and calculated phase fractions show concentration fluctuations due to the different precipitate fraction, which cause contraction of the lattice parameter. Furthermore, within the first 100 hours at temperatures up to 650 °C the formation of Ni-Cr-rich domains could be observed. As these domains exhibit a smaller lattice parameter than the matrix, they contribute to the more pronounced contraction at lower temperatures. While XRD measurements point to the formation of Ni₃Cr, TEM-EDS measurements reveal a composition of (Ni,Co)₂Cr. Stress-relief heat treatment at higher temperatures (815 °C) after annealing shows that the contraction effect is reversible. It causes an
Heating tar sands formations while controlling pressure
Stegemeier, George Leo [Houston, TX; Beer, Gary Lee [Houston, TX; Zhang, Etuan [Houston, TX
2010-01-12
Methods for treating a tar sands formation are described herein. Methods may include heating at least a section of a hydrocarbon layer in the formation from a plurality of heaters located in the formation. A pressure in the majority of the section may be maintained below a fracture pressure of the formation. The pressure in the majority of the section may be reduced to a selected pressure after the average temperature reaches a temperature that is above 240 °C and is at or below pyrolysis temperatures of hydrocarbons in the section. At least some hydrocarbon fluids may be produced from the formation.
Correlated evolution of sternal keel length and ilium length in birds
Tao Zhao
2017-07-01
The interplay between the pectoral module (the pectoral girdle and limbs) and the pelvic module (the pelvic girdle and limbs) plays a key role in shaping avian evolution, but prior empirical studies on trait covariation between the two modules are limited. Here we empirically test whether (size-corrected) sternal keel length and ilium length are correlated during avian evolution using phylogenetic comparative methods. Our analyses on extant birds and Mesozoic birds both recover a significantly positive correlation. The results provide new evidence regarding the integration between the pelvic and pectoral modules. The correlated evolution of sternal keel length and ilium length may serve as a mechanism to cope with the effect on performance caused by a tradeoff in muscle mass between the pectoral and pelvic modules, via changing moment arms of muscles that function in flight and in terrestrial locomotion.