WorldWideScience

Sample records for lines show average

  1. MN Temperature Average (1961-1990) - Line

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  2. Many Teenagers Can't Distinguish Harassment Lines, Research Shows

    Sparks, Sarah D.

    2011-01-01

    A national survey finds that, when it comes to sexual harassment in school, many students do not know where to draw the line. Based on the first nationally representative survey in a decade of students in grades 7-12, the study, conducted by the American Association of University Women (AAUW), found that 48 percent of nearly 2,000 students surveyed…

  3. Evidence of redshifts in the average solar line profiles of C IV and Si IV from OSO-8 observations

    Roussel-Dupre, D.; Shine, R. A.

    1982-01-01

    Line profiles of C IV and Si IV obtained by the Colorado spectrometer on OSO-8 are presented. It is shown that the mean profiles are redshifted by 6-20 km/s, with a mean of 12 km/s. An apparent average downflow of material in the 50,000-100,000 K temperature range is measured. The redshifts are observed in the line center positions of spatially and temporally averaged profiles and are measured either relative to chromospheric Si I lines or from a comparison of sun center and limb profiles. The observed 6-20 km/s redshifts place constraints on the mechanisms that dominate EUV line emission, since they require a strong weighting of the emission toward regions of downward-moving material, and since there is little evidence for corresponding upward-moving material in these lines.

  4. Dihydrochalcone Compounds Isolated from Crabapple Leaves Showed Anticancer Effects on Human Cancer Cell Lines

    Xiaoxiao Qin

    2015-11-01

    Seven dihydrochalcone compounds were isolated from the leaves of Malus crabapples, cv. “Radiant”, and their chemical structures were elucidated by UV, IR, ESI-MS, 1H-NMR and 13C-NMR analyses. These compounds, trilobatin (A1), phloretin (A2), 3-hydroxyphloretin (A3), phloretin rutinoside (A4), phlorizin (A5), 6′′-O-coumaroyl-4′-O-glucopyranosylphloretin (A6), and 3′′′-methoxy-6′′-O-feruloyl-4′-O-glucopyranosyl-phloretin (A7), all belong to the phloretin class and its derivatives. Compounds A6 and A7 are two new, rare dihydrochalcone compounds. The results of an MTT cancer cell growth inhibition assay demonstrated that phloretin and these derivatives showed significant anticancer activities against several human cancer cell lines, including the A549 human lung cancer cell line, the Bel 7402 liver cancer cell line, the HepG2 human liver cancer cell line, and the HT-29 human colon cancer cell line. A7 had significant effects on all cancer cell lines, suggesting potential applications for phloretin and its derivatives. Adding a methoxyl group to phloretin dramatically increases its anticancer activity.

  5. Three-dimensional topography of the gingival line of young adult maxillary teeth: curve averaging using reverse-engineering methods.

    Park, Young-Seok; Chang, Mi-Sook; Lee, Seung-Pyo

    2011-01-01

    This study attempted to establish three-dimensional average curves of the gingival line of maxillary teeth using reconstructed virtual models to utilize as guides for dental implant restorations. Virtual models from 100 full-mouth dental stone cast sets were prepared with a three-dimensional scanner and special reconstruction software. Marginal gingival lines were defined by transforming the boundary points to the NURBS (nonuniform rational B-spline) curve. Using an iterative closest point algorithm, the sample models were aligned and the gingival curves were isolated. Each curve was tessellated by 200 points using a uniform interval. The 200 tessellated points of each sample model were averaged according to the index of each model. In a pilot experiment, regression and fitting analysis of one obtained average curve was performed to depict it as mathematical formulae. The three-dimensional average curves of six maxillary anterior teeth, two maxillary right premolars, and a maxillary right first molar were obtained, and their dimensions were measured. Average curves of the gingival lines of young people were investigated. It is proposed that dentists apply these data to implant platforms or abutment designs to achieve ideal esthetics. The curves obtained in the present study may be incorporated as a basis for implant component design to improve the biologic nature and related esthetics of restorations.
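
The index-wise averaging step described above (tessellate each pre-aligned curve into 200 points at uniform arc-length intervals, then average point by point across samples) can be sketched as follows. The semicircular toy curve, the noise level, and the function names are illustrative assumptions, not the study's data:

```python
import numpy as np

def tessellate(curve, n=200):
    """Resample a 3-D polyline into n points at uniform arc-length intervals."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    targets = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(targets, s, curve[:, k]) for k in range(3)])

def average_curve(curves, n=200):
    """Index-wise mean of tessellated, pre-aligned gingival-line curves."""
    resampled = np.stack([tessellate(c, n) for c in curves])  # (samples, n, 3)
    return resampled.mean(axis=0)

# toy example: 100 noisy copies of a semicircular "gingival line"
t = np.linspace(0, np.pi, 50)
base = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
rng = np.random.default_rng(0)
samples = [base + rng.normal(0, 0.01, base.shape) for _ in range(100)]
avg = average_curve(samples)
print(avg.shape)  # (200, 3)
```

Averaging by point index only makes sense after the iterative-closest-point alignment mentioned in the abstract; the sketch assumes the input curves are already in a common coordinate frame.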

  6. The reverse transcription inhibitor abacavir shows anticancer activity in prostate cancer cell lines.

    Francesca Carlini

    BACKGROUND: Transposable elements (TEs) comprise nearly 45% of the entire genome and are part of sophisticated regulatory networks that control developmental processes in normal and pathological conditions. The retroviral/retrotransposon gene machinery consists mainly of Long Interspersed Nuclear Elements (LINE-1) and Human Endogenous Retroviruses (HERVs) that code for their own endogenous reverse transcriptase (RT). Interestingly, RT is typically expressed at high levels in cancer cells. Recent studies report that RT inhibition by non-nucleoside reverse transcriptase inhibitors (NNRTIs) induces growth arrest and cell differentiation in vitro and antagonizes the growth of human tumors in animal models. In the present study we analyze the anticancer activity of abacavir (ABC), a nucleoside reverse transcriptase inhibitor (NRTI), on PC3 and LNCaP prostate cancer cell lines. PRINCIPAL FINDINGS: ABC significantly reduces cell growth, migration, and invasion, considerably slows S-phase progression, and induces senescence and cell death in prostate cancer cells. Consistent with these observations, microarray analysis of PC3 cells shows that ABC induces specific and dose-dependent changes in gene expression involving multiple cellular pathways. Notably, by quantitative real-time PCR we found that LINE-1 ORF1 and ORF2 mRNA levels were significantly up-regulated by ABC treatment. CONCLUSIONS: Our results demonstrate the potential of ABC as an anticancer agent able to induce antiproliferative activity and trigger senescence in prostate cancer cells. Notably, we show that ABC elicits up-regulation of LINE-1 expression, suggesting the involvement of these elements in the observed cellular modifications.

  7. Dinosaur incubation periods directly determined from growth-line counts in embryonic teeth show reptilian-grade development.

    Erickson, Gregory M; Zelenitsky, Darla K; Kay, David Ian; Norell, Mark A

    2017-01-17

    Birds stand out from other egg-laying amniotes by producing relatively small numbers of large eggs with very short incubation periods (average 11-85 d). This aspect promotes high survivorship by limiting exposure to predation and environmental perturbation, allows for larger, more fit young, and facilitates rapid attainment of adult size. Birds are living dinosaurs; their rapid development has been considered to reflect the primitive dinosaurian condition. Here, nonavian dinosaurian incubation periods in both small and large ornithischian taxa are empirically determined through growth-line counts in embryonic teeth. Our results show unexpectedly slow incubation periods (2.8 and 5.8 mo), similar to those of outgroup reptiles. Developmental and physiological constraints would have rendered tooth formation and incubation inherently slow in other dinosaur lineages and basal birds. The capacity to determine incubation periods in extinct egg-laying amniotes has implications for dinosaurian embryology, life history strategies, and survivorship across the Cretaceous-Paleogene mass extinction event.

  8. Polarized Line Formation in Arbitrary-Strength Magnetic Fields: Angle-averaged and Angle-dependent Partial Frequency Redistribution

    Sampoorna, M.; Nagendra, K. N. [Indian Institute of Astrophysics, Koramangala, Bengaluru 560 034 (India); Stenflo, J. O., E-mail: sampoorna@iiap.res.in, E-mail: knn@iiap.res.in, E-mail: stenflo@astro.phys.ethz.ch [Institute of Astronomy, ETH Zurich, CH-8093 Zurich (Switzerland)

    2017-08-01

    Magnetic fields in the solar atmosphere leave their fingerprints in the polarized spectrum of the Sun via the Hanle and Zeeman effects. While the Hanle and Zeeman effects dominate, respectively, in the weak and strong field regimes, both these effects jointly operate in the intermediate field strength regime. Therefore, it is necessary to solve the polarized line transfer equation, including the combined influence of Hanle and Zeeman effects. Furthermore, it is required to take into account the effects of partial frequency redistribution (PRD) in scattering when dealing with strong chromospheric lines with broad damping wings. In this paper, we present a numerical method to solve the problem of polarized PRD line formation in magnetic fields of arbitrary strength and orientation. This numerical method is based on the concept of operator perturbation. For our studies, we consider a two-level atom model without hyperfine structure and lower-level polarization. We compare the PRD idealization of angle-averaged Hanle–Zeeman redistribution matrices with the full treatment of angle-dependent PRD, to indicate when the idealized treatment is inadequate and what kind of polarization effects are specific to angle-dependent PRD. Because the angle-dependent treatment is presently computationally prohibitive when applied to realistic model atmospheres, we present the computed emergent Stokes profiles for a range of magnetic fields, with the assumption of an isothermal one-dimensional medium.

  9. Relationship Between Selected Strength and Power Assessments to Peak and Average Velocity of the Drive Block in Offensive Line Play.

    Jacobson, Bert H; Conchola, Eric C; Smith, Doug B; Akehi, Kazuma; Glass, Rob G

    2016-08-01

    Jacobson, BH, Conchola, EC, Smith, DB, Akehi, K, and Glass, RG. Relationship between selected strength and power assessments to peak and average velocity of the drive block in offensive line play. J Strength Cond Res 30(8): 2202-2205, 2016-Typical strength training for football includes the squat and power clean (PC), and routinely measured variables include 1 repetition maximum (1RM) squat and 1RM PC, along with the vertical jump (VJ) for power. However, little research exists regarding the association between these strength exercises and the velocity of an actual on-the-field performance. The purpose of this study was to investigate the relationship of peak velocity (PV) and average velocity (AV) of the offensive line drive block to the 1RM squat, 1RM PC, VJ, body mass (BM), and body composition. One repetition maximum assessments for the squat and PC were recorded along with VJ height, BM, and percent body fat. These data were correlated with PV and AV while performing the drive block. Peak velocity and AV were assessed using a Tendo Power and Speed Analyzer as the linemen fired from a 3-point stance into a stationary blocking dummy. Pearson product-moment analysis yielded significant (p ≤ 0.05) correlations between PV and AV and the VJ, the squat, and the PC. A significant inverse association was found for both PV and AV and body fat. These data help to confirm that the typical exercises recommended for American football linemen are positively associated with both the PV and AV needed for drive block effectiveness. It is recommended that these exercises remain the focus of a weight room protocol and that ancillary exercises be built around them. Additionally, efforts to reduce body fat are recommended.
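
The correlation analysis described above can be illustrated with synthetic numbers; the data below are hypothetical (the slope, ranges, and sample size are invented for the sketch), not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical lineman data: 1RM squat (kg) and drive-block peak velocity (m/s),
# constructed so that velocity rises with squat strength plus noise
squat_1rm = rng.uniform(180, 300, size=20)
peak_vel = 0.004 * squat_1rm + rng.normal(0, 0.05, size=20) + 0.5

r = np.corrcoef(squat_1rm, peak_vel)[0, 1]   # Pearson product-moment r

# significance via the t transform: t = r * sqrt((n - 2) / (1 - r^2))
n = len(squat_1rm)
t_stat = r * np.sqrt((n - 2) / (1 - r**2))
print(f"r = {r:.2f}, t = {t_stat:.2f}")
```

Comparing `t_stat` against the t distribution with n - 2 degrees of freedom gives the p ≤ 0.05 criterion used in the abstract.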

  10. Photographic but not line-drawn faces show early perceptual neural sensitivity to eye gaze direction

    Alejandra eRossi

    2015-04-01

    Our brains readily decode facial movements and changes in social attention, reflected in earlier and larger N170 event-related potentials (ERPs) to viewing gaze aversions vs. direct gaze in real faces (Puce et al., 2000). In contrast, gaze aversions in line-drawn faces do not produce these N170 differences (Rossi et al., 2014), suggesting that physical stimulus properties or experimental context may drive these effects. Here we investigated the role of stimulus-induced context on neurophysiological responses to dynamic gaze. Sixteen healthy adults viewed line-drawn and real faces, with dynamic eye aversion and direct gaze transitions, and control stimuli (scrambled arrays and checkerboards) while continuous electroencephalographic (EEG) activity was recorded. EEG data from two temporo-occipital clusters of nine electrodes in each hemisphere, where N170 activity is known to be maximal, were selected for analysis. N170 peak amplitude and latency, and temporal dynamics from event-related spectral perturbations (ERSPs), were measured in 16 healthy subjects. Real faces generated larger N170s for averted vs. direct gaze motion; however, N170s to real and direct gaze were as large as those to their respective controls. N170 amplitude did not differ across line-drawn gaze changes. Overall, bilateral mean gamma power changes for faces relative to control stimuli occurred between 150-350 ms, potentially reflecting signal detection of facial motion. Our data indicate that experimental context does not drive N170 differences to viewed gaze changes. Low-level stimulus properties, such as the high sclera/iris contrast change in real eyes, likely drive the N170 changes to viewed aversive movements.

  11. Line-averaging measurement methods to estimate the gap in the CO2 balance closure – possibilities, challenges, and uncertainties

    A. Ziemann

    2017-11-01

    An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is the crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper will give an overview about the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m−2 s−1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration was estimated due to environmental parameters, instrumental characteristics, and retrieval procedure with a total amount of approximately

  12. Line-averaging measurement methods to estimate the gap in the CO2 balance closure - possibilities, challenges, and uncertainties

    Ziemann, Astrid; Starke, Manuela; Schütze, Claudia

    2017-11-01

    An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is the crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper will give an overview about the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m-2 s-1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration was estimated due to environmental parameters, instrumental characteristics, and retrieval procedure with a total amount of approximately 30 % for a single

  13. Chicken lines divergently selected for antibody responses to sheep red blood cells show line-specific differences in sensitivity to immunomodulation by diet. Part I: Humoral parameters.

    Adriaansen-Tennekes, R; de Vries Reilingh, G; Nieuwland, M G B; Parmentier, H K; Savelkoul, H F J

    2009-09-01

    Individual differences in nutrient sensitivity have been suggested to be related to differences in stress sensitivity. Here we used layer hens divergently selected for high and low specific antibody responses to SRBC (i.e., low line hens and high line hens), reflecting a genetically based differential immune competence. The parental line of these hens was randomly bred as the control line and was used as well. Recently, we showed that these selection lines differ in their stress reactivity; the low line birds show a higher hypothalamic-pituitary-adrenal (HPA) axis reactivity. To examine maternal effects and neonatal nutritional exposure on nutrient sensitivity, we studied 2 subsequent generations. This also created the opportunity to examine egg production in these birds. The 3 lines were fed 2 different nutritionally complete layer feeds for a period of 22 wk in the first generation. The second generation was fed the experimental diets from hatch. At several time intervals, parameters reflecting humoral immunity were determined, such as specific antibodies to Newcastle disease and infectious bursal disease vaccines; levels of natural antibodies binding lipopolysaccharide, lipoteichoic acid, and keyhole limpet hemocyanin; and classical and alternative complement activity. The most pronounced diet-induced effects were found in the low line birds of the first generation: specific antibody titers to Newcastle disease vaccine were significantly elevated by 1 of the 2 diets. In the second generation, significant differences were found in lipoteichoic acid natural antibodies of the control and low line hens. At the end of the observation period of egg parameters, a significant difference in egg weight was found in birds of the high line. Our results suggest that nutritional differences have immunomodulatory effects on innate and adaptive humoral immune parameters in birds with high HPA axis reactivity and affect egg production in birds with low HPA axis reactivity.

  14. Novel mesostructured inclusions in the epidermal lining of Artemia franciscana ovisacs show optical activity

    Elena Hollergschwandtner

    2017-10-01

    Background: Biomineralization, e.g., in sea urchins or mollusks, includes the assembly of mesoscopic superstructures from inorganic crystalline components and biopolymers. The resulting mesocrystals inspire biophysicists and materials scientists alike because of their extraordinary physical properties. Current efforts to replicate mesocrystal synthesis in vitro require understanding the principles of their self-assembly in vivo. One question not addressed so far is whether intracellular crystals of proteins can assemble with biopolymers into functional mesocrystal-like structures. During our electron microscopy studies of Artemia franciscana (Crustacea: Branchiopoda), we found initial evidence of such proteinaceous mesostructures. Results: EM preparations with high-pressure freezing and accelerated freeze substitution revealed an extraordinary intracellular source of mesostructured inclusions in both the cyto- and nucleoplasm of the epidermal lining of ovisacs of A. franciscana. Confocal reflection microscopy not only confirmed our finding; it also revealed the reflective, light-dispersing activity of these flake-like structures and their positioning and orientation with respect to the ovisac interior. Both the striation of alternating electron-dense and electron-lucent components and the sharp edges of the flakes indicate self-assembly of material of yet unknown origin, with supposed participation of crystallization. However, selected-area electron diffraction could not verify the status of crystallization. Energy-dispersive X-ray analysis measured a marked increase in nitrogen within the flake-like inclusions and the almost complete absence of elements that are typically involved in inorganic crystallization. This rise in nitrogen could possibly be related to a higher packing density of proteins achieved by mesostructure assembly. Conclusions: The ovisac lining of A. franciscana is endowed with numerous mesostructured inclusions that have not been

  15. Testing Delays Resulting in Increased Identification Accuracy in Line-Ups and Show-Ups.

    Dekle, Dawn J.

    1997-01-01

    Investigated time delays (immediate, two-three days, one week) between viewing a staged theft and attempting an eyewitness identification. Compared lineups to one-person showups in a laboratory analogue involving 412 subjects. Results show that across all time delays, participants maintained a higher identification accuracy with the showup…

  16. Off-line phase-averaged particle image velocimetry and OH chemiluminescence measurements using acoustic time series

    Fischer, A; Bake, F; Heinze, J; Willert, C; Diers, O; Röhle, I

    2009-01-01

    In order to analyze unsteady flow phenomena in combustion facilities two phase-sorting methods have been developed and investigated for the retrieval of phase-resolved data from (randomly) sampled 'single-shot' data such as PIV recordings or chemiluminescence imagery in a post-processing step. This is made possible by simultaneously recorded continuous time traces of reference data (e.g., pressure signal). Using this off-line method synchronous phase-locked PIV and OH chemiluminescence visualizations could be recovered from data obtained in two different combustion facilities. This paper also presents some of the theoretical background necessary for the application of two different phase-sorting algorithms
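
A minimal sketch of such off-line phase sorting, assuming the reference pressure trace is roughly periodic and using its upward zero crossings to assign each randomly timed snapshot a phase before bin-averaging (the paper's actual algorithms may differ):

```python
import numpy as np

def phase_sort(snapshot_times, snapshots, ref_t, ref_p, n_bins=8):
    """Assign each randomly sampled snapshot a phase from a reference pressure
    trace (via upward zero crossings) and average snapshots per phase bin."""
    p = ref_p - ref_p.mean()
    idx = np.where((p[:-1] < 0) & (p[1:] >= 0))[0]
    crossings = ref_t[idx]                    # approximate cycle start times
    phases = np.empty(len(snapshot_times))
    for i, ts in enumerate(snapshot_times):
        k = np.clip(np.searchsorted(crossings, ts) - 1, 0, len(crossings) - 2)
        phases[i] = (ts - crossings[k]) / (crossings[k + 1] - crossings[k])
    bins = np.minimum((phases * n_bins).astype(int), n_bins - 1)
    return np.array([snapshots[bins == b].mean(axis=0) for b in range(n_bins)])

# toy demo: a 100 Hz pressure oscillation and scalar "snapshots" that follow it
ref_t = np.linspace(0, 1, 20000)
ref_p = np.sin(2 * np.pi * 100 * ref_t)
rng = np.random.default_rng(1)
snap_t = np.sort(rng.uniform(0.01, 0.99, 500))     # random acquisition times
snaps = np.sin(2 * np.pi * 100 * snap_t)[:, None]  # 1-pixel "images"
phase_avg = phase_sort(snap_t, snaps, ref_t, ref_p)
print(phase_avg.ravel().round(2))
```

In the real application each "snapshot" would be a PIV vector field or OH chemiluminescence image rather than a scalar, but the sorting logic is unchanged.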

  17. A very sensitive nonintercepting beam average velocity monitoring system for the TRIUMF 300-keV injection line

    Yin, Y.; Laxdal, R.E.; Zelenski, A.; Ostroumov, P.

    1997-01-01

    A nonintercepting beam velocity monitoring system has been installed in the 300-keV injection line of the TRIUMF cyclotron to reproduce the injection energy for beam from different ion sources and to monitor any beam energy fluctuations. By using a programmable beam signal leveling method, the system can work with a beam current dynamic range of 50 dB. Using synchronous detection, the system can detect 0.5 eV peak-to-peak energy modulation of the beam; the sensitivity is 1.7×10⁻⁶. The paper describes the principle and beam measurement results. copyright 1997 American Institute of Physics
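
As a quick consistency check on the quoted numbers: a detectable 0.5 eV modulation on a 300 keV beam indeed corresponds to a relative sensitivity of about 1.7×10⁻⁶:

```python
# consistency check for the quoted sensitivity: a 0.5 eV detectable energy
# modulation on a 300 keV beam corresponds to a relative resolution of
injection_energy_eV = 300e3
detectable_modulation_eV = 0.5
sensitivity = detectable_modulation_eV / injection_energy_eV
print(f"{sensitivity:.1e}")  # 1.7e-06
```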

  18. Domestic sheep show average Coxiella burnetii seropositivity generations after a sheep-associated human Q fever outbreak and lack detectable shedding by placental, vaginal, and fecal routes

    Oliveira, Ryan D.; Mousel, Michelle R.; Pabilonia, Kristy L.; Highland, Margaret A.; Taylor, J. Bret; Knowles, Donald P.

    2017-01-01

    Coxiella burnetii is a globally distributed zoonotic bacterial pathogen that causes abortions in ruminant livestock. In humans, an influenza-like illness results with the potential for hospitalization, chronic infection, abortion, and fatal endocarditis. Ruminant livestock, particularly small ruminants, are hypothesized to be the primary transmission source to humans. A recent Netherlands outbreak from 2007–2010 traced to dairy goats resulted in over 4,100 human cases with estimated costs of more than 300 million euros. Smaller human Q fever outbreaks of small ruminant origin have occurred in the United States, and characterizing shedding is important to understand the risk of future outbreaks. In this study, we assessed bacterial shedding and seroprevalence in 100 sheep from an Idaho location associated with a 1984 human Q fever outbreak. We observed 5% seropositivity, which was not significantly different from the national average of 2.7% for the U.S. (P>0.05). Furthermore, C. burnetii was not detected by quantitative PCR from placentas, vaginal swabs, or fecal samples. Specifically, a three-target quantitative PCR of placenta identified 0.0% shedding (exact 95% confidence interval: 0.0%-2.9%). While presence of seropositive individuals demonstrates some historical C. burnetii exposure, the placental sample confidence interval suggests 2016 shedding events were rare or absent. The location maintained the flock with little or no depopulation in 1984 and without C. burnetii vaccination during or since 1984. It is not clear how a zero-shedding rate was achieved in these sheep beyond natural immunity, and more work is required to discover and assess possible factors that may contribute towards achieving zero-shedding status. We provide the first U.S. sheep placental C. burnetii shedding update in over 60 years and demonstrate potential for C. burnetii shedding to reach undetectable levels after an outbreak event even in the absence of targeted interventions, such
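
The reported exact interval for zero positives is consistent with a one-sided exact binomial (Clopper-Pearson) upper bound. The sketch below assumes n = 100 tested placentas, an illustrative guess rather than a figure from the study:

```python
# Sketch: exact binomial upper bound when 0 of n samples test positive.
# For x = 0, the one-sided 95% Clopper-Pearson upper limit reduces to
# p_upper = 1 - alpha**(1/n); the "rule of three" 3/n is its large-n shortcut.
n = 100          # assumed number of placentas tested (illustrative)
alpha = 0.05
p_upper = 1 - alpha ** (1 / n)
print(f"upper 95% bound: {p_upper:.1%}")       # close to the reported 2.9%
rule_of_three = 3 / n
print(f"rule of three : {rule_of_three:.1%}")  # 3.0%
```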

  19. Domestic sheep show average Coxiella burnetii seropositivity generations after a sheep-associated human Q fever outbreak and lack detectable shedding by placental, vaginal, and fecal routes.

    Ryan D Oliveira

    Coxiella burnetii is a globally distributed zoonotic bacterial pathogen that causes abortions in ruminant livestock. In humans, an influenza-like illness results with the potential for hospitalization, chronic infection, abortion, and fatal endocarditis. Ruminant livestock, particularly small ruminants, are hypothesized to be the primary transmission source to humans. A recent Netherlands outbreak from 2007-2010 traced to dairy goats resulted in over 4,100 human cases with estimated costs of more than 300 million euros. Smaller human Q fever outbreaks of small ruminant origin have occurred in the United States, and characterizing shedding is important to understand the risk of future outbreaks. In this study, we assessed bacterial shedding and seroprevalence in 100 sheep from an Idaho location associated with a 1984 human Q fever outbreak. We observed 5% seropositivity, which was not significantly different from the national average of 2.7% for the U.S. (P>0.05). Furthermore, C. burnetii was not detected by quantitative PCR from placentas, vaginal swabs, or fecal samples. Specifically, a three-target quantitative PCR of placenta identified 0.0% shedding (exact 95% confidence interval: 0.0%-2.9%). While presence of seropositive individuals demonstrates some historical C. burnetii exposure, the placental sample confidence interval suggests 2016 shedding events were rare or absent. The location maintained the flock with little or no depopulation in 1984 and without C. burnetii vaccination during or since 1984. It is not clear how a zero-shedding rate was achieved in these sheep beyond natural immunity, and more work is required to discover and assess possible factors that may contribute towards achieving zero-shedding status. We provide the first U.S. sheep placental C. burnetii shedding update in over 60 years and demonstrate potential for C. burnetii shedding to reach undetectable levels after an outbreak event even in the absence of targeted

  20. Measurement of the single 100 diffraction line and evaluation of the average crystallite sizes along the fiber axis for mesophase-pitch-based carbon fiber P100

    Yoshida, Akira; Kaburagi, Yutaka; Hishiyama, Yoshihiro

    2007-01-01

    Mesophase-pitch-based carbon fiber P100 is known as a well-oriented carbon fiber in which the partially graphitized crystallites align along the fiber axis. The X-ray powder diffraction pattern for P100 measured by the X-ray diffractometer reveals the 100 diffraction line as a composite peak with the 101 diffraction line. The composite peak is usually not easy to separate into the component 100 and 101 peaks. In the present article, a method to measure the single 100 diffraction line with the X-ray diffractometer using fiber samples of P100 has been developed. It has been found that there exist two types of crystallites oriented with their basal planes along the fiber axis in each of the P100 fibers: the Z-type crystallite with zigzag boundary planes and the A-type crystallite with armchair boundary planes, both boundary planes being perpendicular to the fiber axis. The average crystallite sizes along the fiber axis are evaluated as 53 nm for the Z-type crystallites and 800 nm for the A-type crystallites. The average crystallite thickness for both types is about 120 nm. (author)
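
The abstract does not state how the sizes were evaluated; a conventional route from a measured diffraction-line width to an average crystallite size is the Scherrer equation, sketched here with illustrative numbers (the wavelength, peak position, and width are assumptions, not values from the paper):

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Scherrer estimate of crystallite size from a diffraction line width."""
    beta = math.radians(fwhm_deg)             # line broadening in radians
    theta = math.radians(two_theta_deg / 2)   # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# illustrative values: Cu K-alpha radiation, graphite 100 line near 42.4 deg 2-theta
size_nm = scherrer_size(wavelength_nm=0.15406, fwhm_deg=0.15, two_theta_deg=42.4)
print(f"{size_nm:.0f} nm")
```

Instrumental broadening must be subtracted from the measured width before applying the formula; the sketch assumes a pre-corrected width.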

  1. A 100 m x 10 m Sonic to observe area averaged wind and temperature data in comparison to FTIR line integrated measurements

    Schleichardt, A; Barth, M; Raabe, A; Schaefer, K

    2008-01-01

    An acoustic tomographic system has been used to estimate area-averaged wind and temperature data within an area of 97 m x 12 m, considering the dependence of sound speed on meteorological conditions. To obtain information about the vertical structure of meteorological data, eight sound sources and receivers were placed at two different heights above the ground (0.5 m and 2.7 m). Spatially, the acoustic measurements correspond to line-integrated N2O concentration measurements (98 m) using FTIR spectrometers. Taking the stability of atmospheric layering into account, the acoustic tomographic measurements serve as a basis for estimating vertical fluxes of momentum and sensible heat.
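
The dependence of sound speed on temperature that underlies the tomographic retrieval can be illustrated with the dry-air approximation c ≈ 20.05·√T. The path length below matches the 98 m FTIR path, while the travel time is an illustrative assumption:

```python
import math

def temperature_from_travel_time(path_length_m, travel_time_s):
    """Path-averaged air temperature from an acoustic travel time,
    using the dry-air approximation c = 20.05 * sqrt(T[K]) m/s."""
    c = path_length_m / travel_time_s
    return (c / 20.05) ** 2          # Kelvin

# a 98 m path traversed in ~0.285 s corresponds to roughly 21 degC air
T_kelvin = temperature_from_travel_time(98.0, 0.285)
print(f"{T_kelvin - 273.15:.1f} degC")
```

Humidity and along-path wind shift the effective sound speed, which is why the tomographic system combines crossing paths to separate wind from temperature.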

  2. Rescuing Alu: recovery of new inserts shows LINE-1 preserves Alu activity through A-tail expansion.

    Bradley J Wagstaff

    Alu elements are trans-mobilized by the autonomous non-LTR retroelement LINE-1 (L1). Alu-induced insertion mutagenesis contributes to about 0.1% of human genetic disease and is responsible for the majority of the documented instances of human retroelement insertion-induced disease. Here we introduce a SINE recovery method that provides a complementary approach for comprehensive analysis of the impact and biological mechanisms of Alu retrotransposition. Using this approach, we recovered 226 de novo tagged Alu inserts in HeLa cells. Our analysis reveals that in human cells marked Alu inserts driven by either exogenously supplied full-length L1 or ORF2 protein are indistinguishable. Four percent of de novo Alu inserts were associated with genomic deletions and rearrangements and lacked the hallmarks of retrotransposition. In contrast to L1 inserts, 5' truncations of Alu inserts are rare, as most of the recovered inserts (96.5%) are full length. De novo Alus show a random pattern of insertion across chromosomes, but further characterization revealed an insertion bias favoring locations near other SINEs and highly conserved elements, with almost 60% landing within genes. De novo Alu inserts show no evidence of RNA editing. Priming for reverse transcription rarely occurred within the first 20 bp (most 5') of the A-tail. The A-tails of recovered inserts show significant expansion, with many at least doubling in length. Sequence manipulation of the construct demonstrated that the A-tail expansion likely occurs during insertion due to slippage by the L1 ORF2 protein. We postulate that the A-tail expansion directly impacts Alu evolution by reintroducing new active source elements, counteracting the natural loss of active Alus and minimizing Alu extinction.

  3. Chicken lines divergently selected for antibody responses to sheep red blood cells show line-specific differences in sensitivity to immunomodulation by diet. Part I: Humoral parameters

    Adriaansen-Tennekes, R.; Vries Reilingh, de G.; Nieuwland, M.G.B.; Parmentier, H.K.; Savelkoul, H.F.J.

    2009-01-01

    Individual differences in nutrient sensitivity have been suggested to be related with differences in stress sensitivity. Here we used layer hens divergently selected for high and low specific antibody responses to SRBC (i.e., low line hens and high line hens), reflecting a genetically based

  4. GEANT4 simulation diagram showing the architecture of the ATLAS test line: the detectors are positioned to receive the beam from the SPS. A muon particle which enters the magnet and crosses all detectors is shown (blue line).

    2004-01-01

  5. State Averages

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  6. Lung Adenocarcinomas and Lung Cancer Cell Lines Show Association of MMP-1 Expression With STAT3 Activation

    Alexander Schütz

    2015-04-01

    Full Text Available Signal transducer and activator of transcription 3 (STAT3) is constitutively activated in the majority of lung cancers. This study aims at defining connections between STAT3 function and the malignant properties of non-small cell lung carcinoma (NSCLC) cells. To address possible mechanisms by which STAT3 influences invasiveness, the expression of matrix metalloproteinase-1 (MMP-1) was analyzed and correlated with the STAT3 activity status. Studies on both surgical biopsies and lung cancer cell lines revealed a coincidence of STAT3 activation and strong expression of MMP-1. MMP-1 and tyrosine-phosphorylated (activated) STAT3 were found co-localized in cancer tissues, most pronounced at tumor fronts and in particular in adenocarcinomas. STAT3 activity was constitutive, although to different degrees, in the lung cancer cell lines investigated. Three cell lines (BEN, KNS62, and A549) were identified in which STAT3 activation was inducible by interleukin-6 (IL-6). In A549 cells, STAT3 activity enhanced the level of MMP-1 mRNA and stimulated transcription from the MMP-1 promoter in IL-6-stimulated A549 cells. STAT3 specificity of this effect was confirmed by STAT3 knockdown through RNA interference. Our results link aberrant activity of STAT3 in lung cancer cells to malignant tumor progression through up-regulation of expression of invasiveness-associated MMPs.

  7. Measurements of line-averaged electron density of pulsed plasmas using a He-Ne laser interferometer in a magnetized coaxial plasma gun device

    Iwamoto, D.; Sakuma, I.; Kitagawa, Y.; Kikuchi, Y.; Fukumoto, N.; Nagata, M.

    2012-10-01

    In next-step fusion devices such as ITER, the lifetime of plasma-facing materials (PFMs) is strongly affected by transient heat and particle loads during type I edge localized modes (ELMs) and disruptions. To clarify the damage characteristics of PFMs, transient heat and particle loads have been simulated using a plasma gun device. We have performed simulation experiments with a magnetized coaxial plasma gun (MCPG) device at the University of Hyogo. The line-averaged electron density measured by a He-Ne interferometer is 2x10^21 m^-3 in a drift tube. The plasma velocity measured by a time-of-flight technique and an ion Doppler spectrometer was 70 km/s, corresponding to an ion energy of 100 eV for helium. Thus, the ion flux density is 1.4x10^26 m^-2 s^-1. The MCPG is connected to a target chamber for material irradiation experiments, where it is important to measure plasma parameters in front of the target materials. In particular, the vapor cloud layer formed in front of the target material by pulsed plasma irradiation has to be characterized in order to understand the surface damage of PFMs under ELM-like plasma bombardment. At the conference, preliminary results of applying the He-Ne laser interferometer to this experiment will be shown.
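
    The quoted flux and ion-energy figures follow directly from the measured density and velocity. A quick consistency check (values taken from the abstract; the helium mass of 4 u is an assumption for the back-of-envelope estimate):

```python
# Consistency check of the quoted plasma-gun figures (values from the text).
n_e = 2e21                    # line-averaged electron density, m^-3
v = 70e3                      # plasma velocity from time-of-flight, m/s
flux = n_e * v                # ion flux density, m^-2 s^-1 -> 1.4e26, as quoted

m_he = 4 * 1.66054e-27        # helium ion mass, kg (4 u assumed)
e_joule = 0.5 * m_he * v**2   # kinetic energy per ion, J
e_ev = e_joule / 1.60218e-19  # ... in eV: roughly 100 eV, as quoted
```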

  8. A novel cell line derived from pleomorphic adenoma expresses MMP2, MMP9, TIMP1, TIMP2, and shows numeric chromosomal anomalies.

    Aline Semblano Carreira Falcão

    Full Text Available Pleomorphic adenoma is the most common salivary gland neoplasm, and it can be locally invasive despite its slow growth. This study aimed to establish a novel cell line (AP-1), derived from a human pleomorphic adenoma sample, to better understand the local invasiveness of this tumor. The AP-1 cell line was characterized by cell growth analysis, expression of epithelial and myoepithelial markers by immunofluorescence, electron microscopy, 3D cell culture assays, cytogenetic features, and a transcriptomic study. Expression of matrix metalloproteinases (MMPs) and their tissue inhibitors (TIMPs) was also analyzed by immunofluorescence and zymography. Furthermore, epithelial and myoepithelial markers, MMPs, and TIMPs were studied in the tumor that originated the cell line. AP-1 cells showed neoplastic epithelial and myoepithelial markers, such as cytokeratins, vimentin, S100 protein, and smooth-muscle actin. These molecules were also found in vivo, in the tumor that originated the cell line. MMPs and TIMPs were observed in vivo and in AP-1 cells. The growth curve showed that AP-1 exhibited a doubling time of 3.342 days. AP-1 cells grown inside Matrigel recapitulated the tumor architecture. Different numerical and structural chromosomal anomalies were visualized in the cytogenetic analysis. Transcriptomic analysis addressed the expression of 7 target genes (VIM, TIMP2, MMP2, MMP9, TIMP1, ACTA2, and PLAG1). Results were compared to the transcriptomic profile of non-neoplastic salivary gland cells (HSG). Only MMP9 was not expressed in both libraries, and VIM was expressed solely in the AP-1 library. The major difference in gene expression level between the AP-1 and HSG samples occurred for MMP2: this gene was 184 times more expressed in AP-1 cells. Our findings suggest that the AP-1 cell line could be a useful model for further studies on pleomorphic adenoma biology.

  9. LINES

    Minas Bakalchev

    2015-10-01

    Full Text Available The perception of elements in a system often creates their interdependence, interconditionality, and suppression. Lines, as basic geometrical elements, have become the model of a reductive world based on isolation according to certain criteria such as function, structure, and social organization. Their traces are experienced in the contemporary world as fragments or ruins of a system of domination of an assumed hierarchical unity. How can one release oneself from such dependence or determinism? How can lines become less "systematic" and forms more autonomous and less reductive? How is a form released from modernistic determinism on new, controversial ground? How can these elements or forms of representation become forms of action in the present complex world? In this paper, the meaning of lines is presented through the ideas of Le Corbusier, Leonidov, Picasso, and Hitchcock. Spatial research was carried out through a series of examples arising from the projects of the architectural studio "Residential Transformations", which served as a backbone for mapping possibilities ranging from playfulness to exactness, as tactics of transformation in the different contexts of the contemporary world.

  10. The sandfly Lutzomyia longipalpis LL5 embryonic cell line has active Toll and Imd pathways and shows immune responses to bacteria, yeast and Leishmania.

    Tinoco-Nunes, Bruno; Telleria, Erich Loza; da Silva-Neves, Monique; Marques, Christiane; Azevedo-Brito, Daisy Aline; Pitaluga, André Nóbrega; Traub-Csekö, Yara Maria

    2016-04-20

    Lutzomyia longipalpis is the main vector of visceral leishmaniasis in Latin America. Sandfly immune responses are poorly understood. In previous work we showed that these vector insects respond to bacterial infections by modulating defensin gene expression and activate the Imd pathway in response to Leishmania infection. Aspects of innate immune pathways in insects (including mosquito vectors of human diseases) have been revealed by studying insect cell lines, and we have previously demonstrated antiviral responses in the L. longipalpis embryonic cell line LL5. The expression patterns of antimicrobial peptides (AMPs) and transcription factors were evaluated after silencing the repressors of the Toll pathway (cactus) and the Imd pathway (caspar), and after challenge with heat-killed bacteria, heat-killed yeast, or live Leishmania. These studies showed that LL5 cells have active Toll and Imd pathways, since they displayed increased expression of AMP genes following silencing of the repressors cactus and caspar, respectively. These pathways were also activated by challenges with bacteria, yeast, and Leishmania infantum chagasi. We demonstrated that L. longipalpis LL5 embryonic cells respond to immune stimuli and are therefore a good model for studying the immunological pathways of this important vector of leishmaniasis.

  11. Theileria parva antigens recognized by CD8+ T cells show varying degrees of diversity in buffalo-derived infected cell lines.

    Sitt, Tatjana; Pelle, Roger; Chepkwony, Maurine; Morrison, W Ivan; Toye, Philip

    2018-05-06

    The extent of sequence diversity among the genes encoding 10 antigens (Tp1-10) known to be recognized by CD8+ T lymphocytes from cattle immune to Theileria parva was analysed. The sequences were derived from parasites in 23 buffalo-derived cell lines, three cattle-derived isolates, and one cloned cell line obtained from a buffalo-derived stabilate. The results revealed substantial sequence variation among the antigens. The greatest nucleotide and amino acid diversity was observed in Tp1, Tp2 and Tp9. Tp5 and Tp7 showed the least allelic diversity, and Tp5, Tp6 and Tp7 had the lowest levels of protein diversity. Tp6 was the most conserved protein; only a single non-synonymous substitution was found in all obtained sequences. The ratio of non-synonymous to synonymous substitutions varied from 0.84 (Tp1) to 0.04 (Tp6). Apart from Tp2 and Tp9, we observed no variation in the other defined CD8+ T cell epitopes (Tp4, 5, 7 and 8), indicating that epitope variation is not a universal feature of T. parva antigens. In addition to providing markers that can be used to examine the diversity in T. parva populations, the results highlight the potential for using conserved antigens to develop vaccines that provide broad protection against T. parva.

  12. A naturally derived gastric cancer cell line shows latency I Epstein-Barr virus infection closely resembling EBV-associated gastric cancer

    Oh, Sang Taek; Seo, Jung Seon; Moon, Uk Yeol; Kang, Kyeong Hee; Shin, Dong-Jik; Yoon, Sungjoo Kim; Kim, Woo Ho; Park, Jae-Gahb; Lee, Suk Kyeong

    2004-01-01

    In the process of seeking out a good model cell line for Epstein-Barr virus (EBV)-associated gastric cancer, we found that one previously established gastric adenocarcinoma cell line is infected with type 1 EBV. This SNU-719 cell line, from a Korean patient, expressed cytokeratin without CD19 or CD21 expression. In SNU-719, EBNA1 and LMP2A were expressed, while LMP1 and EBNA2 were not. None of the tested lytic EBV proteins were detected in this cell line unless it was stimulated with phorbol ester. EBV infection was also shown in the original carcinoma tissue of the SNU-719 cell line. Our results support the possibility of a CD21-independent EBV infection of gastric epithelial cells in vivo. As the latent EBV gene expression pattern of SNU-719 closely resembles that of EBV-associated gastric cancer, this naturally derived cell line may serve as a valuable model system to clarify the precise role of EBV in gastric carcinogenesis.

  13. Cisgenic Rvi6 scab-resistant apple lines show no differences in Rvi6 transcription when compared with conventionally bred cultivars.

    Chizzali, Cornelia; Gusberti, Michele; Schouten, Henk J; Gessler, Cesare; Broggini, Giovanni A L

    2016-03-01

    The expression of the apple scab resistance gene Rvi6 in different apple cultivars and lines is not modulated by biotic or abiotic factors. All commercially important apple cultivars are susceptible to Venturia inaequalis, the causal organism of apple scab. A limited number of apple cultivars were bred to express the resistance gene Vf from the wild apple genotype Malus floribunda 821. Positional cloning of the Vf locus allowed the identification of the Rvi6 (formerly HcrVf2) scab resistance gene that was subsequently used to generate cisgenic apple lines. It is important to understand and compare how this resistance gene is transcribed and modulated during infection in conventionally bred cultivars and in cisgenic lines. The aim of this work was to study the transcription pattern of Rvi6 in three classically bred apple cultivars and six lines of 'Gala' genetically modified to express Rvi6. Rvi6 transcription was analyzed at two time points using quantitative real-time PCR (RT-qPCR) following inoculation with V. inaequalis conidia or water. Rvi6 transcription was assessed in relation to five reference genes. β-Actin, RNAPol, and UBC were the most suited to performing RT-qPCR experiments on Malus × domestica. Inoculation with V. inaequalis conidia under conditions conducive to scab infection failed to produce any significant changes to the transcription level of Rvi6. Rvi6 expression levels were inconsistent in response to external treatments in the different apple cultivars, and transgenic, intragenic or cisgenic lines.

  14. Cisgenic Rvi6 scab-resistant apple lines show no differences in Rvi6 transcription when compared with conventionally bred cultivars

    Chizzali, Cornelia; Gusberti, Michele; Schouten, H.J.; Gessler, Cesare; Broggini, G.A.L.

    2016-01-01

    Main conclusion: The expression of the apple scab resistance gene Rvi6 in different apple cultivars and lines is not modulated by biotic or abiotic factors. All commercially important apple cultivars are susceptible to Venturia inaequalis, the causal organism of apple scab. A limited number of apple

  15. Mice lacking Ras-GRF1 show contextual fear conditioning but not spatial memory impairments: convergent evidence from two independently generated mouse mutant lines

    Raffaele ed'Isa

    2011-12-01

    Full Text Available Ras-GRF1 is a neuron-specific guanine nucleotide exchange factor that, once activated by both ionotropic and metabotropic neurotransmitter receptors, can stimulate Ras proteins, leading to long-term phosphorylation of downstream signaling. The two available reports on the behavior of two independently generated Ras-GRF1-deficient mouse lines provide contrasting evidence on the role of Ras-GRF1 in spatial memory and contextual fear conditioning. These discrepancies may be due to the distinct alterations introduced in the mouse genome by gene targeting in the two lines, which could differentially affect the expression of nearby genes located in the imprinted region containing the Ras-grf1 locus. In order to determine the real contribution of Ras-GRF1 to spatial memory, we compared, in Morris water maze learning, the Brambilla mice with a third mouse line (GENA53) in which a nonsense mutation was introduced in the Ras-GRF1 coding region without additional changes in the genome, and we found that memory in this task is normal. We also measured both contextual and cued fear conditioning, which were previously reported to be affected in the Brambilla mice, and we confirmed that contextual learning but not cued conditioning is impaired in both mouse lines. In addition, we tested both lines for the first time in conditioned place aversion in the IntelliCage, an ecological and remotely controlled behavioral test, and we observed normal learning. Finally, based on previous reports of other mutant lines suggesting that Ras-GRF1 may control body weight, we also measured this non-cognitive phenotype and confirmed that both Ras-GRF1-deficient mutants are smaller than their control littermates. In conclusion, we demonstrate that Ras-GRF1 has no unique role in spatial memory, while its function in contextual fear conditioning is likely to be due not only to its involvement in amygdalar functions but possibly to some distinct hippocampal connections specific to

  16. Average nuclear surface properties

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is also extended to the case in which there is a neutron gas, instead of vacuum, on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, past the neutron-drip line, to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  17. Epigenetic variants of a transgenic petunia line show hypermethylation in transgene DNA: an indication for specific recognition of foreign DNA in transgenic plants.

    Meyer, P; Heidmann, I

    1994-05-25

    We analysed de novo DNA methylation occurring in plants obtained from the transgenic petunia line R101-17. This line contains one copy of the maize A1 gene that leads to the production of brick-red pelargonidin pigment in the flowers. Due to its integration into an unmethylated genomic region the A1 transgene is hypomethylated and transcriptionally active. Several epigenetic variants of line 17 were selected that exhibit characteristic and somatically stable pigmentation patterns, displaying fully coloured, marbled or colourless flowers. Analysis of the DNA methylation patterns revealed that the decrease in pigmentation among the epigenetic variants was correlated with an increase in methylation, specifically of the transgene DNA. No change in methylation of the hypomethylated integration region could be detected. A similar increase in methylation, specifically in the transgene region, was also observed among progeny of R101-17del, a deletion derivative of R101-17 that no longer produces pelargonidin pigments due to a deletion in the A1 coding region. Again de novo methylation is specifically directed to the transgene, while the hypomethylated character of neighbouring regions is not affected. Possible mechanisms for transgene-specific methylation and its consequences for long-term use of transgenic material are discussed.

  18. Average-energy games

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity energy storage. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
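
    For intuition about the objective, the average-energy of an ultimately periodic play can be computed directly. The helper below is a hypothetical illustration of the quantity being optimized on a single repeated cycle of edge weights, not the paper's decision procedure:

```python
from itertools import accumulate

def average_energy_of_cycle(weights):
    """Long-run average of the accumulated energy along a play that
    repeats `weights` forever.

    If the cycle sum is non-zero, the accumulated energy drifts and the
    average diverges (signalled here by returning None).  If the cycle
    sum is zero, the energy levels are periodic, and the long-run
    average equals the mean energy level over one period.
    """
    if sum(weights) != 0:
        return None                        # energy diverges to +/- infinity
    levels = list(accumulate(weights))     # energy level after each step
    return sum(levels) / len(levels)

# Cycle -1, +3, -2: sum is 0, energy levels are [-1, 2, 0],
# so the average-energy of this play is 1/3.
ae = average_energy_of_cycle([-1, 3, -2])
```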

  19. Physiological investigation of C4-phosphoenolpyruvate-carboxylase-introduced rice line shows that sucrose metabolism is involved in the improved drought tolerance.

    Zhang, Chen; Li, Xia; He, Yafei; Zhang, Jinfei; Yan, Ting; Liu, Xiaolong

    2017-06-01

    We compared the drought tolerance of wild-type (WT) and transgenic rice plants (PC) over-expressing the maize C4 PEPC gene, which encodes phosphoenolpyruvate carboxylase (PEPC, EC 4.1.1.31), and evaluated the roles of saccharides and sugar-related enzymes in the drought response. Pot-grown seedlings were subjected to real drought conditions outdoors, and the yield components were compared between PC and untransformed wild-type (WT) plants. The stable yield from PC plants was associated with a higher net photosynthetic rate under the real drought treatment. The physiological characteristics of WT and PC seedlings under a simulated drought treatment (25% (w/v) polyethylene glycol-6000 for 3 h; PEG 6000 treatment) were analyzed in detail for the early drought response. The relative water content was higher in PC than in WT, and PEPC activity and the C4-PEPC transcript level in PC were elevated under the simulated drought conditions. The endogenous saccharide responses also differed between PC and WT under simulated drought stress. The higher sugar decomposition rate in PC than in WT under simulated drought stress was related to the increased activities of sucrose phosphate synthase, sucrose synthase, acid invertase, and neutral invertase, increased transcript levels of VIN1, CIN1, NIN1, SUT2, SUT4, and SUT5, and increased activities of superoxide dismutase and peroxidase in the leaves. The greater antioxidant defense capacity of PC and its relationship with saccharide metabolism was one of the reasons for the improved drought tolerance. In conclusion, PEPC effectively alleviated oxidative damage and enhanced drought tolerance in rice plants, effects that were closely related to the increase in endogenous saccharide decomposition. These findings show that components of C4 photosynthesis can be used to increase the yield of rice under drought conditions. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  20. Neutron resonance averaging

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  1. Wave function collapse implies divergence of average displacement

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  2. On Averaging Rotations

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold, and the two barycenter approaches are shown to be approximations to the Riemannian metric, with the subsequent corrections inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
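
    The quaternion-barycenter approach that the record mentions (and critiques) can be sketched in a few lines. This is the naive estimator, reasonable only for tightly clustered rotations, not the Riemannian mean it is compared against; the (w, x, y, z) component order and pure-Python representation are choices made here for illustration:

```python
import math

def quaternion_barycenter(quats):
    """Naive rotation average: normalized arithmetic mean of unit quaternions.

    Signs are aligned to the first quaternion before averaging, because q
    and -q encode the same rotation.  This is the barycenter estimate that
    only approximates the Riemannian mean, and only for well-clustered
    rotations.
    """
    ref = quats[0]
    aligned = []
    for q in quats:
        # Flip quaternions lying on the opposite hemisphere of the reference.
        if sum(a * b for a, b in zip(q, ref)) < 0:
            q = [-c for c in q]
        aligned.append(q)
    mean = [sum(c) / len(aligned) for c in zip(*aligned)]
    norm = math.sqrt(sum(c * c for c in mean))
    return [c / norm for c in mean]

# Averaging the identity with a 90-degree z-rotation: for this tightly
# clustered two-sample case the barycenter is exactly the 45-degree
# z-rotation, i.e. (cos(pi/8), 0, 0, sin(pi/8)).
q_mean = quaternion_barycenter([
    [1.0, 0.0, 0.0, 0.0],
    [math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)],
])
```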

  3. Ergodic averages via dominating processes

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  4. Averaged RMHD equations

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  5. Determining average yarding distance.

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  6. Average Revisited in Context

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  7. Averaging operations on matrices

    2014-07-03

    Role of positive definite matrices: in diffusion tensor imaging, 3 × 3 pd matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 pd matrices model stress tensors; in machine learning, n × n pd matrices occur as kernel matrices. Tanvi Jain, "Averaging operations on matrices" ...
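
    A central averaging operation on positive definite matrices is the matrix geometric mean. The formula below is the standard Kubo-Ando / Riemannian-midpoint definition, stated here for context rather than quoted from the record:

```latex
A \,\#\, B \;=\; A^{1/2}\left(A^{-1/2}\, B\, A^{-1/2}\right)^{1/2} A^{1/2}
```

    For commuting (in particular, scalar) arguments this reduces to the ordinary geometric mean \(\sqrt{ab}\); it is symmetric in \(A\) and \(B\), and it is the midpoint of the geodesic joining \(A\) and \(B\) in the natural Riemannian metric on the cone of positive definite matrices.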

  8. The flinders sensitive line rats, a genetic model of depression, show abnormal serotonin receptor mRNA expression in the brain that is reversed by 17beta-estradiol.

    Osterlund, M K; Overstreet, D H; Hurd, Y L

    1999-12-10

    The possible link between estrogen and serotonin (5-HT) in depression was investigated using a genetic animal model of depression, the Flinders Sensitive Line (FSL) rats, in comparison to control Flinders Resistant Line rats. The mRNA levels of the estrogen receptor (ER) alpha and beta subtypes and the 5-HT(1A) and 5-HT(2A) receptors were analyzed in several limbic-related areas of ovariectomized FSL and FRL rats treated with 17beta-estradiol (0.15 microg/g) or vehicle. The FSL animals were shown to express significantly lower levels of the 5-HT(2A) receptor transcripts in the perirhinal cortex, piriform cortex, and medial anterodorsal amygdala and higher levels in the CA 2-3 region of the hippocampus. The only significant difference between the rat lines in ER mRNA expression was found in the medial posterodorsal amygdala, where the FSL rats showed lower ERalpha expression levels. Overall, estradiol treatment increased 5-HT(2A) and decreased 5-HT(1A) receptor mRNA levels in several of the examined regions of both lines. Thus, in many areas, estradiol was found to regulate the 5-HT receptor mRNA expression in the opposite direction to the alterations found in the FSL rats. These findings further support the implication of 5-HT receptors, in particular the 5-HT(2A) subtype, in the etiology of affective disorders. Moreover, the ability of estradiol to regulate the expression of the 5-HT(1A) and 5-HT(2A) receptor genes might account for the reported influence of gonadal hormones in mood and depression.

  9. On Averaging Rotations

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold, and the two barycenter approaches are shown to be approximations to the Riemannian metric, with the subsequent corrections inherent in the least squares estimation.

  10. Average is Over

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shaped statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
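
    The mean-divergence phenomenon can be illustrated with a toy heavy-tailed example. A Pareto distribution is chosen here purely for illustration (the paper's own models differ): for tail exponent alpha <= 1 the population mean is infinite, so sample averages never settle, while for alpha > 1 they converge:

```python
import random

def pareto_sample_mean(alpha, n, seed=1):
    """Sample mean of n draws from a Pareto(alpha) distribution with
    x_min = 1 (density proportional to x^-(alpha+1)), drawn by
    inverse-CDF sampling.  For alpha <= 1 the population mean is
    infinite, so sample means grow without settling; this is a toy
    illustration of mean divergence, not the paper's models.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        total += (1.0 - u) ** (-1.0 / alpha)   # inverse CDF of Pareto(alpha)
    return total / n

m_heavy = pareto_sample_mean(0.8, 100_000)   # alpha <= 1: mean diverges
m_light = pareto_sample_mean(3.0, 100_000)   # alpha = 3: population mean 3/2
```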

  11. Americans' Average Radiation Exposure

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body

  12. Trajectory averaging for stochastic approximation MCMC algorithms

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
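
    The trajectory averaging estimator itself is simple to state: report the running average of the stochastic approximation iterates rather than the final iterate. Below is a minimal Robbins-Monro sketch with that averaging; the quadratic target, noise model, and step-size schedule are illustrative choices made here, not the SAMCMC setup of the paper:

```python
import random

def sa_with_trajectory_averaging(grad, theta0, n_steps, seed=0):
    """Robbins-Monro iteration with trajectory (Polyak-Ruppert-style) averaging.

    Iterates theta_{k+1} = theta_k - a_k * (grad(theta_k) + noise) and
    returns the running average of the trajectory, which is the
    asymptotically efficient estimator discussed in the paper (shown
    here on an illustrative scalar problem).
    """
    rng = random.Random(seed)
    theta = theta0
    avg = 0.0
    for k in range(1, n_steps + 1):
        a_k = 1.0 / k**0.7                 # sum a_k = inf, sum a_k^2 < inf
        g = grad(theta) + rng.gauss(0, 1)  # noisy gradient observation
        theta -= a_k * g
        avg += (theta - avg) / k           # running average of theta_1..theta_k
    return avg

# Minimize E[(theta - 3)^2 / 2]: the exact gradient is (theta - 3), so the
# averaged iterate converges to the root theta* = 3.
est = sa_with_trajectory_averaging(lambda t: t - 3.0, theta0=0.0, n_steps=20_000)
```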

  13. Computation of the bounce-average code

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended.

  14. Lessons from the use of genetically modified Drosophila melanogaster in ecological studies: Hsf mutant lines show highly trait-specific performance in field and laboratory thermal assays

    Sørensen, Jesper Givskov; Loeschcke, Volker; Kristensen, Torsten Nygård

    2009-01-01

    1. Laboratory studies on genetically modified strains may reveal important information on mechanisms involved in coping with thermal stress. However, to address the evolutionary significance of specific genes or physiological mechanisms, ecologically relevant field tests should also be performed. 2. We have tested the importance of inducible heat shock proteins (Hsps) under different thermal conditions using two heat shock factor (Hsf) mutant lines (either able (Hsf+) or unable (Hsf0) to mount a heat stress response) and an outbred, laboratory-adapted wild-type line of Drosophila…

  15. Topological quantization of ensemble averages

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool for scientists looking for novel manifestations of topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states.

  16. The difference between alternative averages

    James Vaupel

    2012-09-01

    BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
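The stated identity can be checked numerically. In the sketch below (illustrative numbers only), the claim is that Ā_w − Ā_v = Cov_v(x, w/v) / Ē_v(w/v), where Ē_v and Cov_v denote the v-weighted average and covariance:

```python
def weighted_avg(x, w):
    """Weighted average of x under weighting function w."""
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

x = [1.0, 2.0, 4.0, 8.0]   # the variable
w = [1.0, 2.0, 3.0, 4.0]   # first weighting function
v = [4.0, 3.0, 2.0, 1.0]   # second weighting function

r = [wi / vi for wi, vi in zip(w, v)]   # ratio of the weighting functions
avg_r = weighted_avg(r, v)              # v-weighted average of the ratio
# v-weighted covariance between the variable and the ratio
cov = weighted_avg([xi * ri for xi, ri in zip(x, r)], v) - weighted_avg(x, v) * avg_r

lhs = weighted_avg(x, w) - weighted_avg(x, v)   # difference of the two averages
rhs = cov / avg_r                               # covariance over average ratio
```

The identity is exact (not an approximation), as a one-line expansion of Ā_w = Ē_v(Rx)/Ē_v(R) with R = w/v shows.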

  17. Spacetime averaging of exotic singularity universes

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  18. Flexible time domain averaging technique

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
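For contrast with the FTDA, the classical TDA that the abstract improves upon can be sketched in a few lines when the period is a known integer number of samples (the synthetic signal and period values below are hypothetical):

```python
import math

def time_domain_average(signal, period):
    """Classical TDA: cut the signal into segments one period long and
    average them point-wise; components not synchronous with the period
    are attenuated, which is the comb-filter behaviour described above."""
    n_seg = len(signal) // period
    return [sum(signal[k * period + i] for k in range(n_seg)) / n_seg
            for i in range(period)]

period = 40
n_seg = 34  # a multiple of 17, so the 17-sample interference cancels exactly
# periodic component of interest plus interference of a different period
sig = [math.sin(2 * math.pi * i / period) + 0.5 * math.sin(2 * math.pi * i / 17)
       for i in range(n_seg * period)]

avg = time_domain_average(sig, period)  # recovers one clean period
```

With a non-integer period the segment boundaries drift, which is exactly the period cutting error (PCE) that motivates the FTDA.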

  19. Average Soil Water Retention Curves Measured by Neutron Radiography

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents, the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximately one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
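The pixel-wise conversion can be sketched as follows: Beer-Lambert's law relates transmitted to incident flux through the thickness of attenuating water, and dividing by the saturated value gives the relative saturation used for the retention curve. The attenuation coefficient and flux values below are hypothetical, and the real processing also includes the beam hardening and geometric corrections mentioned in the abstract.

```python
import math

MU_W = 0.35  # hypothetical effective attenuation coefficient of water, 1/cm

def water_thickness(I, I0, mu=MU_W):
    """Invert Beer-Lambert's law, I = I0 * exp(-mu * t), for thickness t."""
    return math.log(I0 / I) / mu

# transmitted neutron flux at one pixel: dry, partially saturated, saturated
I0, I_part, I_sat = 1000.0, 800.0, 600.0

t_part = water_thickness(I_part, I0)
t_sat = water_thickness(I_sat, I0)

# normalising by the saturated image gives the relative saturation that is
# averaged over pixels to build the average retention curve
rel_sat = t_part / t_sat
```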

  20. How to average logarithmic retrievals?

    B. Funke

    2012-04-01

    Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
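The core of the linear-versus-logarithmic bias is elementary: exponentiating the mean of the logs yields the geometric mean, which for a variable abundance distribution sits well below the linear mean. A minimal simulation (the lognormal distribution and its parameters are hypothetical stand-ins for natural variability):

```python
import math
import random

random.seed(1)

# lognormal "true" abundances with substantial natural variability
samples = [random.lognormvariate(0.0, 1.0) for _ in range(100000)]

# average of the abundances themselves
linear_mean = sum(samples) / len(samples)
# exponential of the averaged logarithms (geometric mean)
log_mean = math.exp(sum(math.log(s) for s in samples) / len(samples))

# for lognormal(mu=0, sigma=1): linear mean -> exp(0.5) ~ 1.65, log mean -> 1.0,
# i.e. averaging in log space underestimates the mean abundance by ~40%
```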

  1. Lagrangian averaging with geodesic mean.

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  2. Cluster Analysis of Maize Inbred Lines

    Jiban Shrestha

    2016-12-01

    The determination of diversity among inbred lines is important for heterosis breeding. Sixty maize inbred lines were evaluated for eight agro-morphological traits during the winter season of 2011 to analyze their genetic diversity. Clustering was done by the average linkage method. The inbred lines were grouped into six clusters. Inbred lines grouped into cluster II had taller plants with the maximum number of leaves. Cluster III was characterized by shorter plants with the minimum number of leaves. The inbred lines in cluster V flowered early, whereas those in cluster VI flowered late. The inbred lines grouped into cluster III were characterized by a higher value of the anthesis-silking interval (ASI), and those of cluster VI had a lower value of ASI. These results showed that inbred lines from widely divergent clusters can be utilized in a hybrid breeding programme.

  3. Averaging in spherically symmetric cosmology

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.

  4. An approximate analytical approach to resampling averages

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
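The Monte-Carlo baseline that such analytic approximations are compared against is the brute-force resampling average, which re-evaluates the estimator on every bootstrap sample (a sketch with a deliberately trivial estimator; the data and resample counts are arbitrary):

```python
import random
import statistics

random.seed(2)
data = [random.gauss(0.0, 1.0) for _ in range(50)]

def bootstrap_average(data, estimator, n_resamples=2000):
    """Monte-Carlo resampling average: draw bootstrap samples with
    replacement, apply the estimator to each, and average the results.
    This repeated re-fitting is exactly what the analytic (replica/TAP)
    approach avoids."""
    vals = []
    for _ in range(n_resamples):
        sample = [random.choice(data) for _ in data]
        vals.append(estimator(sample))
    return statistics.fmean(vals)

boot_mean = bootstrap_average(data, statistics.fmean)
```

For the sample mean the resampling average simply reproduces the plain estimate; the Monte-Carlo cost only pays off for estimators whose resampling distribution is non-trivial, which is where an analytic shortcut matters.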

  5. Averaging models: parameters estimation with the R-Average procedure

    S. Noventa

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  6. Average Nuclear properties based on statistical model

    El-Jaick, L.J.

    1974-01-01

    The gross properties of nuclei were investigated with a statistical model, in systems with equal and with different numbers of protons and neutrons, considered separately, with the Coulomb energy included in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt

  7. Multiphase averaging of periodic soliton equations

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  8. Molecularly characterized solvent extracts and saponins from Polygonum hydropiper L show high anti-angiogenic, anti-tumor, brine shrimp and fibroblast NIH/3T3 cell line cytotoxicity

    Muhammad eAyaz

    2016-03-01

    Polygonum hydropiper is used as an anti-cancer and anti-rheumatic agent in folk medicine. This study was designed to investigate the anti-angiogenic, anti-tumor and cytotoxic potentials of different solvent extracts and isolated saponins. Samples were analyzed using GC and GC-MS to identify major and bioactive compounds. Quantitation of anti-angiogenesis for the plant samples, including the methanolic extract (Ph.Cr) and its subsequent fractions n-hexane (Ph.Hex), chloroform (Ph.Chf), ethyl acetate (Ph.EtAc), n-butanol (Ph.Bt), aqueous (Ph.Aq) and saponins (Ph.Sp), was performed using the chick embryo chorioallantoic membrane (CAM) assay. A potato disc anti-tumor assay was performed on Agrobacterium tumefaciens containing a tumor-inducing plasmid. Cytotoxicity was assessed on Artemia salina and the mouse embryonic fibroblast NIH/3T3 cell line using brine shrimp and MTT cell viability assays. The GC-MS analysis of Ph.Cr, Ph.Hex, Ph.Chf, Ph.Bt and Ph.EtAc identified 126, 124, 153, 131 and 164 compounds, respectively. In the anti-angiogenic assay, Ph.Chf, Ph.Sp, Ph.EtAc and Ph.Cr exhibited the highest activity, with IC50 values of 28.65, 19.21, 88.75 and 461.53 µg/ml, respectively. In the anti-tumor assay, Ph.Sp, Ph.Chf, Ph.EtAc and Ph.Cr were most potent, with IC50 values of 18.39, 73.81, 217.19 and 342.53 µg/ml, respectively. In the MTT cell viability assay, Ph.Chf, Ph.EtAc and Ph.Sp were most active, causing 79.00, 72.50 and 71.50% cytotoxicity, respectively, at 1000 µg/ml, with LD50 values of 140, 160 and 175 µg/ml, respectively. Overall, Ph.Chf and Ph.Sp showed compelling results, which signifies their potential as sources of therapeutic agents against cancer.

  9. Evaluations of average level spacings

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example using 168Er data. 19 figures, 2 tables

  10. High average power supercontinuum sources

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  11. A cosmic ray muon going through CMS with the magnet at full field. The line shows the path of the muon reconstructed from information recorded in the various detectors.

    Ianna, Osborne

    2007-01-01

    The event display of event 3981 from MTCC run 2605. The data were taken with a magnetic field of 3.8 T. A detailed model of the magnetic field corresponding to 4 T is shown as a color gradient from 4 T at the center (red) to 0 T outside the detector (blue). The cosmic muon was detected by all four detectors participating in the run: the drift tube, HCAL, tracker and ECAL subdetectors, and it was reconstructed online. The event display shows the reconstructed 4D segments in the drift tubes (magenta), the reconstructed hits in HCAL (blue), the locally reconstructed track in the tracker (green), and the uncalibrated rec hits in ECAL (light green). A muon track was reconstructed in the drift tubes and extrapolated back into the detector taking the magnetic field into account (green).

  12. A Novel Busbar Protection Based on the Average Product of Fault Components

    Guibin Zou

    2018-05-01

    This paper proposes an original busbar protection method based on the characteristics of the fault components. The method first extracts the fault components of the current and voltage after the occurrence of a fault, secondly uses a novel phase-mode transformation array to obtain the aerial mode components, and lastly obtains the sign of the average product of the aerial mode voltage and current. For a fault on the busbar, the average products detected on all of the lines linked to the faulted busbar are all positive within a specific post-fault duration. However, for a fault on any one of these lines, the average product detected on the faulted line is negative, while those on the non-faulted lines are positive. On the basis of this characteristic difference, the identification criterion of the fault direction is established. By comparing the fault directions on all of the lines, the busbar protection can quickly discriminate between an internal fault and an external fault. Using the PSCAD/EMTDC software (4.6.0.0, Manitoba HVDC Research Centre, Winnipeg, MB, Canada), a typical 500 kV busbar model with a one-and-a-half circuit breaker configuration was constructed. The simulation results show that the proposed busbar protection has good adjustability, high reliability, and rapid operation speed.
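The decision logic described above can be sketched in a few lines. The waveforms and sign convention here are purely illustrative stand-ins (in-phase fault components give a positive average product, anti-phase components a negative one); they are not the paper's phase-mode transformation or its aerial mode extraction.

```python
import math

def avg_product_sign(du, di):
    """Sign of the average product of fault-component voltage and current
    samples over the post-fault window."""
    p = sum(u * i for u, i in zip(du, di)) / len(du)
    return 1 if p > 0 else -1

def is_internal_fault(line_signs):
    """Busbar (internal) fault if the average product is positive on every
    connected line; a negative sign on one line points to a fault on that
    line, i.e. an external fault."""
    return all(s > 0 for s in line_signs)

# illustrative fault-component waveforms on one line
N = 64
u = [math.sin(2 * math.pi * k / N) for k in range(N)]
i_fwd = [0.8 * math.sin(2 * math.pi * k / N) for k in range(N)]   # in phase
i_rev = [-0.8 * math.sin(2 * math.pi * k / N) for k in range(N)]  # anti-phase
```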

  13. When good = better than average

    Don A. Moore

    2007-10-01

    People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage, during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  14. Autoregressive Moving Average Graph Filtering

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs is graph filters, direct analogues of classical filters but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
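A minimal instance of such a recursion is the first-order ARMA graph filter y ← ψ L y + φ x, which converges (when |ψ| times the spectral radius of the graph operator L is below one) to the rational graph frequency response φ/(1 − ψλ). The sketch below runs it on a 4-node path graph; the graph, coefficients, and input signal are arbitrary choices for illustration, not taken from the paper.

```python
def matvec(M, v):
    """Dense matrix-vector product."""
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

# graph Laplacian of a 4-node path graph
L = [[ 1.0, -1.0,  0.0,  0.0],
     [-1.0,  2.0, -1.0,  0.0],
     [ 0.0, -1.0,  2.0, -1.0],
     [ 0.0,  0.0, -1.0,  1.0]]

psi, phi = 0.2, 1.0          # |psi| * spectral_radius(L) < 1 => convergence
x = [1.0, 0.0, 0.0, 2.0]     # graph signal to be filtered

# distributed ARMA_1 recursion: each node only needs its neighbours' values
y = [0.0] * 4
for _ in range(200):
    y = [psi * ly + phi * xi for ly, xi in zip(matvec(L, y), x)]
# at the fixed point, y = phi * (I - psi * L)^{-1} x, i.e. the graph
# frequency response phi / (1 - psi * lambda) on each Laplacian eigenvalue
```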

  15. Averaging Robertson-Walker cosmologies

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.

  16. Bayesian Averaging is Well-Temperated

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic, just like the predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization-optimal given that the prior matches the teacher parameter distribution, the situation is l...

  17. Gibbs equilibrium averages and Bogolyubov measure

    Sankovich, D.P.

    2011-01-01

    Application of functional integration methods in the equilibrium statistical mechanics of quantum Bose systems is considered. We show that Gibbs equilibrium averages of Bose operators can be represented as path integrals over a special Gaussian measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure.

  18. Function reconstruction from noisy local averages

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.

  19. Exploiting scale dependence in cosmological averaging

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  20. Aperture averaging in strong oceanic turbulence

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    A receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.

  1. Show-Bix &

    2014-01-01

    The anti-reenactment 'Show-Bix &' consists of 5 slide projectors, a dial phone, quintophonic sound, and interactive elements. A responsive interface will enable the slide projectors to show copies of original slides from the Show-Bix piece ”March på Stedet”, 265 images in total. The copies are...

  2. The average Indian female nose.

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  3. Model averaging, optimal inference and habit formation

    Thomas H B FitzGerald

    2014-06-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
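The mechanics of Bayesian model averaging are simple to state: convert (log) model evidences into posterior model probabilities and mix each model's prediction by its probability. The log evidences and predictions below are hypothetical numbers for illustration only.

```python
import math

def model_weights(log_evidences):
    """Posterior model probabilities from log evidences, assuming equal
    prior probability for each model (softmax of the log evidences)."""
    m = max(log_evidences)
    unnorm = [math.exp(le - m) for le in log_evidences]  # shift for stability
    z = sum(unnorm)
    return [u / z for u in unnorm]

# hypothetical log evidences for two competing models of the environment
w = model_weights([-10.2, -11.5])

# each model's prediction of some quantity; BMA mixes them by weight
preds = [0.9, 0.3]
bma_prediction = sum(wi * pi for wi, pi in zip(w, preds))
```

Because evidence penalizes complexity as well as rewarding fit, the weights implement exactly the accuracy-complexity trade-off the abstract describes.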

  4. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  5. Talking with TV shows

    Sandvik, Kjetil; Laursen, Ditte

    2014-01-01

    User interaction with radio and television programmes is not a new thing. However, with new cross-media production concepts such as X Factor and Voice, this is changing dramatically. The second-screen logic of these productions encourages viewers, along with TV’s traditional one-way communication...... mode, to communicate on interactive (dialogue-enabling) devices such as laptops, smartphones and tablets. Using the TV show Voice as our example, this article shows how the technological and situational set-up of the production invites viewers to engage in new ways of interaction and communication...

  6. Talk Show Science.

    Moore, Mitzi Ruth

    1992-01-01

    Proposes having students perform skits in which they play the roles of the science concepts they are trying to understand. Provides the dialog for a skit in which hot and cold gas molecules are interviewed on a talk show to study how these properties affect wind, rain, and other weather phenomena. (MDH)

  7. Obesity in show cats.

    Corbee, R J

    2014-12-01

    Obesity is an important disease with a high prevalence in cats. Because obesity is related to several other diseases, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain cat breeds has been suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of those breeds. To this end, 268 cats of 22 different breeds were investigated by determining their body condition score (BCS) on a nine-point scale by inspection and palpation at two different cat shows. Overall, 45.5% of the show cats had a BCS > 5, and 4.5% of the show cats had a BCS > 7. There were significant differences between breeds, which could be related to the breed standards. Most overweight and obese cats were in the neutered group. Firm discussions with breeders and cat show judges are warranted to arrive at different interpretations of the standards, in order to prevent overweight conditions in certain breeds from being the standard of beauty. Neutering predisposes cats to obesity and requires early nutritional intervention to prevent obese conditions. Journal of Animal Physiology and Animal Nutrition © 2014 Blackwell Verlag GmbH.

  8. Honored Teacher Shows Commitment.

    Ratte, Kathy

    1987-01-01

    Part of the acceptance speech of the 1985 National Council for the Social Studies Teacher of the Year, this article describes the censorship experience of this honored social studies teacher. The incident involved the showing of a videotape version of the feature film entitled "The Seduction of Joe Tynan." (JDH)

  9. Averaging in SU(2) open quantum random walk

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  10. Averaging in SU(2) open quantum random walk

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  11. The energy show

    1988-01-01

    The Energy Show is a new look at the problems of world energy, where our supplies come from, now and in the future. The programme looks at how we need energy to maintain our standards of living. Energy supply is shown as the complicated set of problems it is - that Fossil Fuels are both raw materials and energy sources, that some 'alternatives' so readily suggested as practical options are in reality a long way from being effective. (author)

  12. Statistics on exponential averaging of periodograms

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).

  13. Statistics on exponential averaging of periodograms

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
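The recursive scheme the two records above describe, exponentially averaging successive periodograms into a running PSD estimate, can be sketched as follows. This is an illustrative implementation, not the authors' code; the update S ← (1−a)·S + a·P, with the smoothing factor `a` tied to a time constant `tau`, is an assumed parameterization:

```python
import numpy as np

def exp_avg_psd(signal, seg_len, tau):
    """Exponentially averaged periodogram PSD estimate over consecutive,
    non-overlapping segments of `signal` (hypothetical parameterization)."""
    a = 1.0 / tau  # smoothing factor derived from the time constant
    psd = None
    for start in range(0, len(signal) - seg_len + 1, seg_len):
        seg = signal[start:start + seg_len]
        # Raw periodogram of this segment: |FFT|^2 / N (one-sided)
        p = np.abs(np.fft.rfft(seg)) ** 2 / seg_len
        # Exponential update; the first periodogram seeds the estimate
        psd = p if psd is None else (1 - a) * psd + a * p
    return psd
```

Larger `tau` averages over more periodograms, reducing the variance of the estimate, which is exactly the regime in which the abstract says the estimate's PDF approaches a Gaussian.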

  14. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Julian Daszkowski

    2010-06-01

    Full Text Available Before 1998, the definition of wages in Poland did not include the value of the social security contribution. The changed definition produced a higher level of reported wages but was expected not to influence take-home pay. Nevertheless, after a short period, the trend of average wages returned to its previous line. This effect is explained in terms of money illusion.

  15. Showing Value (Editorial

    Denise Koufogiannakis

    2009-06-01

    Full Text Available When Su Cleyle and I first decided to start Evidence Based Library and Information Practice, one of the things we agreed upon immediately was that the journal be open access. We knew that a major obstacle to librarians using the research literature was that they did not have access to the research literature. Although Su and I are both academic librarians who can access a wide variety of library and information literature from our institutions, we belong to a profession where not everyone has equal access to the research in our field. Without such access to our own body of literature, how can we ever hope for practitioners to use research evidence in their decision making? It would have been contradictory to the principles of evidence based library and information practice to do otherwise. One of the specific groups we thought could use such an open access venue for discovering research literature was school librarians. School librarians are often isolated and lacking access to the research literature that may help them prove to stakeholders the importance of their libraries and their role within schools. Certainly, school libraries have been in decline and the use of evidence to show value is needed. As Ken Haycock noted in his 2003 report, The Crisis in Canada’s School Libraries: The Case for Reform and Reinvestment, “Across the country, teacher-librarians are losing their jobs or being reassigned. Collections are becoming depleted owing to budget cuts. Some principals believe that in the age of the Internet and the classroom workstation, the school library is an artifact” (9). Within this context, school librarians are looking to our research literature for evidence of the impact that school library programs have on learning outcomes and student success. They are integrating that evidence into their practice, and reflecting upon what can be improved locally. They are focusing on students and showing the impact of school libraries and

  16. Unscrambling The "Average User" Of Habbo Hotel

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding of categorization practices in design through a case study of the virtual community Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer’s disregard of marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  17. Time-dependent angularly averaged inverse transport

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  18. A Martian PFS average spectrum: Comparison with ISO SWS

    Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.

    2005-08-01

    The evaluation of the planetary Fourier spectrometer performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393.]. First, the average conditions of the Mars atmosphere are compared; then the mixing ratios of the major gases are evaluated. Major and minor bands of CO2 are compared from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that the PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm^-1 for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshooting on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm^-1 or better. A large number of narrow features remaining to be identified were discovered.

  19. On-line calculation of 3-D power distribution

    Park, Y. H.; In, W. K.; Park, J. R.; Lee, C. C.; Auh, G. S.

    1996-01-01

    The 3-D power distribution synthesis scheme was implemented in the Totally Integrated Core Operation Monitoring System (TICOMS), which is under development as the next-generation core monitoring system. The on-line 3-D core power distribution obtained from the measured fixed incore detector readings is used to construct the hot pin power as well as the core average axial power distribution. The core average axial power distribution and the hot pin power of TICOMS were compared with those of the current digital on-line core monitoring system, COLSS, which constructs the core average axial power distribution and the pseudo hot pin power. The comparison shows that TICOMS yields a slightly more accurate core average axial power distribution and a less conservative hot pin power. Therefore, these results increase the core operating margins. In addition, the on-line 3-D power distribution is expected to be very useful for core operation in the future

  20. High-average-power solid state lasers

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was launched, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs; understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs; and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  1. Averaging of nonlinearity-managed pulses

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  2. Asymmetric network connectivity using weighted harmonic averages

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
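The core arithmetic behind such a measure, a weighted harmonic average, is simple to state in code. A minimal sketch of just the averaging step (the paper's recursive GEN construction over an entire graph is more involved than this):

```python
def weighted_harmonic_average(values, weights):
    """Weighted harmonic average: sum(w) / sum(w / x).
    Small values (strong/"close" connections) dominate the result,
    which is the intuition behind harmonic closeness measures."""
    return sum(weights) / sum(w / v for w, v in zip(weights, values))
```

For example, combining "distances" 1 and 3 with equal weights gives 1.5, closer to the smaller value than the arithmetic mean of 2 would be; this bias toward strong connections is what lets harmonic averaging distinguish topologies that plain shortest-path metrics treat as identical.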

  3. Cusps enable line attractors for neural computation

    Xiao, Zhuocheng; Zhang, Jiwei; Sornborger, Andrew T.; Tao, Louis

    2017-01-01

    Here, line attractors in neuronal networks have been suggested to be the basis of many brain functions, such as working memory, oculomotor control, head movement, locomotion, and sensory processing. In this paper, we make the connection between line attractors and pulse gating in feed-forward neuronal networks. In this context, because of their neutral stability along a one-dimensional manifold, line attractors are associated with a time-translational invariance that allows graded information to be propagated from one neuronal population to the next. To understand how pulse-gating manifests itself in a high-dimensional, nonlinear, feedforward integrate-and-fire network, we use a Fokker-Planck approach to analyze system dynamics. We make a connection between pulse-gated propagation in the Fokker-Planck and population-averaged mean-field (firing rate) models, and then identify an approximate line attractor in state space as the essential structure underlying graded information propagation. An analysis of the line attractor shows that it consists of three fixed points: a central saddle with an unstable manifold along the line and stable manifolds orthogonal to the line, which is surrounded on either side by stable fixed points. Along the manifold defined by the fixed points, slow dynamics give rise to a ghost. We show that this line attractor arises at a cusp catastrophe, where a fold bifurcation develops as a function of synaptic noise; and that the ghost dynamics near the fold of the cusp underlie the robustness of the line attractor. Understanding the dynamical aspects of this cusp catastrophe allows us to show how line attractors can persist in biologically realistic neuronal networks and how the interplay of pulse gating, synaptic coupling, and neuronal stochasticity can be used to enable attracting one-dimensional manifolds and, thus, dynamically control the processing of graded information.

  4. Cusps enable line attractors for neural computation

    Xiao, Zhuocheng; Zhang, Jiwei; Sornborger, Andrew T.; Tao, Louis

    2017-11-01

    Line attractors in neuronal networks have been suggested to be the basis of many brain functions, such as working memory, oculomotor control, head movement, locomotion, and sensory processing. In this paper, we make the connection between line attractors and pulse gating in feed-forward neuronal networks. In this context, because of their neutral stability along a one-dimensional manifold, line attractors are associated with a time-translational invariance that allows graded information to be propagated from one neuronal population to the next. To understand how pulse-gating manifests itself in a high-dimensional, nonlinear, feedforward integrate-and-fire network, we use a Fokker-Planck approach to analyze system dynamics. We make a connection between pulse-gated propagation in the Fokker-Planck and population-averaged mean-field (firing rate) models, and then identify an approximate line attractor in state space as the essential structure underlying graded information propagation. An analysis of the line attractor shows that it consists of three fixed points: a central saddle with an unstable manifold along the line and stable manifolds orthogonal to the line, which is surrounded on either side by stable fixed points. Along the manifold defined by the fixed points, slow dynamics give rise to a ghost. We show that this line attractor arises at a cusp catastrophe, where a fold bifurcation develops as a function of synaptic noise; and that the ghost dynamics near the fold of the cusp underlie the robustness of the line attractor. Understanding the dynamical aspects of this cusp catastrophe allows us to show how line attractors can persist in biologically realistic neuronal networks and how the interplay of pulse gating, synaptic coupling, and neuronal stochasticity can be used to enable attracting one-dimensional manifolds and, thus, dynamically control the processing of graded information.

  5. Tomato Fruits Show Wide Phenomic Diversity but Fruit Developmental Genes Show Low Genomic Diversity.

    Vijee Mohan

    Full Text Available Domestication of tomato has resulted in large diversity in fruit phenotypes. An intensive phenotyping of 127 tomato accessions from 20 countries revealed extensive morphological diversity in fruit traits. The diversity in fruit traits clustered the accessions into nine classes and identified certain promising lines having desirable traits pertaining to total soluble salts (TSS), carotenoids, ripening index, weight and shape. Factor analysis of the morphometric data from Tomato Analyzer showed that fruit shape is a complex trait shared by several factors. The 100% variance between round and flat fruit shapes was explained by one discriminant function having a canonical correlation of 0.874 by stepwise discriminant analysis. A set of 10 genes (ACS2, COP1, CYC-B, RIN, MSH2, NAC-NOR, PHOT1, PHYA, PHYB and PSY1) involved in various plant developmental processes was screened for SNP polymorphism by EcoTILLING. The genetic diversity in these genes revealed a total of 36 non-synonymous and 18 synonymous changes, leading to the identification of 28 haplotypes. The average frequency of polymorphism across the genes was 0.038/kb. A significantly negative Tajima's D statistic in two of the genes, ACS2 and PHOT1, indicated the presence of rare alleles in low frequency. Our study indicates that while there is low polymorphic diversity in the genes regulating plant development, the population shows wider phenotypic diversity. Nonetheless, the morphological and genetic diversity of the present collection can be further exploited as a potential resource in the future.

  6. Averaged null energy condition from causality

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu⋯u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  7. Asymptotic Time Averages and Frequency Distributions

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process {X(t), t≥0} (in a stochastic setting, a fixed realization, i.e., a sample path, of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes, and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
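For a finite discrete-time sample path, the equality between the time average of a function f and the expectation of f under the empirical frequency distribution is an exact identity; the abstract's result concerns when this identity survives in the long-run limit. A small illustration of the finite-path case (function names are illustrative, not from the paper):

```python
from collections import Counter

def time_average(path, f):
    """Time average of f over a finite discrete-time sample path."""
    return sum(f(x) for x in path) / len(path)

def frequency_expectation(path, f):
    """Expectation of f under the path's empirical frequency distribution."""
    counts = Counter(path)
    n = len(path)
    return sum(f(x) * c / n for x, c in counts.items())
```

Both functions return the same number for any finite path; the paper's contribution is the mild condition under which the two limits agree as the time horizon grows without bound.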

  8. Average Gait Differential Image Based Human Recognition

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
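The accumulation step that defines the AGDI can be sketched directly. An illustrative version, assuming binary silhouette frames stacked along the first axis; `average_gait_differential_image` is our naming, not the paper's code:

```python
import numpy as np

def average_gait_differential_image(frames):
    """AGDI sketch: accumulate the absolute differences between adjacent
    binary silhouette frames and average them into one feature image."""
    frames = np.asarray(frames, dtype=float)   # shape (T, H, W)
    diffs = np.abs(np.diff(frames, axis=0))    # (T-1) frame-to-frame diffs
    return diffs.mean(axis=0)                  # averaged differential image
```

Pixels that change often between frames (moving limbs) get large values while static pixels stay near zero, which is why the feature captures the variation of silhouettes that a plain energy image averages away.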

  9. The average size of ordered binary subgraphs

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  10. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
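The note's central observation can be made concrete: regressing y on a constant alone yields the arithmetic mean as the fitted coefficient, and running the same intercept-only regression on log-transformed or reciprocal-transformed data recovers the geometric and harmonic means after undoing the transform. A small sketch (helper names are ours, not the note's):

```python
import math

def ols_constant(y):
    """Intercept-only OLS: the least-squares coefficient when regressing
    y on a constant is exactly the arithmetic mean of y."""
    return sum(y) / len(y)

def geometric_mean(y):
    # Regress log(y) on a constant, then exponentiate the coefficient
    return math.exp(ols_constant([math.log(v) for v in y]))

def harmonic_mean(y):
    # Regress 1/y on a constant, then take the reciprocal of the coefficient
    return 1.0 / ols_constant([1.0 / v for v in y])
```

Weighted averages fit the same framework by switching to weighted least squares, with the weights supplying the averaging scheme.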

  11. Decision trees with minimum average depth for sorting eight elements

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees

  12. Decision trees with minimum average depth for sorting eight elements

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.

  13. Determining average path length and average trapping time on generalized dual dendrimer

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
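For an unweighted network, the APL quantity used above reduces to averaging breadth-first-search distances over all ordered node pairs. A generic sketch, not specific to dendrimer topologies:

```python
from collections import deque

def average_path_length(adj):
    """Average shortest-path length over all ordered node pairs of a
    connected, unweighted graph given as an adjacency list (dict)."""
    nodes = list(adj)
    total, pairs = 0, 0
    for source in nodes:
        dist = {source: 0}
        queue = deque([source])
        while queue:  # breadth-first search from `source`
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(nodes) - 1
    return total / pairs
```

Computing the APL this way for growing generations of a network family is a quick numerical check of the logarithmic scaling (and hence the small-world effect) derived analytically in the paper.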

  14. Chromospheric oscillations observed with OSO 8. III. Average phase spectra for Si II

    White, O.R.; Athay, R.G.

    1979-01-01

    Time series of intensity and Doppler-shift fluctuations in the Si II emission lines λ816.93 and λ817.45 are Fourier analyzed to determine the frequency variation of phase differences between intensity and velocity and between these two lines formed 300 km apart in the middle chromosphere. Average phase spectra show that oscillations between 2 and 9 mHz in the two lines have time delays from 35 to 40 s, which is consistent with the upward propagation of sound waves at 8.6-7.5 km s^-1. In this same frequency band, near 3 mHz, maximum brightness leads maximum blueshift by 60°. At frequencies above 11 mHz, where the power spectrum is flat, the phase differences are uncertain, but approximately 65% of the cases indicate upward propagation. At these higher frequencies, the phase lead between intensity and blue Doppler shift ranges from 0° to 180° with an average value of 90°. However, the phase estimates in this upper band are corrupted by both aliasing and randomness inherent to the measured signals. Phase differences in the two narrow spectral features seen at 10.5 and 27 mHz in the power spectra are shown to be consistent with properties expected for aliases of the wheel rotation rate of the spacecraft wheel section

  15. The Health Effects of Income Inequality: Averages and Disparities.

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  16. Diversity analysis of Ethiopian mustard breeding lines using RAPD ...

    Using cluster analysis based on unweighted pair-group method with arithmetic average (UPGMA) and principal coordinate analysis (PCoA), the 21 Ethiopian inbred lines were grouped into three subgroups and the single genotype introduced from Sweden formed a separate group. The clustering pattern failed to show a ...

  17. Focusing of cosmic radiation near power lines. A theoretical approach

    Skedsmo, A.; Vistnes, A.I.

    1997-02-01

    The purpose of this work was to determine if, and to what extent, cosmic radiation can be focused by power lines. As an alternative to experimental measurements, a computer program was developed to simulate particle trajectories. Starting from given initial values, the trajectories of cosmic particles through the electromagnetic field surrounding power lines were simulated. Particular effort was made to choose initial values that represent the actual physical condition of cosmic radiation at ground level. The results show an average decrease in particle flux density in an area below a power line and a correspondingly increased flux between 12 m and 45 m on either side of the centre of the power line. The average shift in flux density is, however, extremely small (less than 0.1%) and probably not measurable with existing detector technology. 11 refs., 4 figs., 2 tabs

  18. Averaging for solitons with nonlinearity management

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  19. Salecker-Wigner-Peres clock and average tunneling times

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated, and the results show that the average transmission time does not saturate, giving no evidence of the Hartman effect (or its generalized version).

  20. DSCOVR Magnetometer Level 2 One Minute Averages

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  1. DSCOVR Magnetometer Level 2 One Second Averages

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  2. NOAA Average Annual Salinity (3-Zone)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  3. Improving consensus structure by eliminating averaging artifacts

    KC Dukka B

    2009-03-01

    Abstract. Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structures becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structures (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
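    The averaging artifact the Background describes can be shown with a toy example (an illustration, not the paper's Monte Carlo refinement): naive coordinate averaging of two valid conformations shrinks a bond to an unphysical length.

```python
import numpy as np

# Toy illustration of the averaging artifact: two conformations of a
# diatomic fragment with a 1.5 A bond, differing by a 90-degree rotation
# about the first atom. Their coordinate-wise average has a shorter bond.
bond = 1.5  # assumed ideal bond length, in angstroms
conf_a = np.array([[0.0, 0.0, 0.0], [bond, 0.0, 0.0]])
conf_b = np.array([[0.0, 0.0, 0.0], [0.0, bond, 0.0]])

avg = (conf_a + conf_b) / 2.0                # naive consensus structure
avg_bond = np.linalg.norm(avg[1] - avg[0])   # bond length after averaging
print(round(avg_bond, 3))                    # 1.061: compressed from 1.5
```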

  4. 40 CFR 76.11 - Emissions averaging.

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  5. Determinants of College Grade Point Averages

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  6. Dimensioning of lining galleries in deep clays

    Bernaud, D.; Rousset, G.

    1991-01-01

    The aim of the work presented in this report is to study the mechanical behaviour of gallery linings in deep clays. This text is part of research on the feasibility of geological disposal of radioactive waste, the scope of which is to assure the long-term stabilization of the gallery and to optimize its dimensioning. In particular, we are interested here in the study of a closure-controlled lining, which constitutes a direct application of the convergence-confinement method, especially well suited to deep clays. The presentation and interpretation of the convergence-controlled lining test, performed in the experimental gallery at Mol in Belgium, is given in this report. The instrumentation was designed to measure the stress field exerted by the rock mass on the lining, the internal stress field inside the lining, and the gallery closure. The analysis of all measurement results obtained between November 1987 and December 1989 shows that they are in good agreement and that the lining design was well chosen. Two years after the gallery construction, the average closure is of the order of 2% and the average confinement pressure is about 1.6 MPa (a third of the lithostatic pressure). The time-dependent effects of the rock mass are very well modelled by the nonlinear elasto-viscoplastic law developed at L.M.S. from laboratory tests. The elastic-plastic model of the lining is shown to be well suited to simulate the sliding of the ribs. Finally, the numerical results show very good agreement with the measurement results

  7. Bootstrapping pre-averaged realized volatility under market microstructure noise

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure…
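    The pre-averaging construction over overlapping blocks can be sketched as follows (a hedged sketch under assumed notation, not the paper's estimator; the weight function and noise model are common textbook choices):

```python
import numpy as np

# Sketch: pre-averaged returns over all overlapping blocks of kn
# consecutive noisy high-frequency returns, with the common triangular
# weight g(x) = min(x, 1 - x). Parameters here are illustrative.
rng = np.random.default_rng(0)
n, kn = 1000, 10
true_vol = 0.02
returns = true_vol / np.sqrt(n) * rng.standard_normal(n)   # efficient returns
noise = 0.001 * rng.standard_normal(n + 1)                 # microstructure noise
noisy = returns + np.diff(noise)                           # observed returns

# Weights g(i/kn) for i = 1..kn-1
g = np.minimum(np.arange(1, kn) / kn, 1 - np.arange(1, kn) / kn)
pre_avg = np.array([g @ noisy[i:i + kn - 1] for i in range(n - kn + 2)])
print(pre_avg.shape)   # (992,): one pre-averaged return per overlapping block
```

    Because consecutive blocks share kn - 1 observations, neighbouring pre-averaged returns are dependent, which is exactly what motivates the blockwise bootstrap in the abstract.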

  8. High Line

    Kiib, Hans

    2015-01-01

    At just over 10 meters above street level, the High Line extends three kilometers through three districts of Southwestern Manhattan in New York. It consists of a simple steel construction, and previously served as an elevated rail line connection between Penn Station on 34th Street and the many… The High Line project has been carried out as part of an open conversion strategy. The result is a remarkable urban architectural project, which works as a catalyst for the urban development of Western Manhattan. The greater project includes the restoration and reuse of many old industrial buildings…

  9. Transmission line capital costs

    Hughes, K.R.; Brown, D.R.

    1995-05-01

    The displacement or deferral of conventional AC transmission line installation is a key benefit associated with several technologies being developed with the support of the U.S. Department of Energy's Office of Energy Management (OEM). Previous benefits assessments conducted within OEM have been based on significantly different assumptions for the average cost per mile of AC transmission line. In response to this uncertainty, an investigation of transmission line capital cost data was initiated. The objective of this study was to develop a database for preparing preliminary estimates of transmission line costs. An extensive search of potential data sources identified databases maintained by the Bonneville Power Administration (BPA) and the Western Area Power Administration (WAPA) as superior sources of transmission line cost data. The BPA and WAPA data were adjusted to a common basis and combined together. The composite database covers voltage levels from 13.8 to 765 kV, with cost estimates for a given voltage level varying depending on conductor size, tower material type, tower frame type, and number of circuits. Reported transmission line costs vary significantly, even for a given voltage level. This can usually be explained by variation in the design factors noted above and variation in environmental and land (right-of-way) costs, which are extremely site-specific. Cost estimates prepared from the composite database were compared to cost data collected by the Federal Energy Regulatory Commission (FERC) for investor-owned utilities from across the United States. The comparison was hampered because the only design specifications included with the FERC data were voltage level and line length. Working within this limitation, the FERC data were not found to differ significantly from the composite database. Therefore, the composite database was judged to be a reasonable proxy for estimating national average costs.

  10. Delineation of facial archetypes by 3d averaging.

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques there was no warping or filling in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.

  11. World lines.

    Waser Jürgen; Fuchs Raphael; Ribicic Hrvoje; Schindler Benjamin; Blöschl Günther; Gröller Eduard

    2010-01-01

    In this paper we present World Lines as a novel interactive visualization that provides complete control over multiple heterogeneous simulation runs. In many application areas decisions can only be made by exploring alternative scenarios. The goal of the suggested approach is to support users in this decision making process. In this setting the data domain is extended to a set of alternative worlds where only one outcome will actually happen. World Lines integrate simulation visualization and...

  12. Rotational averaging of multiphoton absorption cross sections

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  13. Sea Surface Temperature Average_SST_Master

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  14. Trajectory averaging for stochastic approximation MCMC algorithms

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  15. Should the average tax rate be marginalized?

    Feldman, N. E.; Katuščák, Peter

    -, No. 304 (2006), pp. 1-65, ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords: tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  16. A practical guide to averaging functions

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  17. MN Temperature Average (1961-1990) - Polygon

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  18. Anther and isolated microspore culture of wheat lines from northwestern and eastern Europe

    Holme, I B; Olesen, A; Hansen, N J P

    1999-01-01

    Hexaploid wheat genotypes from north-western Europe show low responses to current anther culture techniques. This phenomenon was investigated on 145 north-western European wheat lines. Twenty-seven lines from eastern Europe were included to observe the response pattern of wheat from an area where the technique has been used successfully. On average, eastern European wheat lines produced 3.6 green plants per 111 anthers, while only 1.4 green plants per 111 anthers were obtained in north-western European lines. This difference was due to the high capacity for embryo formation among the eastern European lines, while the ability to regenerate green plants was widespread in both germplasm groups. Isolated wheat microspore culture performed on 85 of these wheat lines gave an average 3.7-fold increase in green plants per anther compared with the anther culture response. The increased recovery of green…

  19. Average Bandwidth Allocation Model of WFQ

    Tomáš Balogh

    2012-01-01

    We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and simulation results using the NS2 simulator.
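    One plausible iterative scheme in this spirit (a minimal sketch; the function name and the weighted max-min fair-sharing rule are assumptions, not the paper's exact model) gives each flow its weighted share of the link, returning unused capacity from under-demanding flows to the rest:

```python
# Sketch of iterative weighted fair sharing, in the spirit of WFQ average
# bandwidth assignment. A flow demanding less than its weighted share of
# the remaining capacity keeps its demand; leftover capacity is then
# redistributed among the remaining flows by weight.
def wfq_average_bandwidth(link_speed, weights, demands):
    remaining = dict(enumerate(zip(weights, demands)))
    alloc = [0.0] * len(weights)
    capacity = link_speed
    while remaining:
        total_w = sum(w for w, _ in remaining.values())
        satisfied = [i for i, (w, d) in remaining.items()
                     if d <= capacity * w / total_w]
        if not satisfied:
            # No flow is fully satisfied: split remaining capacity by weight
            for i, (w, _) in remaining.items():
                alloc[i] = capacity * w / total_w
            break
        for i in satisfied:
            w, d = remaining.pop(i)
            alloc[i] = d           # demand met in full
            capacity -= d          # release the unused share
    return alloc

# 10 Mbit/s link; flow 0 only wants 1, so flows 1 and 2 split the rest 1:2
print(wfq_average_bandwidth(10.0, [1, 1, 2], [1.0, 8.0, 8.0]))  # [1.0, 3.0, 6.0]
```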

  20. Nonequilibrium statistical averages and thermo field dynamics

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  1. Bivariate copulas on the exponentially weighted moving average control chart

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used, with dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
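    The core simulation loop can be sketched for the simplest case (a hedged sketch with independent observations and no copula; the smoothing weight and control limit below are assumptions, not the paper's calibrated values):

```python
import numpy as np

# Sketch: Monte Carlo estimate of the Average Run Length (ARL) of a
# one-sided EWMA chart monitoring exponential observations.
rng = np.random.default_rng(42)
lam = 0.1      # EWMA smoothing weight (assumed)
mean0 = 1.0    # in-control mean of the exponential observations
ucl = 1.3      # upper control limit (assumed, not calibrated)

def run_length(shift=1.0, max_n=10_000):
    """Number of observations until the EWMA statistic exceeds the UCL."""
    z = mean0
    for n in range(1, max_n + 1):
        x = rng.exponential(mean0 * shift)
        z = lam * x + (1 - lam) * z
        if z > ucl:
            return n
    return max_n

arl = np.mean([run_length() for _ in range(2000)])
print(f"estimated in-control ARL: {arl:.1f}")
```

    In practice the control limit is calibrated so the in-control ARL hits a target (e.g. 370), and the out-of-control ARL under a shift is then compared across copulas.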

  2. Covariant electromagnetic field lines

    Hadad, Y.; Cohen, E.; Kaminer, I.; Elitzur, A. C.

    2017-08-01

    Faraday introduced electric field lines as a powerful tool for understanding the electric force, and these field lines are still used today in classrooms and textbooks teaching the basics of electromagnetism within the electrostatic limit. However, despite attempts at generalizing this concept beyond the electrostatic limit, such a fully relativistic field line theory still appears to be missing. In this work, we propose such a theory and define covariant electromagnetic field lines that naturally extend electric field lines to relativistic systems and general electromagnetic fields. We derive a closed-form formula for the curvature of the field lines in the vicinity of a charge, and show that it is related to the world line of the charge. This demonstrates how the kinematics of a charge can be derived from the geometry of the electromagnetic field lines. Such a theory may also provide new tools in modeling and analyzing electromagnetic phenomena, and may entail new insights regarding long-standing problems such as radiation reaction and self-force. In particular, the curvature of the electromagnetic field lines has the attractive property of being non-singular everywhere, thus eliminating all self-field singularities without using renormalization techniques.

  3. Optimization of lining design in deep clays

    Rousset, G.; Bublitz, D.

    1989-01-01

    The main features of the mechanical behaviour of deep clay are time-dependent effects and the existence of a long-term cohesion which may be taken into account when dimensioning galleries. In this text, a lining optimization test is presented. It concerns a gallery driven in deep clay, 230 m deep, at Mol (Belgium). We show that a sliding-rib lining provides: an optimal tunnel face advance speed; a minimal closure of the gallery wall before setting the lining, and therefore less likelihood of failure developing inside the rock mass; and limitation of the length of the unlined part of the gallery. The chosen process allows, on the one hand, the preservation of the rock mass integrity and, on the other, use of the confinement effect to allow closure under high average stress conditions; this process can be considered an optimal application of the convergence-confinement method. An extensive set of measurement devices is then presented, along with results obtained over one year's operation. We show in particular that the stress distribution in the lining is homogeneous and that the sliding limit can be measured with high precision

  4. Average [O II] nebular emission associated with Mg II absorbers: dependence on Fe II absorption

    Joshi, Ravi; Srianand, Raghunathan; Petitjean, Patrick; Noterdaeme, Pasquier

    2018-05-01

    We investigate the effect of Fe II equivalent width (W2600) and fibre size on the average luminosity of [O II] λλ3727, 3729 nebular emission associated with Mg II absorbers (at 0.55 ≤ z ≤ 1.3) in the composite spectra of quasars obtained with 3 and 2 arcsec fibres in the Sloan Digital Sky Survey. We confirm the presence of strong correlations between [O II] luminosity (L_{[O II]}) and the equivalent width (W2796) and redshift of Mg II absorbers. However, we show that L_{[O II]} and the average luminosity surface density suffer from fibre-size effects. More importantly, for a given fibre size, the average L_{[O II]} strongly depends on the equivalent width of the Fe II absorption lines and is found to be higher for Mg II absorbers with R ≡ W2600/W2796 ≥ 0.5. In fact, we show that the observed strong correlations of L_{[O II]} with W2796 and z of Mg II absorbers are mainly driven by such systems. Direct [O II] detections also confirm the link between L_{[O II]} and R. Therefore, one has to pay attention to fibre losses and to the dependence of the redshift evolution of Mg II absorbers on W2600 before using them as a luminosity-unbiased probe of the global star formation rate density. We show that the [O II] nebular emission detected in the stacked spectrum is not dominated by a few direct detections (i.e. detections at the ≥3σ significance level). On average, the systems with R ≥ 0.5 and W2796 ≥ 2 Å are more reddened, showing a colour excess E(B - V) ≈ 0.02, with respect to the systems with R < 0.5, and most likely trace high H I column density systems.

  5. Improved averaging for non-null interferometry

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
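    The general idea of defect-robust averaging can be sketched as follows (a hedged sketch of the principle, not the authors' algorithm; the deviation score and rejection threshold are illustrative assumptions):

```python
import numpy as np

# Sketch: pixel-wise averaging of repeated phase maps, rejecting maps
# whose deviation from the median map flags them as defective
# (e.g. a large-area phase-unwrapping artifact).
rng = np.random.default_rng(1)
maps = rng.normal(0.0, 0.01, size=(10, 32, 32))  # ten noisy phase maps
maps[3] += 5.0                                   # inject one gross defect

median_map = np.median(maps, axis=0)
dev = np.abs(maps - median_map).mean(axis=(1, 2))  # per-map deviation score
good = dev < 5 * np.median(dev)                    # reject gross outliers
robust_avg = maps[good].mean(axis=0)               # average of surviving maps
pixel_std = maps[good].std(axis=0)                 # per-pixel variability estimate
print(int(good.sum()))   # 9: the defective map is excluded
```

    A per-pixel version of the same test (rather than a per-map score) would additionally prune small-area defects while keeping the rest of the map, as the abstract describes.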

  6. Evaluation of Efficient Line Lengths for Better Readability

    Zahid Hussain

    2012-01-01

    In this paper the major findings of a formal experiment on on-screen text line lengths are presented. The experiment examined the effects of four different line lengths on reading speed and reading efficiency. Efficiency is defined as a combination of reading speed and accuracy. Sixteen people between the ages of 24 and 36 participated in the experiment. The subjects had to read four different texts with an average length of around 2000 characters. The texts contained substitution words, which had to be detected by the subjects to measure reading accuracy. Besides objective measures like reading speed and accuracy, the subjects were asked to subjectively rate their reading experience. The results from our objective measures show strong similarities to those of previous work by different researchers. The absolute reading speed grows as the line length grows from 30 to 120 CPL (Characters Per Line). The measured reading efficiency, however, doesn't grow steadily, although a growing trend can be seen. This is because the test persons found, on average, more substitution words in the 60 CPL text than in the 30 and 90 CPL texts. Reading speed seems to increase as line length increases, but overall comprehension seems to peak at medium line lengths. As in previous studies, our test persons also preferred the medium (60 and 90 CPL) line lengths, although they performed better when reading longer lines. In the overall subjective opinion, 13 out of 16 test persons selected the 60 or 90 CPL line length as their favorite. The literature doesn't truly provide a scientific explanation for the difference between objective performance and subjective preference. A natural hypothesis would be that the line length that is fastest to read would also feel most comfortable to readers, but in the light of this and earlier research it seems that this is not the case.

  7. Silver linings.

    Bultas, Margaret W; Pohlman, Shawn

    2014-01-01

    The purpose of this interpretive phenomenological study was to gain a better understanding of the experiences of 11 mothers of preschool children with autism spectrum disorder (ASD). Mothers were interviewed three times over a 6-week period. Interviews were analyzed using interpretive methods. This manuscript highlights one particular theme: a positive perspective mothers described as the "silver lining." This "silver lining" represents optimism despite the adversities associated with parenting a child with ASD. A deeper understanding of this side of mothering children with ASD may help health care providers improve rapport and communication, and may result in more authentic family-centered care. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Asynchronous Gossip for Averaging and Spectral Ranking

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
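    The classical gossip primitive both variants build on can be sketched in a few lines (a hedged sketch of the basic scheme, not the paper's reinforcement-learning variant):

```python
import numpy as np

# Sketch: classical pairwise gossip averaging. Randomly chosen pairs of
# nodes repeatedly replace their values with the pairwise mean; every
# node's value converges to the global average.
rng = np.random.default_rng(7)
x = np.array([1.0, 5.0, 9.0, 13.0])  # initial node values (toy network)
target = x.mean()                     # 7.0

for _ in range(2000):
    i, j = rng.choice(len(x), size=2, replace=False)
    x[i] = x[j] = (x[i] + x[j]) / 2.0  # one gossip exchange

print(np.allclose(x, target))   # True: every node holds the average
```

    The difficulty the paper highlights is that in a genuinely asynchronous setting, with unequal update rates across nodes, this simple scheme need not converge to the desired average, motivating their reinforcement-learning correction.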

  9. Benchmarking statistical averaging of spectra with HULLAC

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  10. An approach to averaging digitized plantagram curves.

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot, for the purpose of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes, is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett-Packard digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by ±2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  11. Books average previous decade of economic misery.

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
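The reported fit rests on two simple operations: a trailing moving average of the economic misery index over the previous decade, and a correlation between the smoothed series and the literary index. A minimal sketch (helper names are hypothetical; in the paper's peak-fit case the window would be 11 years):

```python
def trailing_average(series, window):
    """Mean of the `window` values up to and including each index;
    defined from index window-1 onwards (the 'previous decade' smoother)."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def pearson(x, y):
    """Plain Pearson correlation coefficient between equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

Scanning `window` over a range of years and recording the correlation at each value is what produces a goodness-of-fit peak like the 11-year one reported above.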

  13. Stochastic Averaging and Stochastic Extremum Seeking

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free, real-time optimization of systems, using stochastic perturbations to estimate their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  14. Phase diagram distortion from traffic parameter averaging.

    Stipdonk, H.; Toorenburg, J. van; Postema, M.

    2010-01-01

    Motorway traffic congestion is a major bottleneck for economic growth. Therefore, research on traffic behaviour is carried out in many countries. Although the undersaturated free-flow phase is well described as an almost straight line in a (k,q) phase diagram, congested traffic observations and

  15. Regional averaging and scaling in relativistic cosmology

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m^B̄ + Ω̄_R^B̄ + Ω̄_Λ^B̄ + Ω̄_Q^B̄ = 1, where Ω̄_m^B̄, Ω̄_R^B̄ and Ω̄_Λ^B̄ correspond to the standard Friedmannian parameters, while Ω̄_Q^B̄ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  16. Average: the juxtaposition of procedure and context

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  17. Average-case analysis of numerical problems

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  18. Grassmann Averages for Scalable Robust PCA

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
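The element-wise robust average at the heart of the trimmed approach can be illustrated with a per-coordinate trimmed mean. This is an illustrative sketch only: the full TGA method combines this idea with Grassmann averages of subspaces, which is not reproduced here, and all names are hypothetical.

```python
def trimmed_mean(values, trim=0.2):
    """Drop the lowest and highest `trim` fraction of values, then average."""
    v = sorted(values)
    k = int(len(v) * trim)
    core = v[k:len(v) - k] if len(v) > 2 * k else v
    return sum(core) / len(core)

def trimmed_average_rows(rows, trim=0.2):
    """Per-coordinate (per-pixel) trimmed mean across observations:
    a single grossly corrupted row no longer drags the average."""
    return [trimmed_mean(col, trim) for col in zip(*rows)]

# Four clean observations and one gross outlier (e.g. a corrupted image).
rows = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0], [100.0, -100.0]]
robust = trimmed_average_rows(rows)  # the outlier row is trimmed away
```

An ordinary mean of these rows would be pulled to roughly [20.8, -19.8] per coordinate; the trimmed version stays at the clean value, which is the pixel-outlier robustness the abstract emphasizes.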

  19. Silicon tunnel FET with average subthreshold slope of 55 mV/dec at low drain currents

    Narimani, K.; Glass, S.; Bernardy, P.; von den Driesch, N.; Zhao, Q. T.; Mantl, S.

    2018-05-01

    In this paper we present a silicon tunnel FET based on line-tunneling to achieve better subthreshold performance. The fabricated device shows an on-current of Ion = 2.55 × 10^-7 A/μm at Vds = Von = Vgs - Voff = -0.5 V for an Ioff = 1 nA/μm and an average SS of 55 mV/dec over two orders of magnitude of Id. Furthermore, the analog figures of merit have been calculated and show that the transconductance efficiency gm/Id beats the MOSFET performance at low currents.

  20. Generalized Jackknife Estimators of Weighted Average Derivatives

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  1. Average beta measurement in EXTRAP T1

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)

  2. High average-power induction linacs

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  3. A singularity theorem based on spatial averages

    journal of physics, July 2007, pp. 31–47. A singularity theorem based on spatial ... In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no.

  4. A dynamic analysis of moving average rules

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
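A minimal example of the kind of MA rule such models study (window lengths and names are illustrative, not taken from the paper): hold a long position while a short-window average sits above a long-window one, and a short position otherwise.

```python
def sma(prices, n):
    """Simple moving average over the last n prices (None until enough data)."""
    return [None if i < n - 1 else sum(prices[i - n + 1:i + 1]) / n
            for i in range(len(prices))]

def ma_crossover_positions(prices, short=2, long=4):
    """+1 (long) when the short MA is above the long MA, -1 otherwise,
    and 0 while either average is still undefined."""
    s, l = sma(prices, short), sma(prices, long)
    return [0 if a is None or b is None else (1 if a > b else -1)
            for a, b in zip(s, l)]
```

On a price series that rises and then falls, the rule flips from +1 to -1 shortly after the peak; the lag of that flip relative to the turning point is exactly the kind of dynamic such agent-based models analyse.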

  5. Essays on model averaging and political economics

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  6. 7 CFR 1209.12 - On average.

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  7. High average-power induction linacs

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  8. Average Costs versus Net Present Value

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  9. Average beta-beating from random errors

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on the average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, quadratic in the sources or, equivalently, in the rms β-beating. However, random errors do not have a systematic effect on the tune.

  10. Reliability Estimates for Undergraduate Grade Point Average

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  11. Tendon surveillance requirements - average tendon force

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criterion for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, is stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  12. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the factors underlying average labour productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average labour productivity into the factors affecting it is conducted by means of the u-substitution method.

  14. Weighted estimates for the averaging integral operator

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  15. Average Transverse Momentum Quantities Approaching the Lightfront

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  16. Time-averaged MSD of Brownian motion

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  17. Average configuration of the geomagnetic tail

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  18. Changing mortality and average cohort life expectancy

    Robert Schoen

    2005-10-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  19. Non-self-averaging nucleation rate due to quenched disorder

    Sear, Richard P

    2012-01-01

    We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. We show this in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)

  20. A note on moving average models for Gaussian random fields

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...... basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...

  1. Risk Aversion in Game Shows

    Andersen, Steffen; Harrison, Glenn W.; Lau, Morten I.

    2008-01-01

    We review the use of behavior from television game shows to infer risk attitudes. These shows provide evidence when contestants are making decisions over very large stakes, and in a replicated, structured way. Inferences are generally confounded by the subjective assessment of skill in some games......, and the dynamic nature of the task in most games. We consider the game shows Card Sharks, Jeopardy!, Lingo, and finally Deal Or No Deal. We provide a detailed case study of the analyses of Deal Or No Deal, since it is suitable for inference about risk attitudes and has attracted considerable attention....

  2. Measuring performance at trade shows

    Hansen, Kåre

    2004-01-01

    Trade shows are an increasingly important marketing activity for many companies, but current measures of trade show performance do not adequately capture dimensions important to exhibitors. Based on the marketing literature's outcome- and behavior-based control system taxonomy, a model is built...... that captures an outcome-based sales dimension and four behavior-based dimensions (i.e. information-gathering, relationship-building, image-building, and motivation activities). A 16-item instrument is developed for assessing exhibitors' perceptions of their trade show performance. The paper presents evidence...

  3. Average corotation of line segments near a point and vortex identification

    Kolář, Václav; Šístek, Jakub; Cirak, F.; Moses, P.

    2013-01-01

    Roč. 51, č. 11 (2013), s. 2678-2694 ISSN 0001-1452 R&D Projects: GA AV ČR IAA200600801 Institutional support: RVO:67985874 ; RVO:67985840 Keywords : vortices * vortex identification * vortical structures Subject RIV: BK - Fluid Dynamics; BA - General Mathematics (MU-W) Impact factor: 1.165, year: 2013

  4. A One Line Derivation of DCC: Application of a Vector Random Coefficient Moving Average Process

    C.M. Hafner (Christian); M.J. McAleer (Michael)

    2014-01-01

    One of the most widely-used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process to derive DCC has not yet been established, which has made problematic the derivation of

  5. Is average daily travel time expenditure constant? In search of explanations for an increase in average travel time.

    van Wee, B.; Rietveld, P.; Meurs, H.

    2006-01-01

    Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably

  6. production lines

    Jingshan Li

    2000-01-01

    In this work, serial production lines with finished goods buffers operating in the pull regime are considered. The machines are assumed to obey the Bernoulli reliability model. The problem of satisfying customer demand is addressed. The level of demand satisfaction is quantified by the due-time performance (DTP), which is defined as the probability of shipping to the customer a required number of parts during a fixed time interval. Within this scenario, definitions of DTP bottlenecks are introduced and a method for their identification is developed.
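Due-time performance as defined here, the probability of shipping a required number of parts within a fixed interval, can be illustrated with a single Bernoulli machine feeding a finished-goods buffer. This is a Monte Carlo toy with hypothetical names, not the paper's serial-line bottleneck-identification method.

```python
import random

def due_time_performance(p, demand, period, buffer0=0, trials=20000, seed=1):
    """Monte Carlo estimate of DTP: probability that a single Bernoulli
    machine (success probability p per time slot) plus an initial
    finished-goods buffer ships `demand` parts within `period` slots."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        produced = buffer0 + sum(rng.random() < p for _ in range(period))
        ok += produced >= demand
    return ok / trials
```

Raising the machine reliability p or the initial buffer raises the estimated DTP; in a serial line, the machine whose improvement raises DTP the most would be the DTP bottleneck in the sense of the abstract.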

  7. Operator product expansion and its thermal average

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case of finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules when the temperature is not too low. (orig.) 7 refs.

  8. Fluctuations of wavefunctions about their classical average

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  9. Phase-averaged transport for quasiperiodic Hamiltonians

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  10. Baseline-dependent averaging in radio interferometry

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
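The data-volume saving of BDA comes from averaging more time samples on short baselines, which decorrelate more slowly than long ones. A toy sketch of the bookkeeping follows; the inverse-length scaling rule, the clamp, and all names are illustrative assumptions, not the SKA averaging scheme.

```python
def bda_factor(baseline_m, max_baseline_m, max_factor=16):
    """Averaging factor for a baseline: the longest baselines keep full
    time resolution (factor 1); shorter ones are averaged more heavily,
    up to max_factor, since their decorrelation loss stays small."""
    return max(1, min(max_factor, int(max_baseline_m / baseline_m)))

def average_in_blocks(samples, factor):
    """Average consecutive blocks of `factor` samples (remainder dropped),
    shrinking the visibility volume by roughly that factor."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]
```

Summing `1/factor` over all baselines of an array dominated by short spacings gives the kind of >80 per cent volume reduction the abstract reports, at the cost of the well-defined decorrelation loss it mentions.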

  11. Multistage parallel-serial time averaging filters

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter could be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  12. Time-averaged MSD of Brownian motion

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
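For a discrete trajectory the TAMSD at lag τ is simply the average of (x[t+τ] - x[t])² along the record; for 1D Brownian motion its expectation grows as 2Dτ, while the paper characterizes its fluctuations around that mean. A minimal 1D sketch (the function name is hypothetical):

```python
def tamsd(traj, lag):
    """Time-averaged mean-square displacement of a 1D trajectory:
    the average of (x[t + lag] - x[t])**2 over the whole record."""
    n = len(traj) - lag
    return sum((traj[t + lag] - traj[t]) ** 2 for t in range(n)) / n
```

For a ballistic record x[t] = 2t the TAMSD equals (2·lag)², a quick sanity check; for a real tracer one fits tamsd against lag to read off the diffusion coefficient, and the scatter of that fit across trajectories is the statistics studied above.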

  13. Independence, Odd Girth, and Average Degree

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

      We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...... degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1) / 7.  ...

  14. Bootstrapping Density-Weighted Average Derivatives

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  15. Time-averaged MSD of Brownian motion

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  16. Limit lines for risk

    Cox, D.C.; Baybutt, P.

    1982-01-01

    Approaches to the regulation of risk from technological systems, such as nuclear power plants or chemical process plants, in which potential accidents may result in a broad range of adverse consequences must take into account several different aspects of risk. These include overall or average risk, accidents posing high relative risks, the rate at which accident probability decreases with increasing accident consequences, and the impact of high-frequency, low-consequence accidents. A hypothetical complementary cumulative distribution function (CCDF), with appropriately chosen parametric form, meets all these requirements. The Farmer limit line, by contrast, places limits on the risks due to individual accident sequences, and cannot adequately account for overall risk. This reduces its usefulness as a regulatory tool. In practice, the CCDF is used in the Canadian nuclear licensing process, while the Farmer limit line approach, supplemented by separate qualitative limits on overall risk, is employed in the United Kingdom.

  17. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.
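
    As a rough illustration of averaging over regressor subsets, the sketch below uses BIC-based posterior-odds weights; this is a common approximation for exposition, not the exact algorithm behind the bma or wals commands, and the data-generating process is invented:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))                    # three candidate regressors
y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)   # only the first one matters

ones = np.ones((n, 1))
bics, betas = [], []
# Enumerate every subset of regressors; the intercept is always included.
for k in range(4):
    for subset in itertools.combinations(range(3), k):
        Z = np.hstack([ones, X[:, list(subset)]])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = float(np.sum((y - Z @ beta) ** 2))
        bics.append(n * np.log(rss / n) + Z.shape[1] * np.log(n))
        full = np.zeros(4)                     # [intercept, b1, b2, b3]
        full[0] = beta[0]
        for j, idx in enumerate(subset):
            full[1 + idx] = beta[1 + j]
        betas.append(full)

b = np.array(bics)
w = np.exp(-0.5 * (b - b.min()))               # posterior-odds weights
w /= w.sum()
avg_beta = w @ np.array(betas)
print(avg_beta)                                # close to [1, 2, 0, 0]
```

    Each model's coefficients are embedded in the full coefficient vector (zeros for excluded regressors) before averaging, so irrelevant regressors are shrunk toward zero by their low model weights.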

  18. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  19. Optical emission line spectra of Seyfert galaxies and radio galaxies

    Osterbrock, D.E.

    1978-01-01

    Many radio galaxies have strong emission lines in their optical spectra, similar to the emission lines in the spectra of Seyfert galaxies. The range of ionization extends from [O I] and [N I] through [Ne V] and [Fe VII] to [Fe X]. The emission-line spectra of radio galaxies divide into two types: narrow-line radio galaxies, whose spectra are indistinguishable from Seyfert 2 galaxies, and broad-line radio galaxies, whose spectra are similar to Seyfert 1 galaxies. However, on average, the broad-line radio galaxies have steeper Balmer decrements, stronger [O III] and weaker Fe II emission than the Seyfert 1 galaxies, though at least one Seyfert 1 galaxy not known to be a radio source has a spectrum very similar to typical broad-line radio galaxies. Intermediate-type Seyfert galaxies exist that show various mixtures of the Seyfert 1 and Seyfert 2 properties, and the narrow-line or Seyfert 2 property seems to be strongly correlated with radio emission. (Auth.)

  20. Line facilities outline

    1998-08-01

    This book deals with line facilities. Its contents cover an outline of wireline telecommunication: the development of lines, the classification of line sections, and the theory of line transmission; cable lines: line structure, urban and rural cable lines, domestic cables, and other lines; optical communication: optical cable lines, transmission methods, optical communication measurement, and submarine cables; telecommunication line equipment: telecommunication line facilities and public telecommunication works, cable line construction and maintenance; and regulation of line equipment: technical, construction, and maintenance regulations.

  1. Beta-energy averaging and beta spectra

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by "exact" methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.

  2. Chaotic Universe, Friedmannian on the average 2

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solutions depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor is situated higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. The restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  3. Averaging in the presence of sliding errors

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms.

  4. Tokyo Motor Show 2003

    Joly, E.

    2004-01-01

    The following text presents the different technologies exhibited at the 37th Tokyo Motor Show. The report highlights the major development trends in the Japanese automobile industry. Hybrid electric vehicles and fuel-cell vehicles were showcased by the Japanese manufacturers, who devote considerable budgets to research on less polluting vehicles. The exhibited models, although differing from one manufacturer to another, all use a hybrid fuel cell/battery system. The manufacturers also emphasized intelligent navigation and safety systems as well as design and comfort. (O.M.)

  5. High average power linear induction accelerator development

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  6. FEL system with homogeneous average output

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest. Accelerating the particles to full energy to result in distinct and independently controlled, by the choice of phase offset, phase-energy correlations or chirps on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M₅₆, which are selected to compress all three bunch trains at the FEL with higher order terms managed.

  7. Quetelet, the average man and medical knowledge.

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  8. [Quetelet, the average man and medical knowledge].

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  9. Angle-averaged Compton cross sections

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; αₛ = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  10. Reynolds averaged simulation of unsteady separated flow

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation.

  11. Angle-averaged Compton cross sections

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; αₛ = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  12. Zonally averaged chemical-dynamical model of the lower thermosphere

    Kasting, J.F.; Roble, R.G.

    1981-01-01

    A zonally averaged numerical model of the thermosphere is used to examine the coupling between neutral composition (including N₂, O₂ and O), temperature, and winds at solstice for solar minimum conditions. The meridional circulation forced by solar heating results in a summer-to-winter flow, with a winter enhancement in atomic oxygen density that is a factor of about 1.8 greater than in the summer hemisphere at 160 km. The O₂ and N₂ variations are associated with a latitudinal gradient in total number density, which is required to achieve pressure balance in the presence of large zonal jets. Latitudinal profiles of the OI (5577 Å) green-line emission intensity are calculated by using both the Chapman and Barth mechanisms. The composition of the lower thermosphere is shown to be strongly influenced by circulation patterns initiated in the stratosphere and lower mesosphere, below the lower boundary used in the model.

  13. Parallel Lines

    James G. Worner

    2017-05-01

    James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  14. Environmental stresses can alleviate the average deleterious effect of mutations

    Leibler Stanislas

    2003-05-01

    Background: Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results: We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite; that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions: Our results show a qualitative difference between various environmental stresses ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.

  15. The balanced survivor average causal effect.

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.

  16. New β-delayed proton lines from 23Al

    Kirsebom, O.S.; Fynbo, H.O.U.; Riisager, K.; Jokinen, A.; Saastamoinen, A.; Aeystoe, J.; Madurga, M.; Tengblad, O.

    2011-01-01

    We report on a new measurement of the β-delayed proton spectrum of 23 Al. Higher statistics compared to previous measurements allow us to identify new proton lines in the energy range 1-2 MeV. A statistical analysis of the observed β strength shows that the B(GT) values are fully consistent with having a Porter-Thomas distribution. This is indicative of chaotic behaviour and implies that only the average β strength carries physical meaning. (orig.)

  17. Reality show: um paradoxo nietzschiano

    Ilana Feldman

    2011-01-01

    The phenomenon of reality shows, and the attendant relation between image and truth, rests on a series of paradoxes. These paradoxes can be understood in the light of the thought of the German philosopher Friedrich Nietzsche, who, through the use of paradoxical formulations, conceived reality as a world of pure appearance and truth as a fictional addition, as an effect. In Nietzsche's philosophy, fiction is thus taken not in its falsifying, derealizing aspect, as our metaphysical tradition has always claimed, but as a necessary condition for a certain kind of invention to operate as truth. Thus the very expression "reality show", through its paradoxical formulation, explicitly engenders a world of pure appearance, in which truth, the "reality" part of the proposition, is of the order of the supplement, of that which is fictionally added, like an adjective, to the "show". The ornament, in this case, comes to occupy the central place, pointing to the effect produced: the truth-effect. Following Nietzsche's thought and its contemporary relevance, we investigate how televised "reality shows" operate paradoxically, in consonance with our paradoxical cultural practices.

  18. Large-Scale No-Show Patterns and Distributions for Clinic Operational Research

    Michael L. Davies

    2016-02-01

    Patient no-shows for scheduled primary care appointments are common. Unused appointment slots reduce patient quality of care, access to services and provider productivity while increasing loss to follow-up and medical costs. This paper describes patterns of no-show variation by patient age, gender, appointment age, and type of appointment request for six individual service lines in the United States Veterans Health Administration (VHA). This retrospective observational descriptive project examined 25,050,479 VHA appointments contained in individual-level records for eight years (FY07-FY14) for 555,183 patients. Multifactor analysis of variance (ANOVA) was performed, with no-show rate as the dependent variable, and gender, age group, appointment age, new patient status, and service line as factors. The analyses revealed that males had higher no-show rates than females to age 65, at which point males and females exhibited similar rates. The average no-show rates decreased with age until 75-79, whereupon rates increased. As appointment age increased, males and new patients had increasing no-show rates. Younger patients are especially prone to no-show as appointment age increases. These findings provide novel information to healthcare practitioners and management scientists to more accurately characterize no-show and attendance rates and the impact of certain patient factors. Future general population data could determine whether findings from VHA data generalize to others.

  19. Large-Scale No-Show Patterns and Distributions for Clinic Operational Research.

    Davies, Michael L; Goffman, Rachel M; May, Jerrold H; Monte, Robert J; Rodriguez, Keri L; Tjader, Youxu C; Vargas, Dominic L

    2016-02-16

    Patient no-shows for scheduled primary care appointments are common. Unused appointment slots reduce patient quality of care, access to services and provider productivity while increasing loss to follow-up and medical costs. This paper describes patterns of no-show variation by patient age, gender, appointment age, and type of appointment request for six individual service lines in the United States Veterans Health Administration (VHA). This retrospective observational descriptive project examined 25,050,479 VHA appointments contained in individual-level records for eight years (FY07-FY14) for 555,183 patients. Multifactor analysis of variance (ANOVA) was performed, with no-show rate as the dependent variable, and gender, age group, appointment age, new patient status, and service line as factors. The analyses revealed that males had higher no-show rates than females to age 65, at which point males and females exhibited similar rates. The average no-show rates decreased with age until 75-79, whereupon rates increased. As appointment age increased, males and new patients had increasing no-show rates. Younger patients are especially prone to no-show as appointment age increases. These findings provide novel information to healthcare practitioners and management scientists to more accurately characterize no-show and attendance rates and the impact of certain patient factors. Future general population data could determine whether findings from VHA data generalize to others.

  20. Experimental verification of the line-shape distortion in resonance Auger spectra

    Aksela, S.; Kukk, E.; Aksela, H.; Svensson, S.

    1995-01-01

    When the mean excitation energy and the width of a broad photon band are varied, the Kr 3d₅/₂⁻¹5p → 4p⁻²5p resonance Auger electron lines show strong asymmetry and their average kinetic energies shift; even extra peaks appear. Our results demonstrate experimentally, for the first time, that the incident photon energy distribution is of crucial importance for the resonance Auger line shape and thus for the reliable data analysis of complicated Auger spectra.

  1. Ethylene responses in three Hydrangea lines

    Lauridsen, Uffe Bjerre; Müller, Renate; Lütken, Henrik Vlk

    2015-01-01

    The ornamental scrub Hydrangea is generally not considered to be particularly sensitive to the phytohormone ethylene. The present study aimed at testing ethylene sensitivity in three different Hydrangea lines, 1, 2 and 3, taking into account the effect of temperature. Ethylene response was measured as leaf epinasty and leaf drop. Data indicated that higher temperature accelerates the effect of 2 μl L⁻¹ ethylene over a 12-day period, and that the inhibitor 1-methylcyclopropene (1-MCP) is able to attenuate this effect. Breeding lines 1 and 3 dropped 3.8±0.6 and 5.0±0.4 leaves on average, respectively, during the 12-day experimental period. Non-treated controls of lines 1 and 3 dropped 1.8±0.6 and 1.8±0.4 leaves, respectively. In contrast, line 2 did not show a significant response to ethylene treatment, with a leaf drop of 2.1±0.3 leaves compared to 0.8±0.3 in non-treated controls.

  2. Bike Map Lines

    Town of Chapel Hill, North Carolina — Chapel Hill Bike Map Lines from KMZ file. This data came from the wiki comment board for the public, not an “official map” showing the Town of Chapel Hill's plans or...

  3. Average Case Analysis of Java 7's Dual Pivot Quicksort

    Wild, Sebastian; Nebel, Markus E.

    2013-01-01

    Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting method for Oracle's Java 7 runtime library. The decision for the change was based on empirical studies showing that on average, the new algorithm is faster than the formerly used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot approach, an idea that was considered not promising by several theoretical studies in the past. In this paper, we identify the reason for this unexpe...
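
    To make the dual-pivot idea concrete, here is a hedged sketch of Yaroslavskiy-style three-way partitioning; the structure is illustrative and simplified, not Oracle's actual Java 7 implementation (which adds insertion-sort cutoffs and pivot-sampling heuristics):

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """In-place dual-pivot quicksort with Yaroslavskiy-style partitioning."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]              # two pivots, p <= q
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:                 # element belongs in the left part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
        elif a[i] > q:               # element belongs in the right part
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
            if a[i] < p:
                a[i], a[lt] = a[lt], a[i]
                lt += 1
        i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]      # move the pivots into place
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)      # elements < p
    dual_pivot_quicksort(a, lt + 1, gt - 1)  # elements between p and q
    dual_pivot_quicksort(a, gt + 1, hi)      # elements > q
    return a

print(dual_pivot_quicksort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
```

    The two pivots split each segment into three parts instead of two, which is the source of the reduced comparison and cache cost analyzed in the paper.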

  4. Industrial Applications of High Average Power FELS

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and therefore such processes are not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  5. Calculating Free Energies Using Average Force

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
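
    The constrained-simulation route described above (averaging the force at discrete positions, then integrating) can be sketched numerically; the double-well potential and the noise model below are invented for illustration and stand in for an actual molecular simulation:

```python
import numpy as np

# Thermodynamic-integration sketch: at each fixed value of the coordinate,
# average a noisy instantaneous force, then integrate the mean force to
# recover the free-energy profile along the coordinate.
rng = np.random.default_rng(3)

def potential(x):
    return x**4 - 2 * x**2                      # toy double well

def mean_force(x, samples=5000):
    true_force = -(4 * x**3 - 4 * x)            # -dU/dx
    # The "simulation" is faked as the true force plus thermal noise.
    return float(np.mean(true_force + rng.normal(0.0, 1.0, samples)))

xs = np.linspace(-1.5, 1.5, 61)
f = np.array([mean_force(x) for x in xs])
# dA/dx = -<F>, so A(x) = -integral of <F>; trapezoidal rule, A(xs[0]) = 0.
A = -np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(xs))))
A_exact = potential(xs) - potential(xs[0])
print(np.max(np.abs(A - A_exact)))              # small residual error
```

    The same integration applies when the mean force comes from unconstrained sampling of the instantaneous force, which is the case the paper's new method addresses.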

  6. Geographic Gossip: Efficient Averaging for Sensor Networks

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
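For contrast with the geographic scheme, standard pairwise gossip is easy to sketch. The ring topology and all parameters below are illustrative only.

```python
import random

# Minimal sketch of standard (non-geographic) pairwise gossip on a ring of
# n nodes: at each step a random node averages its value with a random ring
# neighbour. Every value converges to the global average; on a ring this
# takes many exchanges, which is the inefficiency geographic gossip attacks.
random.seed(0)
n = 10
values = [float(i) for i in range(n)]      # initial sensor measurements
target = sum(values) / n

for _ in range(50_000):
    i = random.randrange(n)
    j = (i + random.choice([-1, 1])) % n   # pick a ring neighbour
    values[i] = values[j] = (values[i] + values[j]) / 2.0

print(max(abs(v - target) for v in values))   # worst-case deviation, tiny
```

Note that each pairwise exchange preserves the global sum, so the common limit is exactly the network average.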

  7. The concept of average LET values determination

    Makarewicz, M.

    1981-01-01

    The concept of determining average LET (linear energy transfer) values, i.e. ordinary moments of LET in the absorbed-dose distribution versus LET, for ionizing radiation of any kind and any spectrum (even unknown ones), is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of the LET moments. One advantage of the method is its experimental and computational simplicity. It is shown that for numerical estimation of certain LET-dependent radiation effects it is not necessary to know the full dose distribution, but only a number of its parameters, i.e. the LET moments. (author)
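Given a discretized absorbed-dose distribution in LET, the ordinary moments reduce to weighted sums. The distribution below is invented for illustration; in the method described the moments are instead inferred from fitted ionization-current data.

```python
import numpy as np

# Hypothetical discretized dose distribution d(L) over LET values L; the
# numbers are invented and the units are nominal (keV/um).
L = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # LET bin values
d = np.array([0.1, 0.3, 0.4, 0.15, 0.05])   # dose fractions, summing to 1

def let_moment(k):
    """k-th ordinary moment of LET in the absorbed-dose distribution."""
    return float(np.sum(d * L**k))

dose_average_let = let_moment(1)   # first moment = dose-averaged LET
print(dose_average_let)
```

The zeroth moment is the normalization (unity here), and higher moments characterize the spread of the distribution without requiring its full shape.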

  8. On spectral averages in nuclear spectroscopy

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed-angular-momentum-projection traces into fixed-angular-momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  9. Average subentropy, coherence and entanglement of random mixed quantum states

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states by invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when quantum coherence is extracted as a resource: the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and the distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this class of mixed states.
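The induced-measure ensemble used above is straightforward to sample numerically. The sketch below (dimensions and sample count chosen only for illustration) draws random mixed states by partial-tracing Haar-random bipartite pure states and evaluates their relative entropy of coherence.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_mixed_state(d, k):
    """Induced-measure random mixed state on C^d: partial-trace a random
    bipartite pure state on C^d (x) C^k over the k-dimensional ancilla."""
    g = rng.normal(size=(d, k)) + 1j * rng.normal(size=(d, k))
    psi = g / np.linalg.norm(g)      # normalized d x k amplitude matrix
    return psi @ psi.conj().T        # d x d density matrix, trace 1

def von_neumann_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def rel_entropy_of_coherence(rho):
    """C(rho) = S(diag(rho)) - S(rho) in the computational basis."""
    diag = np.diag(np.diag(rho).real)
    return von_neumann_entropy(diag) - von_neumann_entropy(rho)

d = 8
samples = [rel_entropy_of_coherence(random_mixed_state(d, d))
           for _ in range(200)]
print(np.mean(samples))   # sample estimate of the average coherence
```

The average over samples is the quantity whose compact expression and concentration the paper establishes; coherence is bounded above by log2(d) for a d-dimensional state.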

  10. Bounding quantum gate error rate based on reported average fidelity

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
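As a hedged illustration of why reported fidelity and error rate can differ: for purely depolarizing noise the diamond-distance error rate is (1 + 1/d)(1 − F), while coherent errors can scale like the square root of the infidelity. The square-root expression below is an order-of-magnitude stand-in for that scaling, not the paper's exact bound.

```python
import math

# Hedged sketch: turning a reported average gate fidelity F into error-rate
# numbers. The depolarizing conversion is a standard special case; the
# square-root line only illustrates how much worse coherent errors can be
# relative to the average infidelity 1 - F.
def depolarizing_error_rate(F, d):
    """Diamond-distance error rate if the noise is purely depolarizing."""
    return (1.0 + 1.0 / d) * (1.0 - F)

F = 0.999    # reported single-qubit average gate fidelity (example value)
d = 2        # qubit dimension
r = 1.0 - F  # average gate infidelity

print(depolarizing_error_rate(F, d))   # ~0.0015 for this example
print(math.sqrt(d * (d + 1) * r))      # ~0.077: sqrt scale of coherent noise
```

The two printed numbers bracket the gap the paper quantifies: an error rate near 1.5 × 10⁻³ for incoherent noise versus a potentially much larger worst case for coherent noise at the same reported fidelity.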

  11. Post-model selection inference and model averaging

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
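A smooth alternative to the 0-1 random weights of a PMSE is information-criterion weighting. A minimal sketch, with invented per-model fit results:

```python
import math

# Hypothetical fit results for three candidate models:
# (log-likelihood, number of parameters, point prediction).
models = [(-100.0, 2, 3.1), (-99.0, 3, 3.4), (-98.5, 5, 2.9)]

# AIC per model, then Akaike weights exp(-delta_AIC / 2), normalized.
aic = [2 * k - 2 * ll for ll, k, _ in models]
best = min(aic)
raw = [math.exp(-0.5 * (a - best)) for a in aic]
weights = [r / sum(raw) for r in raw]

# Model-averaged point prediction instead of the single selected model's.
averaged_prediction = sum(w * pred for w, (_, _, pred) in zip(weights, models))
print(weights, averaged_prediction)
```

Model selection corresponds to putting weight 1 on the minimum-AIC model; the averaged predictor spreads that weight and thereby smooths the selection step this paper analyzes.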

  12. VT Digital Line Graph Miscellaneous Transmission Lines

    Vermont Center for Geographic Information — (Link to Metadata) This datalayer comprises Miscellaneous Transmission Lines. Digital line graph (DLG) data are digital representations of cartographic...

  13. A NON-LTE STUDY OF SILICON ABUNDANCES IN GIANT STARS FROM THE Si i INFRARED LINES IN THE zJ-BAND

    Tan, Kefeng; Shi, Jianrong; Zhao, Gang; Takada-Hidai, Masahide; Takeda, Yoichi

    2016-01-01

    We investigate the feasibility of Si i infrared (IR) lines as Si abundance indicators for giant stars. We find that Si abundances obtained from the Si i IR lines based on the local thermodynamic equilibrium (LTE) analysis show large line-to-line scatter (mean value of 0.13 dex), and are higher than those from the optical lines. However, when non-LTE effects are taken into account, the line-to-line scatter reduces significantly (mean value of 0.06 dex), and the Si abundances are consistent with those from the optical lines. The typical average non-LTE correction of [Si/Fe] for our sample stars is about −0.35 dex. Our results demonstrate that the Si i IR lines could be reliable abundance indicators, provided that the non-LTE effects are properly taken into account.

  14. Improving restorer line of hybrid rice by irradiation

    Guo Guangrong; Yi Weiping; Liu Wuquan

    1995-03-01

    Work on improving restorer lines of hybrid rice by irradiation has been carried out. The results showed that, on average, the radiosensitivity of foreign varieties exceeds that of Chinese ones. Owing to their different pedigrees, the foreign varieties vary: varieties from the IR system are not sensitive, those from the Shui-yun system are intermediate, and those from the Miyang system are sensitive. The radiosensitivity of restorer lines of hybrid F0 exceeds that of F1. On this basis we have put forward the concept of a 'multi-gene-type blend system'. The M2 mutant frequency of the restorer lines was investigated. The results showed little difference between the total mutant frequencies of the different varieties, but for some characters the difference between them may exceed thirtyfold. This raises a question worth further study: are the differences in radiosensitivity between varieties related to the mutant frequencies of these characters? Various mutants were obtained by irradiation treatment; a few changed to maintainer lines through loss of the restorer genes, while most remained restorer lines. New combinations developed from these new mutant restorer lines show strong heterosis, and the best combinations have been used in rice production. (7 tabs.)

  15. Average resonance capture studies of 102Ru

    Shi, Z.R.; Casten, R.F.; Stachel, J.; Bruce, A.M.

    1984-01-01

    The 102Ru nucleus has been investigated via the ARC technique, which ensures a complete set of J^π = 0^+, 1^±, 2^±, 3^±, 4^±, and 5^+ levels up to 2 MeV. The results are discussed in the framework of the IBA-1 with Consistent Q. The calculations show good agreement with the empirical data, especially for the 0_2^+ state, suggesting that it can be described in terms of collective degrees of freedom

  16. High-spin isomer in 211Rn, and the shape of the yrast line

    Dracoulis, G.D.; Fahlander, C.; Poletti, A.R.

    1981-08-01

    High-spin yrast states in 211Rn have been identified. A 61/2^-, 380 ns isomer found at 8856 keV is characterised as a core-excited configuration. The average shape of the yrast line shows a smooth behaviour with spin, in contrast to its neighbour 212Rn. This difference is attributed to the presence of the neutron hole

  17. General and Local: Averaged k-Dependence Bayesian Classifiers

    Limin Wang

    2015-06-01

    The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute-dependence spectrum, it cannot identify changes in interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization in order to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.
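The final averaging step of AKDB can be sketched as a simple mean of class-posterior estimates. The posterior arrays below are hypothetical stand-ins for the KDB and local KDB outputs.

```python
import numpy as np

# Hypothetical class posteriors P(class | instance) for three test instances
# from the general KDB model and the instance-specific local KDB model.
p_kdb = np.array([[0.70, 0.30], [0.20, 0.80], [0.55, 0.45]])
p_local = np.array([[0.60, 0.40], [0.10, 0.90], [0.35, 0.65]])

# AKDB-style combination: average the two posterior estimates, then predict
# the class with the largest averaged posterior for each instance.
p_avg = (p_kdb + p_local) / 2.0
predictions = p_avg.argmax(axis=1)
print(predictions)
```

With these stand-in numbers the two models disagree on the third instance, and the averaged posterior decides it; that is the variance-reducing complementarity the experiments report.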

  18. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. Here the concept of hardware averaging is applied to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design is based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement, which was shown to be significant for N = 3 or more. The present design was shown to deliver the noise improvement of hardware averaging for neural recording with cuff electrodes, and can accommodate the high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
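The 1/√N noise scaling that motivates the design can be checked with a quick simulation. All amplitude and noise figures below are arbitrary and are not the amplifier's specifications.

```python
import numpy as np

# Sketch of the hardware-averaging principle: N amplifier channels see the
# same signal but independent input-referred noise, so averaging them
# reduces the noise floor by up to 1/sqrt(N) (less when source-resistance
# noise is shared between channels, as noted above).
rng = np.random.default_rng(0)
n_samples = 200_000
signal = np.sin(np.linspace(0, 20 * np.pi, n_samples))   # arbitrary signal

def averaged_noise_std(N):
    """Residual noise std after averaging N independently noisy channels."""
    channels = signal + rng.normal(0.0, 1.0, size=(N, n_samples))
    return (channels.mean(axis=0) - signal).std()

print(averaged_noise_std(1))   # ~1.0 (single channel)
print(averaged_noise_std(8))   # ~0.35, i.e. 1/sqrt(8) of the above
```

The simulation reproduces the ideal independent-noise limit; correlated source noise would make the improvement smaller, as the abstract states.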

  19. N III Bowen Lines and Fluorescence Mechanism in the Symbiotic Star AG Peg

    Hyung, Siek; Lee, Seong-Jae; Lee, Kang Hwan

    2018-03-01

    We have investigated the intensities and full widths at half maximum (FWHM) of the high-dispersion spectroscopic N III emission lines of AG Peg, observed with the Hamilton Echelle Spectrograph (HES) at three different epochs at Mt. Hamilton's Lick Observatory. The earlier theoretical Bowen line study assumed the continuum fluorescence effect, presenting a large discrepancy with the present data. Hence, we analyzed the observed N III lines assuming line fluorescence as the only suitable source: (1) the O III and N III resonance line profiles near λ374 were decomposed using the Gaussian function, and the contributions from various O III line components were determined; (2) based on the theoretical resonant N III intensities, the expected N III Bowen intensities were obtained to fit the observed values. Our study shows that the incoming line photon number ratio must be considered to balance at each N III Bowen line level in the ultraviolet radiation according to the observed lines in the optical zone. We also found that the average FWHM of the N III Bowen lines was about 5 km s^-1 greater than that of the O III Bowen lines, perhaps due to inherently different kinematic characteristics of their emission zones.

  20. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with on-off keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. To further enhance the ASE we have incorporated aperture-averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture-averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of the FSO communication system.

  1. SEASONAL AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN

    VIGH MELINDA

    2015-03-01

    The Râul Negru hydrographic basin is a well-individualised physical-geographical unit inside the Braşov Depression. The flow is controlled by six hydrometric stations placed on the main river and on two important tributaries. The database for the seasonal flow analysis contains the discharges from 1950-2012. The results of the data analysis show that there are significant space-time differences between the multiannual seasonal averages. Some interesting conclusions can be obtained by comparing abundant and scarce periods. The flow analysis was made using seasonal charts Q = f(T). The similarities come from the basin's relative homogeneity, and the differences from the flow's evolution and trend. Flow variation is analysed using the variation coefficient (Cv). In some cases significant differences between Cv values appear. Trends in Cv values are also analysed according to the basins' average altitude.
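The variation coefficient Cv used in the analysis is simply the standard deviation of the seasonal discharges divided by their mean. With invented discharge values:

```python
import numpy as np

# Hypothetical seasonal mean discharges (m3/s) at one station; the values
# are illustrative only, not data from the Râul Negru basin.
q = np.array([12.5, 8.1, 15.3, 6.7, 9.9, 11.2])

# Variation coefficient: sample standard deviation over the mean.
cv = q.std(ddof=1) / q.mean()
print(round(cv, 3))   # → 0.292
```

Being dimensionless, Cv lets flow variability be compared across stations with very different mean discharges.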

  2. A collisional-radiative average atom model for hot plasmas

    Rozsnyai, B.F.

    1996-01-01

    A collisional-radiative 'average atom' (AA) model is presented for the calculation of opacities of hot plasmas not in the condition of local thermodynamic equilibrium (LTE). The electron impact and radiative rate constants are calculated using the dipole oscillator strengths of the average atom. A key element of the model is the photon escape probability, which at present is calculated for a semi-infinite slab. The Fermi statistics render the rate equation for the AA level occupancies nonlinear, which requires iteration until the steady-state AA level occupancies are found. Detailed electronic configurations are built into the model after the self-consistent non-LTE AA state is found. The model shows a continuous transition from the non-LTE to the LTE state depending on the optical thickness of the plasma. 22 refs., 13 figs., 1 tab

  3. The U-line line balancing problem

    Miltenburg, G.J.; Wijngaard, J.

    1994-01-01

    The traditional line balancing (LB) problem considers a production line in which stations are arranged consecutively in a line. A balance is determined by grouping tasks into stations while moving forward (or backward) through a precedence network. Recently many production lines are being arranged in a U-shape.

  4. Radiosensitivity of mesothelioma cell lines

    Haekkinen, A.M.; Laasonen, A.; Linnainmaa, K.; Mattson, K.; Pyrhoenen, S.

    1996-01-01

    The present study was carried out in order to examine the radiosensitivity of malignant pleural mesothelioma cell lines. Cell kinetics, radiation-induced delay of the cell cycle and DNA ploidy of the cell lines were also determined. For comparison, a HeLa and a human foetal fibroblast cell line were simultaneously explored. Six previously cytogenetically and histologically characterized mesothelioma tumor cell lines were applied. A rapid thiazolyl blue microtiter (MTT) assay was used to analyze radiosensitivity and cell kinetics, and the DNA ploidy of the cultured cells was determined by flow cytometry. The survival fraction after a dose of 2 Gy (SF2), the parameters α and β of the linear-quadratic model (LQ model) and the mean inactivation dose (D_MID) were also estimated. The DNA index of four cell lines equaled 1.0, and that of the other two cell lines equaled 1.5 and 1.6. The different mesothelioma cell lines showed a great variation in radiosensitivity. The mean SF2 was 0.60 (range 0.36-0.81) and the mean α value was 0.26 (range 0.083-0.48). The SF2 of the most sensitive, diploid mesothelioma cell line was 0.36, less than that of the foetal fibroblast cell line (0.49). The survival fractions (0.81 and 0.74) of the two most resistant cell lines, which were also aneuploid, were equal to that of the HeLa cell line (0.78). The α/β ratios of the most sensitive cell lines were almost an order of magnitude greater than those of the two most resistant cell lines. The radiation-induced delay of the most resistant aneuploid cell line was similar to that of HeLa cells, but in the most sensitive (diploid) cells there was practically no entry into the G1 phase during the 36 h following the 2 Gy radiation dose. (orig.)

  5. Ghost lines in Moessbauer relaxation spectra

    Price, D.C.

    1985-01-01

    The appearance in Moessbauer relaxation spectra of 'ghost' lines, which are narrow lines that do not correspond to transitions between real hyperfine energy levels of the resonant system, is examined. It is shown that in many cases of interest, the appearance of these 'ghost' lines can be interpreted in terms of the relaxational averaging of one or more of the static interactions of the ion. (orig.)

  6. A Predictive Likelihood Approach to Bayesian Averaging

    Tomáš Jeřábek

    2015-01-01

    Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is assessed using historical data covering the domestic economy and the foreign economy, which is represented by the countries of the Eurozone. Because the forecast accuracies of the models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.

  7. Model of averaged turbulent flow around cylindrical column for simulation of the saltation

    Kharlamova, Irina; Kharlamov, Alexander; Vlasák, Pavel

    2014-01-01

    Roč. 21, č. 2 (2014), s. 103-110 ISSN 1802-1484 R&D Projects: GA ČR GA103/09/1718 Institutional research plan: CEZ:AV0Z20600510 Institutional support: RVO:67985874 Keywords : sediment transport * flow around cylinder * logarithmic profile * dipole line * averaged turbulent flow Subject RIV: BK - Fluid Dynamics

  8. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  9. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Luis C González

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  10. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  11. Role of spatial averaging in multicellular gradient sensing.

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  12. Fitting a function to time-dependent ensemble averaged data.

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
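A core ingredient here, fitting with the full covariance matrix of the averaged data, is generalized least squares. A sketch with a synthetic correlated data set (model, covariance, and seed are all invented for illustration):

```python
import numpy as np

# Generalized least squares for a straight line y = a + b*t observed with
# temporally correlated noise, the situation described above for
# trajectory-based ensemble averages. Synthetic data only.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)
X = np.column_stack([np.ones_like(t), t])       # design matrix for (a, b)

# Exponentially correlated noise covariance, mimicking the temporal
# correlations of fluctuations around an ensemble-averaged observable.
C = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)
y = 1.0 + 2.0 * t + rng.multivariate_normal(np.zeros_like(t), C)

# GLS estimate and its parameter covariance, using the full C.
Cinv = np.linalg.inv(C)
cov_params = np.linalg.inv(X.T @ Cinv @ X)
beta = cov_params @ X.T @ Cinv @ y
print(beta, np.sqrt(np.diag(cov_params)))       # (a, b) and their errors
```

Dropping the off-diagonal entries of C recovers ordinary weighted least squares; keeping them is what makes the error estimate rigorous in the presence of correlations.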

  13. Occurrence and average behavior of pulsating aurora

    Partamies, N.; Whiter, D.; Kadokura, A.; Kauristie, K.; Nesse Tyssøy, H.; Massetti, S.; Stauning, P.; Raita, T.

    2017-05-01

    Motivated by recent event studies and modeling efforts on pulsating aurora, which conclude that the precipitation energy during these events is high enough to cause significant chemical changes in the mesosphere, this study looks for the bulk behavior of auroral pulsations. Based on about 400 pulsating aurora events, we outline the typical duration, geomagnetic conditions, and change in the peak emission height for the events. We show that the auroral peak emission height for both green and blue emission decreases by about 8 km at the start of the pulsating aurora interval. This brings the hardest 10% of the electrons down to about 90 km altitude. The median duration of pulsating aurora is about 1.4 h. This value is a conservative estimate since in many cases the end of event is limited by the end of auroral imaging for the night or the aurora drifting out of the camera field of view. The longest durations of auroral pulsations are observed during events which start within the substorm recovery phases. As a result, the geomagnetic indices are not able to describe pulsating aurora. Simultaneous Antarctic auroral images were found for 10 pulsating aurora events. In eight cases auroral pulsations were seen in the southern hemispheric data as well, suggesting an equatorial precipitation source and a frequent interhemispheric occurrence. The long lifetimes of pulsating aurora, their interhemispheric occurrence, and the relatively high-precipitation energies make this type of aurora an effective energy deposition process which is easy to identify from the ground-based image data.

  14. The average crossing number of equilateral random polygons

    Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A

    2003-01-01

In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN(K)⟩ for each knot type K can be described by a function of the form ⟨ACN(K)⟩ = a(n-n_0)ln(n-n_0) + b(n-n_0) + c, where a, b and c are constants depending on K and n_0 is the minimal number of segments required to form K. The ⟨ACN(K)⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN(K)⟩ than less complex knots. Moreover, the ⟨ACN(K)⟩ profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length n_e(K) at which a statistical ensemble of configurations with given knot type K-upon cutting, equilibration and reclosure to a new knot type K'-does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.
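The fitted form a(n-n_0)ln(n-n_0) + b(n-n_0) + c is linear in the coefficients, so it can be recovered by ordinary linear least squares; a minimal sketch on synthetic data (a is set to the paper's 3/16 leading coefficient, while b, c and n_0 are made-up illustrative values):

```python
import numpy as np

# Fit <ACN> = a*(n-n0)*ln(n-n0) + b*(n-n0) + c by linear least squares.
n0 = 3
n = np.arange(10, 500, 10, dtype=float)
x = n - n0
a, b, c = 3.0 / 16.0, 0.2, 1.0      # b, c illustrative; a from the asymptotics
acn = a * x * np.log(x) + b * x + c  # synthetic, noise-free "data"

# Design matrix for the three basis functions x*ln(x), x and 1.
A = np.column_stack([x * np.log(x), x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, acn, rcond=None)
```

On noise-free data the coefficients are recovered essentially exactly; for real Monte Carlo data the same design matrix would be used with the sampled ⟨ACN⟩ values.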

  15. Boron tolerance in NS wheat lines

    Brdar Milka

    2006-01-01

Boron is an essential micronutrient for higher plants. Present in excessive amounts, however, boron becomes toxic and can limit plant growth and yield. Suppression of root growth is one of the symptoms of boron toxicity in wheat. This study was undertaken to investigate the response of 10 promising NS wheat lines to high concentrations of boron. Root growth was analyzed in young plants germinated and grown in the presence of different concentrations of boric acid (0, 50, 100 and 150 mg/l). Significant differences in root length occurred between the analyzed genotypes and treatments. Average suppression of root growth was between 11.6 and 34.2%, while line NS 252/02 even showed 61.4% longer roots under treatment relative to the control. Lines with mean suppression of root growth of less than 20% (NS 101/02, NS 138/01, NS 53/03 and NS 73/02) may be considered boron tolerant. Spearman's coefficients showed a high level of agreement regarding the ranking of genotypes by root length at 100 and 150 mg H3BO3/l.

  16. Application of autoregressive moving average model in reactor noise analysis

    Tran Dinh Tri

    1993-01-01

The application of autoregressive (AR) models to the estimation of noise measurements has achieved many successes in reactor noise analysis in the last ten years. The physical processes that take place in a nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently, more accurate results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of generalised Yule-Walker equations is derived from the equation of an ARMA model, and a method for its solution is given. Numerical results demonstrate the application of the proposed method. (author)
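The generalised Yule-Walker system for a full ARMA model is too long for a short sketch, but the pure-AR special case already shows the idea: the autocovariances of the measured signal satisfy a linear system in the AR coefficients. A hedged illustration with a hypothetical AR(2) signal standing in for reactor noise data (coefficients and seed assumed):

```python
import numpy as np

# Simulate an AR(2) "noise" signal x_t = phi1*x_{t-1} + phi2*x_{t-2} + e_t.
rng = np.random.default_rng(42)
phi1, phi2 = 0.5, -0.3          # illustrative true coefficients
N = 20000
x = np.zeros(N)
e = rng.standard_normal(N)
for t in range(2, N):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + e[t]

def acov(x, k):
    """Sample autocovariance r(k) = <x_t x_{t+k}>."""
    return np.mean(x[: len(x) - k] * x[k:])

r = np.array([acov(x, k) for k in range(3)])

# Yule-Walker system for AR(2): [[r0, r1], [r1, r0]] @ [phi1, phi2] = [r1, r2]
R = np.array([[r[0], r[1]], [r[1], r[0]]])
phi_hat = np.linalg.solve(R, r[1:])
```

The ARMA generalisation replaces this Toeplitz system with the extended equations derived in the paper, but the estimation principle (autocovariances in, model coefficients out) is the same.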

  17. Optimization of line configuration and balancing for flexible machining lines

    Liu, Xuemei; Li, Aiping; Chen, Zurui

    2016-05-01

Line configuration and balancing is to select the type of line and allot a given set of operations as well as machines to a sequence of workstations to realize high-efficiency production. Most current research on machining line configuration and balancing concerns dedicated transfer lines with dedicated machine workstations. With growing trends towards great product variety and fluctuations in market demand, dedicated transfer lines are being replaced with flexible machining lines composed of identical CNC machines. This paper deals with the line configuration and balancing problem for flexible machining lines. The objective is to assign operations to workstations and find the sequence of execution, and to specify the number of machines in each workstation, while minimizing the line cycle time and the total number of machines. This problem is subject to precedence, clustering, accessibility and capacity constraints among the features, operations, setups and workstations. A mathematical model and a heuristic algorithm based on a feature group strategy and polychromatic sets theory are presented to find an optimal solution. The feature group strategy and polychromatic sets theory are used to establish the constraint model, and a heuristic operation sequencing and assignment algorithm is given. An industrial case study is carried out, and multiple optimal solutions in different line configurations are obtained. The case study results, which show solutions with shorter cycle times and higher line balancing rates, demonstrate the feasibility and effectiveness of the proposed algorithm. This research proposes a heuristic line configuration and balancing algorithm based on a feature group strategy and polychromatic sets theory which provides better solutions while improving computing time.

  18. Myopes show increased susceptibility to nearwork aftereffects.

    Ciuffreda, K J; Wallis, D M

    1998-09-01

    Some aspects of accommodation may be slightly abnormal (or different) in myopes, compared with accommodation in emmetropes and hyperopes. For example, the initial magnitude of accommodative adaptation in the dark after nearwork is greatest in myopes. However, the critical test is to assess this initial accommodative aftereffect and its subsequent decay in the light under more natural viewing conditions with blur-related visual feedback present, if a possible link between this phenomenon and clinical myopia is to be considered. Subjects consisted of adult late- (n = 11) and early-onset (n = 13) myopes, emmetropes (n = 11), and hyperopes (n = 9). The distance-refractive state was assessed objectively using an autorefractor immediately before and after a 10-minute binocular near task at 20 cm (5 diopters [D]). Group results showed that myopes were most susceptible to the nearwork aftereffect. It averaged 0.35 D in initial magnitude, with considerably faster posttask decay to baseline in the early-onset (35 seconds) versus late-onset (63 seconds) myopes. There was no myopic aftereffect in the remaining two refractive groups. The myopes showed particularly striking accommodatively related nearwork aftereffect susceptibility. As has been speculated and found by many others, transient pseudomyopia may cause or be a precursor to permanent myopia or myopic progression. Time-integrated increased retinal defocus causing axial elongation is proposed as a possible mechanism.

  19. Identification of exotic genetic components and DNA methylation pattern analysis of three cotton introgression lines from Gossypium bickii.

    He, Shou-Pu; Sun, Jun-Ling; Zhang, Chao; Du, Xiong-Ming

    2011-01-01

The impact of alien DNA fragments on plant genomes has been studied in many species. However, little is known about introgression lines of Gossypium. To study the consequences of introgression in Gossypium, we investigated 2000 genomic and 800 epigenetic sites in three typical cotton introgression lines, as well as their cultivar (Gossypium hirsutum) and wild (Gossypium bickii) parents, by amplified fragment length polymorphism (AFLP) and methylation-sensitive amplified polymorphism (MSAP). The results demonstrate that an average of 0.5% of exotic DNA segments from wild cotton was transmitted into the genome of each introgression line, along with other forms of genetic variation. In total, an average of 0.7% of sites showed genetic variation in the introgression lines. At the same time, the overall cytosine methylation level in each introgression line was very close to that of the upland cotton parent (an average of 22.6%). Further analysis of methylation patterns revealed that both hypomethylation and hypermethylation occurred in the introgression lines in comparison with the upland cotton parent. Sequencing of nine methylation polymorphism fragments showed that most (7 of 9) of the methylation alterations occurred in noncoding sequences. In our study, AFLP provided the molecular evidence of introgression from wild cotton into the introgression lines. The causes of petal variation in the introgression lines are also discussed.

  20. XMM-Newton observation of the NLS1 galaxy Ark 564. I. Spectral analysis of the time-average spectrum

    Papadakis, I.E.; Brinkmann, W.; Page, M.J.; McHardy, I.; Uttley, P.

    2007-01-01

Context: We present the results from the spectral analysis of the time-average spectrum of the Narrow Line Seyfert 1 (NLS1) galaxy Ark 564 from a ~100 ks XMM-Newton observation. Aims: Our aim is to characterize accurately the shape of the time-average, X-ray continuum spectrum of the source and

  1. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
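The within-model/between-model variance bookkeeping underlying BMA is the law of total variance applied over the posterior model weights; a toy sketch (all model means, variances and probabilities invented for illustration, not from the cited study):

```python
import numpy as np

# Three candidate models, each with a prediction mean, a within-model
# prediction variance, and a posterior model probability (illustrative values).
means = np.array([10.0, 12.0, 11.0])   # model-conditional prediction means
varis = np.array([1.0, 2.0, 1.5])      # within-model prediction variances
probs = np.array([0.5, 0.3, 0.2])      # posterior model probabilities

bma_mean = probs @ means
within = probs @ varis                        # E[ Var(y | model) ]
between = probs @ (means - bma_mean) ** 2     # Var( E[y | model] )
total = within + between                      # law of total variance
```

The HBMA tree repeats this decomposition hierarchically, one level per uncertain model component, which is what lets the between-model variance be attributed to individual components.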

  2. 20 CFR 404.221 - Computing your average monthly wage.

    2010-04-01

20 CFR Part 404, Employees' Benefits (2010-04-01 edition): FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ); Computing Primary Insurance Amounts; Average-Monthly-Wage Method of Computing Primary Insurance Amounts. § 404.221 Computing your average monthly wage. (a) General. Under the average...

  3. Smile line assessment comparing quantitative measurement and visual estimation.

    Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie

    2011-02-01

    Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
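The interexaminer and intraexaminer reliability values quoted above are Cohen's kappa statistics; as a reminder of the computation, a dependency-free sketch (the ratings below are invented; the study's data are not reproduced):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # chance agreement from each rater's marginal label frequencies
    pe = sum((c1[lab] / n) * (c2[lab] / n) for lab in set(r1) | set(r2))
    return (po - pe) / (1.0 - pe)

# Hypothetical smile-line gradings by two raters on four records.
rater1 = ["low", "low", "high", "high"]
rater2 = ["low", "low", "high", "low"]
kappa = cohens_kappa(rater1, rater2)
```

Here the raters agree on 3 of 4 records (po = 0.75) against a chance agreement of 0.5, giving kappa = 0.5; the study's median values of 0.79 to 0.88 correspond to substantially stronger agreement.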

  4. Dry aerosol jet printing of conductive silver lines on a heated silicon substrate

    Efimov, A. A.; Arsenov, P. V.; Protas, N. V.; Minkov, K. N.; Urazov, M. N.; Ivanov, V. V.

    2018-02-01

A new method for dry aerosol jet printing of conductive lines on a heated substrate is presented. The method is based on the use of a spark discharge generator as a source of dry nanoparticles and a heating plate for their sintering. This method allows conductive silver lines to be created on a silicon substrate heated up to 300 °C without an additional sintering step. It was found that for effective sintering of the silver nanoparticle lines, the temperature of the heated substrate should exceed roughly 200-250 °C. The average thickness of the sintered silver lines was ∼20 µm. The printed lines showed an electrical resistivity of 35 μΩ·cm, which is 23 times greater than the resistivity of bulk silver.
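The quoted resistivity follows from a measured line resistance and cross-section via ρ = R·A/L; a sketch with invented geometry (only the ∼20 µm thickness and the 35 μΩ·cm resistivity come from the abstract, and 1.59 μΩ·cm is the handbook value for bulk silver):

```python
# Resistivity of a printed line from resistance and geometry: rho = R*A/L.
# Line width and length are assumed for illustration.
length_cm = 1.0
width_cm = 0.01            # 100 um wide line (assumed)
thickness_cm = 20e-4       # ~20 um thick, as in the abstract
area_cm2 = width_cm * thickness_cm

rho_line = 35e-6           # 35 uOhm*cm, from the abstract (Ohm*cm)
rho_bulk = 1.59e-6         # bulk silver, handbook value (Ohm*cm)

# Resistance such a line would exhibit, and its ratio to bulk silver.
R_line = rho_line * length_cm / area_cm2
ratio = rho_line / rho_bulk
```

For these assumed dimensions a 1 cm line would measure 1.75 Ω; in practice the calculation runs the other way, from measured R and profilometer cross-section to ρ.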

  5. Average and local structure of α-CuI by configurational averaging

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs

  6. FAR-INFRARED LINE SPECTRA OF SEYFERT GALAXIES FROM THE HERSCHEL-PACS SPECTROMETER

    Spinoglio, Luigi; Pereira-Santaella, Miguel; Busquet, Gemma; Dasyra, Kalliopi M.; Calzoletti, Luca; Malkan, Matthew A.; Tommasin, Silvia

    2015-01-01

We observed the far-IR fine-structure lines of 26 Seyfert galaxies with the Herschel-PACS spectrometer. These observations are complemented with Spitzer Infrared Spectrograph and Herschel SPIRE spectroscopy. We used the ionic lines to determine electron densities in the ionized gas and the [C I] lines, observed with SPIRE, to measure the neutral gas densities, while the [O I] lines measure the gas temperature, at densities below ∼10⁴ cm⁻³. Using the [O I]145 μm/63 μm and [S III]33/18 μm line ratios, we find an anti-correlation of the temperature with the gas density. Various fine-structure line ratios show density stratifications in these active galaxies. On average, electron densities increase with the ionization potential of the ions. The infrared lines arise partly in the narrow line region, photoionized by the active galactic nucleus (AGN), partly in H II regions photoionized by hot stars, and partly in photo-dissociated regions. We attempt to separate the contributions to the line emission produced in these different regions by comparing our observed emission line ratios to theoretical values. In particular, we tried to separate the contribution of AGNs and star formation by using a combination of Spitzer and Herschel lines, and we found that besides the well-known mid-IR line ratios, the line ratio of [O III]88 μm/[O IV]26 μm can reliably discriminate the two emission regions, while the far-IR line ratio of [C II]157 μm/[O I]63 μm is only able to mildly separate the two regimes. By comparing the observed [C II]157 μm/[N II]205 μm ratio with photoionization models, we also found that most of the [C II] emission in the galaxies we examined is due to photodissociation regions

  7. A new method for the measurement of two-phase mass flow rate using average bi-directional flow tube

    Yoon, B. J.; Uh, D. J.; Kang, K. H.; Song, C. H.; Paek, W. P.

    2004-01-01

An average bi-directional flow tube was suggested for application in air/steam-water flow conditions. Its working principle is similar to that of a Pitot tube; however, it eliminates the cooling system normally needed to prevent flashing in the pressure impulse line of a Pitot tube under depressurization conditions. The suggested flow tube was tested in a vertical air-water test section of 80 mm inner diameter and 10 m length. The flow tube was installed at an L/D of 120 from the inlet of the test section. In the test, the pressure drop across the average bi-directional flow tube, the system pressure and the average void fraction were measured on the measuring plane. The fluid temperature and the injected mass flow rates of the air and water phases were also measured, by an RTD and two Coriolis flow meters, respectively. To calculate the phasic mass flow rates from the measured differential pressure and void fraction, the Chexal drift-flux correlation was used, and a new correlation for the momentum exchange factor was suggested. The test results show that the suggested instrumentation, using the measured void fraction and the Chexal drift-flux correlation, can predict the mass flow rates within 10% of the measured data
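A minimal sketch of the measuring principle (not the authors' correlation): the flow tube provides a differential pressure Δp, the void fraction gives a mixture density, and a Pitot-type relation G = K·sqrt(2·ρ_mix·Δp) yields the mass flux. The calibration factor K, the fluid densities and the homogeneous mixture-density model are all assumptions here:

```python
import math

def mixture_density(alpha, rho_gas, rho_liq):
    """Homogeneous mixture density from void fraction alpha (assumed model)."""
    return alpha * rho_gas + (1.0 - alpha) * rho_liq

def mass_flux(dp, alpha, rho_gas=1.2, rho_liq=998.0, K=1.0):
    """Pitot-type mass flux G = K*sqrt(2*rho_mix*dp); K assumed calibrated."""
    rho = mixture_density(alpha, rho_gas, rho_liq)
    return K * math.sqrt(2.0 * rho * dp)

g_single = mass_flux(dp=1000.0, alpha=0.0)   # single-phase water limit
g_voided = mass_flux(dp=1000.0, alpha=0.5)   # 50% void: lighter mixture
```

Splitting G into phasic mass flow rates then requires a slip model, which is where the Chexal drift-flux correlation and the momentum exchange factor enter in the actual work.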

8. Evaluation of 99 S1 lines of maize for inbreeding depression

    Ahmad, M.; Khan, S.; Ahmad, F.; Shah, N.H.; Akhtar, N.

    2010-01-01

The research was conducted to evaluate the performance of S1 lines of the maize variety Azam for inbreeding depression in different parameters. The variety was self-pollinated for one generation in the spring season, and in the next sowing season the 99 S1 lines obtained from selfing were sown together with the parental line. Days to silking, pollen shedding, plant height, ear height, ear length, ear diameter, number of ears/row, kernel rows/ear and 100-kernel weight showed inbreeding depression to varying degrees, while yield (kg/ha) showed severe inbreeding depression, averaging 362.08 kg/ha. The average inbreeding depression for days to silking and pollen shedding was 2.02 and 2.21 days, respectively, and for plant height and ear height 21.50 cm and 4.87 cm, respectively. For ear length, ear diameter, number of ears/row, kernel rows/ear and 100-grain weight, the average inbreeding depression was 1.80 cm, 0.2 cm, 2.5, 2.11 and 3.89 g, respectively. Grain yield was positively and significantly correlated with plant height, ear height and the yield components, and the maturity traits were positively and significantly correlated with each other. It is concluded that self-pollination affected nearly all the lines; however, some lines were affected severely while others tolerated inbreeding to some extent. The lines showing tolerance to inbreeding depression were selected for further maize breeding. (author)
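Inbreeding depression is conventionally expressed as the drop from the parent to the selfed generation, either absolute or relative; a tiny sketch (the parent yield below is invented, and only the 362.08 kg/ha average depression comes from the abstract):

```python
def inbreeding_depression(parent, s1):
    """Absolute and percent inbreeding depression of an S1 line vs its parent."""
    absolute = parent - s1
    percent = 100.0 * absolute / parent
    return absolute, percent

# Hypothetical parent yield (kg/ha); the 362.08 kg/ha absolute depression is
# the average reported in the abstract.
parent_yield = 3000.0
s1_yield = parent_yield - 362.08
dep_abs, dep_pct = inbreeding_depression(parent_yield, s1_yield)
```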

  9. Multiple Lines of Evidence

    Amidan, Brett G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Venzin, Alexander M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bramer, Lisa M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-06-03

This paper discusses the process of identifying factors that influence the contamination level of a given decision area and then determining the likelihood that the area remains unacceptable. This process is referred to as lines of evidence. These lines of evidence then serve as inputs for the stratified compliance sampling (SCS) method, which requires a decision area to be divided into strata based upon contamination expectations. This is done in order to focus sampling efforts within strata where contamination is more likely, and to use the domain knowledge about the likelihood of each stratum remaining unacceptable to buy down the number of samples necessary, if possible. Two different building scenarios were considered as an example (see Table 3.1). SME expertise was elicited concerning four lines-of-evidence factors (see Table 3.2): 1) the amount of contamination seen before decontamination, 2) post-decontamination air sampling information, 3) the applied decontaminant, and 4) the surface material. Statistical experimental design and logistic regression modelling were used to help determine the likelihood that an example stratum remained unacceptable for a given example scenario. The number of samples necessary for clearance was calculated by applying the SCS method to the example scenario, using the estimated likelihood of each stratum remaining unacceptable as determined with the lines-of-evidence approach. The commonly used simple random sampling (SRS) method was also used to calculate the number of samples necessary for clearance for comparison purposes. The lines-of-evidence approach with SCS resulted in a 19% to 43% reduction in the total number of samples necessary for clearance (see Table 3.6). The reduction depended upon the building scenario as well as the level of the percent-clean criteria. A sensitivity analysis was also performed showing how changing the estimated likelihoods of a stratum remaining unacceptable affects the number
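The SCS buy-down itself depends on the elicited stratum likelihoods, but the SRS baseline it is compared against is the standard acceptance-sampling count: the smallest n of all-clean samples with (p_clean)^n ≤ 1 - confidence. A sketch of that baseline (the formula is the textbook result, not taken from the report, and the parameter values are illustrative):

```python
import math

def srs_sample_size(p_clean, confidence):
    """Smallest n such that n all-clean samples demonstrate that at least a
    fraction p_clean of the decision area is clean at the given confidence
    (standard acceptance-sampling result, not the report's SCS method)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(p_clean))

n_99_95 = srs_sample_size(0.99, 0.95)   # 99% clean criterion, 95% confidence
n_95_95 = srs_sample_size(0.95, 0.95)   # 95% clean criterion, 95% confidence
```

The steep growth of n with the percent-clean criterion (299 samples at 99% vs 59 at 95%) is what makes the 19% to 43% stratified buy-down practically valuable.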

  10. Average gluon and quark jet multiplicities at higher orders

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS-bar factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS-bar scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q² terms by the renormalization group, in excellent agreement with the present world average.

  11. Image compression using moving average histogram and RBF network

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have the lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique reduces the color intensity levels using the moving average histogram, followed by correction of the color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low-resolution images for testing, but the proposed method has been tested on various image resolutions to allow a clear assessment of the technique. The proposed method has been tested on 35 images of varying resolution and compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio) and computational complexity. The outcome shows that the proposed methodology is a better trade-off in terms of compression ratio, PSNR, which determines the quality of the image, and computational complexity. (author)
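The paper's exact pipeline (including the RBF correction stage) is not reproduced here, but one plausible reading of the first stage, reducing intensity levels with a moving-average-smoothed histogram, can be sketched as follows (window size, level count and the stand-in image are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))   # stand-in grayscale image

# Grey-level histogram, smoothed with a length-5 moving average.
hist = np.bincount(img.ravel(), minlength=256).astype(float)
smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")

# Keep the 16 most populated smoothed levels as the reduced palette.
levels = np.sort(np.argsort(smooth)[-16:])

# Map every pixel to its nearest retained intensity level.
nearest = np.abs(img[..., None] - levels[None, None, :]).argmin(axis=-1)
reduced = levels[nearest]
```

The reduced image uses at most 16 intensity values, which is the kind of quantization loss the RBF network would then be trained to correct at reconstruction.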

  12. Average cross sections calculated in various neutron fields

    Shibata, Keiichi

    2002-01-01

Average cross sections have been calculated for the reactions contained in the dosimetry files JENDL/D-99, IRDF-90V2 and RRDF-98 in order to select the best data for the new library IRDF-2002. The neutron spectra used in the calculations are as follows: 1) ²⁵²Cf spontaneous fission spectrum (NBS evaluation), 2) ²³⁵U thermal fission spectrum (NBS evaluation), 3) Intermediate-energy Standard Neutron Field (ISNF), 4) Coupled Fast Reactivity Measurement Facility (CFRMF), 5) Coupled thermal/fast uranium and boron carbide spherical assembly (ΣΣ), 6) Fast neutron source reactor (YAYOI), 7) Experimental fast reactor (JOYO), 8) Japan Material Testing Reactor (JMTR), 9) d-Li neutron spectrum with a 2-MeV deuteron beam. Items 3)-7) represent fast neutron spectra, while JMTR is a light water reactor. The Q-value for the d-Li reaction mentioned above is 15.02 MeV; therefore, neutrons with energies up to 17 MeV can be produced in the d-Li reaction. The calculated average cross sections were compared with the measurements. Figures 1-9 show the ratios of the calculations to the experimental data. It is found from these figures that the ⁵⁸Fe(n,γ) cross section in JENDL/D-99 reproduces the measurements in the thermal and fast reactor spectra better than that in IRDF-90V2. (author)
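The spectrum-averaged cross section being computed here is σ̄ = ∫σ(E)φ(E)dE / ∫φ(E)dE; a toy numerical sketch (the Watt-spectrum parameters and the 1/v-like excitation function are illustrative, not an evaluation):

```python
import numpy as np

# Spectrum-averaged cross section: sigma_avg = ∫σ(E)φ(E)dE / ∫φ(E)dE.
E = np.linspace(1e-3, 20.0, 20000)                       # energy grid, MeV
phi = np.exp(-E / 0.988) * np.sinh(np.sqrt(2.249 * E))   # toy fission spectrum

def spectrum_average(sigma, phi):
    # Uniform grid, so plain sums suffice for the ratio of the two integrals.
    return np.sum(sigma * phi) / np.sum(phi)

sigma_toy = 1.0 / np.sqrt(E)      # toy 1/v-like excitation function
avg_toy = spectrum_average(sigma_toy, phi)
```

A quick sanity check of the normalization: a constant cross section must average to itself regardless of the spectrum shape.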

  13. Similarity law for Widom lines and coexistence lines.

    Banuti, D T; Raju, M; Ihme, M

    2017-05-01

    The coexistence line of a fluid separates liquid and gaseous states at subcritical pressures, ending at the critical point. Only recently, it became clear that the supercritical state space can likewise be divided into regions with liquidlike and gaslike properties, separated by an extension to the coexistence line. This crossover line is commonly referred to as the Widom line, and is characterized by large changes in density or enthalpy, manifesting as maxima in the thermodynamic response functions. Thus, a reliable representation of the coexistence line and the Widom line is important for sub- and supercritical applications that depend on an accurate prediction of fluid properties. While it is known for subcritical pressures that nondimensionalization with the respective species critical pressures p_{cr} and temperatures T_{cr} only collapses coexistence line data for simple fluids, this approach is used for Widom lines of all fluids. However, we show here that the Widom line does not adhere to the corresponding states principle, but instead to the extended corresponding states principle. We resolve this problem in two steps. First, we propose a Widom line functional based on the Clapeyron equation and derive an analytical, species specific expression for the only parameter from the Soave-Redlich-Kwong equation of state. This parameter is a function of the acentric factor ω and compares well with experimental data. Second, we introduce the scaled reduced pressure p_{r}^{*} to replace the previously used reduced pressure p_{r}=p/p_{cr}. We show that p_{r}^{*} is a function of the acentric factor only and can thus be readily determined from fluid property tables. It collapses both subcritical coexistence line and supercritical Widom line data over a wide range of species with acentric factors ranging from -0.38 (helium) to 0.34 (water), including alkanes up to n-hexane. By using p_{r}^{*}, the extended corresponding states principle can be applied within
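The role of the acentric factor in such a Clapeyron-type functional can be illustrated with the textbook construction (this is the standard Edmister-style derivation, not necessarily the paper's exact SRK-derived expression): writing ln p_r = A·(1 - 1/T_r) and imposing the definition ω = -log10 p_r^sat(T_r = 0.7) - 1 fixes A = (7/3)·ln(10)·(1 + ω).

```python
import math

def clapeyron_parameter(omega):
    """Slope A in ln(p_r) = A*(1 - 1/T_r), fixed by the acentric-factor
    definition at T_r = 0.7 (textbook construction, not the paper's SRK fit)."""
    return (7.0 / 3.0) * math.log(10.0) * (1.0 + omega)

def reduced_pressure(T_r, omega):
    return math.exp(clapeyron_parameter(omega) * (1.0 - 1.0 / T_r))

# Built-in consistency check: at T_r = 0.7 the construction returns
# p_r = 10**-(1 + omega) by definition of the acentric factor.
p_simple = reduced_pressure(0.7, 0.0)     # "simple fluid" (omega = 0): 0.1
p_water = reduced_pressure(0.7, 0.344)    # water-like acentric factor
```

This makes the species dependence explicit: a single ω-dependent slope collapses the coexistence-line data, which is the same structural idea the paper extends to the supercritical Widom line via its scaled reduced pressure.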

  14. Analytical expressions for conditional averages: A numerical test

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  15. Experimental demonstration of squeezed-state quantum averaging

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  16. Studies of solar magnetic fields. V. The true average field strengths near the poles

    Howard, R. [Hale Observatories, Pasadena, Calif. (USA)]

    1977-05-01

    An estimate of the average magnetic field strength at the poles of the Sun from Mount Wilson measurements is made by comparing low latitude magnetic measurements in the same regions made near the center of the disk and near the limb. There is still some uncertainty because the orientation angle of the field lines in the meridional plane is unknown, but the most likely possibility is that the true average field strengths are about twice the measured values (0-2 G), with an absolute upper limit on the underestimation of the field strengths of about a factor of 5. The measurements refer to latitudes below about 80°.

  17. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    LiMin Wang

    2014-01-01

    Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of seminaive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
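
    For readers unfamiliar with AODE itself (the paper modifies it), the base estimator averages one-dependence models, each with a different superparent attribute. A minimal count-based sketch with Laplace smoothing (the toy data set and smoothing variant are invented for illustration):

```python
def aode_scores(X, y, query, laplace=1.0):
    """Averaged one-dependence estimator (AODE), minimal sketch.

    Each attribute in turn acts as superparent: the class score is the
    average over j of  P(c, x_j) * prod_{i != j} P(x_i | c, x_j),
    estimated from counts with Laplace smoothing.
    """
    n, d = len(X), len(X[0])
    classes = sorted(set(y))
    values = [sorted({row[j] for row in X}) for j in range(d)]
    scores = {}
    for c in classes:
        rows_c = [row for row, lab in zip(X, y) if lab == c]
        total = 0.0
        for j in range(d):  # superparent attribute
            n_cj = sum(1 for row in rows_c if row[j] == query[j])
            p_cj = (n_cj + laplace) / (n + laplace * len(classes) * len(values[j]))
            lik = p_cj
            for i in range(d):
                if i == j:
                    continue
                n_icj = sum(1 for row in rows_c
                            if row[j] == query[j] and row[i] == query[i])
                lik *= (n_icj + laplace) / (n_cj + laplace * len(values[i]))
            total += lik
        scores[c] = total / d
    return scores

# Toy dataset: class 1 tends to have both attributes equal to 'a'.
X = [('a', 'a'), ('a', 'a'), ('a', 'b'), ('b', 'b'), ('b', 'b'), ('b', 'a')]
y = [1, 1, 1, 0, 0, 0]
s = aode_scores(X, y, ('a', 'a'))
print(max(s, key=s.get))  # -> 1
```

    The paper's contribution replaces the "average over all superparents" step with a selected subset; the scoring of each one-dependence model is unchanged.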

  18. Averaging scheme for atomic resolution off-axis electron holograms.

    Niermann, T; Lehmann, M

    2014-08-01

    All micrographs are limited by shot noise, which is intrinsic to the detection process of electrons. For beam-insensitive specimens this limitation can in principle easily be circumvented by prolonged exposure times. However, in the high-resolution regime several instrumental instabilities limit the applicable exposure time. Particularly in the case of off-axis holography, the holograms are highly sensitive to the position and voltage of the electron-optical biprism. We present a novel reconstruction algorithm to average series of off-axis holograms while compensating for specimen drift, biprism drift, drift of biprism voltage, and drift of defocus, all of which might cause problematic changes from exposure to exposure. We show an application of the algorithm also utilizing the possibilities of double-biprism holography, which results in a high-quality exit-wave reconstruction with 75 pm resolution at a very high signal-to-noise ratio.

  19. Suicide attempts, platelet monoamine oxidase and the average evoked response

    Buchsbaum, M.S.; Haier, R.J.; Murphy, D.L.

    1977-01-01

    The relationship between suicides and suicide attempts and two biological measures, platelet monoamine oxidase levels (MAO) and average evoked response (AER) augmenting was examined in 79 off-medication psychiatric patients and in 68 college student volunteers chosen from the upper and lower deciles of MAO activity levels. In the patient sample, male individuals with low MAO and AER augmenting, a pattern previously associated with bipolar affective disorders, showed a significantly increased incidence of suicide attempts in comparison with either non-augmenting low MAO or high MAO patients. Within the normal volunteer group, all male low MAO probands with a family history of suicide or suicide attempts were AER augmenters themselves. Four completed suicides were found among relatives of low MAO probands whereas no high MAO proband had a relative who committed suicide. These findings suggest that the combination of low platelet MAO activity and AER augmenting may be associated with a possible genetic vulnerability to psychiatric disorders. (author)

  20. Forecasting Construction Tender Price Index in Ghana using Autoregressive Integrated Moving Average with Exogenous Variables Model

    Ernest Kissi

    2018-03-01

    Prices of construction resources keep fluctuating due to the unstable economic conditions experienced over the years. Clients' knowledge of the financial commitment toward their intended project remains the basis for their final decision. The use of a construction tender price index provides a realistic estimate at the early stage of a project. The tender price index (TPI) is influenced by various economic factors, and several statistical techniques have been employed to forecast it, including regression, time series, and vector error correction models. In recent times, however, the integrated modelling approach has been gaining popularity due to its powerful predictive accuracy. Thus, in line with this assumption, the aim of this study is to apply an autoregressive integrated moving average with exogenous variables (ARIMAX) model to TPI. The results showed that the ARIMAX model has better predictive ability than single-model approaches. The study further confirms the position of previous research on the need to use integrated model techniques in forecasting TPI. This model will assist practitioners in forecasting future values of the tender price index. Although the study focuses on the Ghanaian economy, the findings can be broadly applicable to other developing countries which share similar economic characteristics.
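
    A full ARIMAX fit involves differencing and moving-average terms; the exogenous-regressor idea at its core can be shown with a stripped-down ARX(1) model fitted by ordinary least squares. A self-contained sketch on synthetic data (the model order and coefficients are illustrative assumptions, not the study's fitted model):

```python
def fit_arx1(y, x):
    """Least-squares fit of y_t = c + a*y_{t-1} + b*x_t (an ARX(1) sketch).

    Stands in for the full ARIMAX machinery: one autoregressive lag plus
    one exogenous regressor, solved via the 3x3 normal equations.
    """
    rows = [(1.0, y[t - 1], x[t]) for t in range(1, len(y))]
    target = y[1:]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * t for r, t in zip(rows, target)) for i in range(3)]
    # Gauss-Jordan elimination on the normal equations.
    for i in range(3):
        piv = ata[i][i]
        for j in range(3):
            ata[i][j] /= piv
        atb[i] /= piv
        for k in range(3):
            if k != i:
                f = ata[k][i]
                for j in range(3):
                    ata[k][j] -= f * ata[i][j]
                atb[k] -= f * atb[i]
    return atb  # (c, a, b)

# Synthetic index: y_t = 1 + 0.5*y_{t-1} + 2*x_t, recovered exactly from data.
xs = [float(i % 4) for i in range(200)]
ys = [0.0]
for t in range(1, 200):
    ys.append(1.0 + 0.5 * ys[t - 1] + 2.0 * xs[t])
c, a, b = fit_arx1(ys, xs)
print(round(c, 2), round(a, 2), round(b, 2))  # -> 1.0 0.5 2.0
```

    In practice one would use a dedicated library routine (e.g. a SARIMAX-style estimator with an exogenous-regressor argument) rather than hand-rolled normal equations; the sketch only shows where the exogenous term enters.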

  1. Analysis of subchannel effects and their treatment in average channel PWR core models

    Cuervo, D.; Ahnert, C.; Aragones, J.M.

    2004-01-01

    Neutronic/thermal-hydraulic coupling is currently performed mainly with whole-plant thermal-hydraulic codes using one channel per assembly or, in more detailed cases, per quarter-assembly. To extract the safety-limit variables, an additional calculation then has to be performed with a thermal-hydraulic subchannel code, in an embedded or off-line manner, which increases the calculation time. Another problem of this separate analysis of the whole core and the hot channel is that the whole-core calculation does not resolve the real problem: the homogenization process carried out for the whole-core analysis modifies the variable values, so that some magnitudes are over- or under-predicted and the problem being solved is no longer the original one. The purpose of the work being developed is to investigate the effects of the averaging process on the results obtained in the whole-core analysis and to develop corrections that may be included in this analysis to obtain results closer to those of a detailed subchannel analysis. This paper shows the results obtained for a sample case and the conclusions for future work. (author)

  2. The flattening of the average potential in models with fermions

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  3. Proceedings of CanWEA's 21st annual conference and trade show 2005. On-line Ed.

    2005-01-01

    Wind energy presents significant potential in the Canadian energy mix. The Canadian Wind Energy Association has established a national wind target of 10,000 MW of wind power capacity by 2010. The focus of this conference was on federal wind policy, as well as issues concerning research and development and interconnection with electrical grids. Small wind policy developments with details of bylaws and funding scenarios were also examined, as well as various provincial policies and initiatives. Other topics of discussion included energy capture; technical challenges in remote communities; wind and First Nations communities; dynamic analyses of various wind-hydro systems; wind energy as a social development tool; wind energy and the development of a greenhouse gas offset system; environmental assessment guidance with reference to birds; and wind energy and industrial development. In addition, new wind power technologies were examined. Various provincial policy updates were presented. Issues concerning wind forecasting and the modelling of climatological reference data were discussed. Marketing strategies for wind power producers were presented. Canadian grid interconnection standards were reviewed and issues concerning education and training were examined in relation to the industry's projected growth. Various international policies and strategies were reviewed. Insurance and risk assessment strategies in the wind power industry were examined. Ninety-four presentations were given at this conference, of which 23 have been catalogued separately for inclusion in this database.

  4. Durum Wheat (Triticum durum Desf.) Lines Show Different Abilities to Form Masked Mycotoxins under Greenhouse Conditions

    Martina Cirlini

    2013-12-01

    Deoxynivalenol (DON) is the most prevalent trichothecene in Europe and its occurrence is associated with infections of Fusarium graminearum and F. culmorum, causal agents of Fusarium head blight (FHB) on wheat. Resistance to FHB is a complex character and high variability occurs in the relationship between DON content and FHB incidence. DON conjugation to glucose (DON-3-glucoside, D3G) is the primary plant mechanism for resistance towards DON accumulation. Although this mechanism has already been described in bread wheat and barley, no data have been reported so far for durum wheat, a key cereal in the pasta production chain. To address this issue, the ability of durum wheat to detoxify and convert deoxynivalenol into D3G was studied under controlled greenhouse conditions. Four durum wheat varieties (Svevo, Claudio, Kofa and Neodur) were assessed for DON-D3G conversion; Sumai 3, a bread wheat variety carrying a major QTL for FHB resistance (QFhs.ndsu-3B), was used as a positive control. The data reported hereby clearly demonstrate the ability of durum wheat to convert deoxynivalenol into its conjugated form, D3G.

  5. Mosquito cell line C6/36 shows resistance to Cyt1Aa6

    Zhang, L.; Huang, E.; Tang, B.; Guan, X.; Gelbič, Ivan

    2012-01-01

    Roč. 50, č. 4 (2012), s. 265-269 ISSN 0019-5189 R&D Projects: GA MŠk 2B08003 Grant - others:National Nature Science Foundation of China(CN) 31071745; Science Foundation of the Ministry of Education of China(CN) 20093515110010; Science Foundation of the Ministry of Education of China(CN) 20093515120010; Transformation Fund for Agricultural Science and Technology Achievements(CN) 2010GB2C400212; Fujian Colleges and Universities for the Development of the West Strait(CN) 0b08b005 Institutional research plan: CEZ:AV0Z50070508 Keywords : Bacillus thuringiensis * C6/36 cells * indirect immunofluorescence assay Subject RIV: ED - Physiology Impact factor: 1.195, year: 2012 http://nopr.niscair.res.in/bitstream/123456789/13748/1/IJEB%2050(4)%20265-269.pdf

  6. 20 CFR 404.220 - Average-monthly-wage method.

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  7. A time-averaged cosmic ray propagation theory

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.)
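
    The distinction between the two averaging procedures can be illustrated numerically: for an ergodic stationary process, the time average along one long realisation agrees with the ensemble average over many independent realisations. A sketch with an AR(1) process (the process and its parameters are illustrative, not drawn from the paper):

```python
import random

random.seed(1)

def ar1_series(n, phi=0.5, mean=3.0):
    """Stationary AR(1) process with the given long-run mean."""
    x = mean
    out = []
    for _ in range(n):
        x = mean + phi * (x - mean) + random.gauss(0.0, 1.0)
        out.append(x)
    return out

# Time average: one long realisation followed in time.
time_avg = sum(ar1_series(200_000)) / 200_000

# Ensemble average: many independent realisations sampled at a fixed time.
ensemble = [ar1_series(50)[-1] for _ in range(5_000)]
ensemble_avg = sum(ensemble) / len(ensemble)

print(round(time_avg, 1), round(ensemble_avg, 1))  # both near 3.0
```

    The paper's point is precisely that for cosmic-ray transport the appropriate ensemble is not obvious, which makes the directly observable time average the safer object to compute.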

  8. 7 CFR 51.2561 - Average moisture content.

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  9. Evaluation of soft x-ray average recombination coefficient and average charge for metallic impurities in beam-heated plasmas

    Sesnic, S.S.; Bitter, M.; Hill, K.W.; Hiroe, S.; Hulse, R.; Shimada, M.; Stratton, B.; von Goeler, S.

    1986-05-01

    The soft x-ray continuum radiation in TFTR low density neutral beam discharges can be much lower than its theoretical value obtained by assuming a corona equilibrium. This reduced continuum radiation is caused by an ionization equilibrium shift toward lower states, which strongly changes the value of the average recombination coefficient of metallic impurities, γ̄, even for only slight changes in the average charge, Z̄. The primary agent for this shift is the charge exchange between the highly ionized impurity ions and the neutral hydrogen, rather than impurity transport, because the central density of the neutral hydrogen is strongly enhanced at lower plasma densities with intense beam injection. In the extreme case of low density, high neutral beam power TFTR operation (energetic ion mode) the reduction in γ̄ can be as much as one-half to two-thirds. We calculate the parametric dependence of γ̄ and Z̄ for Ti, Cr, Fe, and Ni impurities on neutral density (equivalent to beam power), electron temperature, and electron density. These values are obtained by using either a one-dimensional impurity transport code (MIST) or a zero-dimensional code with a finite particle confinement time. As an example, we show the variation of γ̄ and Z̄ in different TFTR discharges

  10. The relationship between dynamic and average flow rates of the coolant in the channels of complex shape

    Fedoseev, V. N.; Pisarevsky, M. I.; Balberkina, Y. N.

    2018-01-01

    This paper presents the interconnection between the dynamic and average flow rates of the coolant in channels of complex geometry, which forms the basis of a model generalizing experimental data on heat transfer in various porous structures. Formulas for calculating the heat transfer of fuel rods in transversal fluid flow are obtained with the use of the abovementioned model. It is shown that the model describes a limiting case of separated flows in twisting channels, where the coolant constantly changes its flow direction and mixes with great intensity in the communicating channels. It is suggested that the dynamic velocity be determined from the pumping power. The coefficient of proportionality in the general case depends on the geometry of the channel and the Reynolds number (Re). A calculation formula for the coefficient of proportionality for narrow in-line rod bundles is provided. The paper presents a comparison of experimental data and calculated values, which shows the usability of the suggested models and calculation formulas.

  11. Radiosensitivity of mesothelioma cell lines

    Haekkinen, A.M. [Dept. of Oncology, Univ. Central Hospital, Helsinki (Finland); Laasonen, A. [Dept. of Pathology, Central Hospital of Etelae-Pohjanmaa, Seinaejoki (Finland); Linnainmaa, K. [Dept. of Industrial Hygiene and Toxicology, Inst. of Occupational Health, Helsinki (Finland); Mattson, K. [Dept. Pulmonary Medicine, Univ. Central Hospital, Helsinki (Finland); Pyrhoenen, S. [Dept. of Oncology, Univ. Central Hospital, Helsinki (Finland)

    1996-10-01

    The present study was carried out in order to examine the radiosensitivity of malignant pleural mesothelioma cell lines. Cell kinetics, radiation-induced delay of the cell cycle and DNA ploidy of the cell lines were also determined. For comparison, a HeLa and a human foetal fibroblast cell line were simultaneously explored. Six previously cytogenetically and histologically characterized mesothelioma tumor cell lines were used. A rapid thiazolyl blue microtiter (MTT) assay was used to analyze radiosensitivity, and the cell kinetics and DNA ploidy of the cultured cells were determined by flow cytometry. The survival fraction after a dose of 2 Gy (SF2), the parameters α and β of the linear-quadratic model (LQ model) and the mean inactivation dose (D_MID) were also estimated. The DNA index of four cell lines equaled 1.0, and of the other two cell lines 1.5 and 1.6. The mesothelioma cell lines showed great variation in radiosensitivity. The mean survival fraction after a radiation dose of 2 Gy (SF2) was 0.60, ranging from 0.36 to 0.81, and the mean α value was 0.26 (range 0.083-0.48). The SF2 of the most sensitive, diploid mesothelioma cell line was 0.36, less than that of the foetal fibroblast cell line (0.49). The survival fractions (0.81 and 0.74) of the two most resistant cell lines, which were also aneuploid, were equal to that of the HeLa cell line (0.78). The α/β ratios of the most sensitive cell lines were almost an order of magnitude greater than those of the two most resistant cell lines. The radiation-induced delay of the most resistant aneuploid cell line was similar to that of HeLa cells, but in the most sensitive (diploid) cells there was practically no entry into the G1 phase during the 36 h following the 2 Gy radiation dose. (orig.)
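
    The quantities estimated above follow from the linear-quadratic model, S(D) = exp(-αD - βD²): SF2 is S evaluated at 2 Gy, and the mean inactivation dose is the area under the survival curve. A minimal sketch with illustrative parameter values (not fitted to these cell lines):

```python
import math

def lq_survival(dose, alpha, beta):
    """Linear-quadratic cell survival: S(D) = exp(-alpha*D - beta*D^2)."""
    return math.exp(-alpha * dose - beta * dose * dose)

# Illustrative parameters, not those of the mesothelioma lines above:
alpha, beta = 0.20, 0.04          # Gy^-1 and Gy^-2, i.e. alpha/beta = 5 Gy
sf2 = lq_survival(2.0, alpha, beta)

# Mean inactivation dose: the area under the survival curve,
# D_MID = integral of S(D) dD, here by the midpoint rule up to 50 Gy.
h = 0.01
d_mid = sum(lq_survival((k + 0.5) * h, alpha, beta) for k in range(5000)) * h

print(round(sf2, 3))   # exp(-0.56) ≈ 0.571
print(round(d_mid, 2))
```

    A large α/β ratio means the linear term dominates at clinical doses, which is why the sensitive lines in this study combine low SF2 with high α/β.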

  12. Self-averaging correlation functions in the mean field theory of spin glasses

    Mezard, M.; Parisi, G.

    1984-01-01

    In the infinite range spin glass model, we consider the staggered spin σ_λ associated with a given eigenvector of the interaction matrix. We show that the thermal average of σ_λ² is a self-averaging quantity and we compute it

  13. A constant travel time budget? In search for explanations for an increase in average travel time

    Rietveld, P.; Wee, van B.

    2002-01-01

    Recent research suggests that during the past decades the average travel time of the Dutch population has probably increased. However, different data sources show different levels of increase. Possible causes of the increase in average travel time are presented here. Increased incomes have

  14. Cable line engineering

    Jang, Hak Sin; Kim, Sin Yeong

    1998-02-01

    This book is about cable line engineering. It comprises nine chapters, which deal with: a summary of cable communication, including the methods and processes of cable and optical communication; line constants of transmission, covering the primary constants, reflection and crosstalk; communication cable line types such as flat cable, coaxial cable and loaded cable; installation of communication lines, with the types and facilities of aerial lines; construction methods for communication line facilities; measurement of communication lines; a summary of carrier communication; PCM communication, with an introduction to regenerative relay systems, sampling and quantization; and electric communication services and general information networks, including mobile communication techniques and satellite communication systems.

  15. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential

    Edwards, James P.; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).
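
    The path-averaged potential can be estimated by brute force: sample paths from an appropriate ensemble and average V along each. A rough sketch using a discretised Brownian bridge as a stand-in for the path ensemble (the potential, end points and discretisation are illustrative assumptions, not the paper's construction):

```python
import random, math

random.seed(2)

def brownian_bridge(x0, x1, steps, dt):
    """Discretised Brownian bridge from x0 to x1 over total time steps*dt."""
    path = [x0]
    for k in range(1, steps):
        remaining = steps - k
        mean = path[-1] + (x1 - path[-1]) / (remaining + 1)
        path.append(random.gauss(mean, math.sqrt(dt * remaining / (remaining + 1))))
    path.append(x1)
    return path

def path_averaged_potential(V, path):
    # Time average of the potential along one trajectory:
    # (1/T) * integral of V(x(t)) dt, discretised as a mean over samples.
    return sum(V(x) for x in path) / len(path)

V = lambda x: 0.5 * x * x          # harmonic potential
samples = [path_averaged_potential(V, brownian_bridge(0.0, 0.0, 100, 0.01))
           for _ in range(2_000)]
mean_v = sum(samples) / len(samples)
# Analytically the mean is 1/12 ≈ 0.083 for a unit-time bridge from 0 to 0
# in this potential; the histogram of `samples` sketches the distribution.
print(round(mean_v, 2))
```

    The full distribution of `samples`, not just its mean, is the analogue of the path-averaged-potential distribution the transform describes.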

  16. Using Bayes Model Averaging for Wind Power Forecasts

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
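
    The core of the BMA construction, a predictive PDF that is a weighted average of member PDFs with weights reflecting training-period skill, can be sketched compactly. Here the weights come from plain Gaussian likelihoods with a fixed spread rather than the EM estimation of Raftery et al., and all numbers are invented:

```python
import math

def bma_weights(members, obs, sigma):
    """Posterior-style weights from member likelihoods over a training period.

    Simplified stand-in for the EM step: each member's weight is
    proportional to its Gaussian likelihood of the observations.
    """
    logliks = []
    for fcsts in members:
        ll = sum(-0.5 * ((o - f) / sigma) ** 2 for o, f in zip(obs, fcsts))
        logliks.append(ll)
    m = max(logliks)
    ws = [math.exp(l - m) for l in logliks]
    s = sum(ws)
    return [x / s for x in ws]

def bma_pdf(x, member_forecasts, weights, sigma):
    """BMA predictive PDF: weighted average of the members' Gaussian PDFs."""
    norm = sigma * math.sqrt(2.0 * math.pi)
    return sum(w * math.exp(-0.5 * ((x - f) / sigma) ** 2) / norm
               for w, f in zip(weights, member_forecasts))

# Training period: member 0 tracks the observations, member 1 is biased high.
obs = [5.0, 6.0, 5.5, 5.8]
members = [[5.1, 5.9, 5.6, 5.7], [7.0, 8.2, 7.4, 7.9]]
w = bma_weights(members, obs, sigma=1.0)
print(w[0] > w[1])  # True: the skilful member dominates the average
```

    The wind-power-specific difficulties the paper addresses (zero-production runs, the power curve) show up exactly in this likelihood step, where a Gaussian member PDF stops being a reasonable choice.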

  17. Optimal Reinsertion of Cancelled Train Lines

    Groth, Julie Jespersen; Clausen, Jens

    2006-01-01

    One recovery strategy in case of a major disruption in a rail network is to cancel all trains on a specific line of the network. When the disturbance has ended, the cancelled line must be reinserted as soon as possible. In this article we present a mixed integer programming (MIP) model for calculat… The model finds the optimal solution in an average of 0.5 CPU seconds in each test case.

  18. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    Craft, David, E-mail: dcraft@partners.org; Papp, Dávid; Unkelbach, Jan [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2014-02-15

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.
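
    For static-beam IMRT the statement above reduces to a simple pointwise operation: averaging the leaf positions of corresponding control points averages the fluence maps exactly. A minimal sketch (the data layout, and the assumption that all plans share the same control-point sampling, are illustrative simplifications of the paper's timing-aware averaging):

```python
def average_sliding_window_plans(plans):
    """Pointwise average of leaf trajectories across sliding-window plans.

    Each plan is a list of control points; each control point is a list of
    (left, right) leaf positions per leaf pair.  Averaging trajectories
    position-by-position yields a deliverable plan whose fluence map is
    the average of the input fluence maps (exact for static-beam IMRT).
    """
    n = len(plans)
    averaged = []
    for cps in zip(*plans):                      # same control point in each plan
        cp = []
        for pairs in zip(*cps):                  # same leaf pair in each plan
            left = sum(p[0] for p in pairs) / n
            right = sum(p[1] for p in pairs) / n
            cp.append((left, right))
        averaged.append(cp)
    return averaged

plan_a = [[(0.0, 2.0), (1.0, 3.0)], [(0.5, 2.5), (1.5, 3.5)]]
plan_b = [[(1.0, 4.0), (2.0, 5.0)], [(1.5, 4.5), (2.5, 5.5)]]
print(average_sliding_window_plans([plan_a, plan_b])[0][0])  # (0.5, 3.0)
```

    Because the averaged trajectories are themselves valid leaf trajectories, the navigated plan stays deliverable, which is the point of the method: no re-sequencing step, hence no sequencing-induced dose degradation.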

  1. Analysis of emergency department waiting lines

    Urška Močnik

    2014-10-01

    Background: A steady increase in the number of patients seeking medical assistance has recently been observed at the emergency department of the health center under study. This has led to increased waiting times for patients. The management of the health center has been considering certain measures to remedy the situation. One proposed solution is to add an additional physician to the emergency department. A computer model was constructed to simulate waiting lines and analyze the economic feasibility of employing an additional physician. Aim: This paper analyzes the waiting lines at the emergency department and performs an economic feasibility study to determine whether adding an additional physician to the department would be economically justified. Methods: Data about waiting times at the emergency department were collected to study the situation. For each patient, the arrival time at the waiting room and the starting and ending times of the examination were registered. The data were collected from 13 June 2011 to 25 September 2011. The sample included data on 65 nightly standbys, nine standbys on Saturdays, and 16 standbys on Sundays. Due to incomplete entries, data for nine weekly standbys and six Saturday standbys were excluded from the sample. Based on the data collected, we calculated the waiting and examination times per patient, the average number of patients, the average waiting time, the average examination time, the share of active standby teams in total standby time, and the number of patients in different time periods. The study involved 1,039 patients. Using a synthesis method, we designed a computer model of waiting lines and economic feasibility. The model was validated using comparative analysis. A what-if analysis was performed using computer simulations with various scenarios to consider the outcomes of decision alternatives. We applied economic analysis to select the best possible solution. Results: The research results
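
    The staffing question can be framed with standard queueing formulas: the Erlang C expression gives the mean wait in an M/M/c system, so one can compare one physician against two. A sketch with invented arrival and service rates (the clinic's actual simulation model and data are not given here):

```python
import math

def mmc_wait(lam, mu, c):
    """Mean wait in queue W_q for an M/M/c system (Erlang C formula)."""
    rho = lam / (c * mu)
    assert rho < 1, "queue is unstable"
    a = lam / mu
    p0 = 1.0 / (sum(a ** k / math.factorial(k) for k in range(c))
                + a ** c / (math.factorial(c) * (1 - rho)))
    erlang_c = a ** c / (math.factorial(c) * (1 - rho)) * p0
    return erlang_c / (c * mu - lam)

# Illustrative numbers, not the clinic's: 5 arrivals/h, 10-min examinations.
lam, mu = 5.0, 6.0
print(round(60 * mmc_wait(lam, mu, 1), 1), "min with one physician")   # 50.0
print(round(60 * mmc_wait(lam, mu, 2), 1), "min with two physicians")  # 2.1
```

    The dramatic drop when a second server is added at high utilisation is typical, and is the quantitative effect an economic feasibility study weighs against the cost of the extra physician.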

  2. Averaging and sampling for magnetic-observatory hourly data

    J. J. Love

    2010-11-01

    A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
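
    The aliasing-versus-averaging trade-off is easy to demonstrate: a sub-hourly oscillation passes almost untouched into hourly spot samples but is strongly attenuated by a 1-h boxcar average. A synthetic sketch (the oscillation period and amplitude are arbitrary choices, not observatory data):

```python
import math

# One day of synthetic 1-min data: a fast ~37-min oscillation standing in
# for high-frequency geomagnetic variation.
minutes = range(24 * 60)
wiggle = [2.0 * math.sin(2 * math.pi * m / 37.0) for m in minutes]

# Instantaneous hourly "spot" value: the single sample taken on the hour.
spot_leak = max(abs(wiggle[h * 60]) for h in range(24))

# Simple 1-h "boxcar" average over the 60 minutes of each hour.
box_leak = max(abs(sum(wiggle[h * 60:(h + 1) * 60]) / 60.0) for h in range(24))

# The boxcar strongly attenuates the sub-hourly component, while spot
# sampling lets it alias into the hourly series at nearly full amplitude.
print(round(spot_leak, 2), round(box_leak, 2))
```

    Repeating this with a slow daily component added shows the complementary effect: the boxcar slightly distorts the amplitude of the slow signal, which is the "amplitude distortion" term in the trade-off discussed above.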

  3. Average Rate of Heat-Related Hospitalizations in 23 States, 2001-2010

    U.S. Environmental Protection Agency — This map shows the 2001–2010 average rate of hospitalizations classified as “heat-related” by medical professionals in 23 states that participate in CDC’s...

  4. Line Creep in Paper Peeling

    Rosti J.

    2010-06-01

    Full Text Available We have studied experimentally the dynamics of the separation of a sheet of paper into two halves in a peeling configuration. The experimental setup consists of a peeling device in which a fracture front is driven along the plane of the paper with a constant force. The theoretical picture is that of an elastic line interacting with a random landscape of fracture toughness. We compare the results with theoretical simulations in several respects. One recent finding concerns the autocorrelation function of the average front position. The experimental data produce so-called cusps, or singularities, in the correlation function, as predicted by functional renormalization group theory for elastic lines. Comparisons with simulations using either a short-range or a long-range elastic kernel demonstrate that the latter agrees with the experimental observations, as expected.

  5. A Novel Fault Line Selection Method Based on Improved Oscillator System of Power Distribution Network

    Xiaowei Wang

    2014-01-01

    Full Text Available A novel fault line selection method based on an improved oscillator system (IOS) is presented. First, the IOS is established from a mathematical model in which the transient zero-sequence current (TZSC) replaces the built-in driving signal of a Duffing chaotic oscillator, with appropriate parameters selected. Then, each line's TZSC is decomposed by a db10 wavelet packet, the characteristic frequency band (CFB) with the maximum energy is selected, and the CFB is fed into the IOS. Finally, the maximum and average chaotic distances on the phase trajectory are used to identify the faulty line. Simulation results show that the proposed method accurately distinguishes faulty from healthy lines against a strongly noisy background. The non-detection zones of the proposed method are also discussed.

  6. Strain fields and line energies of dislocations in uranium dioxide

    Parfitt, David C; Bishop, Clare L; Wenman, Mark R; Grimes, Robin W

    2010-01-01

    Computer simulations are used to investigate the stability of typical dislocations in uranium dioxide. We explain in detail the methods used to produce the dislocation configurations and calculate the line energy and Peierls barrier for pure edge and screw dislocations with the shortest Burgers vector, 1/2⟨110⟩. The easiest slip system is found to be the {100}⟨110⟩ system for stoichiometric UO2, in agreement with experimental observations. We also examine the different strain fields associated with these line defects and the close agreement between the strain field predicted by atomic-scale models and the application of elastic theory. Molecular dynamics simulations are used to investigate the processes of slip that may occur for the three different edge dislocation geometries, and nudged elastic band calculations are used to establish a value for the Peierls barrier, showing the possible utility of the method in investigating both thermodynamic average behaviour and dynamic processes such as creep and plastic deformation.

  7. Confinement, average forces, and the Ehrenfest theorem for a one ...

    Pramana – Journal of Physics, Volume 80, Issue 5. A free particle moving on the entire real line is permanently confined to a line segment or 'a box' (this situation is achieved by taking the limit V0 → ∞ in a finite well potential). This case is ...

  8. Safety Impact of Average Speed Control in the UK

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    Historically, automatic speed control was point-based, but in recent years a potentially more effective alternative has been introduced. This method is based upon records of drivers' average travel speed over selected sections of road and is normally called average speed control. ... The study demonstrates that the introduction of average speed control in the UK results in statistically significant and substantial reductions in both speed and the number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.

  9. on the performance of Autoregressive Moving Average Polynomial

    Timothy Ademakinwa

    Polynomial Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ... Global Journal of Mathematics and Statistics, Vol. 1. ... Business and Economic Research Center.

  10. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
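    Time synchronous averaging of the kind compared in this record can be sketched as follows, using linear interpolation to resample each shaft revolution onto a common angular grid; the sample rate, shaft speed, and signal model are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 5000.0        # sample rate, Hz (illustrative)
f_shaft = 37.3     # shaft speed, Hz; deliberately a non-integer number of samples/rev
n_rev = 200        # revolutions available for averaging
t = np.arange(int(fs * n_rev / f_shaft)) / fs

# Deterministic gear-mesh-like component locked to the shaft, buried in noise.
clean = np.sin(2 * np.pi * 8 * f_shaft * t)              # 8 "teeth" per revolution
x = clean + rng.normal(0.0, 1.0, t.size)

# Time synchronous average: resample every revolution onto a common angular
# grid (linear interpolation here) and average the resampled revolutions.
n_bins = 128
tsa = np.zeros(n_bins)
count = 0
for k in range(n_rev):
    t_rev = (k + np.arange(n_bins) / n_bins) / f_shaft   # sample times within rev k
    if t_rev[-1] > t[-1]:
        break
    tsa += np.interp(t_rev, t, x)
    count += 1
tsa /= count

# Averaging N revolutions cuts the noise variance by ~1/N while preserving
# the shaft-synchronous waveform.
ref = np.sin(2 * np.pi * 8 * np.arange(n_bins) / n_bins)
print("residual std after TSA:", np.std(tsa - ref))
```

    Swapping the linear `np.interp` for higher-order interpolation is exactly the kind of variation such a comparison study would evaluate.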

  11. Light-cone averaging in cosmology: formalism and applications

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted "geodesic light-cone" coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called "redshift drift" in a generic inhomogeneous Universe.

  12. VT Electric Transmission Line Corridors - corridor lines

    Vermont Center for Geographic Information — (Link to Metadata) The ELTRN layer depicts electric transmission line corridors in Vermont. Various methods have been used to digitize features. The data layer...

  13. The transcriptional diversity of 25 Drosophila cell lines

    Cherbas, Lucy [Indiana Univ., Bloomington, IN (United States); Willingham, Aarron [Affymetrix Inc., Santa Clara, CA (United States); Zhang, Dayu [Indiana Univ., Bloomington, IN (United States); Yang, Li [University of Connecticut Health Center, Farmington, Connecticut (United States); Zou, Yi [Indiana Univ., Bloomington, IN (United States); Eads, Brian D. [Indiana Univ., Bloomington, IN (United States); Carlson, Joseph W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Landolin, Jane M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kapranov, Philipp [Affymetrix Inc., Santa Clara, CA (United States); Dumais, Jacqueline [Affymetrix Inc., Santa Clara, CA (United States); Samsonova, Anastasia [Harvard Medical School, Boston, MA (United States); Choi, Jeong-Hyeon [Indiana Univ., Bloomington, IN (United States); Roberts, Johnny [Indiana Univ., Bloomington, IN (United States); Davis, Carrie A. [Cold Spring Harbor Laboratory, Cold Spring Harbor, New York (United States); Tang, Haixu [Indiana Univ., Bloomington, IN (United States); van Baren, Marijke J. [Washington Univ., St. Louis, MO (United States); Ghosh, Srinka [Affymetrix Inc., Santa Clara, CA (United States); Dobin, Alexander [Cold Spring Harbor Laboratory, Cold Spring Harbor, New York (United States); Bell, Kim [Cold Spring Harbor Laboratory, Cold Spring Harbor, New York (United States); Lin, Wei [Cold Spring Harbor Laboratory, Cold Spring Harbor, New York (United States); Langton, Laura [Washington Univ., St. Louis, MO (United States); Duff, Michael O. [University of Connecticut Health Center, Farmington, Connecticut (United States); Tenney, Aaron E. [Washington Univ., St. Louis, MO (United States); Zaleski, Chris [Cold Spring Harbor Laboratory, Cold Spring Harbor, New York (United States); Brent, Michael R. [Washington Univ., St. Louis, MO (United States); Hoskins, Roger A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kaufman, Thomas C. 
[Indiana University, Bloomington, Indiana (United States); Andrews, Justen [Indiana University, Bloomington, Indiana (United States); Graveley, Brenton R. [University of Connecticut Health Center, Farmington, Connecticut (United States); Perrimon, Norbert [Harvard Medical School, Boston, MA (United States); Howard Hughes Medical Institute, Boston, MA (United States); Celniker, Susan E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gingeras, Thomas R. [Affymetrix Inc., Santa Clara, CA (United States); Cold Spring Harbor Laboratory, Cold Spring Harbor, New York (United States); Cherbas, Peter [Indiana Univ., Bloomington, IN (United States)

    2010-12-22

    Drosophila melanogaster cell lines are important resources for cell biologists. In this article, we catalog the expression of exons, genes, and unannotated transcriptional signals for 25 lines. Unannotated transcription is substantial (typically 19% of euchromatic signal). Conservatively, we identify 1405 novel transcribed regions; 684 of these appear to be new exons of neighboring, often distant, genes. Sixty-four percent of genes are expressed detectably in at least one line, but only 21% are detected in all lines. Each cell line expresses, on average, 5885 genes, including a common set of 3109. Expression levels vary over several orders of magnitude. Major signaling pathways are well represented: most differentiation pathways are "off" and survival/growth pathways "on." Roughly 50% of the genes expressed by each line are not part of the common set, and these show considerable individuality. Thirty-one percent are expressed at a higher level in at least one cell line than in any single developmental stage, suggesting that each line is enriched for genes characteristic of small sets of cells. Most remarkable is that imaginal disc-derived lines can generally be assigned, on the basis of expression, to small territories within developing discs. These mappings reveal unexpected stability of even fine-grained spatial determination. No two cell lines show identical transcription factor expression. We conclude that each line has retained features of an individual founder cell superimposed on a common "cell line" gene expression pattern. We report the transcriptional profiles of 25 Drosophila melanogaster cell lines, principally by whole-genome tiling microarray analysis of total RNA, carried out as part of the modENCODE project. The data produced in this study add to our knowledge of the cell lines and of the Drosophila transcriptome in several ways. We summarize the expression of previously annotated genes in each of the 25

  14. Displacement of location in illusory line motion.

    Hubbard, Timothy L; Ruppel, Susan E

    2013-05-01

    Six experiments examined displacement in memory for the location of the line in illusory line motion (ILM; appearance or disappearance of a stationary cue is followed by appearance of a stationary line that is presented all at once, but the stationary line is perceived to "unfold" or "be drawn" from the end closest to the cue to the end most distant from the cue). If ILM was induced by having a single cue appear, then memory for the location of the line was displaced toward the cue, and displacement was larger if the line was closer to the cue. If ILM was induced by having one of two previously visible cues vanish, then memory for the location of the line was displaced away from the cue that vanished. In general, the magnitude of displacement increased and then decreased as retention interval increased from 50 to 250 ms and from 250 to 450 ms, respectively. Displacement of the line (a) is consistent with a combination of a spatial averaging of the locations of the cue and the line with a relatively weaker dynamic in the direction of illusory motion, (b) might be implemented in a spreading activation network similar to networks previously suggested to implement displacement resulting from implied or apparent motion, and (c) provides constraints and challenges for theories of ILM.

  15. QSOs with narrow emission lines

    Baldwin, J.A.; Mcmahon, R.; Hazard, C.; Williams, R.E.

    1988-01-01

    Observations of two new high-redshift, narrow-lined QSOs (NLQSOs) are presented and discussed together with observations of similar objects reported in the literature. Gravitational lensing is ruled out as a possible means of amplifying the luminosity for one of these objects. It is found that the NLQSOs have broad bases on their emission lines as well as the prominent narrow cores which define this class. Thus, these are not pole-on QSOs. The FWHM of the emission lines fits onto the smoothly falling tail of the lower end of the line-width distribution for complete QSO samples. The equivalent widths of the combined broad and narrow components of the lines are normal for QSOs of the luminosity range under study. However, the NLQSOs do show ionization differences from broader-lined QSOs; most significant, the semiforbidden C III]/C IV intensity ratio is unusually low. The N/C abundance ratio in these objects is found to be normal; the Al/C abundance ratio may be quite high. 38 references

  16. Homotopic Polygonal Line Simplification

    Deleuran, Lasse Kosetski

    This thesis presents three contributions to the area of polygonal line simplification, or simply line simplification. A polygonal path, or simply a path is a list of points with line segments between the points. A path can be simplified by morphing it in order to minimize some objective function...

  17. Average stress in a Stokes suspension of disks

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  18. 47 CFR 1.959 - Computation of average terrain elevation.

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...

  19. 47 CFR 80.759 - Average terrain elevation.

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...

  20. The average covering tree value for directed graph games

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  1. The Average Covering Tree Value for Directed Graph Games

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  2. 18 CFR 301.7 - Average System Cost methodology functionalization.

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...

  3. Analytic computation of average energy of neutrons inducing fission

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  4. An alternative scheme of the Bogolyubov's average method

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper, the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws and performs the average afterwards. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)

  5. Bounds on Average Time Complexity of Decision Trees

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  6. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  7. Perceptual learning in Williams syndrome: looking beyond averages.

    Patricia Gervan

    Full Text Available Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.

  8. Orientation-averaged optical properties of natural aerosol aggregates

    Zhang Xiaolin; Huang Yinbo; Rao Ruizhong

    2012-01-01

    Orientation-averaged optical properties of natural aerosol aggregates were analyzed using the discrete dipole approximation (DDA) for effective radii in the range 0.01 to 2 μm, with corresponding size parameters from 0.1 to 23 at a wavelength of 0.55 μm. Effects of composition and morphology on the optical properties were also investigated. The composition shows little influence on the extinction-efficiency factor in the Mie scattering region, or on the scattering- and backscattering-efficiency factors. The extinction-efficiency factor for size parameters from 9 to 23, and the asymmetry factor for size parameters below 2.3, are almost independent of the natural aerosol composition. The extinction-, absorption-, scattering-, and backscattering-efficiency factors for size parameters below 0.7 are independent of the aggregate morphology. The intrinsic symmetry and the discontinuity of the normal direction of the particle surface have obvious effects on the scattering properties for size parameters above 4.6. Furthermore, the scattering phase functions of natural aerosol aggregates are enhanced in the backscattering direction (opposition effect) for large size parameters in the Mie scattering range. (authors)

  9. Accelerated Distributed Dual Averaging Over Evolving Networks of Growing Connectivity

    Liu, Sijia; Chen, Pin-Yu; Hero, Alfred O.

    2018-04-01

    We consider the problem of accelerating distributed optimization in multi-agent networks by sequentially adding edges. Specifically, we extend the distributed dual averaging (DDA) subgradient algorithm to evolving networks of growing connectivity and analyze the corresponding improvement in convergence rate. It is known that the convergence rate of DDA is influenced by the algebraic connectivity of the underlying network, where better connectivity leads to faster convergence. However, the impact of network topology design on the convergence rate of DDA has not been fully understood. In this paper, we begin by designing network topologies via edge selection and scheduling. For edge selection, we determine the best set of candidate edges that achieves the optimal tradeoff between the growth of network connectivity and the usage of network resources. The dynamics of network evolution is then incurred by edge scheduling. Further, we provide a tractable approach to analyze the improvement in the convergence rate of DDA induced by the growth of network connectivity. Our analysis reveals the connection between network topology design and the convergence rate of DDA, and provides quantitative evaluation of DDA acceleration for distributed optimization that is absent in the existing analysis. Lastly, numerical experiments show that DDA can be significantly accelerated using a sequence of well-designed networks, and our theoretical predictions are well matched to its empirical convergence behavior.
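    The underlying DDA subgradient iteration (independent of this paper's edge-scheduling contribution) can be sketched for scalar quadratics on a fixed ring network; the network, objectives, and step-size schedule below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
a = rng.normal(0.0, 1.0, n)     # agent i holds f_i(x) = 0.5 * (x - a[i])**2
x_star = a.mean()               # minimizer of the average objective

# Doubly stochastic mixing matrix for a fixed ring network (lazy weights).
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

z = np.zeros(n)                 # dual variables accumulating mixed gradients
x = np.zeros(n)                 # primal iterates
for t in range(1, 10001):
    g = x - a                   # local gradient of f_i at x_i
    z = P @ z + g               # mix neighbours' duals, then add own gradient
    alpha = 1.0 / np.sqrt(t)    # diminishing step size
    x = -alpha * z              # prox step with psi(x) = 0.5 * x**2

print("max distance to optimum:", np.abs(x - x_star).max())
```

    Improving the algebraic connectivity of `P` (the paper's topic) speeds up the mixing step `P @ z` and hence convergence.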

  10. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
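    The EM iteration for the BMA weights and variances can be sketched for a Gaussian mixture of two hypothetical forecasters; all data, biases, and settings below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500                                   # training cases
y = rng.normal(0.0, 1.0, T)               # verifying observations

# Two invented competing forecasters: model 0 is well calibrated,
# model 1 carries a constant bias.
f = np.stack([y + rng.normal(0.0, 0.5, T),
              y + 1.5 + rng.normal(0.0, 0.5, T)])    # shape (K, T)
K = f.shape[0]

# EM for the BMA mixture p(y) = sum_k w_k * N(y; f_k, sigma_k^2).
w = np.full(K, 1.0 / K)
sig2 = np.ones(K)
for _ in range(200):
    # E-step: responsibility of model k for training case t.
    dens = (np.exp(-0.5 * (y - f) ** 2 / sig2[:, None])
            / np.sqrt(2 * np.pi * sig2[:, None]))
    r = w[:, None] * dens
    r /= r.sum(axis=0, keepdims=True)
    # M-step: re-estimate weights and per-model variances.
    w = r.mean(axis=1)
    sig2 = (r * (y - f) ** 2).sum(axis=1) / r.sum(axis=1)

print("BMA weights:", w)      # the calibrated model should receive more weight
```

    The MCMC alternative discussed in the record would sample `w` and `sig2` from the same mixture likelihood rather than iterate to a point estimate.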

  11. Self-similarity of higher-order moving averages

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, the increase of polynomial degree does not require to change the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
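    For the first-order (standard moving average) case, the detrending moving average estimate of the Hurst exponent can be sketched as follows; the series length and window sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000
y = np.cumsum(rng.normal(0.0, 1.0, N))    # Brownian series, true H = 0.5

def dma_variance(y, n):
    """Detrending moving average variance for a trailing window of size n."""
    trend = np.convolve(y, np.ones(n) / n, mode="valid")  # simple moving average
    resid = y[n - 1:] - trend                             # series minus its trailing mean
    return np.mean(resid ** 2)

windows = np.array([8, 16, 32, 64, 128, 256])
sigma2 = np.array([dma_variance(y, n) for n in windows])

# sigma_DMA^2(n) ~ n^(2H): read H off the log-log slope.
H = 0.5 * np.polyfit(np.log(windows), np.log(sigma2), 1)[0]
print("estimated Hurst exponent:", H)
```

    The higher-order polynomials of the record would replace the simple moving average `trend` with a polynomial trend fitted over the same window, without changing the window size.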

  12. Anomalous behavior of q-averages in nonextensive statistical mechanics

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases
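    The q-average referred to here is conventionally the escort-distribution expectation value; a minimal sketch, assuming that definition and toy numbers:

```python
import numpy as np

def q_average(p, A, q):
    """Escort-distribution q-average of observable A under distribution p."""
    pq = np.asarray(p, dtype=float) ** q
    return np.sum(pq * np.asarray(A)) / np.sum(pq)

p = np.array([0.5, 0.3, 0.2])      # toy probability distribution
A = np.array([1.0, 2.0, 3.0])      # toy observable values

print(q_average(p, A, 1.0))   # q = 1 recovers the ordinary expectation value
print(q_average(p, A, 2.0))   # q > 1 weights the more probable states more heavily
```

    The instability discussed in the record concerns how this value responds to small deformations of `p`, which the nonlinear `p ** q` weighting can amplify.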

  13. Calculation of heat transfer in transversely stream-lined tube bundles with chess arrangement

    Migaj, V.K.

    1978-01-01

    A semiempirical theory of heat transfer in transversely streamlined staggered (chess-board) tube bundles has been developed. The theory is based on a single-cylinder model and evaluates the external flow parameters on the basis of the solidification principle for the vortex zone. The effect of turbulence is estimated from experimental results. The method is extended to both average and local heat transfer coefficients. Comparison with experiment shows satisfactory agreement.

  14. Elevated CO2 reduced floret death in wheat under warmer average temperatures and terminal drought.

    Eduardo eDias de Oliveira

    2015-11-01

    Full Text Available Elevated CO2 often increases grain yield in wheat by enhancing grain number per ear, which can result from an increase in the potential number of florets or a reduction in the death of developed florets. The hypotheses that elevated CO2 reduces floret death rather than increases floret development, and that grain size in a genotype with more grains per unit area is limited by the rate of grain filling, were tested in a pair of sister lines contrasting in tillering capacity (restricted- vs. free-tillering). The hypotheses were tested under elevated CO2, combined with +3°C above ambient temperature and terminal drought, using specialized field tunnel houses. Elevated CO2 increased net leaf photosynthetic rates and likely the availability of carbon assimilates, which significantly reduced the rates of floret death and increased the potential number of grains at anthesis in both sister lines by an average of 42%. The restricted-tillering line had faster grain-filling rates than the free-tillering line because the free-tillering line had more grains to fill. Furthermore, grain-filling rates were faster under elevated CO2 and +3°C above ambient. Terminal drought reduced grain yield in both lines by 19%. Elevated CO2 alone increased the potential number of grains, but a trade-off in yield components limited grain yield in the free-tillering line. This emphasizes the need for breeding cultivars with a greater potential number of florets, since this was not affected by the predicted future climate variables.

  15. Elevated CO2 Reduced Floret Death in Wheat Under Warmer Average Temperatures and Terminal Drought

    Dias de Oliveira, Eduardo; Palta, Jairo A.; Bramley, Helen; Stefanova, Katia; Siddique, Kadambot H. M.

    2015-01-01

    Elevated CO2 often increases grain yield in wheat by enhancing grain number per ear, which can result from an increase in the potential number of florets or a reduction in the death of developed florets. The hypotheses that elevated CO2 reduces floret death rather than increases floret development, and that grain size in a genotype with more grains per unit area is limited by the rate of grain filling, were tested in a pair of sister lines contrasting in tillering capacity (restricted- vs. free-tillering). The hypotheses were tested under elevated CO2, combined with +3°C above ambient temperature and terminal drought, using specialized field tunnel houses. Elevated CO2 increased net leaf photosynthetic rates and likely the availability of carbon assimilates, which significantly reduced the rates of floret death and increased the potential number of grains at anthesis in both sister lines by an average of 42%. The restricted-tillering line had faster grain-filling rates than the free-tillering line because the free-tillering line had more grains to fill. Furthermore, grain-filling rates were faster under elevated CO2 and +3°C above ambient. Terminal drought reduced grain yield in both lines by 19%. Elevated CO2 alone increased the potential number of grains, but a trade-off in yield components limited grain yield in the free-tillering line. This emphasizes the need for breeding cultivars with a greater potential number of florets, since this was not affected by the predicted future climate variables. PMID:26635837

  16. CO line ratios in molecular clouds: the impact of environment

    Peñaloza, Camilo H.; Clark, Paul C.; Glover, Simon C. O.; Klessen, Ralf S.

    2018-04-01

    Line emission is strongly dependent on the local environmental conditions in which the emitting tracers reside. In this work, we focus on modelling the CO emission from simulated giant molecular clouds (GMCs), and study the variations in the resulting line ratios arising from the emission from the J = 1-0, J = 2-1, and J = 3-2 transitions. We perform a set of smoothed particle hydrodynamics simulations with time-dependent chemistry, in which environmental conditions - including total cloud mass, density, size, velocity dispersion, metallicity, interstellar radiation field (ISRF), and the cosmic ray ionization rate (CRIR) - were systematically varied. The simulations were then post-processed using radiative transfer to produce synthetic emission maps in the three transitions quoted above. We find that the cloud-averaged values of the line ratios can vary by up to ±0.3 dex, triggered by changes in the environmental conditions. Changes in the ISRF and/or in the CRIR have the largest impact on line ratios since they directly affect the abundance, temperature, and distribution of CO-rich gas within the clouds. We show that the standard methods used to convert CO emission to H2 column density can underestimate the total H2 molecular gas in GMCs by factors of 2 or 3, depending on the environmental conditions in the clouds.

  17. Relationships between feeding behavior and average daily gain in cattle

    Bruno Fagundes Cunha Lage

    2013-12-01

    Full Text Available Several studies have reported relationships between eating behavior and performance in feedlot cattle. Evaluating behavior traits demands considerable labor and trained manpower; therefore, in recent years an automated feed intake measurement system (GrowSafe System®) has been used to identify and record individual feeding patterns. The aim of this study was to evaluate the relationship between feeding behavior traits and average daily gain (ADG) in Nellore calves undergoing a feed efficiency test. Data from 85 Nellore males were recorded during the feed efficiency test performed in 2012 at Centro APTA Bovinos de Corte, Instituto de Zootecnia, São Paulo State. The behavioral traits analyzed were: time at feeder (TF); head down duration (HD), representing the time when the animal is actually eating; frequency of visits (FV); and feed rate (FR), calculated as the amount of dry matter (DM) consumed per minute at the feeder (g.min-1). The ADG was calculated by linear regression of individual weights on days in test. ADG classes were obtained considering the average ADG and standard deviation (SD): high ADG (>mean + 1.0 SD), medium ADG (within ± 1.0 SD of the mean), and low ADG (<mean - 1.0 SD). No difference (P>0.05) was found among ADG classes for FV, indicating that these traits are not related to each other. These results show that ADG is related to how quickly the animal eats, not to the time spent at the bunk or the number of visits within a 24-hour period.
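
    The ADG computation described above (slope of the regression of weight on days in test) and the classification against mean ± 1 SD can be sketched as follows; the weigh-day records and herd statistics are made-up illustrative values, not data from the study:

```python
import numpy as np

# Illustrative weigh-day records for one animal on test (days, kg).
days = np.array([0, 14, 28, 42, 56, 70, 84], dtype=float)
weights = np.array([250.0, 261.0, 273.5, 284.0, 296.5, 308.0, 319.0])

# ADG is the slope of the regression of weight on days in test (kg/day).
adg, intercept = np.polyfit(days, weights, 1)

# Classify against herd mean and SD (hypothetical herd statistics).
herd_mean, herd_sd = 0.75, 0.10
if adg > herd_mean + herd_sd:
    adg_class = "high"
elif adg < herd_mean - herd_sd:
    adg_class = "low"
else:
    adg_class = "medium"
```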

  18. Statistical Model Checking for Product Lines

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2016-01-01

    average cost of products (in terms of the attributes of the products’ features) and the probability of features to be (un)installed at runtime. The product lines must be modelled in QFLan, which extends the probabilistic feature-oriented language PFLan with novel quantitative constraints among features...

  19. LINE-1 hypomethylation in cancer is highly variable and inversely correlated with microsatellite instability.

    Marcos R H Estécio

    2007-05-01

    Full Text Available Alterations in DNA methylation in cancer include global hypomethylation and gene-specific hypermethylation. It is not clear whether these two epigenetic errors are mechanistically linked or occur independently. This study was performed to determine the relationship between DNA hypomethylation, hypermethylation, and microsatellite instability in cancer. We examined 61 cancer cell lines and 60 colorectal carcinomas and their adjacent tissues using LINE-1 bisulfite-PCR as a surrogate for global demethylation. Colorectal carcinomas with sporadic microsatellite instability (MSI), most of which are due to a CpG island methylator phenotype (CIMP) and associated MLH1 promoter methylation, showed on average no difference in LINE-1 methylation between normal adjacent and cancer tissues. Interestingly, some tumor samples in this group showed an increase in LINE-1 methylation. In contrast, MSI-negative tumors showed a significant decrease in LINE-1 methylation between normal adjacent and cancer tissues (P<0.001). Microarray analysis of repetitive element methylation confirmed this observation and showed a high degree of variability in hypomethylation between samples. Additionally, unsupervised hierarchical clustering identified a group of highly hypomethylated tumors, composed mostly of tumors without microsatellite instability. We extended LINE-1 analysis to cancer cell lines from different tissues and found that 50/61 were hypomethylated compared to peripheral blood lymphocytes and normal colon mucosa. Interestingly, these cancer cell lines also exhibited a large variation in demethylation, which was tissue-specific and thus unlikely to result from a stochastic process. Global hypomethylation is partially reversed in cancers with microsatellite instability and also shows high variability in cancer, which may reflect alternative progression pathways.

  20. High-frequency parameters of magnetic films showing magnetization dispersion

    Sidorenkov, V.V.; Zimin, A.B.; Kornev, Yu.V.

    1988-01-01

    Magnetization dispersion leads to skewed resonance curves shifted towards higher magnetizing fields, together with considerable reduction in the resonant absorption, while the FMR line width is considerably increased. These effects increase considerably with frequency, in contrast to films showing magnetic-anisotropy dispersion, where they decrease. It is concluded that there may be anomalies in the frequency dependence of the resonance parameters for polycrystalline magnetic films

  1. Potential for efficient frequency conversion at high average power using solid state nonlinear optical materials

    Eimerl, D.

    1985-01-01

    High-average-power frequency conversion using solid state nonlinear materials is discussed. Recent laboratory experience and new developments in design concepts show that current technology, a few tens of watts, may be extended by several orders of magnitude. For example, using KD*P, efficient doubling (>70%) of Nd:YAG at average powers approaching 100 kW is possible; and for doubling to the blue or ultraviolet regions, the average power may approach 1 MW. Configurations using segmented apertures permit essentially unlimited scaling of average power. High average power is achieved by configuring the nonlinear material as a set of thin plates with a large ratio of surface area to volume and by cooling the exposed surfaces with a flowing gas. The design and material fabrication of such a harmonic generator are well within current technology

  2. Extension of the time-average model to Candu refueling schemes involving reshuffling

    Rouben, Benjamin; Nichita, Eleodor

    2008-01-01

    Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970s to calculate the average over time of the flux and power distribution and to study the effects of different refueling schemes. The original time-average model only allows treatment of simple push-through refueling schemes whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels, to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model to allow for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)

  3. PEARS Emission Line Galaxies

    Pirzkal, Nor; Rothberg, Barry; Ly, Chun; Rhoads, James E.; Malhotra, Sangeeta; Grogin, Norman A.; Dahlen, Tomas; Meurer, Gerhardt R.; Walsh, Jeremy; Hathi, Nimish P.; et al.

    2012-01-01

    We present a full analysis of the Probing Evolution And Reionization Spectroscopically (PEARS) slitless grism spectroscopic data obtained with the Advanced Camera for Surveys on HST. PEARS covers fields within both the Great Observatories Origins Deep Survey (GOODS) North and South fields, making it ideal as a random survey of galaxies, with a wide variety of ancillary observations available to support the spectroscopic results. Using the PEARS data we are able to identify star-forming galaxies within the survey's redshift volume down to a limiting flux of approx. 10^-18 erg/s/sq cm. The emission-line regions have also been compared to the properties of the host galaxy, including morphology, luminosity, and mass. From this analysis we find three key results: 1) the computed line luminosities show evidence of a flattening in the luminosity function with increasing redshift; 2) the star-forming systems show evidence of disturbed morphologies, with star formation occurring predominantly within one effective (half-light) radius; however, the morphologies show no correlation with host stellar mass; and 3) the number density of star-forming galaxies with M* >= 10^9 solar masses decreases by an order of magnitude at z <= 0.5 relative to the number at 0.5 < z < 0.9, in support of the argument for galaxy downsizing.

  4. Significance of power average of sinusoidal and non-sinusoidal ...

    2016-06-08

    Jun 8, 2016 ... PG & Research Department of Physics, Nehru Memorial College (Autonomous),. Puthanampatti .... for long time intervals, the periodic or chaotic behaviour ..... square force (short dashed line), sawtooth force (long dashed.

  5. Bounds on Average Time Complexity of Decision Trees

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of the decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
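
    The entropy lower bound stated above can be computed directly; a minimal sketch assuming the standard H(p)/log2(k) form:

```python
import math

def entropy_lower_bound(probs, k=2):
    """Lower bound on the minimum average depth of a decision tree for a
    diagnostic problem over a k-valued information system: H(p) / log2(k)."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(k)

# Uniform distribution over 8 equally likely diagnoses with binary
# attributes: the bound is log2(8) = 3 questions on average.
bound_uniform = entropy_lower_bound([1 / 8] * 8, k=2)

# With 3-valued attributes the same problem needs at least log3(8) ≈ 1.89.
bound_ternary = entropy_lower_bound([1 / 8] * 8, k=3)
```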

  6. Lateral dispersion coefficients as functions of averaging time

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate among various processes in studies of plume dispersion
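
    Turner's power-law adjustment between averaging times has the form C_t = C_ref (t_ref/t)^p. The sketch below uses an assumed, commonly quoted exponent, not a value derived in this paper:

```python
def concentration_at_averaging_time(c_ref, t_ref_min, t_min, p=0.17):
    """Power-law adjustment of a pollutant concentration from one averaging
    time to another: C_t = C_ref * (t_ref / t)**p.  The exponent p = 0.17
    is an assumed, commonly quoted value; site-specific values differ."""
    return c_ref * (t_ref_min / t_min) ** p

# Scale a hypothetical 15-min average of 100 ug/m3 to a 60-min average;
# the longer averaging time yields a lower peak concentration.
c_60 = concentration_at_averaging_time(100.0, 15.0, 60.0)
```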

  7. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
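
    The variable definitions above describe a volume-weighted average over all batches in the averaging period, Sa = sum(Vi * Si) / sum(Vi). A minimal sketch of that computation, with hypothetical batch volumes and sulfur contents:

```python
# Batch volumes (gallons) and sulfur contents (ppm) for a hypothetical
# refinery's averaging period; the numbers are illustrative only.
batches = [
    (1_000_000, 25.0),   # (Vi, Si)
    (2_500_000, 32.0),
    (1_500_000, 18.0),
]

# Volume-weighted annual average sulfur level.
total_volume = sum(v for v, _ in batches)
average_sulfur = sum(v * s for v, s in batches) / total_volume
```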

  8. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  9. Average inactivity time model, associated orderings and reliability properties

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  10. Average L-shell fluorescence, Auger, and electron yields

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization
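
    The dependence on the initial vacancy distribution discussed above amounts to weighting the subshell yields by the fractional vacancy populations. A sketch of that weighting with placeholder subshell yields (not Krause's tabulated values):

```python
# Average L-shell fluorescence yield for a given initial vacancy
# distribution: subshell yields weighted by the fractional populations
# of L1, L2, L3 vacancies.  The yields are illustrative placeholders.
subshell_yields = {"L1": 0.11, "L2": 0.34, "L3": 0.33}

def average_yield(vacancy_fractions):
    """Weighted average yield; vacancy fractions must sum to one."""
    assert abs(sum(vacancy_fractions.values()) - 1.0) < 1e-9
    return sum(vacancy_fractions[s] * subshell_yields[s]
               for s in subshell_yields)

# Two widely differing vacancy distributions give different averages.
w_a = average_yield({"L1": 0.5, "L2": 0.3, "L3": 0.2})
w_b = average_yield({"L1": 0.1, "L2": 0.2, "L3": 0.7})
```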

  11. Simultaneous inference for model averaging of derived parameters

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  12. Time average vibration fringe analysis using Hilbert transformation

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
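
    The phase extraction step described above can be sketched with an FFT-based Hilbert transform. The fringe signal below is a synthetic 1-D intensity profile, not the paper's 2-D holographic data:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT-based Hilbert transform (numpy only)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

# Synthetic fringe profile with a known linear phase (40 carrier fringes).
t = np.linspace(0, 1, 1024, endpoint=False)
phase_true = 2 * np.pi * 40 * t
signal = np.cos(phase_true)

# The phase follows from the angle of the analytic signal, then unwrapping.
phase = np.unwrap(np.angle(analytic_signal(signal)))
```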

  13. Average multiplications in deep inelastic processes and their interpretation

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of a nonperturbative character. As the energy of the final hadron state increases, the leading contribution to the average multiplicity comes from a parton subprocess, due to the production of massive quark and gluon jets and their further fragmentation, while the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e- annihilation at high energies tends to unity

  14. Fitting a function to time-dependent ensemble averaged data

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software....
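
    A minimal weighted least squares fit of an MSD-like curve, ignoring the inter-lag correlations that the full WLS-ICE method accounts for; the data, variances, and dimension are invented for illustration:

```python
import numpy as np

# Fit msd(t) = 2*d*D*t by weighted least squares with inverse-variance
# weights.  The full WLS-ICE method also includes correlations between
# time lags, which this simplified sketch omits.
d = 2                                            # spatial dimension (assumed)
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
msd = np.array([0.41, 0.79, 1.22, 1.58, 2.05])   # illustrative averages
var = np.array([0.01, 0.02, 0.04, 0.07, 0.11])   # per-point variances

w = 1.0 / var
# Closed-form WLS solution for the one-parameter linear model y = a*t:
# a = sum(w*t*y) / sum(w*t*t)
a = np.sum(w * t * msd) / np.sum(w * t * t)
D = a / (2 * d)
```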

  15. Average wind statistics for SRP area meteorological towers

    Laurinat, J.E.

    1987-01-01

    A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated from the averaged statistics

  16. Effect of gas temperature on flow rate characteristics of an averaging pitot tube type flow meter

    Yeo, Seung Hwa; Lee, Su Ryong; Lee, Choong Hoon [Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2015-01-15

    The flow rate characteristics passing through an averaging Pitot tube (APT) while constantly controlling the flow temperature were studied through experiments and CFD simulations. At controlled temperatures of 25, 50, 75, and 100 °C, the flow characteristics, in this case the upstream, downstream and static pressure at the APT flow meter probe, were measured as the flow rate was increased. The flow rate through the APT flow meter was represented using the H-parameter (hydraulic height) obtained by a combination of the differential pressure and the air density measured at the APT flow meter probe. Four types of H-parameters were defined depending on the specific combination. The flow rate and the upstream, downstream and static pressures measured at the APT flow meter while changing the H-parameters were simulated by means of CFD. The flow rate curves showed different features depending on which type of H-parameter was used. When using the constant air density value in a standard state to calculate the H-parameters, the flow rate increased linearly with the H-parameter and the slope of the flow rate curve according to the H-parameter increased as the controlled target air temperature was increased. When using different air density levels corresponding to each target air temperature to calculate the H-parameter, the slope of the flow rate curve according to the H-parameter was constant and the flow rate curve could be represented by a single line. The CFD simulation results were in good agreement with the experimental results. The CFD simulations were performed while increasing the air temperature to 1200 K. The CFD simulation results for high air temperatures were similar to those at the low temperatures ranging from 25 to 100 °C.

  18. Are average and symmetric faces attractive to infants? Discrimination and looking preferences.

    Rhodes, Gillian; Geddes, Keren; Jeffery, Linda; Dziurawiec, Suzanne; Clark, Alison

    2002-01-01

    Young infants prefer to look at faces that adults find attractive, suggesting a biological basis for some face preferences. However, the basis for infant preferences is not known. Adults find average and symmetric faces attractive. We examined whether 5-8-month-old infants discriminate between different levels of averageness and symmetry in faces, and whether they prefer to look at faces with higher levels of these traits. Each infant saw 24 pairs of female faces. Each pair consisted of two versions of the same face differing either in averageness (12 pairs) or symmetry (12 pairs). Data from the mothers confirmed that adults preferred the more average and more symmetric versions in each pair. The infants were sensitive to differences in both averageness and symmetry, but showed no looking preference for the more average or more symmetric versions. On the contrary, longest looks were significantly longer for the less average versions, and both longest looks and first looks were marginally longer for the less symmetric versions. Mean looking times were also longer for the less average and less symmetric versions, but those differences were not significant. We suggest that the infant looking behaviour may reflect a novelty preference rather than an aesthetic preference.

  19. Properties of Narrow line Seyfert 1 galaxies

    Rakshit, Suvendu; Stalin, Chelliah Subramonian; Chand, Hum; Zhang, Xue-Guang

    2018-04-01

    Narrow line Seyfert 1 (NLSy1) galaxies constitute a class of active galactic nuclei characterized by a full width at half maximum (FWHM) of the broad Hβ emission line of less than about 2000 km s-1, studied here in spectra with a signal-to-noise ratio >10 pixel-1. A strong correlation between the Hα and Hβ emission lines is found both in FWHM and in flux. The nuclear continuum luminosity is found to be strongly correlated with the luminosity of the Hα, Hβ and [O III] emission lines. The black hole mass in NLSy1 galaxies is lower compared to their broad-line counterparts. Compared to BLSy1 galaxies, NLSy1 galaxies have stronger FeII emission and a higher Eddington ratio, which places them in the extreme upper right corner of the R4570 - λEdd diagram. The distribution of the radio-loudness parameter (R) in NLSy1 galaxies drops rapidly at R>10 compared to the BLSy1 galaxies that have powerful radio jets. The soft X-ray photon index in NLSy1 galaxies is on average higher (2.9 ± 0.9) than in BLSy1 galaxies (2.4 ± 0.8). It is anti-correlated with the Hβ width but correlated with the FeII strength. NLSy1 galaxies on average have a lower amplitude of optical variability compared to their broad-line counterparts. These results suggest the Eddington ratio is the main parameter driving optical variability in these sources.
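
    The Eddington ratio λEdd used above is the bolometric luminosity in units of the Eddington luminosity, L_Edd = 1.26e38 (M_BH/M_sun) erg/s. A sketch with a hypothetical NLSy1-like mass and luminosity:

```python
# Eddington ratio from black hole mass and bolometric luminosity.
# The example mass and luminosity are hypothetical, merely typical of
# the NLSy1 regime discussed above.
def eddington_ratio(l_bol_erg_s, m_bh_msun):
    """lambda_Edd = L_bol / L_Edd with L_Edd = 1.26e38 (M/M_sun) erg/s."""
    l_edd = 1.26e38 * m_bh_msun
    return l_bol_erg_s / l_edd

# A 10^7 M_sun black hole radiating at 10^44.6 erg/s:
lam_edd = eddington_ratio(10**44.6, 1e7)
```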

  20. Asymptotic behaviour of time averages for non-ergodic Gaussian processes

    Ślęzak, Jakub

    2017-08-01

    In this work, we study the behaviour of time-averages for stationary (non-ageing), but ergodicity-breaking Gaussian processes using their representation in Fourier space. We provide explicit formulae for various time-averaged quantities, such as mean square displacement, density, and analyse the behaviour of time-averaged characteristic function, which gives insight into rich memory structure of the studied processes. Moreover, we show applications of the ergodic criteria in Fourier space, determining the ergodicity of the generalised Langevin equation's solutions.
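
    The time-averaged mean squared displacement mentioned above has a standard single-trajectory estimator. A sketch checking it against ordinary Brownian motion, which serves as an ergodic control case rather than one of the processes analysed in the paper:

```python
import numpy as np

def time_averaged_msd(x, lag):
    """Time-averaged MSD of a single trajectory x at integer lag:
    (1/(N-lag)) * sum_i (x[i+lag] - x[i])**2."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

# For simple Brownian motion with unit-variance steps, the time-averaged
# MSD grows linearly with the lag.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(100_000))

tamsd_1 = time_averaged_msd(x, 1)     # close to 1
tamsd_10 = time_averaged_msd(x, 10)   # close to 10
```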

  1. MCBS Highlights: Ownership and Average Premiums for Medicare Supplementary Insurance Policies

    Chulis, George S.; Eppig, Franklin J.; Poisal, John A.

    1995-01-01

    This article describes private supplementary health insurance holdings and average premiums paid by Medicare enrollees. Data were collected as part of the 1992 Medicare Current Beneficiary Survey (MCBS). Data show the number of persons with insurance and average premiums paid by type of insurance held—individually purchased policies, employer-sponsored policies, or both. Distributions are shown for a variety of demographic, socioeconomic, and health status variables. Primary findings include: Seventy-eight percent of Medicare beneficiaries have private supplementary insurance; 25 percent of those with private insurance hold more than one policy. The average premium paid for private insurance in 1992 was $914. PMID:10153473

  2. Average monthly and annual climate maps for Bolivia

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
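
    The Hargreaves model mentioned above estimates atmospheric evaporative demand from temperature extremes and extraterrestrial radiation; the coefficient 0.0023 and the (Tmean + 17.8)·sqrt(Tmax - Tmin) form are the standard Hargreaves formulation. The station inputs below are hypothetical, not Bolivian data:

```python
import math

def hargreaves_et0(t_max, t_min, ra_mm_day):
    """Hargreaves reference evapotranspiration (mm/day).
    ra_mm_day is extraterrestrial radiation in mm/day of evaporation
    equivalent; temperatures are in degrees C."""
    t_mean = (t_max + t_min) / 2.0
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Hypothetical monthly averages for a highland station.
et0 = hargreaves_et0(t_max=22.0, t_min=8.0, ra_mm_day=13.5)

# Monthly climatic water balance: precipitation minus evaporative demand.
precip_mm = 60.0
balance = precip_mm - et0 * 30
```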

  3. Medicare Part B Drug Average Sales Pricing Files

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  4. High Average Power Fiber Laser for Satellite Communications, Phase I

    National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  5. A time averaged background compensator for Geiger-Mueller counters

    Bhattacharya, R.C.; Ghosh, P.K.

    1983-01-01

    The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel providing time averaged compensation. The method suits portable instruments. (orig.)

  6. Time averaging, ageing and delay analysis of financial time series

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.

  7. Historical Data for Average Processing Time Until Hearing Held

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  8. GIS Tools to Estimate Average Annual Daily Traffic

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  9. A high speed digital signal averager for pulsed NMR

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
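
    "Stable averaging" keeps a calibrated running mean after every sweep, which is what allows the average to be displayed at any time during accumulation. A software sketch of the per-channel update; the actual instrument implements this in fixed-point hardware:

```python
# Incremental ("stable") averaging: after each sweep the stored value is
# already the average of all sweeps so far, so it can be displayed at
# any point during the averaging process.
def stable_average(sweeps):
    avg = [0.0] * len(sweeps[0])
    for n, sweep in enumerate(sweeps, start=1):
        for ch, sample in enumerate(sweep):
            avg[ch] += (sample - avg[ch]) / n   # running-mean update
    return avg

# Three noisy 4-channel sweeps of the same underlying signal:
sweeps = [
    [1.0, 2.0, 3.0, 4.0],
    [1.2, 1.8, 3.1, 3.9],
    [0.8, 2.2, 2.9, 4.1],
]
avg = stable_average(sweeps)
```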

  10. The average-shadowing property and topological ergodicity for flows

    Gu Rongbao; Guo Wenjing

    2005-01-01

    In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic

  11. Application of Bayesian approach to estimate average level spacing

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters by using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, the levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out

  12. Annual average equivalent dose of workers form health area

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    Personnel monitoring data recorded between 1985 and 1991 for personnel working in the health area were studied, giving a general overview of how the annual average equivalent dose changed. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses across the same sectors in different hospitals. (C.G.C.)

  13. A precise measurement of the average b hadron lifetime

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; 
Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; 
Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. The measurement uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  14. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-01-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The biases induced by the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
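
    The record attributes the averaging bias to the non-linear (logarithmic) IPDA lidar equation: averaging noisy signals before taking the logarithm is not the same as averaging the retrievals. A small self-contained illustration of that effect (all numbers invented; the second-order Jensen estimate σ²/2μ² is a textbook approximation, not MERLIN's actual correction algorithm):

```python
import math
import random

random.seed(1)
mu, sigma = 1.0, 0.2                      # invented mean power and noise level
powers = [random.gauss(mu, sigma) for _ in range(100_000)]
powers = [p for p in powers if p > 0]     # the logarithm needs positive samples

mean_of_log = sum(math.log(p) for p in powers) / len(powers)
log_of_mean = math.log(sum(powers) / len(powers))

bias = log_of_mean - mean_of_log          # > 0 by Jensen's inequality
estimate = sigma ** 2 / (2 * mu ** 2)     # second-order analytic estimate
```

    Averaging the raw powers over 50 km and then applying the logarithmic retrieval thus yields a systematic offset of roughly σ²/2μ², which a correction algorithm must remove.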

  16. The average action for scalar fields near phase transitions

    Wetterich, C.

    1991-08-01

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  17. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is a central quantity in the study of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique of finite patterns for the integral of the geodesic distance with respect to the self-similar measure on the Sierpinski tetrahedron.
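
    The asymptotic formula itself is beyond a code snippet, but for any finite skeleton graph the average geodesic distance can be computed directly by breadth-first search. A Python sketch, using the level-1 skeleton (the complete graph K4) as a stand-in for a real skeleton network:

```python
from collections import deque

def average_distance(adj):
    """Mean shortest-path length over all vertex pairs of an unweighted,
    connected graph given as an adjacency dict."""
    nodes = list(adj)
    total = pairs = 0
    for src in nodes:
        dist = {src: 0}
        q = deque([src])
        while q:                          # breadth-first search from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())       # sums over ordered pairs
        pairs += len(nodes) - 1
    return total / pairs                  # ordered-pair double count cancels

# Level-1 Sierpinski-tetrahedron skeleton: the complete graph K4.
k4 = {i: [j for j in range(4) if j != i] for i in range(4)}
```

    For K4 every pair is adjacent, so the average distance is exactly 1; higher-level skeletons would be built the same way and fed to the same function.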

  18. A new extraction method of loess shoulder-line based on Marr-Hildreth operator and terrain mask.

    Sheng Jiang

    Loess shoulder-lines are significant structural lines which divide the complicated loess landform into loess interfluves and gully-slope lands. Existing extraction algorithms for shoulder-lines are mainly based on local maxima of terrain features. These algorithms are sensitive to noise on the complicated loess surface, and their extraction parameters are difficult to determine, so the extraction results are often inaccurate. This paper presents a new extraction approach for loess shoulder-lines, in which the Marr-Hildreth edge operator is employed to construct initial shoulder-lines. A terrain mask confining the boundary of the shoulder-lines is then derived from slope-degree classification and morphological methods, avoiding interference from non-valley areas and modifying the initial loess shoulder-lines. A case study is conducted in Yijun, located in the northern Shanxi Loess Plateau of China, using a digital elevation model with a grid size of 5 m as the original data. To obtain optimal scale parameters, the Euclidean distance offset percentages between the shoulder-lines extracted by the Marr-Hildreth operator and the manual delineations are calculated. The experimental results show that the new method achieves the highest extraction accuracy with σ = 5 in the Gaussian smoothing. According to the accuracy assessment, the average extraction accuracy is about 88.5%, which indicates that the proposed method is applicable to the extraction of loess shoulder-lines in loess hilly and gully areas.
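
    The pipeline starts from the Marr-Hildreth operator: Laplacian-of-Gaussian (LoG) filtering followed by zero-crossing detection. A pure-Python sketch on a toy step "terrain" (kernel radius, σ and the test image are invented; a real DEM would replace `step`):

```python
import math

def log_kernel(sigma, radius):
    """Discrete Laplacian-of-Gaussian (Marr-Hildreth) kernel."""
    s2 = sigma * sigma
    return [[(x * x + y * y - 2 * s2) / (s2 * s2)
             * math.exp(-(x * x + y * y) / (2 * s2))
             for x in range(-radius, radius + 1)]
            for y in range(-radius, radius + 1)]

def convolve(img, ker):
    """2-D convolution with clamped (replicated) borders."""
    h, w, r = len(img), len(img[0]), len(ker) // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii = min(max(i + di, 0), h - 1)
                    jj = min(max(j + dj, 0), w - 1)
                    acc += img[ii][jj] * ker[di + r][dj + r]
            out[i][j] = acc
    return out

def zero_crossings(resp):
    """Marr-Hildreth edges: sign changes between horizontal neighbours."""
    return {(i, j)
            for i, row in enumerate(resp)
            for j in range(len(row) - 1)
            if row[j] * row[j + 1] < 0}

# Toy "terrain": a sharp step between a lowland and an upland.
step = [[0.0] * 6 + [1.0] * 6 for _ in range(12)]
edges = zero_crossings(convolve(step, log_kernel(1.0, 3)))
```

    The detected zero crossings cluster around the step, which is the behaviour the paper exploits: candidate shoulder-lines appear where the smoothed second derivative of elevation changes sign.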

  19. Directed line liquids

    Kamien, R.D.

    1992-01-01

    This thesis is devoted to the study of ensembles of dense directed lines. These lines are principally to be thought of as polymers, though they also have the morphology of flux lines in high-temperature superconductors, strings of colloidal spheres in electrorheological fluids and the world lines of quantum mechanical bosons. The author discusses how directed polymer melts, string-like formations in electrorheological and ferro-fluids, flux lines in high-temperature superconductors and the world lines of quantum mechanical bosons all share similar descriptions, studies a continuous transition in all of these systems, and then studies the critical mixing properties of binary mixtures of directed polymers through the renormalization group, predicting the exponents for a directed polymer blend consolute point and a novel two-phase superfluid liquid-gas critical point.

  20. The Average IQ of Sub-Saharan Africans: Comments on Wicherts, Dolan, and van der Maas

    Lynn, Richard; Meisenberg, Gerhard

    2010-01-01

    Wicherts, Dolan, and van der Maas (2009) contend that the average IQ of sub-Saharan Africans is about 80. A critical evaluation of the studies presented by WDM shows that many of these are based on unrepresentative elite samples. We show that studies of 29 acceptably representative samples on tests other than the Progressive Matrices give a…

  1. Linear morphoea follows Blaschko's lines.

    Weibel, L; Harper, J I

    2008-07-01

    The aetiology of morphoea (or localized scleroderma) remains unknown. It has previously been suggested that lesions of linear morphoea may follow Blaschko's lines and thus reflect an embryological development. However, the distribution of linear morphoea has never been accurately evaluated. We aimed to identify common patterns of clinical presentation in children with linear morphoea and to establish whether linear morphoea follows the lines of Blaschko. A retrospective chart review of 65 children with linear morphoea was performed. According to clinical photographs the skin lesions of these patients were plotted on to standardized head and body charts. With the aid of Adobe Illustrator a final figure was produced including an overlay of all individual lesions which was used for comparison with the published lines of Blaschko. Thirty-four (53%) patients had the en coup de sabre subtype, 27 (41%) presented with linear morphoea on the trunk and/or limbs and four (6%) children had a combination of the two. In 55 (85%) children the skin lesions were confined to one side of the body, showing no preference for either left or right side. On comparing the overlays of all body and head lesions with the original lines of Blaschko there was an excellent correlation. Our data indicate that linear morphoea follows the lines of Blaschko. We hypothesize that in patients with linear morphoea susceptible cells are present in a mosaic state and that exposure to some trigger factor may result in the development of this condition.

  2. Series Transmission Line Transformer

    Buckles, Robert A.; Booth, Rex; Yen, Boris T.

    2004-06-29

    A series transmission line transformer is set forth which includes two or more impedance-matched sets of at least two transmission lines, such as shielded cables, connected in parallel at one end and in series at the other in a cascading fashion. The cables are wound about a magnetic core. The series transmission line transformer (STLT) can provide higher impedance ratios and bandwidths, is scalable, and is of simpler design and construction.
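
    The abstract does not state the transformation ratio, but for the topology it describes (N matched lines fed in parallel at one end and summed in series at the other) the ideal impedance ratio is N². A sketch with an assumed 50 Ω line impedance:

```python
def stlt_impedances(n_lines, z_line):
    """Ideal series transmission line transformer: n matched lines in
    parallel at the input and in series at the output."""
    z_in = z_line / n_lines        # n equal lines in parallel
    z_out = z_line * n_lines       # the same n lines in series
    return z_in, z_out, z_out / z_in

z_in, z_out, ratio = stlt_impedances(4, 50.0)   # four 50-ohm cables
# ratio is n**2 = 16: 12.5 ohms in, 200 ohms out
```

    Cascading such stages, as the patent describes, multiplies the individual ratios, which is how high overall impedance ratios are reached while each cable still sees a matched load.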

  3. Theory and analysis of accuracy for the method of characteristics direction probabilities with boundary averaging

    Liu, Zhouyu; Collins, Benjamin; Kochunas, Brendan; Downar, Thomas; Xu, Yunlin; Wu, Hongchun

    2015-01-01

    Highlights: • The CDP combines the benefits of the CPM's efficiency and the MOC's flexibility. • Boundary averaging reduces the computational effort while losing only minor accuracy. • An analysis model is used to justify the choice of the averaging strategy. • Numerical results show the performance and accuracy. - Abstract: The method of characteristic direction probabilities (CDP) combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC) for the solution of the integral form of the Boltzmann transport equation. By coupling only the fine regions traversed by the characteristic rays in a particular direction, the computational effort required to calculate the probability matrices and to solve the matrix system is considerably reduced compared to the CPM. Furthermore, boundary averaging is performed to reduce the storage and computation, while the capability of dealing with complicated geometries is preserved since the same ray-tracing information is used as in MOC. An analysis model for the outgoing angular flux is used to analyze a variety of outgoing angular flux averaging methods for the boundary and to justify the choice of the optimal averaging strategy. The boundary-averaged CDP method was then implemented in the Michigan PArallel Characteristic based Transport (MPACT) code to perform 2-D and 3-D transport calculations. Numerical results are given for different cases to show the effect of averaging on the outgoing angular flux, the region scalar flux and the eigenvalue. Comparison of the results with the case with no averaging demonstrates that an angular-dependent averaging strategy is possible for the CDP to improve its computational performance without compromising the achievable accuracy

  4. Temperature diagnostic line ratios of Fe XVII

    Raymond, J.C.; Smith, B.W. (Los Alamos National Lab., NM)

    1986-01-01

    Based on extensive calculations of the excitation rates of Fe XVII, four temperature-sensitive line ratios are investigated, paying special attention to the contribution of resonances to the excitation rates and to the contributions of dielectronic recombination satellites to the observed line intensities. The predictions are compared to FPCS observations of Puppis A and to Solar Maximum Mission (SMM) and SOLEX observations of the sun. Temperature-sensitive line ratios are also computed for emitting gas covering a broad temperature range. It is found that each ratio yields a differently weighted average for the temperature and that this accounts for some apparent discrepancies between the theoretical ratios and solar observations. The effects of this weighting on the Fe XVII temperature diagnostics and on the analogous Fe XXIV/Fe XXV satellite line temperature diagnostics are discussed. 27 references

  5. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

    Lidar velocity measurements need to be interpreted differently from conventional in-situ readings. A commonly ignored factor is "volume averaging", which refers to the fact that lidars do not sample in a single, distinct point but along the entire beam length. Especially in regions with large velocity gradients, like the rotor wake, this can be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using detached eddy simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume averaging. Volume averaging is captured accurately even with very few points discretising the lidar beam. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
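
    The record names pulsed and continuous-wave weighting functions without giving them; a common model for a CW lidar is a Lorentzian weight along the beam. The sketch below (probe length, wake shape and all numbers invented) shows how volume averaging smears a sharp wake deficit relative to a point measurement:

```python
import math

def cw_weight(s, zr):
    """Lorentzian range weight of a continuous-wave lidar focused at s = 0;
    zr is the Rayleigh length setting the probe volume (model assumption)."""
    return zr / (math.pi * (s * s + zr * zr))

def lidar_sample(velocity, focus, zr, half_span=60.0, ds=0.1):
    """Weighted average of line-of-sight velocity along the beam,
    mimicking the lidar's volume averaging."""
    num = den = 0.0
    steps = int(2 * half_span / ds) + 1
    for i in range(steps):
        s = -half_span + i * ds
        w = cw_weight(s, zr)
        num += w * velocity(focus + s) * ds
        den += w * ds
    return num / den

# Toy wake: a 4 m/s velocity deficit inside a 40 m wide wake, 8 m/s outside.
wake = lambda x: 4.0 if abs(x) < 20.0 else 8.0

point = wake(25.0)                         # ideal in-situ reading at the edge
lidar = lidar_sample(wake, 25.0, zr=10.0)  # volume-averaged lidar reading
```

    Near the wake edge, part of the probe volume lies inside the deficit, so the lidar reads below the free-stream point value; exactly this kind of discrepancy is largest at the wake edges in the paper's simulations.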

  6. Integrating robust timetabling in line plan optimization for railway systems

    Burggraeve, Sofie; Bull, Simon Henry; Lusby, Richard Martin

    The line planning problem in rail is to select a number of lines from a potential pool which provides sufficient passenger capacity and meets operational requirements, with some objective measure of solution line quality. We model the problem of minimizing the average passenger system time, including frequency-dependent estimates for switching between lines, working with the Danish rail operator DSB and data for Copenhagen commuters. We present a multi-commodity flow formulation for the problem of freely routing passengers, coupled to discrete line-frequency decisions selecting lines from a predefined pool...

  7. SRAP analysis for space induced mutant line of maize (Zea mays L.)

    Du Wenping; Yu Guirong; Song Jun; Xu Liyuan

    2011-01-01

    In order to detect the effects of space mutation on maize, 16 SRAP primer combinations were applied to discriminate the maize inbred line '968' and its 93 mutant materials, and 154 polymorphic fragments were amplified. The average number of polymorphic bands detected per SRAP primer combination was 9.6, with a range from 5 to 18. Genetic similarities among the 94 materials ranged from 0.481 to 1.000 with an average of 0.903, and the largest genetic distance was found between mutant line 37 and the control. The 94 materials were divided into six groups at a similarity coefficient of 0.732. The phylogenetic analysis showed distinct variation among the mutants. The results indicated that SRAP markers can be used for analyzing the genetic variation of mutants. (authors)
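
    The record does not say which similarity coefficient was used; marker studies of this kind often score bands as 0/1 presence vectors and use the Dice (Nei-Li) coefficient. A sketch with invented band data:

```python
def dice_similarity(a, b):
    """Nei-Li / Dice similarity between two 0/1 band-presence vectors:
    2 * shared bands / (bands in a + bands in b)."""
    shared = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    return 2 * shared / (sum(a) + sum(b))

# Hypothetical band patterns for the control line and one mutant:
control = [1, 1, 1, 0, 1, 0]
mutant = [1, 0, 1, 0, 1, 1]
similarity = dice_similarity(control, mutant)   # 3 shared of 4+4 bands
```

    Genetic distance is then typically taken as 1 minus the similarity, and a dendrogram cut at a chosen similarity (0.732 in the record) yields the groups.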

  8. Estimating average glandular dose by measuring glandular rate in mammograms

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
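
    The key step above is the fitted conversion from pixel value to glandular rate, which the paper obtains with a neural network trained on breast-equivalent phantoms. As a simple stand-in, a piecewise-linear interpolation through phantom calibration points (all numbers invented) conveys the idea:

```python
# Hypothetical calibration points: (mean pixel value, glandular rate in %).
CURVE = [(500, 0.0), (900, 25.0), (1400, 50.0), (2000, 75.0), (2800, 100.0)]

def glandular_rate(pixel):
    """Piecewise-linear interpolation on the phantom calibration curve,
    clamped to the curve's endpoints."""
    if pixel <= CURVE[0][0]:
        return CURVE[0][1]
    for (x0, y0), (x1, y1) in zip(CURVE, CURVE[1:]):
        if pixel <= x1:
            return y0 + (y1 - y0) * (pixel - x0) / (x1 - x0)
    return CURVE[-1][1]
```

    Applying such a conversion pixel by pixel and averaging gives the per-image glandular rate that feeds the standard average-glandular-dose calculation.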

  9. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Carla Chia-Ming Chen

    Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
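
    The abstract does not give the weighting scheme; a common Bayesian model averaging recipe weights each model by exp(-BIC/2), normalized, and averages the models' membership probabilities with those weights. A sketch with invented BIC values and membership probabilities (not the paper's actual data):

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values
    (weight proportional to exp(-BIC/2), normalized)."""
    best = min(bics)
    raw = [math.exp(-(b - best) / 2) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def average_membership(memberships, weights):
    """Model-averaged cluster-membership probabilities per subject."""
    n = len(memberships[0])
    return [sum(w * m[i] for m, w in zip(memberships, weights))
            for i in range(n)]

# Two hypothetical models assigning P(affected) to three subjects:
lca = [0.9, 0.2, 0.6]    # e.g. latent class analysis
gom = [0.7, 0.4, 0.6]    # e.g. grade of membership
w = bma_weights([100.0, 102.0])
avg = average_membership([lca, gom], w)
```

    The averaged memberships can then define the phenotype used for linkage analysis, rather than committing to either clustering alone.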

  10. Yearly, seasonal and monthly daily average diffuse sky radiation models

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and a 0.092 standard error of estimate. The data were also analyzed for a seasonal dependence, and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficients of determination are 0.93, 0.81, 0.94 and 0.93, whereas the standard errors of estimate are 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed, which has a 0.91 coefficient of determination and a 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs
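
    The abstract reports only regression statistics, not the model form; a minimal ordinary-least-squares fit of daily diffuse against daily global radiation, with its determination coefficient, might be sketched as follows (the sample data are fabricated):

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x, returning (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Fabricated daily global vs. diffuse radiation values (MJ/m^2/day):
g = [10.0, 14.0, 18.0, 22.0, 26.0]
d = [5.0, 6.1, 6.9, 8.2, 9.0]
a, b, r2 = linear_fit(g, d)
```

    The seasonal and monthly models in the record are the same construction restricted to the corresponding subsets of days.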

  11. Comparison of power pulses from homogeneous and time-average-equivalent models

    De, T.K.; Rouben, B.

    1995-01-01

    The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three-dimensional power distribution as that generated by a time-average model. However, it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time-dependent calculations, simulations of the power pulse following a hypothetical large loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show that there is a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed to the fact that voiding is not complete, but also to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs

  12. An Experimental Study Related to Planning Abilities of Gifted and Average Students

    Marilena Z. Leana-Taşcılar

    2016-02-01

    Gifted students differ from their average peers in psychological, social, emotional and cognitive development. One of these differences in the cognitive domain is related to executive functions, among the most important of which is planning and organization ability. The aim of this study was to compare the planning abilities of gifted students with those of their average peers and to test the effectiveness of a training program on the planning abilities of gifted and average students. First, the students' intelligence and planning abilities were measured, and the students were then assigned to either the experimental or the control group. The groups were matched by intelligence and planning ability (experimental: 13 gifted and 8 average; control: 14 gifted and 8 average). In total, 182 students (79 gifted and 103 average) participated in the study. A training program was then implemented in the experimental group to find out whether it improved the students' planning ability. Results showed that boys had better planning abilities than girls did, and gifted students had better planning abilities than their average peers did. Significant results were obtained in favor of the experimental group in the posttest scores.

  13. Average cross sections for the 252Cf neutron spectrum

    Dezso, Z.; Csikai, J.

    1977-01-01

    A number of average cross sections have been measured for 252Cf neutrons by the activation method for (n,γ), (n,p), (n,2n) and (n,α) reactions, and by fission chamber for fission. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb; as a function of target neutron number the data increase up to about N = 60, with minima near closed shells. The (n,p) values lie between 0.3 mb and 113 mb and decrease significantly with increasing threshold energy. The (n,2n) values are below 20 mb, and the (n,α) data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Z^(4/3)/A are shown. The results obtained are summarized in tables
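
    A spectrum-averaged cross section is the flux-weighted mean, σ̄ = ∫σ(E)χ(E)dE / ∫χ(E)dE. For 252Cf the fission neutron spectrum is often approximated by a Maxwellian with T ≈ 1.42 MeV; the sketch below uses that approximation with made-up σ(E) shapes (a flat cross section and a step-threshold one):

```python
import math

def cf252_spectrum(e, temp=1.42):
    """Maxwellian approximation of the 252Cf fission neutron spectrum
    (unnormalized); e and temp in MeV."""
    return math.sqrt(e) * math.exp(-e / temp)

def spectrum_averaged(sigma, de=0.001, emax=20.0):
    """sigma_avg = integral(sigma * chi) / integral(chi), midpoint rule."""
    num = den = 0.0
    for i in range(int(emax / de)):
        e = (i + 0.5) * de
        w = cf252_spectrum(e)
        num += sigma(e) * w * de
        den += w * de
    return num / den

flat = spectrum_averaged(lambda e: 100.0)                 # constant 100 mb
threshold = spectrum_averaged(lambda e: 100.0 if e > 3.0 else 0.0)
```

    The threshold reaction averages well below its plateau value because only the high-energy tail of the spectrum contributes, which is why the measured (n,p), (n,2n) and (n,α) averages fall with increasing threshold energy.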

  14. Testing averaged cosmology with type Ia supernovae and BAO data

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  15. Average contraction and synchronization of complex switched networks

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)

  17. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  18. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Nicholson, William V

    2004-11-01

A method of computing correlation coefficients for object detection that takes advantage of azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms both the cross-correlation function and the local correlation coefficient for object detection in simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in the detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
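Two of the ingredients being compared, azimuthal averaging of a reference and a plain correlation coefficient against a patch, can be sketched in a few lines of NumPy; this is an illustrative reconstruction, not the author's implementation.

```python
import numpy as np

def azimuthal_average(img):
    """Replace each pixel of a square image by the mean over its integer
    radius bin, i.e. rotationally average the image about its center."""
    n = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=img.ravel())   # per-bin intensity sum
    counts = np.bincount(r.ravel())                      # per-bin pixel count
    return (sums / counts)[r]

def local_corr(image, template):
    """Plain (Pearson) correlation coefficient between a template and an
    equally sized image patch."""
    a = image - image.mean()
    b = template - template.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
ref = rng.random((33, 33))
ref_az = azimuthal_average(ref)                        # azimuthally averaged reference
patch = ref_az + 0.1 * rng.standard_normal(ref.shape)  # noisy simulated view
print(local_corr(patch, ref_az))
```

A rotational correlation coefficient as described in the paper would additionally maximize the score over in-plane rotations of the reference.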

  19. Measurement of average radon gas concentration at workplaces

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

In this paper, results of measurements of the average radon gas concentration at workplaces (schools, kindergartens, and ventilated workplaces) are presented. One-month-long measurements show very high variation (as is evident in the cases of the hospital cave and the uranium tailing pond). Consequently, at workplaces where considerable seasonal changes in radon concentration are expected, measurements should last for 12 months. If this is not possible, the chosen six-month period should include both summer and winter months. The average radon concentration during working hours can differ considerably from the average over the whole time when doors and windows are opened frequently or artificial ventilation is used. (authors)
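The working-hours versus whole-time distinction can be made concrete with a toy computation (illustrative numbers, not the paper's data):

```python
def radon_averages(hourly_conc, work_hours=range(8, 16)):
    """Average radon concentration over working hours vs. over the whole day.
    `hourly_conc` holds 24 hourly values in Bq/m^3 (toy numbers below)."""
    work = [hourly_conc[h] for h in work_hours]
    return sum(work) / len(work), sum(hourly_conc) / len(hourly_conc)

# Toy diurnal profile: ventilation during the 8:00-16:00 shift lowers levels.
conc = [400] * 8 + [150] * 8 + [400] * 8
work_avg, full_avg = radon_averages(conc)
print(work_avg, round(full_avg))  # 150.0 vs 317
```

A passive track-etch detector integrates over the whole time and would report the higher value, which is why the paper flags the working-hours average separately.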

  20. Size and emotion averaging: costs of dividing attention after all.

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  1. Minnesota County Boundaries - lines

    Minnesota Department of Natural Resources — Minnesota county boundaries derived from a combination of 1:24,000 scale PLS lines, 1:100,000 scale TIGER, 1:100,000 scale DLG, and 1:24,000 scale hydrography lines....

  2. Database of emission lines

    Binette, L.; Ortiz, P.; Joguet, B.; Rola, C.

    1998-11-01

A widely accessible data bank (available through Netscape), consisting of all (or most) of the emission lines reported in the literature, is being built. It will comprise objects as diverse as HII regions, PN, AGN, and HHO. One of its uses will be to define/refine existing emission-line diagnostic diagrams.

  3. Estimation of linkage disequilibrium and analysis of genetic diversity in Korean chicken lines

    Seo, Dongwon; Lee, Doo Ho; Choi, Nuri; Sudrajad, Pita; Lee, Seung-Hwan

    2018-01-01

    The development of genetic markers for animal breeding is an effective strategy to reduce the time and cost required to improve economically important traits. To implement genomic selection in the multibreed chicken population of Korea, an understanding of the linkage disequilibrium (LD) status of the target population is essential. In this study, we performed population genetic analyses to investigate LD decay, the effective population size, and breed diversity using 600K high-density single nucleotide polymorphism genotypes of 189 native chickens in 14 lines (including Korean native chicken, imported and adapted purebred and commercial chickens). The results indicated that commercial native chickens have less calculated LD (average, r2 = 0.13–0.26) and purebred native chickens have more calculated LD (average, r2 = 0.24–0.37) across the entire genome. The effective population sizes of the examined lines showed patterns opposite to those of population LD. The phylogeny and admixture analyses showed that commercial and purebred chickens were well distinguished, except for Rhode Island Red (RIR) purebred lines of NC (NIAS_RIR_C) and ND (NIAS_RIR_D). These lines are difficult to distinguish clearly because they originated from the same respective breeds. The results of this study may provide important information for the development of genetic markers that can be used in breeding to improve the economic traits of native chickens. PMID:29425208
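The LD statistic reported above, r², can be computed for a pair of biallelic loci with a short sketch (standard formula, illustrative data; not the authors' pipeline):

```python
def ld_r2(hap_a, hap_b):
    """Squared correlation r^2 between two biallelic loci, computed from
    phased haplotypes coded 0/1."""
    n = len(hap_a)
    p_a = sum(hap_a) / n
    p_b = sum(hap_b) / n
    p_ab = sum(a * b for a, b in zip(hap_a, hap_b)) / n
    d = p_ab - p_a * p_b                # linkage disequilibrium coefficient D
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Two tightly linked loci: the allele at locus B usually tracks locus A.
hap_a = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
hap_b = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]
print(round(ld_r2(hap_a, hap_b), 3))    # high r^2 = slow LD decay
```

Genome-wide LD decay curves such as those in the study are obtained by averaging this quantity within bins of physical distance between SNP pairs.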

  4. Estimation of linkage disequilibrium and analysis of genetic diversity in Korean chicken lines.

    Seo, Dongwon; Lee, Doo Ho; Choi, Nuri; Sudrajad, Pita; Lee, Seung-Hwan; Lee, Jun-Heon

    2018-01-01

    The development of genetic markers for animal breeding is an effective strategy to reduce the time and cost required to improve economically important traits. To implement genomic selection in the multibreed chicken population of Korea, an understanding of the linkage disequilibrium (LD) status of the target population is essential. In this study, we performed population genetic analyses to investigate LD decay, the effective population size, and breed diversity using 600K high-density single nucleotide polymorphism genotypes of 189 native chickens in 14 lines (including Korean native chicken, imported and adapted purebred and commercial chickens). The results indicated that commercial native chickens have less calculated LD (average, r2 = 0.13-0.26) and purebred native chickens have more calculated LD (average, r2 = 0.24-0.37) across the entire genome. The effective population sizes of the examined lines showed patterns opposite to those of population LD. The phylogeny and admixture analyses showed that commercial and purebred chickens were well distinguished, except for Rhode Island Red (RIR) purebred lines of NC (NIAS_RIR_C) and ND (NIAS_RIR_D). These lines are difficult to distinguish clearly because they originated from the same respective breeds. The results of this study may provide important information for the development of genetic markers that can be used in breeding to improve the economic traits of native chickens.

  5. A virtual pebble game to ensemble average graph rigidity.

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test whether a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure with an effective medium that has distance constraints globally distributed with perfectly uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG estimates the ensemble-average PG results well. The VPG runs about 20% faster than a single PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
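The Maxwell constraint counting bound mentioned above is simple enough to state in code (a generic body-bar count with toy numbers, not the paper's software):

```python
def maxwell_dof(n_bodies, n_bars):
    """Maxwell constraint counting for a body-bar network: each rigid body
    carries 6 degrees of freedom, each bar removes at most one, and 6 global
    rigid-body motions are subtracted. The result is a lower bound on the
    internal DOF (floored at 0 for over-constrained networks)."""
    return max(6 * n_bodies - n_bars - 6, 0)

# Toy networks (illustrative numbers, not a real protein):
print(maxwell_dof(n_bodies=100, n_bars=550))  # under-constrained -> 44
print(maxwell_dof(n_bodies=100, n_bars=700))  # over-constrained  -> 0
```

The PG and VPG improve on this mean-field count precisely because bars in real molecules are not uniformly distributed, so redundant constraints and rigid substructures appear locally.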

  6. Exactly averaged equations for flow and transport in random media

    Shvidler, Mark; Karasaki, Kenzi

    2001-01-01

It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood (for example, the convergence behavior and accuracy of truncated perturbation series), and the calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact, general, and sufficiently universal forms of averaged equations exist? If the answer is positive, there arises the problem of constructing and analyzing these equations. Many publications are related to these problems, oriented toward different applications: hydrodynamics, flow and transport in porous media, the theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method of finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of conductivity, porosity, and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases: (1) steady-state flow with sources in porous media with random conductivity; (2) transient flow with sources in compressible media with random conductivity and porosity; (3) non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversally isotropic, orthotropic), and we analyze the hypothesis about the structure of the non-local equations in the general case of stochastically homogeneous fields. (author)
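As an illustration of the kind of non-local form such exactly averaged equations take (a generic sketch, not the authors' specific result), the averaged flux in a stochastically homogeneous medium can be written with a kernel built from Green's-function correlations:

```latex
\langle q_i(\mathbf{x})\rangle
  = -\int_{\Omega} K_{ij}(\mathbf{x},\mathbf{x}')\,
     \frac{\partial \langle h\rangle}{\partial x'_j}\, d\mathbf{x}'
```

The averaged Darcy law is thus non-local in space; a local effective conductivity tensor is recovered only in the limit where the kernel K_{ij} is sharply peaked around x' = x.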

  7. Increase in average foveal thickness after internal limiting membrane peeling

    Kumagai K

    2017-04-01

Full Text Available Kazuyuki Kumagai,1 Mariko Furukawa,1 Tetsuyuki Suetsugu,1 Nobuchika Ogino2 1Department of Ophthalmology, Kami-iida Daiichi General Hospital, 2Department of Ophthalmology, Nishigaki Eye Clinic, Aichi, Japan Purpose: To report the findings in three cases in which the average foveal thickness increased after a thin epiretinal membrane (ERM) was removed by vitrectomy with internal limiting membrane (ILM) peeling. Methods: The foveal contour was normal preoperatively in all eyes. All cases underwent successful phacovitrectomy with ILM peeling for a thin ERM. The optical coherence tomography (OCT) images were examined before and after the surgery. The changes in the average foveal (1 mm) thickness and the foveal areas within 500 µm of the foveal center were measured. The postoperative changes in the inner and outer retinal areas determined from the cross-sectional OCT images were analyzed. Results: The average foveal thickness and the inner and outer foveal areas increased significantly after the surgery in each of the three cases. The percentage increase in the average foveal thickness relative to the baseline thickness was 26% in Case 1, 29% in Case 2, and 31% in Case 3. The percentage increase in the foveal inner retinal area was 71% in Case 1, 113% in Case 2, and 110% in Case 3, and the percentage increase in the foveal outer retinal area was 8% in Case 1, 13% in Case 2, and 18% in Case 3. Conclusion: The increase in the average foveal thickness and the inner and outer foveal areas suggests that a centripetal movement of the inner and outer retinal layers toward the foveal center probably occurred due to the ILM peeling. Keywords: internal limiting membrane, optical coherence tomography, average foveal thickness, epiretinal membrane, vitrectomy

  8. Positivity of the spherically averaged atomic one-electron density

    Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas

    2008-01-01

We investigate the positivity of the spherically averaged atomic one-electron density. For a density which stems from a physical ground state we prove positivity for r ≥ 0. This article may be reproduced in its entirety for non-commercial purposes.

  9. Research & development and growth: A Bayesian model averaging analysis

    Horváth, Roman

    2011-01-01

Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  10. MAIN STAGES SCIENTIFIC AND PRODUCTION MASTERING THE TERRITORY AVERAGE URAL

    V.S. Bochko

    2006-09-01

Full Text Available The article considers the formation of the Average Ural as an industrial territory on the basis of its scientific study and production development. It is shown that Russian and foreign scientists studied the resources of the Urals and the particularities of the vital activity of its population in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to a systematic organizational-economic study of the productive forces, society, and nature of the Average Ural. More attention is now directed to new problems of the region and to the need for their scientific solution.

  11. High-Average, High-Peak Current Injector Design

    Biedron, S G; Virgo, M

    2005-01-01

There is increasing interest in high-average-power (>100 kW), µm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  12. Microbes make average 2 nanometer diameter crystalline UO2 particles.

    Suzuki, Y.; Kelly, S. D.; Kemner, K. M.; Banfield, J. F.

    2001-12-01

It is well known that phylogenetically diverse groups of microorganisms are capable of catalyzing the reduction of highly soluble U(VI) to highly insoluble U(IV), which rapidly precipitates as uraninite (UO2). Because biological uraninite is highly insoluble, microbial uranyl reduction is being intensively studied as the basis for a cost-effective in-situ bioremediation strategy. Previous studies have described UO2 biomineralization products as amorphous or poorly crystalline. The objective of this study is to characterize the nanocrystalline uraninite in detail in order to determine the particle size, crystallinity, and size-related structural characteristics, and to examine the implications of these for reoxidation and transport. In this study, we obtained U-contaminated sediment and water from an inactive U mine and incubated them anaerobically with nutrients to stimulate reductive precipitation of UO2 by indigenous anaerobic bacteria, mainly Gram-positive spore-forming Desulfosporosinus and Clostridium spp., as revealed by RNA-based phylogenetic analysis. A Desulfosporosinus sp. was isolated from the sediment, and UO2 was precipitated by this isolate from a simple solution containing only U and electron donors. We characterized the UO2 formed in both experiments by high-resolution TEM (HRTEM) and X-ray absorption fine structure (XAFS) analysis. The HRTEM results showed that both the pure and the mixed cultures of microorganisms precipitated crystalline UO2 particles around 1.5-3 nm in diameter. Some particles as small as around 1 nm could be imaged. Rare particles around 10 nm in diameter were also present. Particles adhere to cells and form colloidal aggregates with low fractal dimension. In some cases, coarsening by oriented attachment on {111} is evident. Our preliminary results from XAFS for the incubated U-contaminated sample also indicated an average UO2 diameter of 2 nm. In nanoparticles, the U-U distance obtained by XAFS was 0.373 nm, 0.012 nm

  13. Increasing average period lengths by switching of robust chaos maps in finite precision

    Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.

    2008-12-01

Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits as compared to simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
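The paper's Robust Chaos map and switching rule are not reproduced here; as a rough illustration (with the ordinary logistic and tent maps standing in), one can measure the period that emerges when a chaotic map is iterated in finite precision:

```python
def tent(x):
    """Tent map: chaotic on [0, 1], used here as a stand-in for a second map."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

def logistic(x):
    """Fully chaotic logistic map on [0, 1]."""
    return 4.0 * x * (1.0 - x)

def period_length(step, x0, digits):
    """Iterate `step` in finite precision (round every state to `digits`
    decimal digits) until a state repeats; return the cycle length."""
    seen, x, i = {}, round(x0, digits), 0
    while x not in seen:
        seen[x] = i
        x = round(step(x), digits)
        i += 1
    return i - seen[x]

# Deterministic "switching": alternate the two maps in one composed step.
switched = lambda x: tent(logistic(x))

for name, f in [("logistic", logistic), ("switched", switched)]:
    print(name, period_length(f, 0.123, 4))
```

In finite precision the state space is finite, so every orbit eventually closes; the paper's point is that switching raises the average length of these cycles.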

  14. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle of 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power scaling on the backing pressure ranging from 16 to 50 bar, and the power is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the signal intensity is maximal at Z = 5 mm. The estimate of the average cluster size as a function of axial position Z indicates that the cluster growth process continues until the maximum average cluster size is reached at Z = 9 mm, after which the average cluster size decreases gradually for Z > 9 mm

  15. On averaging the Kubo-Hall conductivity of magnetic Bloch bands leading to Chern numbers

    Riess, J.

    1997-01-01

The authors re-examine the topological approach to the integer quantum Hall effect in its original form, where an average of the Kubo-Hall conductivity of a magnetic Bloch band has been considered. For the precise definition of this average it is crucial to make a sharp distinction between the discrete Bloch wave numbers k_1, k_2 and the two continuous integration parameters α_1, α_2. The average over the parameter domain 0 ≤ α_j < 1 also involves the discrete wave numbers k_1, k_2. They show how this can be transformed into a single integral over the continuous magnetic Brillouin zone 0 ≤ α_j < n_j, j = 1, 2 (n_j = number of unit cells in the j-direction), keeping k_1, k_2 fixed. This average prescription for the Hall conductivity of a magnetic Bloch band is exactly the same as the one used for a many-body system in the presence of disorder

  16. [The smile line, a literature search].

    van der Geld, P A; van Waas, M A

    2003-09-01

Beautiful teeth, visible when smiling, are in line with the present ideal of beauty. The display of teeth when smiling is determined by the smile line: the projection of the lower border of the upper lip on the maxillary teeth when smiling. On the basis of a literature search, methods for determining the smile line are discussed, demographic data on the position of the smile line are given, and factors of influence are examined. There is no unequivocal method for determining the position of the smile line. A rough distinction can be made between qualitative and (semi-)quantitative methods. The (semi-)quantitative methods have clear advantages for research purposes, but their reliability is unknown. It was demonstrated that in at least 40% of subjects the maxillary gingiva was not visible when smiling. The mandibular gingiva was not visible when smiling in more than 90% of subjects. Furthermore, the smile line was on average situated higher in women than in men, and it has not yet been proven that the smile line becomes situated lower with increasing age.

  17. A Product Line Enhanced Unified Process

    Zhang, Weishan; Kunz, Thomas

    2006-01-01

The Unified Process facilitates reuse for a single system, but falls short in handling multiple similar products. In this paper we present an enhanced Unified Process, called UPEPL, that integrates product line technology in order to alleviate this problem. In UPEPL, the product-line-related activities are added and can be conducted side by side with the other classical UP activities. In this way the advantages of both the Unified Process and software product lines can co-exist in UPEPL. We show how to use UPEPL with an industrial mobile device product line in our case study.

  18. Averaging processes in granular flows driven by gravity

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

One of the most promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
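The distinction between the two averages can be made concrete with a toy computation (illustrative numbers, not the authors' data); for a gas, n would be effectively constant across realizations and the two results would coincide:

```python
# Each realization: particle count n and mean particle velocity u.
# (Grains are large, so n fluctuates between realizations.)
realizations = [(3, 1.0), (5, 0.6), (2, 1.4), (4, 0.8)]  # (n, u) pairs

# Phasic (plain ensemble) average: every realization carries equal weight.
phasic = sum(u for _, u in realizations) / len(realizations)

# Mass-weighted average: each realization is weighted by its particle count n.
mass_weighted = sum(n * u for n, u in realizations) / sum(n for n, _ in realizations)

print(phasic, mass_weighted)  # the two averages differ because n fluctuates
```

Here the phasic average is 0.95 while the mass-weighted average is 12/14 ≈ 0.857, precisely the discrepancy the abstract attributes to a fluctuating particle number.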

  19. Limit cycles from a cubic reversible system via the third-order averaging method

    Linping Peng

    2015-04-01

    Full Text Available This article concerns the bifurcation of limit cycles from a cubic integrable and non-Hamiltonian system. By using the averaging theory of the first and second orders, we show that under any small cubic homogeneous perturbation, at most two limit cycles bifurcate from the period annulus of the unperturbed system, and this upper bound is sharp. By using the averaging theory of the third order, we show that two is also the maximal number of limit cycles emerging from the period annulus of the unperturbed system.
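For context, the standard first-order averaging statement (a textbook form, not the specific estimates of this paper) reads: for a T-periodic system

```latex
\dot{x} = \varepsilon F(t,x) + \mathcal{O}(\varepsilon^{2}), \qquad
\bar{F}(y) = \frac{1}{T}\int_{0}^{T} F(t,y)\,dt, \qquad
\dot{y} = \varepsilon\,\bar{F}(y)
```

each simple zero y_0 of F̄ with non-degenerate Jacobian corresponds, for sufficiently small ε, to a limit cycle of the original system; the second- and third-order theories used in the article refine F̄ with higher-order terms in ε, which is how the sharp upper bound of two limit cycles is established.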

  20. Measurement of the average mass of proteins adsorbed to a nanoparticle by using a suspended microchannel resonator.

    Nejadnik, M Reza; Jiskoot, Wim

    2015-02-01

We assessed the potential of a suspended microchannel resonator (SMR) to measure the adsorption of proteins to nanoparticles. Standard polystyrene beads suspended in buffer were weighed by an SMR system. Particle suspensions were mixed with solutions of bovine serum albumin (BSA) or a monoclonal human antibody (IgG), incubated at room temperature for 3 h, and weighed again with the SMR. The difference in buoyant mass between the bare and protein-coated polystyrene beads was converted into the real mass of adsorbed protein. The average surface area occupied per protein molecule was calculated, assuming a monolayer of adsorbed protein. In parallel, dynamic light scattering (DLS), nanoparticle tracking analysis (NTA), and zeta potential measurements were performed. SMR revealed a statistically significant increase in the mass of the beads because of adsorption of proteins (for both BSA and IgG), whereas DLS and NTA did not show a difference between the sizes of bare and protein-coated beads. The change in the zeta potential of the beads was also measurable. The surface area occupied per protein molecule was in line with their known size. The presented results show that SMR can be used to measure the mass of protein adsorbed to nanoparticles with high precision in the presence of free protein. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
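The buoyant-mass-to-real-mass conversion and the area-per-molecule estimate can be sketched as follows; all numerical values are hypothetical stand-ins, not the paper's measurements.

```python
import math

AVOGADRO = 6.022e23

def adsorbed_protein_mass_fg(delta_buoyant_fg, rho_protein=1.35, rho_fluid=1.005):
    """The buoyant-mass increase is due to the protein layer alone, so
    m_protein = delta_m_b / (1 - rho_fluid / rho_protein).
    Densities in g/mL; 1.35 g/mL is a typical protein density."""
    return delta_buoyant_fg / (1.0 - rho_fluid / rho_protein)

# Hypothetical numbers: a 0.20 fg buoyant-mass increase on a 1-um-diameter
# polystyrene bead coated with BSA (66.5 kDa).
m_protein_fg = adsorbed_protein_mass_fg(0.20)
n_molecules = m_protein_fg * 1e-15 / 66500 * AVOGADRO   # fg -> g, then moles
bead_area_nm2 = math.pi * 1000.0 ** 2                   # sphere area = pi*d^2, d = 1000 nm
print(f"{m_protein_fg:.2f} fg adsorbed, {bead_area_nm2 / n_molecules:.0f} nm^2 per molecule")
```

Comparing the resulting area per molecule with the known footprint of the protein is what lets the authors check the monolayer assumption.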

  1. Multi-Decadal Averages of Basal Melt for Ross Ice Shelf, Antarctica Using Airborne Observations

    Das, I.; Bell, R. E.; Tinto, K. J.; Frearson, N.; Kingslake, J.; Padman, L.; Siddoway, C. S.; Fricker, H. A.

    2017-12-01

Changes in ice shelf mass balance are key to the long-term stability of the Antarctic Ice Sheet. Although the most extensive ice shelf mass loss currently is occurring in the Amundsen Sea sector of West Antarctica, many other ice shelves experience changes in thickness on time scales from annual to ice age cycles. Here, we focus on the Ross Ice Shelf. An 18-year record (1994-2012) of satellite radar altimetry shows substantial variability in Ross Ice Shelf height on interannual time scales, complicating detection of potential long-term climate-change signals in the mass budget of this ice shelf. Variability of radar signal penetration into the ice-shelf surface snow and firn layers further complicates assessment of mass changes. We investigate Ross Ice Shelf mass balance using aerogeophysical data from the ROSETTA-Ice surveys using IcePod. We use two ice-penetrating radars: a 2 GHz unit that images fine structure in the upper 400 m of the ice surface and a 360 MHz radar to identify the ice shelf base. We have identified internal layers that are continuous along flow from the grounding line to the ice shelf front. Based on layer continuity, we conclude that these layers must be the horizons between the continental ice of the outlet glaciers and snow accumulation once the ice is afloat. We use the Lagrangian change in thickness of these layers, after correcting for strain rates derived using modern-day InSAR velocities, to estimate multidecadal averaged basal melt rates. This method provides a novel way to quantify basal melt, avoiding the confounding impacts of spatial and short-timescale variability in surface accumulation and firn densification processes. Our estimates show elevated basal melt rates (> -1 m/yr) around Byrd and Mullock glaciers within 100 km of the ice shelf front. We also compare modern InSAR-velocity-derived strain rates with estimates from the comprehensive ground-based RIGGS observations during 1973-1978 to estimate the potential magnitude of
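The layer-based estimate described above can be summarized schematically (symbols are ours, introduced for illustration) by the Lagrangian thickness budget of the ice column below a tracked internal layer:

```latex
\dot{b} = -\left(\frac{DH}{Dt}
        + H\,(\dot{\varepsilon}_{xx} + \dot{\varepsilon}_{yy})\right)
```

where H is the layer-to-base thickness, DH/Dt its along-flow (Lagrangian) rate of change, the ε̇ terms are the horizontal strain rates from InSAR velocities, and ḃ is the basal melt rate. Tracking a sub-surface layer rather than the surface removes the surface accumulation and firn densification terms from the budget, which is the advantage the abstract highlights.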

  2. Comparisons on Genetic Diversity among the Isonuclear-Alloplasmic Male Sterile Lines and Their Maintainer Lines in Rice

    Jin-quan LI

    2007-06-01

Full Text Available Four sets of rice isonuclear-alloplasmic lines, including 16 male sterile lines and their maintainer lines, were analyzed using 91 pairs of SSR primers to study the genetic diversity of the nuclear genome and their relative relationships. A total of 169 alleles were detected in the 16 lines, with a frequency of polymorphic loci of 53.85%, an average number of alleles per locus of 1.8, and an average gene diversity of 0.228. The four sets of isonuclear-alloplasmic male sterile lines shared 146 identical alleles, corresponding to 86.39% of the total alleles; meanwhile, there were 23 different alleles among the tested materials, accounting for 13.61% of the total. On average, 78.70% identical alleles and 21.30% different alleles were detected between the isonuclear-alloplasmic male sterile lines and their maintainer lines. There were 53.85% identical alleles and 46.15% different alleles among the homozygous allonucleus male sterile lines. Fingerprints were established for some male sterile lines and maintainer lines. All the materials tested were divided into three groups at the 0.2 genetic distance based on the cluster analysis: eight lines of Huanong A and Huayu A (including Huanong B and Huayu B) in the first group, four lines of Kezhen A (including Kezhen B) in the second group, and four lines of Zhenshan 97A (including Zhenshan 97B) in the third group. For the isonuclear-alloplasmic male sterile lines, the similarity coefficient between Y (Yegong) type and WA (wild abortive) type, or between CW (Raoping wild rice) and WA type, reached 87-98%.
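The per-locus gene diversity underlying the reported average of 0.228 is conventionally Nei's expected heterozygosity; a minimal sketch with toy allele counts (not the study's data):

```python
def gene_diversity(allele_counts):
    """Nei's gene diversity (expected heterozygosity) at one locus:
    H = 1 - sum(p_i^2) over the allele frequencies p_i."""
    total = sum(allele_counts)
    return 1.0 - sum((c / total) ** 2 for c in allele_counts)

# Toy SSR locus with three alleles observed 12, 6 and 2 times:
print(round(gene_diversity([12, 6, 2]), 3))
```

Averaging this value over all 91 SSR loci, and averaging pairwise allele sharing over lines, yields summary statistics of the kind reported in the abstract.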

  3. Line profile variations in selected Seyfert galaxies

    Kollatschny, W; Zetzl, M; Ulbrich, K

    2010-01-01

Continua as well as the broad emission lines in Seyfert 1 galaxies vary with different amplitudes in different galaxies on typical timescales of days to years. We present the results of two independent variability campaigns taken with the Hobby-Eberly Telescope. We studied in detail the integrated line and continuum variations in the optical spectra of the narrow-line Seyfert galaxy Mrk 110 and the very broad-line Seyfert galaxy Mrk 926. The broad-line emitting region in Mrk 110 has radii of four to 33 light-days as a function of the ionization degree of the emission lines. The line-profile variations are matched by Keplerian disk models with some accretion disk wind. The broad-line region in Mrk 926 is very small, showing an extension of only two to three light-days. We could detect structure in the rms line profiles as well as in the response of the line-profile segments of Mrk 926, indicating that the BLR is structured.

  4. Altering graphene line defect properties using chemistry

    Vasudevan, Smitha; White, Carter; Gunlycke, Daniel

    2012-02-01

    First-principles calculations are presented of a fundamental topological line defect in graphene that was observed and reported in Nature Nanotech. 5, 326 (2010). These calculations show that atoms and smaller molecules can bind covalently to the surface in the vicinity of the graphene line defect. It is also shown that the chemistry at the line defect has a strong effect on its electronic and magnetic properties, e.g. the ferromagnetically aligned moments along the line defect can be quenched by some adsorbates. The strong effect of the adsorbates on the line defect properties can be understood by examining how these adsorbates affect the boundary-localized states in the vicinity of the Fermi level. We also expect that the line defect chemistry will significantly affect the scattering properties of incident low-energy particles approaching it from graphene.

  5. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  6. High Average Power UV Free Electron Laser Experiments At JLAB

    Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn

    2012-01-01

    Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  7. Establishment of Average Body Measurement and the Development ...

    cce

    body measurement for height and backneck to waist for ages 2,3,4 and 5 years. The ... average measurements of the different parts of the body must be established. ..... and OAU Charter on Rights of the child: Lagos: Nigeria Country office.

  8. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
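The analytic/sequential strategy attributed to the small-set condition amounts to an exact running mean, updated once per presented numeral. A minimal sketch of that update rule (our illustration, not the paper's computational model):

```python
def running_mean(stream):
    """Exact sequential average: update the estimate after every item,
    as an analytic, working-memory-style strategy would."""
    mean = 0.0
    for k, x in enumerate(stream, start=1):
        mean += (x - mean) / k   # incremental mean update
    return mean

estimate = running_mean([12, 47, 80, 33])   # a 4-number condition trial
```

Each update costs O(1) memory, which is why a per-item symbolic procedure is plausible for short sequences but strained at 16 items presented rapidly.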

  9. Determination of the average lifetime of bottom hadrons

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W [Technische Hochschule Aachen (Germany, F.R.). Lehrstuhl fuer Experimentalphysik 1A und 1. Physikalisches Inst.; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e+e- annihilation to be τ(B) = 1.83 × 10^-12 s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes.

  10. Determination of the average lifetime of bottom hadrons

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e+e- annihilation to be τ(B) = 1.83 × 10^-12 s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes. (orig./HSI).

  11. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
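As a rough illustration of the identification stage of the Box-Jenkins procedure, the sketch below differences a series until it looks stationary and exposes the sample autocorrelation used to pick AR/MA orders. The GPA series and function names are hypothetical, and a real analysis would use a statistics package:

```python
def difference(series, d=1):
    """Apply d rounds of first differencing (the 'I' in ARIMA)."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def acf(series, lag):
    """Sample autocorrelation at a given lag, inspected during the
    Box-Jenkins identification stage."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((x - mean) ** 2 for x in series)
    ck = sum((series[t] - mean) * (series[t + lag] - mean) for t in range(n - lag))
    return ck / c0

gpa_trend = [2.0, 2.2, 2.4, 2.6, 2.8, 3.0]   # hypothetical GPA series
stationary = difference(gpa_trend)            # roughly constant after differencing
lag1 = acf(gpa_trend, 1)                      # positive autocorrelation reflects the trend
```

Estimation and diagnosis (the second and third stages) would then fit ARMA coefficients to the differenced series and check the residual autocorrelations.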

  12. Crystallographic extraction and averaging of data from small image areas

    Perkins, GA; Downing, KH; Glaeser, RM

    The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that

  13. Reducing Noise by Repetition: Introduction to Signal Averaging

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
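The core of signal averaging is simple enough to state in code: average N repeated recordings point-wise, and uncorrelated noise shrinks roughly as 1/√N while the underlying signal is preserved. A self-contained sketch with synthetic data (not drawn from the paper's experiments):

```python
import random

def signal_average(trials):
    """Point-wise average of repeated noisy recordings."""
    n = len(trials)
    return [sum(samples) / n for samples in zip(*trials)]

# Hypothetical demo: a clean ramp buried in Gaussian noise.
random.seed(0)
clean = [t / 10 for t in range(10)]
trials = [[s + random.gauss(0, 1.0) for s in clean] for _ in range(400)]
averaged = signal_average(trials)
residual = max(abs(a - s) for a, s in zip(averaged, clean))
```

With 400 repetitions the per-point noise standard deviation drops from 1.0 to about 0.05, so `averaged` tracks `clean` closely even though any single trial is dominated by noise.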

  14. The background effective average action approach to quantum gravity

    D’Odorico, G.; Codello, A.; Pagani, C.

    2016-01-01

    of a UV-attractive non-Gaussian fixed point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016....

  15. Error estimates in horocycle averages asymptotics: challenges from string theory

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotics to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth

  16. Moving average rules as a source of market instability

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets

  17. arXiv Averaged Energy Conditions and Bouncing Universes

    Giovannini, Massimo

    2017-11-16

    The dynamics of bouncing universes is characterized by violating certain coordinate-invariant restrictions on the total energy-momentum tensor, customarily referred to as energy conditions. Although there could be epochs in which the null energy condition is locally violated, it may perhaps be enforced in an averaged sense. Explicit examples of this possibility are investigated in different frameworks.
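The averaged null energy condition alluded to here is conventionally written as an integral along a complete null geodesic with tangent vector k^μ and affine parameter λ:

```latex
\int_{-\infty}^{+\infty} T_{\mu\nu}\, k^{\mu} k^{\nu}\, d\lambda \;\geq\; 0
```

A bouncing background may violate T_{μν} k^μ k^ν pointwise during the bounce while this integral along the geodesic remains non-negative, which is the averaged sense of the condition discussed in the abstract.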

  18. 26 CFR 1.1301-1 - Averaging of farm income.

    2010-04-01

    ... January 1, 2003, rental income based on a share of a tenant's production determined under an unwritten... the Collection of Income Tax at Source on Wages (Federal income tax withholding), or the amount of net... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Averaging of farm income. 1.1301-1 Section 1...

  19. Implications of Methodist clergies' average lifespan and missional ...

    2015-06-09

    Jun 9, 2015 ... The author of Genesis 5 paid meticulous attention to the lifespan of several people ... of Southern Africa (MCSA), and to argue that memories of the ... average ages at death were added up and the sum was divided by 12 (which represents the 12 ..... not explicit in how the departed Methodist ministers were.

  20. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new datamining problem: which subset of input space has the relatively highest output where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.
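Under a strong simplification (if the subset may be any set of points rather than a fenced region of the input space), the stated problem reduces to taking the k highest-output points. The sketch below shows only that baseline, not the paper's fencing algorithm:

```python
def above_average_subset(points, min_size):
    """Baseline for the stated problem: among subsets of at least
    min_size points, the one with the highest mean output is simply
    the top-min_size points ranked by output. A simplification of
    the fencing idea, which instead seeks a region of input space."""
    ranked = sorted(points, key=lambda p: p[1], reverse=True)
    best = ranked[:min_size]
    return best, sum(y for _, y in best) / min_size

# Hypothetical (input, output) pairs:
data = [(0.1, 5.0), (0.2, 9.0), (0.3, 1.0), (0.4, 7.0)]
subset, mean_out = above_average_subset(data, 2)
```

The fencing formulation is harder precisely because the selected points must form a describable region of the input space, not an arbitrary collection.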

  1. Average Distance Travelled To School by Primary and Secondary ...

    This study investigated average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and effect on attendance. These are among the top ten densely populated and educationally advantaged States in Nigeria. Research evidences report high dropout rates in ...

  2. Computation of the average energy for LXY electrons

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model in which we only consider the three shells K, L and M, to compute the counting efficiency for electron capture nuclides, requires an accurate average energy value for LMN electrons. In this report, we illustrate the procedure with two examples, 125I and 109Cd. (Author) 4 refs

  3. 75 FR 78157 - Farmer and Fisherman Income Averaging

    2010-12-15

    ... to the averaging of farm and fishing income in computing income tax liability. The regulations...: PART 1--INCOME TAXES 0 Paragraph 1. The authority citation for part 1 continues to read in part as... section 1 tax would be increased if one-third of elected farm income were allocated to each year. The...

  4. Domain-averaged Fermi-hole Analysis for Solids

    Baranov, A.; Ponec, Robert; Kohout, M.

    2012-01-01

    Roč. 137, č. 21 (2012), s. 214109 ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords : bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.164, year: 2012

  5. Characteristics of phase-averaged equations for modulated wave groups

    Klopman, G.; Petit, H.A.H.; Battjes, J.A.

    2000-01-01

    The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).

  6. A depth semi-averaged model for coastal dynamics

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  7. Effect of tank geometry on its average performance

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their "height : radius" ratio, as well as of the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6×10^-2 m^3 as the central hole diameter of the ribs is changed. It has been shown that growth of the "height/radius" ratio in tanks with smooth inner walls up to the limiting values significantly increases tank average productivity and reduces filling time. Growth of the H/R ratio of a tank of volume 1.0 m^3 to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and minimum filling time are reached for the tank of volume 6×10^-2 m^3 with a central hole diameter of the horizontal ribs of 6.4×10^-2 m.

  8. An averaged polarizable potential for multiscale modeling in phospholipid membranes

    Witzke, Sarah; List, Nanna Holmgaard; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    A set of average atom-centered charges and polarizabilities has been developed for three types of phospholipids for use in polarizable embedding calculations. The lipids investigated are 1,2-dimyristoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, and 1-palmitoyl...

  9. Understanding coastal morphodynamic patterns from depth-averaged sediment concentration

    Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.

    This review highlights the important role of the depth-averaged sediment concentration (DASC) to understand the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand

  10. Determination of average activating thermal neutron flux in bulk samples

    Doczi, R.; Csikai, J.; Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.

    2004-01-01

    A method previously used to determine the average neutron flux within bulky samples has been applied to measure the hydrogen content of different samples. An analytical function is given describing the correlation between the activity of Dy foils and the hydrogen concentrations. Results obtained by the activation and thermal neutron reflection methods are compared

  11. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
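The key idea (a smoothing window whose width adapts to the analyte's migration time, since late, slow analytes give broader, lower-frequency peaks) can be sketched as below. The window-growth law and parameter names are illustrative assumptions, not the published algorithm:

```python
def adaptive_moving_average(signal, sample_hz, base_window_s=0.5):
    """Moving average whose window widens with migration time, so that
    early (sharp) peaks are smoothed lightly and late (broad) peaks
    more heavily. The linear growth law below is an assumption."""
    smoothed = []
    for i, _ in enumerate(signal):
        t = i / sample_hz                                   # migration time of this point
        half = max(1, int(base_window_s * sample_hz * (1 + 0.1 * t)) // 2)
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        window = signal[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

# Toy trace at the paper's lower sampling frequency of 4.6 Hz:
out = adaptive_moving_average([0.0, 1.0, 0.0, 1.0, 0.0, 1.0], sample_hz=4.6)
```

Because the window size is a simple function of the index, each point costs O(window) arithmetic, which is the kind of budget a microcontroller can afford.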

  12. Grade Point Average: What's Wrong and What's the Alternative?

    Soh, Kay Cheng

    2011-01-01

    Grade point average (GPA) has been around for more than two centuries. However, it has created a lot of confusion, frustration, and anxiety to GPA-producers and users alike, especially when used across-nation for different purposes. This paper looks into the reasons for such a state of affairs from the perspective of educational measurement. It…

  13. The Effect of Honors Courses on Grade Point Averages

    Spisak, Art L.; Squires, Suzanne Carter

    2016-01-01

    High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…

  14. 40 CFR 63.652 - Emissions averaging provisions.

    2010-07-01

    ... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...

  15. Average and local structure of selected metal deuterides

    Soerby, Magnus H.

    2005-07-01

    deuterides at 1 bar D2 and elevated temperatures (373-573 K) is presented in Paper 1. Deuterium atoms chiefly occupy three types of tetrahedral interstitial sites: two coordinated by 4 Zr atoms and one coordinated by 3 Zr and 1 Ni atoms. The site preference is ruled predominantly by sample composition and less by temperature. On the other hand, the spatial deuterium distribution among the preferred sites is strongly temperature dependent, as the long-range correlations break down on heating. The sample is fully decomposed into tetragonal ZrD2 and Zr7Ni10 at 873 K. Th2AlD4 was the only metal deuteride with a reported D-D separation substantially below 2 Å (1.79 Å) prior to the discovery of RENiInD1.33. However, being the first ternary deuteride ever studied by PND, the original structure solution was based on very low-resolution data. The present reinvestigation (Paper 2) shows that the site preference was correctly determined, but the deuterium atoms are slightly shifted compared to the earlier report, now yielding acceptable interatomic separations. Solely Th4 tetrahedra are occupied in the various Th2Al deuterides. Th8Al4D11 (Th2AlD2.75) takes a superstructure with tripled c-axis due to deuterium ordering. Th2AlD2.3 is disordered and the average distance between partly occupied sites appears as just 1.55 Å in Rietveld refinements. However, short-range order is expected to prevent D-D distances under 2 Å. Paper 3 presents the first Reverse Monte Carlo (RMC) study of a metal deuteride. RMC is used in combination with total neutron scattering to model short-range deuterium correlations in disordered c-VD0.77. A practically complete blocking of interstitial sites closer than 2 Å from any occupied deuterium site is observed. The short-range correlations resemble those of the fully ordered low-temperature phase c-VD0.75 at length scales up to about 3 Å, i.e. for the first two coordination spheres. 
Paper 4 concerns RMC modelling of short-range deuterium correlations in ZrCr2D4

  16. Average and local structure of selected metal deuterides

    Soerby, Magnus H.

    2004-01-01

    elevated temperatures (373-573 K) is presented in Paper 1. Deuterium atoms chiefly occupy three types of tetrahedral interstitial sites: two coordinated by 4 Zr atoms and one coordinated by 3 Zr and 1 Ni atoms. The site preference is ruled predominantly by sample composition and less by temperature. On the other hand, the spatial deuterium distribution among the preferred sites is strongly temperature dependent, as the long-range correlations break down on heating. The sample is fully decomposed into tetragonal ZrD2 and Zr7Ni10 at 873 K. Th2AlD4 was the only metal deuteride with a reported D-D separation substantially below 2 Å (1.79 Å) prior to the discovery of RENiInD1.33. However, being the first ternary deuteride ever studied by PND, the original structure solution was based on very low-resolution data. The present reinvestigation (Paper 2) shows that the site preference was correctly determined, but the deuterium atoms are slightly shifted compared to the earlier report, now yielding acceptable interatomic separations. Solely Th4 tetrahedra are occupied in the various Th2Al deuterides. Th8Al4D11 (Th2AlD2.75) takes a superstructure with tripled c-axis due to deuterium ordering. Th2AlD2.3 is disordered and the average distance between partly occupied sites appears as just 1.55 Å in Rietveld refinements. However, short-range order is expected to prevent D-D distances under 2 Å. Paper 3 presents the first Reverse Monte Carlo (RMC) study of a metal deuteride. RMC is used in combination with total neutron scattering to model short-range deuterium correlations in disordered c-VD0.77. A practically complete blocking of interstitial sites closer than 2 Å from any occupied deuterium site is observed. The short-range correlations resemble those of the fully ordered low-temperature phase c-VD0.75 at length scales up to about 3 Å, i.e. for the first two coordination spheres. 
Paper 4 concerns RMC modelling of short-range deuterium correlations in ZrCr2D4 at ambient and low

  17. UV Photography Shows Hidden Sun Damage

    UV photography shows hidden sun damage. A UV photograph gives ... developing skin cancer and prematurely aged skin. Normal photography / UV photography. 18 months of age: This boy's ...

  18. An average salary: approaches to the index determination

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both those used by official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, and to propose certain additions that would help to clarify this index. The information base of the research comprises laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», and scientific papers describing different approaches to the average salary calculation. Data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. The following methods were used: analytical, statistical, computational-mathematical and graphical. The main result of the research is a proposed supplement, via a correction factor, to the method used by Goskomstat of Russia for calculating the average salary index within enterprises or organizations. Its essence is to form pay indexes separately for categories of employees who are mainly engaged in internal secondary jobs. The need for this correction factor arises from the working conditions now common across a wide range of organizations, where an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, it is common that the average salary at an enterprise is difficult to assess objectively because multiple pay rates accrue to a single staff member. In other words, the average salary of
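One way to read the proposed correction: average pay per physical employee rather than per staff position, so that internal secondary jobs do not deflate the index. The data and the aggregation rule below are hypothetical, sketching that idea rather than the article's exact formula:

```python
def average_salary_per_person(payroll):
    """Average monthly pay per physical employee: payments from internal
    secondary jobs are folded into the person's total before averaging.
    Hypothetical aggregation rule illustrating the correction idea."""
    per_person = {}
    for name, amount in payroll:            # one row per position held
        per_person[name] = per_person.get(name, 0.0) + amount
    return sum(per_person.values()) / len(per_person)

# Hypothetical payroll: Ivanova holds a main and a secondary position.
payroll = [("Ivanova", 30000.0), ("Ivanova", 10000.0),
           ("Petrov", 40000.0)]
naive = sum(a for _, a in payroll) / len(payroll)   # per-position average
corrected = average_salary_per_person(payroll)      # per-person average
```

The per-position average divides the same wage fund by three positions instead of two people, understating what each employee actually receives.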

  19. DIME Students Show Off their Lego(TM) Challenge Creation

    2002-01-01

    Two students show the Lego (TM) Challenge device they designed and built to operate in the portable drop tower demonstrator as part of the second Dropping in a Microgravity Environment (DIME) competition held April 23-25, 2002, at NASA's Glenn Research Center. Competitors included two teams from Sycamore High School, Cincinnati, OH, and one each from Bay High School, Bay Village, OH, and COSI Academy, Columbus, OH. DIME is part of NASA's education and outreach activities. Details are on line at http://microgravity.grc.nasa.gov/DIME_2002.html.

  20. High-average-power diode-pumped Yb: YAG lasers

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High-beam-quality (M^2 = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M^2 value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in q-switched mode, we have also demonstrated 532 W of average power with M^2 < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 ns pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling, near-diffraction-limited modes; (2) compound laser rods with flanged non-absorbing endcaps fabricated by diffusion bonding; (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in polished-barrel rods

  1. High average power diode pumped solid state lasers for CALIOPE

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  2. Construction of average adult Japanese voxel phantoms for dose assessment

    Sato, Kaoru; Takahashi, Fumiaki; Satoh, Daiki; Endo, Akira

    2011-12-01

    The International Commission on Radiological Protection (ICRP) adopted the adult reference voxel phantoms, based on the physiological and anatomical reference data of Caucasians, in October 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In the future, the dose coefficients for internal dose and the dose conversion coefficients for external dose calculated using the adult reference voxel phantoms will be widely used in radiation protection. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasians. In addition, in some cases anatomical characteristics of subjects such as body size, organ mass and posture influence the organ doses in dose assessments for medical treatments and radiation accidents. Therefore, human phantoms with the average anatomical characteristics of Japanese were needed. The authors constructed average adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms in the following three aspects: (1) the heights and weights were matched to the Japanese averages; (2) the masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) the organs and tissues newly added for evaluation of the effective dose in ICRP Publication 103 were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the average adult Japanese male and female voxel phantoms developed as reference phantoms for adult Japanese. (author)

  3. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates, and has been a primary reference for conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially prohibitive for regional climate modeling, where model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This gives the new method a theoretical advantage in addition to its reduced computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
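The core of the ERF construction, averaging the GCM forcings before a single RCM run instead of averaging many RCM runs, can be sketched as follows (the array shapes, the toy `run_rcm` response, and all numbers are hypothetical stand-ins, not RegCM4):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical boundary-condition fields from 6 GCMs: (model, time, lat, lon)
gcm_ibcs = rng.normal(size=(6, 4, 10, 12))

# ERF: a single set of IBCs, the ensemble mean over the model axis
erf_ibcs = gcm_ibcs.mean(axis=0)

def run_rcm(ibcs):
    """Stand-in for a regional climate model run (a placeholder, not RegCM)."""
    return ibcs * 0.9 + 1.0  # some deterministic response to the forcing

# One RCM run with averaged forcings vs. six runs averaged afterwards
erf_output = run_rcm(erf_ibcs)
mme_output = np.mean([run_rcm(f) for f in gcm_ibcs], axis=0)

# For a linear model response the two coincide exactly; a real RCM is
# nonlinear, which is why ERF approximates the MME mean rather than
# reproducing it, at one-sixth of the computational cost here.
print(np.allclose(erf_output, mme_output))  # → True
```

The cost argument is visible in the structure: the MME line calls `run_rcm` once per GCM, while the ERF line calls it once.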

  4. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, U-bar{sub P}, the average, U-bar, the effective, U{sub eff}, or the maximum peak, U{sub P}, tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average, U-bar, or the average peak, U-bar{sub p}, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak k{sub PPV,kVp} and the average k{sub PPV,Uav} conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factor at any given tube voltage and ripple. The influence of voltage waveform irregularities, such as 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested with six commercial kV-meters at several x-ray units; the deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were also addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from U-bar{sub p} and U-bar measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV in compliance with the IEC standard requirements.
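The conversion chain described above reduces to multiplying the reading by two factors. A minimal sketch (the calibration coefficient and the ripple-dependent conversion factor below are hypothetical illustrative values, not the paper's regression results):

```python
def ppv_from_reading(reading_kv, n_cal, k_ppv):
    """Convert a kV-meter reading to PPV: reading -> calibrated value -> PPV.

    n_cal : calibration coefficient of the kV-meter for its own quantity
    k_ppv : conversion factor (k_PPV,kVp or k_PPV,Uav) at the tube voltage
            and ripple in use, taken from the regression equations
    """
    return n_cal * k_ppv * reading_kv

# Hypothetical example: average-peak reading of 81.2 kV, calibration
# coefficient 1.003, conversion factor 0.985 at the measured ripple
ppv = ppv_from_reading(81.2, n_cal=1.003, k_ppv=0.985)
print(round(ppv, 1))  # → 80.2
```

In practice both factors depend on the meter's measured quantity and the voltage ripple, which is why the paper tabulates them via regression rather than as constants.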

  5. Lip line preference for variant face types.

    Anwar, Nabila; Fida, Mubassar

    2012-06-01

To determine the effect of an altered lip line on attractiveness and to find the preferred lip line for different vertical face types in both genders. Cross-sectional analytical study. The Aga Khan University Hospital, Karachi, from May to July 2009. Photographs of two selected subjects were altered to produce three face types for the same individual, keeping the frame of the smile constant. The lip line was then altered for both subjects as follows: both dentitions visible; upper incisors visible; upper incisors plus 2 mm of gum visible; and upper incisors plus 4 mm of gum visible. The pictures were rated for attractiveness by different professionals. Descriptive statistics for the raters and multiple-factor ANOVA were used to find the most attractive lip line. There were 100 raters in total, with a mean age of 30.3 ± 8 years. The alterations in the smile parameters produced statistically significant differences in the attractiveness of the faces, whereas the difference in perception among raters of different professions was insignificant. The preferred lip line was the one showing only the upper incisors in dolicho- and mesofacial subjects of both genders, whereas a 2 mm gum show was preferred in brachyfacial subjects. Variability in the lip line produced significant differences in perceived attractiveness.

  6. FAR-INFRARED LINE SPECTRA OF SEYFERT GALAXIES FROM THE HERSCHEL-PACS SPECTROMETER

    Spinoglio, Luigi; Pereira-Santaella, Miguel; Busquet, Gemma [Istituto di Astrofisica e Planetologia Spaziali, INAF, Via Fosso del Cavaliere 100, I-00133 Roma (Italy); Dasyra, Kalliopi M. [Observatoire de Paris, LERMA (CNRS:UMR8112), 61 Av. de l' Observatoire, F-75014, Paris (France); Calzoletti, Luca [Agenzia Spaziale Italiana (ASI) Science Data Center, I-00044 Frascati (Roma) (Italy); Malkan, Matthew A. [Astronomy Division, University of California, Los Angeles, CA 90095-1547 (United States); Tommasin, Silvia, E-mail: luigi.spinoglio@iaps.inaf.it [Weizmann Institute of Science, Department of Neurobiology, Rehovot 76100 (Israel)

    2015-01-20

We observed the far-IR fine-structure lines of 26 Seyfert galaxies with the Herschel-PACS spectrometer. These observations are complemented with Spitzer Infrared Spectrograph and Herschel SPIRE spectroscopy. We used the ionic lines to determine electron densities in the ionized gas and the [C I] lines, observed with SPIRE, to measure the neutral gas densities, while the [O I] lines measure the gas temperature at densities below ∼10{sup 4} cm{sup –3}. Using the [O I]145 μm/63 μm and [S III]33/18 μm line ratios, we find an anti-correlation of the temperature with the gas density. Various fine-structure line ratios show density stratifications in these active galaxies; on average, electron densities increase with the ionization potential of the ions. The infrared lines arise partly in the narrow-line region photoionized by the active galactic nucleus (AGN), partly in H II regions photoionized by hot stars, and partly in photodissociation regions. We attempt to separate the contributions to the line emission produced in these different regions by comparing our observed emission-line ratios to theoretical values. In particular, we attempted to separate the contributions of AGNs and star formation using a combination of Spitzer and Herschel lines, and found that, besides the well-known mid-IR line ratios, the [O III]88 μm/[O IV]26 μm line ratio can reliably discriminate the two emission regions, while the far-IR [C II]157 μm/[O I]63 μm line ratio is only able to mildly separate the two regimes. By comparing the observed [C II]157 μm/[N II]205 μm ratio with photoionization models, we also found that most of the [C II] emission in the galaxies examined is due to photodissociation regions.

  7. Wood pole overhead lines

    Wareing, Brian

    2005-01-01

    This new book concentrates on the mechanical aspects of distribution wood pole lines, including live line working, environmental influences, climate change and international standards. Other topics include statutory requirements, safety, profiling, traditional and probabilistic design, weather loads, bare and covered conductors, different types of overhead systems, conductor choice, construction and maintenance. A section has also been devoted to the topic of lightning, which is one of the major sources of faults on overhead lines. The book focuses on the effects of this problem and the strate

  8. Carcass and meat quality traits of Iberian pig as affected by sex and crossbreeding with different Duroc genetic lines

    A. Robina

    2013-11-01

Full Text Available A total of 144 pigs were used to study the effects of sex (barrows or gilts) and terminal sire line (Iberian, or three genetic lines of Duroc: Duroc 1, Duroc 2 and Duroc 3) on performance and on carcass and meat quality traits. Gilts showed slightly lower average daily gain, shoulder weight and trimming losses, but slightly better primal-cut yields and higher loin weight, while there was no significant effect of sex on meat quality traits or on the fatty acid composition of lard and muscle. There were important differences in performance and in carcass and primal-cut quality traits between pure Iberian pigs and all of the Iberian × Duroc crossbreeds evaluated, partly due to the lower slaughter weights reached by the former. The different sire lines showed differences in several traits: the Duroc 1 group showed lower backfat thickness and ham and shoulder trimming losses, and higher primal-cut yields, than the Duroc 2 and Duroc 3 groups. Intramuscular fat (IMF) content remained unaffected by crossbreeding, but meat color was more intense and redder in crosses from the Duroc 1 sire line. The accumulation of fatty acids in lard was not affected by Duroc sire line, while animals of the Duroc 2 group showed higher levels of monounsaturated fatty acids and lower levels of polyunsaturated ones in IMF. These results highlight the importance of considering not only performance, but also carcass and meat quality traits, when choosing the Duroc sire line for crossbreeding in Iberian pig production.

  9. Simplified automatic on-line document searching

    Ebinuma, Yukio

    1983-01-01

The author proposes a search method for users who do not need comprehensive retrieval: one that automatically provides a flexible number of related documents. A group of technical terms is used as search terms to express an inquiry. Logical sums (ORs) of the terms, taken in ascending order of frequency of usage, are built up sequentially and automatically, and the search formulas qsub(m) and qsub(m-1) that meet certain threshold values are then selected, also automatically. Users judge the precision of the search output from up to 20 items retrieved by formula qsub(m). If a user wants a recall ratio above 30%, the search result should be output by qsub(m); if a recall ratio below 30% is acceptable, it should be output by qsub(m-1). A search by this method over one year's volume of the INIS Database (76,600 items) with five inquiries yielded, on average, a 32% recall ratio and a 36% precision ratio in the qsub(m) case. The terminal connection time was within 15 minutes per inquiry, more efficient than an inexperienced searcher. The method can be applied to on-line search systems for databases that use natural language only, or natural language plus a controlled vocabulary. (author)
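The query-building procedure described above can be sketched roughly as follows (the inverted index, term frequencies, and threshold are hypothetical toy data; the INIS-specific details are omitted):

```python
# Hypothetical inverted index: search term -> set of matching document ids
index = {
    "neutron-flux": {1, 2},
    "fuel-cladding": {2, 3, 4},
    "reactor": {2, 3, 4, 5, 6, 7, 8},
}
threshold = 5  # hypothetical cut-off on the number of retrieved items

# OR the terms together in ascending order of usage frequency
# (here approximated by posting-list size)
terms = sorted(index, key=lambda t: len(index[t]))

hits = set()
formulas = []  # (terms ORed so far, number of hits)
for i, t in enumerate(terms):
    hits |= index[t]  # logical sum of one more term
    formulas.append((terms[:i + 1], len(hits)))

# q_m: first formula whose hit count meets the threshold;
# q_(m-1): the formula just before it (fewer hits, higher precision)
m = next(i for i, (_, n) in enumerate(formulas) if n >= threshold)
q_m = formulas[m]
q_m_minus_1 = formulas[m - 1] if m > 0 else None
print(q_m[1], q_m_minus_1[1])  # → 8 4
```

The recall/precision trade-off in the abstract maps onto the choice between the two printed hit counts: qsub(m) retrieves more (higher recall), qsub(m-1) retrieves fewer (higher precision).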

  10. Estimation of Initial Position Using Line Segment Matching in Maps

    Chongyang Wei

    2016-06-01

Full Text Available While navigating a typical traffic scene with a drastic drift or sudden jump in its Global Positioning System (GPS) position, localization based on such an initial position cannot extract precise overlapping data from the prior map to match against the current data, rendering localization infeasible. In this paper, we propose a new method to estimate an initial position by matching infrared reflectivity maps. The maps consist of a highly precise prior map, built with an offline simultaneous localization and mapping (SLAM) technique, and a smooth current map, built by integrating over velocities. Considering the attributes of the maps, we propose to exploit stable, rich line segments to match the lidar maps. To evaluate the consistency of the candidate line pairs in both maps, we adopt local appearance, pairwise geometric attributes and structural likelihood to construct an affinity graph, and employ a spectral algorithm to solve the graph efficiently. The initial position is obtained from the relationship between the vehicle's current position and the matched lines. Experiments on a campus with a GPS error of dozens of metres show that our algorithm can provide an accurate initial value, with average longitudinal and lateral errors of 1.68 m and 1.04 m, respectively.
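The affinity-graph step can be illustrated with a generic spectral-matching sketch (the affinity matrix below is a hypothetical stand-in; the paper builds its affinities from local appearance, pairwise geometric attributes, and structural likelihood):

```python
import numpy as np

# Each row/column is a candidate (prior-map line, current-map line) pair;
# M[i, j] scores how mutually consistent candidates i and j are.
# Candidates 0, 1, 3 form a mutually consistent cluster; 2 does not fit.
M = np.array([
    [1.0, 0.9, 0.1, 0.8],
    [0.9, 1.0, 0.2, 0.7],
    [0.1, 0.2, 1.0, 0.1],
    [0.8, 0.7, 0.1, 1.0],
])

# Spectral relaxation: the principal eigenvector of the affinity matrix
# assigns each candidate pair a confidence score.
vals, vecs = np.linalg.eigh(M)
score = np.abs(vecs[:, np.argmax(vals)])

# Accept candidates greedily in decreasing score order; the inconsistent
# candidate ends up ranked last.
order = np.argsort(score)[::-1]
print(int(order[-1]))  # → 2
```

The eigenvector concentrates weight on the largest mutually consistent set of pairs, which is why a single eigendecomposition can stand in for a combinatorial consistency search.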

  11. Dynamics of fluid lines, sheets, filaments and membranes

    Coutris, N.

    1988-01-01

We establish the dynamic equations of two types of fluid structures: (1) lines-filaments and (2) sheets-membranes. In the first part, we consider one-dimensional (line) and two-dimensional (sheet) fluid structures. The second part concerns the associated three-dimensional structures: filaments and membranes. In the third part, we establish the equations for thickened lines and thickened sheets; for that purpose, we introduce a thickness into the models of the first part. The fourth part concerns the thinning of the filament and the membrane. By an asymptotic process, we then deduce the corresponding equations from those of the second part, in order to show the purely formal equivalence of the equations of the third and fourth parts. To obtain the equations, we make use of theorems whose proofs can be found in the appendices. The equations can be applied to many areas of interest: instabilities of liquid jets and liquid films, the modeling of interfaces between two different fluids as sheets or membranes, and modeling, with equations averaged over a cross section, of single-phase and two-phase flows in channels with a nonrectilinear axis such as bends or pump casings [fr

  12. Educational Outreach: The Space Science Road Show

    Cox, N. L. J.

    2002-01-01

The poster presented will give an overview of a study towards a "Space Road Show". The topic of this show is space science. The target group is adolescents, aged 12 to 15, at Dutch high schools. The show and its accompanying experiments would be supported with suitable educational material, which science teachers can decide to use in advance, afterwards, or not at all. The aims of this outreach effort are: to motivate students towards space science and engineering, to help them understand the importance of (space) research, to give them a positive feeling about the possibilities offered by space, and in the process to give them useful knowledge of space basics. The show revolves around three main themes: applications, science and society. First, the students get some historical background on the importance of space and astronomy to civilization. Secondly, they learn about novel uses of space: on the one hand "Views on Earth", involving technologies like remote sensing (or spying), communication, broadcasting, GPS and telemedicine; on the other hand "Views on Space", illustrated by past, present and future space research missions, such as the space exploration missions (Cassini/Huygens, Mars Express and Rosetta) and the astronomy missions (SOHO and XMM). Meanwhile, the students learn about the launcher and satellite technology needed to accomplish these space missions. Throughout the show, and especially towards the end, attention is paid to the third theme: "Why go to space?" Other reasons for people to get into space are explored; an important question here is the commercial (manned) exploration of space. Thus, questions about the benefit of space to society are integrated into the entire show, raising some fundamental questions about the effects of space travel on our environment, poverty and other moral issues. The show attempts to connect scientific with

  13. Managing first-line failure.

    Cooper, David A

    2014-01-01

The WHO standard of care for failure of a first regimen, usually two N(t)RTIs and an NNRTI, consists of a ritonavir-boosted protease inhibitor with a change in N(t)RTIs. Until recently, there was no evidence to support these recommendations, which were based on expert opinion. Two large randomized clinical trials, SECOND LINE and EARNEST, both showed excellent response rates (>80%) for the WHO standard of care and indicated that a novel regimen of a boosted protease inhibitor with an integrase inhibitor had equal efficacy with no difference in toxicity. In EARNEST, a third arm, consisting of induction with the combined protease and integrase inhibitors followed by protease inhibitor monotherapy maintenance, was inferior and led to substantial (20%) protease inhibitor resistance. These studies confirm the validity of the current WHO recommendations and point to a novel public health approach of using two new drug classes for second-line therapy when standard first-line therapy has failed, which avoids resistance genotyping. Notwithstanding, adherence must be stressed in those failing first-line treatments. Protease inhibitor monotherapy is not suitable for a public health approach in low- and middle-income countries.

  14. Solar magnetic field studies using the 12 micron emission lines. I - Quiet sun time series and sunspot slices

    Deming, Drake; Boyle, Robert J.; Jennings, Donald E.; Wiedemann, Gunter

    1988-01-01

The extremely Zeeman-sensitive Mg I emission line at 12.32 microns is used to study solar magnetic fields. Time series observations of the line in the quiet sun were obtained in order to determine the response of the line to the five-minute oscillations. Based upon the velocity amplitude and average period measured in the line, it is concluded that the line is formed in the temperature-minimum region. The magnetic structure of sunspots is investigated by stepping a small field of view in linear 'slices' through the spots. The region of penumbral line formation does not show the Evershed outflow common in photospheric lines. The line intensity is a factor of two greater in sunspot penumbrae than in the photosphere, and at the limb the penumbral emission begins to depart from optical thinness, the line source function increasing with height. For a spot near disk center, the radial decrease in absolute magnetic field strength is steeper than the generally accepted dependence.

  15. MUC1 gene polymorphism in three Nelore lines selected for growth and its association with growth and carcass traits.

    de Souza, Fabio Ricardo Pablos; Maione, Sandra; Sartore, Stefano; Soglia, Dominga; Spalenza, Veronica; Cauvin, Elsa; Martelli, Lucia Regina; Mercadante, Maria Eugênia Zerlotti; Sacchi, Paola; de Albuquerque, Lucia Galvão; Rasero, Roberto

    2012-02-01

The objective of this study was to describe the VNTR polymorphism of the mucin 1 gene (MUC1) in three Nelore lines selected for yearling weight, to determine whether the allele and genotype frequencies of this polymorphism were affected by selection for growth. In addition, the effects of the polymorphism on growth and carcass traits were evaluated. Birth, weaning and yearling weights, rump height, Longissimus muscle area, backfat thickness, and rump fat thickness were analyzed. A total of 295 Nelore heifers from the Beef Cattle Research Center, Instituto de Zootecnia de Sertãozinho, were used, including 41 of the control line, 102 of the selection line and 152 of the traditional line. The selection and traditional lines comprise animals selected for higher yearling weight, whereas the control-line animals are selected for yearling weight close to the average. Five alleles were identified, with allele 1 the most frequent in all three lines, especially in the lines selected for higher mean yearling weight. Heterozygosity was significantly higher in the control line. Association analyses showed significant effects of allele 1 on birth weight and weaning weight, while allele 3 exerted significant effects on yearling weight and backfat thickness. Despite these findings, application of this marker to marker-assisted selection requires more consistent results based on the genotyping of a larger number of animals, in order to increase the accuracy of the statistical analyses.

  16. Corrections for gravitational lensing of supernovae: better than average?

    Gunnarsson, Christofer; Dahlen, Tomas; Goobar, Ariel; Jonsson, Jakob; Mortsell, Edvard

    2005-01-01

We investigate the possibility of correcting for the magnification due to gravitational lensing of standard-candle sources, such as Type Ia supernovae. Our method uses the observed properties of the foreground galaxies along the line of sight to each source; the accuracy of the lensing correction depends on the quality and depth of these observations, as well as on the uncertainties in translating the observed luminosities into the matter distribution in the lensing galaxies. The current work i...

  17. Children’s Attitudes and Stereotype Content Toward Thin, Average-Weight, and Overweight Peers

    Federica Durante

    2014-05-01

    Full Text Available Six- to 11-year-old children’s attitudes toward thin, average-weight, and overweight targets were investigated with associated warmth and competence stereotypes. The results showed positive attitudes toward average-weight targets and negative attitudes toward overweight peers: Both attitudes decreased as a function of children’s age. Thin targets were perceived more positively than overweight ones but less positively than average-weight targets. Notably, social desirability concerns predicted the decline of anti-fat bias in older children. Finally, the results showed ambivalent stereotypes toward thin and overweight targets—particularly among older children—mirroring the stereotypes observed in adults. This result suggests that by the end of elementary school, children manage the two fundamental dimensions of social judgment similar to adults.

  18. LINE FUSION GENES: a database of LINE expression in human genes

    Park Hong-Seog

    2006-06-01

    Full Text Available Abstract Background Long Interspersed Nuclear Elements (LINEs are the most abundant retrotransposons in humans. About 79% of human genes are estimated to contain at least one segment of LINE per transcription unit. Recent studies have shown that LINE elements can affect protein sequences, splicing patterns and expression of human genes. Description We have developed a database, LINE FUSION GENES, for elucidating LINE expression throughout the human gene database. We searched the 28,171 genes listed in the NCBI database for LINE elements and analyzed their structures and expression patterns. The results show that the mRNA sequences of 1,329 genes were affected by LINE expression. The LINE expression types were classified on the basis of LINEs in the 5' UTR, exon or 3' UTR sequences of the mRNAs. Our database provides further information, such as the tissue distribution and chromosomal location of the genes, and the domain structure that is changed by LINE integration. We have linked all the accession numbers to the NCBI data bank to provide mRNA sequences for subsequent users. Conclusion We believe that our work will interest genome scientists and might help them to gain insight into the implications of LINE expression for human evolution and disease. Availability http://www.primate.or.kr/line

  19. 2008 LHC Open Days Physics: the show

    2008-01-01

A host of events and activities await visitors to the LHC Open Days on 5 and 6 April. A highlight will be the physics shows funded by the European Physical Society (EPS), which are set to surprise and challenge children and adults alike! [Photo captions: School children use their experience of riding a bicycle to understand how planets move around the sun (Copyright: Circus Naturally). Participating in the Circus Naturally show could leave a strange taste in your mouth! (Copyright: Circus Naturally). The Rino Foundation's experiments with liquid nitrogen can be pretty exciting! (Copyright: The Rino Foundation).] What does a bicycle have in common with the solar system? Have you ever tried to weigh air or visualise sound? Ever heard of a vacuum bazooka? If you want to discover the answers to these questions and more then come to the Physics Shows taking place at the CERN O...

  20. Online Italian fandoms of American TV shows

    Eleonora Benecchi

    2015-06-01

    Full Text Available The Internet has changed media fandom in two main ways: it helps fans connect with each other despite physical distance, leading to the formation of international fan communities; and it helps fans connect with the creators of the TV show, deepening the relationship between TV producers and international fandoms. To assess whether Italian fan communities active online are indeed part of transnational online communities and whether the Internet has actually altered their relationship with the creators of the original text they are devoted to, qualitative analysis and narrative interviews of 26 Italian fans of American TV shows were conducted to explore the fan-producer relationship. Results indicated that the online Italian fans surveyed preferred to stay local, rather than using geography-leveling online tools. Further, the sampled Italian fans' relationships with the show runners were mediated or even absent.

  1. Implications of selection in common bean lines in contrasting environments concerning nitrogen levels

    Isabela Volpi Furtini

    2014-10-01

    Full Text Available Grain productivities of 100 bean lines were evaluated in the presence and absence of nitrogen fertilizer in order to identify those with high nitrogen use efficiency (NUE and to determine the correlated response observed in a stressed environment following selection in a non-stressed environment. The genetic and phenotypic characteristics of the lines, as well as the response index to applied nitrogen, were determined. The average grain productivities at both locations were 39.5% higher in the presence of nitrogen fertilizer, with 8.3 kg of grain being produced per kg of nitrogen applied. NUE varied greatly between lines. Lines BP-16, CVII-85-11, BP-24, Ouro Negro and MA-IV-15-203 were the most efficient and responsive. The results showed that it is possible to select bean lines in stressed and non-stressed environments. It was inferred that common bean lines for environments with low nitrogen availability should preferably be selected under nitrogen stress.

  2. Veterans Crisis Line

    Department of Veterans Affairs — The caring responders at the Veterans Crisis Line are specially trained and experienced in helping Veterans of all ages and circumstances. Some of the responders are...

  3. Product line design

    Anderson, S. P.; Celik, Levent

    2015-01-01

    Roč. 157, May (2015), s. 517-526 ISSN 0022-0531 Institutional support: RVO:67985998 Keywords : product line design * product differentiation * second-degree price discrimination Subject RIV: AH - Economics Impact factor: 1.097, year: 2015

  4. Kansas Electric Transmission Lines

    Kansas Data Access and Support Center — This data set is a digital representation of the EletcircTransmission lines for the State of Kansas as maintained by the Kansas Corporation Commission. Data is...

  5. Electric Power Transmission Lines

    Department of Homeland Security — Transmission Lines are the system of structures, wires, insulators and associated hardware that carry electric energy from one point to another in an electric power...

  6. Beef steers with average dry matter intake and divergent average daily gain have altered gene expression in the jejunum

    The objective of this study was to determine the association of differentially expressed genes (DEG) in the jejunum of steers with average DMI and high or low ADG. Feed intake and growth were measured in a cohort of 144 commercial Angus steers consuming a finishing diet containing (on a DM basis) 67...

  7. The classical correlation limits the ability of the measurement-induced average coherence

    Zhang, Jun; Yang, Si-Ren; Zhang, Yang; Yu, Chang-Shui

    2017-04-01

Coherence is the most fundamental quantum feature of quantum mechanics. For a bipartite quantum state, if a measurement is performed on one party, the other party, conditioned on the measurement outcome, collapses to a corresponding state with some probability, and hence gains average coherence. It is shown that the average coherence is not less than the coherence of the reduced density matrix. In particular, it is very surprising that the extra average coherence (and the maximal extra average coherence, with all possible measurements taken into account) is upper bounded by the classical correlation of the bipartite state rather than by the quantum correlation. We also find the sufficient and necessary condition for vanishing maximal extra average coherence. Several examples demonstrate the relation and, moreover, show that quantum correlation is neither sufficient nor necessary for nonzero extra average coherence under a given measurement. In addition, similar conclusions are drawn for both the basis-dependent and the basis-free coherence measures.
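As a hedged sketch of the quantities involved (the abstract fixes neither the coherence measure C nor the notation; the standard outcome-averaged form below is an assumption):

```latex
% Measuring party B with projectors {\Pi_b} leaves A, with probability p_b,
% in the conditional state \rho_A^b; the average coherence gained on A is
\begin{align}
  p_b &= \operatorname{Tr}\big[(\mathbb{1}_A \otimes \Pi_b)\,\rho_{AB}\big], \qquad
  \rho_A^{b} = \tfrac{1}{p_b}\operatorname{Tr}_B\big[(\mathbb{1}_A \otimes \Pi_b)\,
      \rho_{AB}\,(\mathbb{1}_A \otimes \Pi_b)\big],\\
  \bar{C}_{\{\Pi_b\}}(A|B) &= \sum_b p_b\, C\big(\rho_A^{b}\big) \;\ge\; C(\rho_A).
\end{align}
% The abstract's central claim bounds the extra average coherence by the
% classical correlation J(A|B), not by the quantum correlation:
\begin{equation}
  \max_{\{\Pi_b\}} \bar{C}_{\{\Pi_b\}}(A|B) - C(\rho_A) \;\le\; J(A|B).
\end{equation}
```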

  8. Duchenne muscular dystrophy models show their age

    Chamberlain, Jeffrey S.

    2010-01-01

    The lack of appropriate animal models has hampered efforts to develop therapies for Duchenne muscular dystrophy (DMD). A new mouse model lacking both dystrophin and telomerase (Sacco et al., 2010) closely mimics the pathological progression of human DMD and shows that muscle stem cell activity is a key determinant of disease severity.

  9. Show Them You Really Want the Job

    Perlmutter, David D.

    2012-01-01

    Showing that one really "wants" the job entails more than just really wanting the job. An interview is part Broadway casting call, part intellectual dating game, part personality test, and part, well, job interview. When there are 300 applicants for a position, many of them will "fit" the required (and even the preferred) skills listed in the job…

  10. A Talk Show from the Past.

    Gallagher, Arlene F.

    1991-01-01

    Describes a two-day activity in which elementary students examine voting rights, the right to assemble, and women's suffrage. Explains the game, "Assemble, Reassemble," and a student-produced talk show with five students playing the roles of leaders of the women's suffrage movement. Profiles Elizabeth Cady Stanton, Lucretia Mott, Susan…

  11. Laser entertainment and light shows in education

    Sabaratnam, Andrew T.; Symons, Charles

    2002-05-01

Laser shows and beam effects have been a source of entertainment since their first public performance on May 9, 1969, at Mills College in Oakland, California. Since 1997, the Photonics Center, NgeeAnn Polytechnic, Singapore, has been using laser shows as a teaching tool. Students are able to exhibit their creative skills and, at the same time, learn how lasers are used in the entertainment industry. Students acquire a number of skills, including handling a three-phase power supply, operating a cooling system, and laser alignment. They also gain an appreciation of the arts, learning about shapes and contours as they develop graphics for the shows. After holography, laser show animation provides a combination of the arts and technology. This paper briefly describes how a krypton-argon laser, galvanometer scanners, a polychromatic acousto-optic modulator and related electronics are put together to build a laser projector. It also describes how students are trained to make their own laser animations and beam effects set to music, while gaining an appreciation of the operation of a Class IV laser and the handling of optical components.

  12. The Last Great American Picture Show

    Elsaesser, Thomas; King, Noel; Horwath, Alexander

    2004-01-01

    The Last Great American Picture Show brings together essays by scholars and writers who chart the changing evaluations of the American cinema of the 1970s, sometimes referred to as the decade of the lost generation, but now more and more recognized as the first New Hollywood, without which the…

  13. SAF line powder operations

    Frederickson, J.R.; Horgos, R.M.

    1983-10-01

    An automated nuclear fuel fabrication line is being designed for installation in the Fuels and Materials Examination Facility (FMEF) near Richland, Washington. The fabrication line will consist of seven major process systems: Receiving and Powder Preparation; Powder Conditioning; Pressing and Boat Loading; Debinding, Sintering, and Property Adjustment; Boat Transport; Pellet Inspection and Finishing; and Pin Operations. Fuel powder processing through pellet pressing is discussed in this paper.

  14. Capital Improvements Business Line

    2012-08-08

    NAVFAC Southwest, Capital Improvements Business Line. Briefing by Dan Waid, Program & Business Management, NAVFAC SW, 8 August 2012. Approved for public release; distribution unlimited. Presented at the 2012 Navy Gold Coast Small Business…

  15. Multiple-level defect species evaluation from average carrier decay

    Debuf, Didier

    2003-10-01

    An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple-defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average, or volume-integrated, decay. Implicit in the above is the requirement for good surface passivation, such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal: an assessment of the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a multiple-level defect system with ground and excited states. Minority carrier trapping is also investigated.
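    The volume-integrated decay described above is not a single exponential when several defect levels contribute. A minimal sketch (illustrative amplitudes and lifetimes, not values from the paper) models the excess carrier density as a sum of exponential channels and extracts the instantaneous effective lifetime tau_eff = -n / (dn/dt), which slides from the fast channel at early times to the slow trap-release channel later:

    ```python
    import math

    def carrier_density(t, components):
        # components: list of (amplitude, lifetime) pairs, one per decay channel
        return sum(A * math.exp(-t / tau) for A, tau in components)

    def effective_lifetime(t, components, dt=1e-9):
        # instantaneous lifetime tau_eff = -n / (dn/dt), via a central difference
        n = carrier_density(t, components)
        dndt = (carrier_density(t + dt, components)
                - carrier_density(t - dt, components)) / (2 * dt)
        return -n / dndt

    # hypothetical fast bulk channel (10 us) plus a slow trap-release channel (200 us)
    comps = [(1.0, 10e-6), (0.2, 200e-6)]
    print(effective_lifetime(1e-6, comps))  # early times: near the fast lifetime
    print(effective_lifetime(1e-3, comps))  # late times: approaches the trap lifetime
    ```

    This is the signature a trapping analysis looks for: an "unusually high" apparent lifetime at late times that reflects trap emission rather than true recombination.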

  16. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average model (KARMA), a dynamic class of models for time series taking values in a double-bounded interval (a,b) and following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields; classical examples are time series of rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters, and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing, diagnostic analysis, and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.
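    The abstract's dynamic structure can be sketched for the unit interval: the conditional median is driven by AR and MA terms through a logit link, and observations are drawn from a Kumaraswamy distribution whose shape is solved so its median matches the modeled value. This is a simplified simulator under assumed parameter values, not the paper's estimation procedure:

    ```python
    import math
    import random

    def kumaraswamy_sample(median, a, rng):
        # pick shape b so the Kumaraswamy median (1 - 2^(-1/b))^(1/a) equals `median`
        b = -math.log(2.0) / math.log(1.0 - median ** a)
        u = rng.random()
        # inverse CDF of Kumaraswamy(a, b): F(x) = 1 - (1 - x^a)^b
        return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

    def logit(x):
        return math.log(x / (1.0 - x))

    def inv_logit(x):
        return 1.0 / (1.0 + math.exp(-x))

    def simulate_karma(n, alpha=0.1, phi=0.5, theta=0.3, a=2.0, seed=42):
        # KARMA(1,1)-style recursion on the link (logit) scale:
        #   logit(mu_t) = alpha + phi * logit(y_{t-1}) + theta * e_{t-1}
        rng = random.Random(seed)
        y, e_prev, y_prev = [], 0.0, 0.5
        for _ in range(n):
            eta = alpha + phi * logit(y_prev) + theta * e_prev
            mu = inv_logit(eta)               # conditional median in (0, 1)
            y_t = kumaraswamy_sample(mu, a, rng)
            e_prev = logit(y_t) - eta         # moving-average innovation on link scale
            y_prev = y_t
            y.append(y_t)
        return y

    series = simulate_karma(200)
    assert all(0.0 < v < 1.0 for v in series)
    ```

    Modeling the median rather than the mean is what makes the link-scale recursion tractable here, since the Kumaraswamy quantile function is available in closed form.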

  17. Database of average-power damage thresholds at 1064 nm

    Rainer, F.; Hildum, E.A.; Milam, D.

    1987-01-01

    We have completed a database of average-power, laser-induced damage thresholds at 1064 nm for a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate- and high-average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples: antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from 2 J/cm² for some metals to >46 J/cm² for a bare polished glass substrate. 4 refs., 7 figs., 1 tab
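    As a quick sanity check on the numbers quoted above: the shot count is just the pulse-repetition frequency times the irradiation time, and threshold fluence is pulse energy over beam-spot area (the energy and spot area below are hypothetical, not from the paper):

    ```python
    def shot_count(prf_hz, duration_s):
        # total pulses delivered at a fixed pulse-repetition frequency
        return prf_hz * duration_s

    def fluence_j_per_cm2(pulse_energy_j, spot_area_cm2):
        # damage-threshold fluence: pulse energy per unit beam-spot area
        return pulse_energy_j / spot_area_cm2

    print(shot_count(120, 5 * 60))        # 36000 shots: the 5-minute run at 120 Hz
    print(fluence_j_per_cm2(0.92, 0.02))  # 46.0 J/cm^2 for a hypothetical 0.92 J pulse
    ```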

  18. Partial Averaged Navier-Stokes approach for cavitating flow

    Zhang, L; Zhang, Y N

    2015-01-01

    Partial-Averaged Navier-Stokes (PANS) is a numerical approach developed for studying practical engineering problems (e.g., cavitating flow inside hydroturbines) at a reasonable cost and accuracy. One advantage of PANS is that it is suitable for any filter width, providing a bridging method from traditional Reynolds-Averaged Navier-Stokes (RANS) to direct numerical simulation through the choice of appropriate parameters. Compared with RANS, the PANS model inherits much of the physics of the parent RANS model but resolves more scales of motion in greater detail, making PANS superior to RANS. An important step in the PANS approach is identifying appropriate physical filter-width control parameters, e.g., the ratios of unresolved-to-total kinetic energy and dissipation. In the present paper, recent studies of cavitating flow based on the PANS approach are reviewed, with a focus on the influence of the filter-width control parameters on the simulation results.
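    The filter-width control parameters enter the parent RANS model through modified coefficients. A minimal sketch of the standard PANS modification of the k-ε destruction coefficient (Girimaji's form; the 1.44 and 1.92 values are the usual k-ε defaults, not taken from this paper):

    ```python
    def pans_ce2_star(c_e1, c_e2, f_k, f_eps=1.0):
        # PANS-modified destruction coefficient:
        #   C*_e2 = C_e1 + (f_k / f_eps) * (C_e2 - C_e1)
        # f_k = k_unresolved / k_total, f_eps = eps_unresolved / eps_total
        return c_e1 + (f_k / f_eps) * (c_e2 - c_e1)

    # f_k = 1 recovers the parent RANS model; smaller f_k resolves more scales
    print(pans_ce2_star(1.44, 1.92, f_k=1.0))  # 1.92: the RANS limit
    print(pans_ce2_star(1.44, 1.92, f_k=0.4))  # reduced destruction, more resolved motion
    ```

    Lowering f_k shrinks C*_e2 toward C_e1, weakening the modeled dissipation so that more turbulent kinetic energy survives to be resolved on the grid; this is the "bridging" knob between RANS and DNS.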

  19. The B-dot Earth Average Magnetic Field

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth magnetic field is usually computed with complex mathematical models based on a mean-square integral, and depending on the Earth magnetic model selected, the average field can have different solutions. This paper presents a simple technique that takes advantage of the damping effect of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the satellite's magnetic torquers, which the known mathematical models do not take into consideration. The solution obtained with this new technique can be implemented so easily that the flight software can be updated in flight, giving the control system current gains for the magnetic torquers. Finally, the technique is verified and validated using flight data from a satellite that has been in orbit for three years.
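    The damping behavior the technique exploits comes from the classic b-dot control law, in which each torquer's commanded dipole opposes the measured rate of change of the body-frame field. A minimal per-axis sketch (the gain and magnetometer samples are illustrative, not from the paper):

    ```python
    def bdot_dipole(b_prev, b_curr, dt, gain):
        # classic b-dot law: commanded dipole m = -k * dB/dt per axis,
        # with dB/dt estimated from consecutive magnetometer samples
        return [-gain * (bc - bp) / dt for bp, bc in zip(b_prev, b_curr)]

    # two body-frame field samples in tesla, 0.1 s apart, hypothetical gain
    m = bdot_dipole([1.0e-5, 2.0e-5, -3.0e-5],
                    [1.2e-5, 1.9e-5, -2.8e-5],
                    dt=0.1, gain=5.0e4)
    print(m)  # dipole commands opposing the field rate on each axis
    ```

    Because the commanded dipole always opposes dB/dt, the controller removes rotational kinetic energy regardless of which Earth-field model generated B, which is what lets the averaging technique sidestep the model choice.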

  20. Thermal effects in high average power optical parametric amplifiers.

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have a reputation for being average-power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from the thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high-average-power OPAs based on beta barium borate. Absorption of both the pump and idler waves is found to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed, and mechanical tensile stress of up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating, we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.